Python milk library: object weights issue

Asked: 2011-10-10 09:58:15

Tags: python machine-learning classification

I am trying to do multiclass classification with decision trees combined via one_vs_one. The problem is that when I pass different object weights to the classifier, the results stay the same.

Am I misunderstanding how the weights work, or do they simply not work correctly?

Thanks for your replies!

Here is my code:

from math import exp, log

class AdaLearner(object):
    def __init__(self, in_base_type, in_multi_type):
        self.base_type = in_base_type
        self.multi_type = in_multi_type

    def train(self, in_features, in_labels):
        model = AdaBoost(self.base_type, self.multi_type)
        model.learn(in_features, in_labels)

        return model

class AdaBoost(object):
    CLASSIFIERS_NUM = 100
    def __init__(self, in_base_type, in_multi_type):
        self.base_type = in_base_type
        self.multi_type = in_multi_type
        self.classifiers = []
        self.weights = []

    def learn(self, in_features, in_labels):
        labels_number = len(set(in_labels))
        self.weights = self.get_initial_weights(in_labels)

        for iteration in xrange(AdaBoost.CLASSIFIERS_NUM):
            classifier = self.multi_type(self.base_type())
            self.classifiers.append(classifier.train(in_features,
                                                     in_labels,
                                                     weights=self.weights))
            answers = []
            for obj in in_features:
                answers.append(self.classifiers[-1].apply(obj))
            err = self.compute_weighted_error(in_labels, answers)
            print err
            if abs(err - 0.) < 1e-6:
                break

            alpha = 0.5 * log((1 - err)/err)

            self.update_weights(in_labels, answers, alpha)
            self.normalize_weights()

    def apply(self, in_features):
        answers = {}
        for classifier in self.classifiers:
            answer = classifier.apply(in_features)
            if answer in answers:
                answers[answer] += 1
            else:
                answers[answer] = 1
        ranked_answers = sorted(answers.iteritems(),
                                key=lambda (k,v): (v,k),
                                reverse=True)
        return ranked_answers[0][0]

    def compute_weighted_error(self, in_labels, in_answers):
        error = 0.
        w_sum = sum(self.weights)
        for ind in xrange(len(in_labels)):
            error += (in_answers[ind] != in_labels[ind]) * self.weights[ind] / w_sum
        return error

    def update_weights(self, in_labels, in_answers, in_alpha):
        for ind in xrange(len(in_labels)):
            self.weights[ind] *= exp(in_alpha * (in_answers[ind] != in_labels[ind]))

    def normalize_weights(self):
        w_sum = sum(self.weights)
        for ind in xrange(len(self.weights)):
            self.weights[ind] /= w_sum

    def get_initial_weights(self, in_labels):
        weight = 1 / float(len(in_labels))
        result = []
        for i in xrange(len(in_labels)):
            result.append(weight)
        return result
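The reweighting step in the code above can be checked in isolation: after one boosting round, a misclassified object should end up carrying more mass than a correctly classified one. A minimal standalone sketch of that same update (written for Python 3, with plain lists, independent of the classes above):

```python
from math import exp, log

labels  = [0, 1, 1, 0]
answers = [0, 1, 0, 0]          # one mistake, at index 2
weights = [0.25] * 4            # uniform initial weights

# weighted error of this round (same formula as compute_weighted_error)
err = sum(w for w, y, a in zip(weights, labels, answers) if y != a) / sum(weights)
alpha = 0.5 * log((1 - err) / err)

# boost the weights of misclassified objects, then renormalize
weights = [w * exp(alpha * (y != a)) for w, y, a in zip(weights, labels, answers)]
total = sum(weights)
weights = [w / total for w in weights]

print(weights)  # the misclassified object (index 2) now has the largest weight
```

Note that, like the `update_weights` above, this only multiplies the misclassified weights up rather than also scaling the correct ones down; after normalization the two variants give the same distribution.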

As you can see, it is just plain AdaBoost (I instantiate it with in_base_type = tree_learner, in_multi_type = one_against_one), and it behaves the same no matter how many base classifiers are used: it simply acts like a single multiclass decision tree.

Then I tried a hack: on each iteration I drew a random sample of objects according to their weights, and trained the classifier on that random subset without any weights. That worked exactly as expected.
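The hack described above, drawing a bootstrap sample with probability proportional to the current weights and then training on it unweighted, can be sketched like this. The helper name `weighted_resample` is made up for illustration; `random.choices` is Python 3.6+ (on the 2.x of the original post you would use `numpy.random.choice` or a cumulative-sum search instead):

```python
import random

def weighted_resample(features, labels, weights, seed=0):
    """Draw a bootstrap sample of the same size, where each object is
    picked with probability proportional to its current AdaBoost weight."""
    rng = random.Random(seed)
    indices = rng.choices(range(len(features)), weights=weights, k=len(features))
    return [features[i] for i in indices], [labels[i] for i in indices]

features = ['a', 'b', 'c', 'd']
labels   = [0, 0, 1, 1]
weights  = [0.05, 0.05, 0.85, 0.05]   # object 'c' was misclassified a lot

fs, ls = weighted_resample(features, labels, weights)
print(fs)  # 'c' will typically dominate the resampled training set
```

Training an unweighted base learner on such a sample approximates training a weight-aware learner on the full set, which is why the hack recovers the expected boosting behaviour.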

1 Answer:

Answer 0 (score: 0)

The default tree criterion, information gain, does not take the weights into account. If you know a formula that does, I will implement it.
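For reference, the standard way to make information gain weight-aware is to replace class counts with sums of object weights everywhere: the weighted entropy of the parent minus the weight-proportional average of the children's weighted entropies. A minimal sketch of the weighted entropy part (not milk's actual code):

```python
from math import log
from collections import defaultdict

def weighted_entropy(labels, weights):
    """Entropy where each object contributes its weight instead of a count of 1:
    H = -sum_c p_c * log2(p_c),  with  p_c = (weight of class c) / (total weight)."""
    totals = defaultdict(float)
    for y, w in zip(labels, weights):
        totals[y] += w
    total = sum(totals.values())
    return -sum((t / total) * log(t / total, 2) for t in totals.values() if t > 0)

# with uniform weights this is ordinary entropy: two balanced classes -> 1 bit
print(weighted_entropy([0, 0, 1, 1], [1, 1, 1, 1]))   # 1.0
# skewing the weights toward one class lowers it
print(weighted_entropy([0, 0, 1, 1], [3, 3, 1, 1]))
```

With uniform weights this reduces exactly to the unweighted criterion, which is the sanity check you would want before wiring it into a tree learner.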

In the meantime, using neg_z1_loss does it correctly. By the way, there is a small bug in that implementation, so you need to use the latest github master.