How to set weights for multiclass classification with imbalanced data in xgboost?

Time: 2017-08-22 07:15:57

Tags: xgboost multiclass-classification

I know you can set scale_pos_weight for an imbalanced binary dataset. But how do you handle the multiclass case with imbalanced data? I have gone through https://datascience.stackexchange.com/questions/16342/unbalanced-multiclass-data-with-xgboost/18823 but I don't quite understand how to set the weight parameter in DMatrix.

Could anyone explain this in detail?

1 answer:

Answer 0: (score: 0)

For imbalanced datasets I passed per-sample weights to Xgboost, where the weights are an array assigning each sample a weight according to the class it belongs to.

import numpy as np

def CreateBalancedSampleWeights(y_train, largest_class_weight_coef):
    classes = np.unique(y_train, axis=0)
    classes.sort()
    class_samples = np.bincount(y_train)
    total_samples = class_samples.sum()
    n_classes = len(class_samples)
    # Inverse-frequency weight for each class: rare classes get larger weights
    weights = total_samples / (n_classes * class_samples * 1.0)
    class_weight_dict = {key: value for (key, value) in zip(classes, weights)}
    # Scale one class's weight by the occurrence rate of the most frequent class
    class_weight_dict[classes[1]] = (class_weight_dict[classes[1]]
                                     * largest_class_weight_coef)
    # Map each training label to its class weight
    sample_weights = [class_weight_dict[y] for y in y_train]
    return sample_weights

Just pass in the target column and the occurrence rate of the most frequent class (if the most frequent class has 75 out of 100 samples, that is 0.75):

largest_class_weight_coef = max(df['Category'].value_counts().values) / df.shape[0]

# pass y_train as a numpy array
weight = CreateBalancedSampleWeights(y_train, largest_class_weight_coef)

# And then use it like this: the sklearn-style XGBClassifier takes
# per-sample weights through fit's sample_weight argument, not the constructor
xg = XGBClassifier(n_estimators=1000, max_depth=20)
xg.fit(X_train, y_train, sample_weight=weight)

That's it :)
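For intuition, here is a minimal, self-contained sketch (numpy only, synthetic labels) of the inverse-frequency weighting that the function above performs; the labels and counts are made up for illustration, and the xgboost native-API usage at the end is shown as a comment since it assumes xgboost is installed:

```python
import numpy as np

# Synthetic imbalanced labels: class 0 appears 6 times, classes 1 and 2 twice each
y = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2, 2])

class_counts = np.bincount(y)            # array([6, 2, 2])
n_classes = len(class_counts)

# Inverse-frequency weights: rare classes receive larger weights
class_weights = y.size / (n_classes * class_counts.astype(float))
sample_weights = class_weights[y]        # one weight per training sample

# Each class now contributes equally to the weighted total:
# sample_weights[y == k].sum() is the same for every class k
for k in range(n_classes):
    print(k, sample_weights[y == k].sum())

# With xgboost's native API, the same array would go into DMatrix:
#   dtrain = xgb.DMatrix(X, label=y, weight=sample_weights)
```

This is the core idea the answer's function implements before applying the extra largest_class_weight_coef scaling.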