Cannot reproduce xgb.cv cross-validation results

Asked: 2017-04-06 14:31:56

Tags: python machine-learning classification xgboost

I am using Python 3.5 and the Python implementation of XGBoost, version 0.6.

I have built a forward feature selection routine in Python that iteratively builds the best feature set, i.e. the set yielding the best score (the metric here is the binary classification error).

On my dataset, using the xgb.cv routine, I can get the error rate down to about 0.21 by increasing max_depth (of the trees) to 40...

But if I run a custom cross-validation with the same XGBoost parameters, the same folds, the same metric and the same dataset, I reach a best score of 0.70 with a max_depth of 4... and if I use the optimal max_depth obtained from my xgb.cv routine, the score drops to 0.65... I just cannot understand what is happening...

My best guess is that xgb.cv uses different folds (i.e. shuffles the data before partitioning it), but I thought I was submitting my folds as an input to xgb.cv (with the option shuffle=False)... so it may be something entirely different...
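One quick sanity check for this guess, a minimal sketch reusing the params, dtrain and k_fold objects defined below, is to run xgb.cv twice on identical inputs and compare the two result frames:

# If xgb.cv is deterministic for fixed inputs, two identical runs must match exactly
run1 = xgb.cv(params, dtrain, 30, folds=k_fold, seed=0)
run2 = xgb.cv(params, dtrain, 30, folds=k_fold, seed=0)
print(run1.equals(run2))  # False would point at shuffling / internal random state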

Here is the code of the forward feature selection (using xgb.cv):

import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import KFold

def Forward_Feature_Selection(train, y_train, params, num_round=30, threshold=0, initial_score=0.5, to_exclude=[], nfold=5):
    k_fold = KFold(n_splits=13)   # note: hard-coded to 13 splits; the nfold argument is never used
    selected_features = []
    gain = threshold + 1
    previous_best_score = initial_score
    train = train.drop(train.columns[to_exclude], axis=1)  # df.columns is a zero-based pd.Index
    features = train.columns.values
    selected = np.zeros(len(features))
    scores = np.zeros(len(features))
    while gain > threshold:    # add-a-feature loop
        for i in range(len(features)):
            if selected[i] == 0:   # consider only features not yet selected
                selected_features.append(features[i])
                new_train = train[selected_features]
                selected_features.remove(features[i])
                dtrain = xgb.DMatrix(new_train, y_train, missing=None)
                if i % 10 == 0:
                    print("Launching XGBoost for feature " + str(i))
                # when folds is supplied, xgb.cv takes the splits from it (nfold and shuffle are then not used)
                xgb_cv = xgb.cv(params, dtrain, num_round, nfold=13, folds=k_fold, shuffle=False)
                if params['objective'] == 'binary:logistic':
                    scores[i] = xgb_cv["test-error-mean"].iloc[-1]   # classification
                else:
                    scores[i] = xgb_cv["test-rmse-mean"].iloc[-1]    # regression
            else:
                scores[i] = initial_score    # discard already selected variables from the candidates
        best = np.argmin(scores)
        gain = previous_best_score - scores[best]
        if gain > 0:
            previous_best_score = scores[best]
            selected_features.append(features[best])
            selected[best] = 1

        print("Adding feature: " + features[best] + " improves the score by " + str(gain) + ". Best score is now: " + str(previous_best_score))
    return (selected_features, previous_best_score)

And here is my "custom" cross-validation:

mean_error_rate = 0
for train, test in k_fold.split(ds):
    dtrain = xgb.DMatrix(ds.iloc[train], dc.iloc[train]["bin_spread"], missing=None)
    gbm = xgb.train(params, dtrain, 30)
    dtest = xgb.DMatrix(ds.iloc[test], dc.iloc[test]["bin_spread"], missing=None)
    res.loc[test, "pred"] = gbm.predict(dtest)

    # reg is a scikit-learn (lasso) regressor defined elsewhere, fitted here for comparison
    cv_reg = reg.fit(ds.iloc[train], dc.iloc[train]["bin_spread"])
    res.loc[test, "lasso"] = cv_reg.predict(ds.iloc[test])

    res.loc[test, "y_xgb"] = res.loc[test, "pred"] > 0.5
    res.loc[test, "xgb_right"] = (res.loc[test, "y_xgb"] == res.loc[test, "bin_spread"])
    # note: xgb_right flags *correct* predictions, so this is an accuracy percentage, not an error rate
    print(str(100 * np.sum(res.loc[test, "xgb_right"]) / (N / 13)))
    mean_error_rate += 100 * (np.sum(res.loc[test, "xgb_right"]) / (N / 13))
print("mean_error_rate is : " + str(mean_error_rate / 13))

with the following parameters:

params = {"objective": "binary:logistic", 
          "booster":"gbtree",
          "max_depth":4, 
          "eval_metric" : "error",
          "eta" : 0.15}
res = pd.DataFrame(dc["bin_spread"]) 
k_fold = KFold(n_splits=13)
N = dc.shape[0]
num_trees = 30

and finally the call to my forward feature selection:

selfeat = Forward_Feature_Selection(dc, 
                                    dc["bin_spread"], 
                                    params, 
                                    num_round = num_trees,
                                    threshold = 0,
                                    initial_score=999,
                                    to_exclude = [0,1,5,30,31],
                                    nfold = 13)

Any help in understanding what is going on would be greatly appreciated! Thanks in advance for any hint!

1 Answer:

Answer 0 (score: 1)

This is normal. I have experienced the same thing. First, KFold produces different splits on each run: you have specified folds to XGBoost, but KFold does not split consistently, which is normal. Next, the initial state of the model differs from run to run. There is also XGBoost's internal random state, which may cause this as well; try changing the eval metric to see whether the variance decreases. If a particular metric fits your needs, try averaging the best parameters and using that as your optimal parameter.
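To make the two routines directly comparable, one option is to materialize a single set of splits and a fixed seed, and feed both from them. A minimal sketch under that assumption (reusing the ds/dc frames and params from the question; shared_folds is a name introduced here, and passing a list of (train, test) index pairs as folds is accepted by recent xgboost versions, while 0.6 expects a KFold instance):

import xgboost as xgb
from sklearn.model_selection import KFold

# One fixed, materialized set of splits that both routines share.
shared_folds = list(KFold(n_splits=13, shuffle=False).split(ds))

# Pin XGBoost's own randomness too: 'seed' fixes the sampling RNG
# (subsample/colsample_* default to 1.0, so the trees are then deterministic).
params = {"objective": "binary:logistic",
          "booster": "gbtree",
          "max_depth": 4,
          "eval_metric": "error",
          "eta": 0.15,
          "seed": 0}

dtrain = xgb.DMatrix(ds, label=dc["bin_spread"])
cv_res = xgb.cv(params, dtrain, num_boost_round=30, folds=shared_folds, seed=0)

# The custom loop then iterates over the exact same index pairs:
for train_idx, test_idx in shared_folds:
    fold_train = xgb.DMatrix(ds.iloc[train_idx], label=dc.iloc[train_idx]["bin_spread"])
    fold_test = xgb.DMatrix(ds.iloc[test_idx], label=dc.iloc[test_idx]["bin_spread"])
    gbm = xgb.train(params, fold_train, 30)
    preds = gbm.predict(fold_test)

If the two routines still disagree with identical folds and a fixed seed, the remaining gap must come from how the scores themselves are computed rather than from the splits.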