Why can't I match LGBM's cv score?

Time: 2019-02-15 12:53:28

Tags: python machine-learning scikit-learn cross-validation lightgbm

I can't manually reproduce LGBM's cv score.

Here is an MCVE:

from sklearn.datasets import load_breast_cancer
import pandas as pd
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import roc_auc_score
import lightgbm as lgb
import numpy as np

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

folds = KFold(5, random_state=42)

params = {'random_state': 42}

results = lgb.cv(params, lgb.Dataset(X_train, y_train), folds=folds, num_boost_round=1000, early_stopping_rounds=100, metrics=['auc'])
print('LGBM\'s cv score: ', results['auc-mean'][-1])

clf = lgb.LGBMClassifier(**params, n_estimators=len(results['auc-mean']))

val_scores = []
for train_idx, val_idx in folds.split(X_train):
    clf.fit(X_train.iloc[train_idx], y_train.iloc[train_idx])
    val_scores.append(roc_auc_score(y_train.iloc[val_idx], clf.predict_proba(X_train.iloc[val_idx])[:,1]))
print('Manual score: ', np.mean(np.array(val_scores)))

I expected the two CV scores to be identical: I set the random seed and did exactly the same thing. Yet they differ.

This is the output I get:

LGBM's cv score:  0.9851513530737058
Manual score:  0.9903622177441328

Why? Am I not using LGBM's cv module correctly?

1 Answer:

Answer 0: (score: 3)

You are splitting X into X_train and X_test. For cv, X_train is split into 5 folds, while manually you split X into 5 folds. That is, you are using more data points manually than with cv.

To fix this, change results = lgb.cv(params, lgb.Dataset(X_train, y_train), ...) to results = lgb.cv(params, lgb.Dataset(X, y), ...).
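To make the size difference concrete: scikit-learn's train_test_split holds out 25% of the rows by default, so X_train sees only about three quarters of the data. A quick check, assuming the same seed as in the question:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
import pandas as pd

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)

# Default test_size is 0.25, so a quarter of the rows are held out.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

print(len(X), len(X_train), len(X_test))  # 569 426 143
```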

Also, there may be different parameters at play. For example, the number of threads LightGBM uses changes the result. During cv the models are fitted in parallel, so the number of threads may differ from the one used in your manual sequential training.
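One way to rule out the threading effect is to pin the thread count in the shared params dict, so that both cv and the manual loop run with the same parallelism. A sketch (num_threads is LightGBM's parameter for the thread count; nthread and n_jobs are aliases):

```python
# Pin the thread count so cv and the manual loop use the same parallelism.
# With a single thread, floating-point reductions happen in a fixed order,
# which removes one source of run-to-run variation (at some speed cost).
params = {
    'objective': 'binary',
    'metric': 'auc',
    'random_state': 42,
    'num_threads': 1,
}
```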

Edit (after the first correction):

You can get identical results between the manual split and cv with the following code:

from sklearn.datasets import load_breast_cancer
import pandas as pd
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import roc_auc_score
import lightgbm as lgb
import numpy as np

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

folds = KFold(5, random_state=42)


params = {
        'task': 'train',
        'boosting_type': 'gbdt',
        'objective':'binary',
        'metric':'auc',
        }

data_all = lgb.Dataset(X_train, y_train)

results = lgb.cv(params, data_all, 
                 folds=folds.split(X_train), 
                 num_boost_round=1000, 
                 early_stopping_rounds=100)

print('LGBM\'s cv score: ', results['auc-mean'][-1])

val_scores = []
for train_idx, val_idx in folds.split(X_train):

    # reference=data_all makes this Dataset reuse the bin edges
    # computed on the full training data
    data_trd = lgb.Dataset(X_train.iloc[train_idx], 
                           y_train.iloc[train_idx], 
                           reference=data_all)

    gbm = lgb.train(params,
                    data_trd,
                    num_boost_round=len(results['auc-mean']),
                    verbose_eval=100)

    val_scores.append(roc_auc_score(y_train.iloc[val_idx], gbm.predict(X_train.iloc[val_idx])))
print('Manual score: ', np.mean(np.array(val_scores)))

which yields:

LGBM's cv score:  0.9914524426410262
Manual score:  0.9914524426410262

What makes the difference is the line reference=data_all. During cv, the binning of the features is constructed from the whole dataset (X_train); see the lightgbm docs. In your manual for loop, the bins were instead built on each training subset (X_train.iloc[train_idx]). By passing a reference to the Dataset containing all the data, LightGBM reuses the same binning and produces identical results.
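The effect can be illustrated with a toy numpy sketch. This uses plain quantile edges rather than LightGBM's actual histogram algorithm, and the sizes 569/426 simply mirror the split above; the point is only that edges computed on a subset of rows differ from edges computed on all rows:

```python
import numpy as np

rng = np.random.default_rng(42)
feature = rng.normal(size=569)  # stand-in for one feature column

grid = np.linspace(0, 1, 11)
# Bin edges computed on all rows vs. only the first 426 "training" rows:
full_edges = np.quantile(feature, grid)
subset_edges = np.quantile(feature[:426], grid)

# Different rows produce different edges, hence different histograms.
print(bool(np.allclose(full_edges, subset_edges)))  # False
```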