How to save every prediction from each GridSearchCV iteration with LightGBM

Asked: 2019-03-29 19:00:17

Tags: python machine-learning grid-search lightgbm gridsearchcv

I am trying to use GridSearchCV to tune the parameters of a LightGBM model, but I am not sure how to save the predictions from each GridSearchCV iteration.
Sadly, I only know how to save the results for one specific set of parameters.
Here is the code:

import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

param = {
    'bagging_freq': 5,
    'bagging_fraction': 0.4,
    'boost_from_average':'false',
    'boost': 'gbdt',
    'feature_fraction': 0.05,
    'learning_rate': 0.01,
    'max_depth': -1,  
    'metric':'auc',
    'min_data_in_leaf': 80,
    'min_sum_hessian_in_leaf': 10.0,
    'num_leaves': 13,
    'num_threads': 8,
    'tree_learner': 'serial',
    'objective': 'binary', 
    'verbosity': 1
}
features = [c for c in train_df.columns if c not in ['ID_code', 'target']]
target = train_df['target']
folds = StratifiedKFold(n_splits=10, shuffle=False, random_state=44000)
oof = np.zeros(len(train_df))
predictions = np.zeros(len(test_df))

for fold_, (trn_idx, val_idx) in enumerate(folds.split(train_df.values, target.values)):
    print("Fold {}".format(fold_))
    trn_data = lgb.Dataset(train_df.iloc[trn_idx][features], label=target.iloc[trn_idx])
    val_data = lgb.Dataset(train_df.iloc[val_idx][features], label=target.iloc[val_idx])    
    num_round = 1000000
    clf = lgb.train(param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=1000, early_stopping_rounds = 3000)
    oof[val_idx] = clf.predict(train_df.iloc[val_idx][features], num_iteration=clf.best_iteration)        
    predictions += clf.predict(test_df[features], num_iteration=clf.best_iteration) / folds.n_splits

print("CV score: {:<8.5f}".format(roc_auc_score(target, oof)))
print('Saving the Result File')
res = pd.DataFrame({"ID_code": test_df.ID_code.values})
res["target"] = predictions
res.to_csv('result_10fold{}.csv'.format(num_sub), index=False)
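
The out-of-fold predictions in oof can be persisted in the same way as the test predictions; a minimal sketch reusing the variables above (oof_df and the file name are just illustrative):

# save the out-of-fold predictions next to the true labels for later inspection
oof_df = pd.DataFrame({"ID_code": train_df.ID_code.values,
                       "target": target.values,
                       "oof_pred": oof})
oof_df.to_csv('oof_10fold.csv', index=False)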

Here is the data:

train_df.head(3)

         ID_code    target    var_0    var_1    ...  var_199
0        train_0     0        8.9255   -6.7863       -9.2834     
1        train_1     1        11.5006  -4.1473        7.0433  
2        train_2     0        8.6093   -2.7457       -9.0837 


test_df.head(3)

         ID_code    var_0    var_1    ... var_199
0        test_0     9.4292   11.4327      -2.3805
1        test_1     5.0930   11.4607      -9.2834
2        test_2     7.8928   10.5825      -9.0837

I want to save the predictions from every GridSearchCV iteration. I have searched several similar questions as well as other material on using GridSearchCV with LightGBM,
but I still cannot work out the code.
So, if you don't mind, could anyone help me and point me to some relevant tutorials?
Sincere thanks.

1 Answer:

Answer 0 (score: 2)

You can use ParameterGrid or ParameterSampler from sklearn for parameter sampling; they correspond to GridSearchCV and RandomizedSearchCV, respectively. For example,

def train_lgb(num_folds=11, param=param_original):
    ...
    return predictions, sub
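
The body of train_lgb is left out above; one way to fill it in is to reuse the fold loop from the question. A rough sketch, assuming train_df, test_df, features and target are defined as in the question and param_original is the base parameter dictionary:

def train_lgb(num_folds=11, param=param_original):
    # K-fold training loop, essentially the one from the question
    # (random_state is omitted since it only applies when shuffle=True)
    folds = StratifiedKFold(n_splits=num_folds, shuffle=False)
    oof = np.zeros(len(train_df))
    predictions = np.zeros(len(test_df))
    for fold_, (trn_idx, val_idx) in enumerate(folds.split(train_df.values, target.values)):
        trn_data = lgb.Dataset(train_df.iloc[trn_idx][features], label=target.iloc[trn_idx])
        val_data = lgb.Dataset(train_df.iloc[val_idx][features], label=target.iloc[val_idx])
        clf = lgb.train(param, trn_data, 1000000, valid_sets=[trn_data, val_data],
                        verbose_eval=1000, early_stopping_rounds=3000)
        oof[val_idx] = clf.predict(train_df.iloc[val_idx][features], num_iteration=clf.best_iteration)
        predictions += clf.predict(test_df[features], num_iteration=clf.best_iteration) / folds.n_splits
    print("CV score: {:<8.5f}".format(roc_auc_score(target, oof)))
    # sub is the ready-to-save submission DataFrame
    sub = pd.DataFrame({"ID_code": test_df.ID_code.values, "target": predictions})
    return predictions, sub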

params = {
# your base parameters
}

# define the grid for parameter sampling
from sklearn.model_selection import ParameterGrid
par_grid = ParameterGrid([{'bagging_freq':[6,7]},
                          {'num_leaves': [13,15]}
                         ])

prediction_list = {}
sub_list = {}

import copy
for i, ps in enumerate(par_grid):
    print('This is param{}'.format(i))
    # copy the base params dictionary and update with sampled values
    val = copy.deepcopy(params)
    val.update(ps)
    # main training loop
    prediction, sub = train_lgb(param=val)
    # store the results for this parameter combination, keyed by its index in the grid
    prediction_list.update({i: prediction})
    sub_list.update({i: sub})
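
To actually keep every prediction on disk, each entry of prediction_list (or sub_list) can be written to its own file, mirroring the to_csv call from the question. A minimal sketch, assuming test_df with its ID_code column is still available and with illustrative file names:

for i, prediction in prediction_list.items():
    # one submission-style file per parameter combination
    res = pd.DataFrame({"ID_code": test_df.ID_code.values,
                        "target": prediction})
    res.to_csv('result_10fold_param{}.csv'.format(i), index=False)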

EDIT: By the way, I realised that I was recently looking into the same problem and learning how to approach it with some ML tools. I have created a page outlining how to do this with MLflow: https://mlisovyi.github.io/KaggleSantander2019/ (as well as the associated github page with the actual code). Note that, by coincidence, it is based on the same data you are working on :). I hope it will be useful.
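
For completeness on the ParameterSampler option mentioned at the top of the answer: it can be swapped in for ParameterGrid and iterated over with exactly the same loop. A small sketch, where the value ranges, n_iter and the key prefix are chosen only for illustration:

from sklearn.model_selection import ParameterSampler

# draw 4 random parameter combinations instead of the full grid
par_sampler = ParameterSampler({'bagging_freq': [5, 6, 7],
                                'num_leaves': [10, 13, 15, 20]},
                               n_iter=4, random_state=44000)

for i, ps in enumerate(par_sampler):
    val = copy.deepcopy(params)
    val.update(ps)
    prediction, sub = train_lgb(param=val)
    prediction_list.update({'random_{}'.format(i): prediction})
    sub_list.update({'random_{}'.format(i): sub})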