How to continue training an XGBoost model on pandas chunks?

Time: 2019-04-17 07:59:08

Tags: python pandas machine-learning xgboost

I have a large dataset that cannot be loaded into a single pandas DataFrame all at once. I believe I can read the file incrementally in smaller pieces using the chunksize parameter.
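For reference, this is the chunked-reading pattern I mean, reduced to a minimal sketch (the file name and the per-chunk work here are placeholders):

import pandas as pd

# Each iteration yields a DataFrame of at most `chunksize` rows,
# so the full file is never held in memory at once.
for chunk in pd.read_csv('train.csv', chunksize=20000):
    print(chunk.shape)  # placeholder for per-chunk processing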

Now I want to train an XGBoost model on the first chunk and then continue training it with each new chunk on every iteration. I tried passing the previously trained model through the xgb_model parameter so that training resumes from it on the newer chunk. However, my implementation does not seem to work, because the score does not improve on new chunks. Here is the code I wrote. Can someone help me implement this correctly?

import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

i = 1
model_xgb = None

# train_file_name points to the training CSV; read it in chunks of 20,000 rows.
for clsf_data in pd.read_csv(train_file_name, iterator=True, chunksize=20000):

    # Drop the identifier column; it carries no predictive signal.
    clsf_data.drop(['ID_code'], axis=1, inplace=True)

    X = clsf_data.loc[:, clsf_data.columns != 'target']
    y = clsf_data.loc[:, clsf_data.columns == 'target']

    # Hold out 20% for testing, then 20% of the remainder for validation.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234)
    X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=1234)

    tr_data = xgb.DMatrix(X_train, y_train)
    va_data = xgb.DMatrix(X_val, y_val)

    evalset = [(tr_data, 'train'), (va_data, 'valid')]

    # Pass the booster from the previous chunk via xgb_model so that
    # training continues from it instead of starting from scratch.
    model_xgb = xgb.train(params={'objective': 'binary:logistic',
                                  'eval_metric': 'auc'},
                          dtrain=tr_data,
                          num_boost_round=2000,
                          evals=evalset,
                          maximize=False,
                          xgb_model=model_xgb,
                          early_stopping_rounds=100,
                          verbose_eval=0)

    print("Chunk: {}\nShape: {}\nBest Score: {}\n".format(i, clsf_data.shape, model_xgb.best_score))
    i += 1
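For clarity, this is the continuation mechanism I am relying on, stripped down to two training calls (dm_chunk1 and dm_chunk2 are hypothetical DMatrix objects built from two consecutive chunks):

import xgboost as xgb

params = {'objective': 'binary:logistic', 'eval_metric': 'auc'}

# Train an initial booster on the first chunk.
booster = xgb.train(params, dm_chunk1, num_boost_round=100)

# Continue training on the next chunk: xgb_model accepts the existing
# Booster (or a path to a saved model) and appends more trees to it.
booster = xgb.train(params, dm_chunk2, num_boost_round=100, xgb_model=booster)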

0 Answers:

No answers yet.