How to use tqdm in cross-validation when using folds.split(train.values, target.values)

Time: 2019-06-05 11:18:46

Tags: python jupyter-notebook tqdm

I am running cross-validation on my LightGBM model as shown below,
and I want to use tqdm in the for loop so that I can monitor progress.

This is my current code:

    folds = KFold(n_splits=num_folds, random_state=2319)
    oof = np.zeros(len(train))
    getVal = np.zeros(len(train))
    predictions = np.zeros(len(target))
    feature_importance_df = pd.DataFrame()

    print('Light GBM Model')
    for fold_, (trn_idx, val_idx) in enumerate(folds.split(train.values, target.values)):
        X_train, y_train = train.iloc[trn_idx][features], target.iloc[trn_idx]
        X_valid, y_valid = train.iloc[val_idx][features], target.iloc[val_idx]

        print("Fold idx:{}".format(fold_ + 1))
        trn_data = lgb.Dataset(X_train, label=y_train, categorical_feature=categorical_features)
        val_data = lgb.Dataset(X_valid, label=y_valid, categorical_feature=categorical_features)

        clf = lgb.train(param, trn_data, 1000000, valid_sets=[trn_data, val_data],
                        verbose_eval=5000, early_stopping_rounds=4000)
        oof[val_idx] = clf.predict(train.iloc[val_idx][features], num_iteration=clf.best_iteration)
        getVal[val_idx] += clf.predict(train.iloc[val_idx][features], num_iteration=clf.best_iteration) / folds.n_splits

        fold_importance_df = pd.DataFrame()
        fold_importance_df["feature"] = features
        fold_importance_df["importance"] = clf.feature_importance()
        fold_importance_df["fold"] = fold_ + 1
        feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)

        predictions += clf.predict(test[features], num_iteration=clf.best_iteration) / folds.n_splits

    print("CV score: {:<8.5f}".format(roc_auc_score(target, oof)))

I tried tqdm(enumerate(folds.split(train.values, target.values))) and enumerate(tqdm(folds.split(train.values, target.values))), but neither works.
I guess the reason they don't work is that folds.split(train.values, target.values) is a generator and has no length.
But I would like to know how to use tqdm in this situation.
Could someone help me?
Thanks in advance.

1 Answer:

Answer 0 (score: 1)

To show a progress bar over the k-fold iterations (the desc argument is optional):

from tqdm import tqdm
for train, test in tqdm(kfold.split(x, y), total=kfold.get_n_splits(), desc="k-fold"):
   # Your code here

The output will look like this:

k-fold: 100%|██████████| 10/10 [02:26<00:00, 16.44s/it]
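Applied to the question's loop, which also needs the fold index, enumerate can simply wrap the tqdm call. A minimal runnable sketch with dummy arrays standing in for the question's train/target data (note that recent scikit-learn versions require shuffle=True when random_state is set):

```python
import numpy as np
from sklearn.model_selection import KFold
from tqdm import tqdm

# Placeholder data in place of the question's train/target frames
X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)

folds = KFold(n_splits=5, shuffle=True, random_state=2319)

# total= supplies the length that the split() generator cannot report,
# so tqdm can render a proper percentage bar
for fold_, (trn_idx, val_idx) in enumerate(
        tqdm(folds.split(X, y), total=folds.get_n_splits(), desc="k-fold")):
    # train one fold here, e.g. the lgb.train(...) call from the question
    pass
```

The key point is that tqdm wraps the generator while enumerate wraps tqdm; the bar updates once per fold and the fold index remains available as fold_.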