I am running a GBM in H2O Sparkling Water with the code below. Even though I have set seed and score_each_iteration = True, the AUC is different every time I check it.
from h2o.grid.grid_search import H2OGridSearch
from h2o.estimators.gbm import H2OGradientBoostingEstimator
# initialize the estimator
gbm_cov = H2OGradientBoostingEstimator(sample_rate = 0.7,
                                       col_sample_rate = 0.7,
                                       ntrees = 1000,
                                       balance_classes = True,
                                       score_each_iteration = True,
                                       nfolds = 5,
                                       seed = 1234)
# set up hyper parameter search space
gbm_hyper_params = {'learn_rate': [0.01, 0.015, 0.025, 0.05, 0.1],
                    'max_depth': [3, 5, 7, 9, 12],
                    #'sample_rate': [i * 0.1 for i in range(6, 11)],
                    #'col_sample_rate': [i * 0.1 for i in range(6, 11)],
                    #'ntrees': [i * 100 for i in range(1, 11)]
                    }
# define Search criteria
gbm_search_criteria = {'strategy': "RandomDiscrete",
                       'max_models': 10,
                       'max_runtime_secs': 1800,
                       'stopping_metric': eval_metric,
                       'stopping_tolerance': 0.001,
                       'stopping_rounds': 3,
                       'seed': 1
                       }
# build grid search
gbm_grid = H2OGridSearch(model = gbm_cov,
                         hyper_params = gbm_hyper_params,
                         search_criteria = gbm_search_criteria  # we can use "Cartesian" if search space is small
                         )
# train using the grid
gbm_grid.train(x = top_feature, y = y, training_frame = htrain)
Answer 0 (score: 0)
Commenting out 'max_runtime_secs': 1800 solves the reproducibility problem, presumably because a wall-clock limit cuts the search off at a point that depends on machine load, so different runs can build and score different models. I also found one other thing, though I don't know why: if we move the early-stopping settings below from the search criteria into the H2OGradientBoostingEstimator itself, the code runs faster (see the sketch after this snippet).
'stopping_metric': eval_metric,
'stopping_tolerance': 0.001,
'stopping_rounds': 3,
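
A minimal sketch of that rearrangement, assuming the same frames and names used above (htrain, top_feature, y, eval_metric): the early-stopping parameters now sit on the estimator, and max_runtime_secs is dropped from the search criteria so the fixed seeds control the run.

from h2o.grid.grid_search import H2OGridSearch
from h2o.estimators.gbm import H2OGradientBoostingEstimator

# estimator carries the early-stopping settings and the model seed
gbm_cov = H2OGradientBoostingEstimator(sample_rate = 0.7,
                                       col_sample_rate = 0.7,
                                       ntrees = 1000,
                                       balance_classes = True,
                                       score_each_iteration = True,
                                       nfolds = 5,
                                       stopping_metric = eval_metric,
                                       stopping_tolerance = 0.001,
                                       stopping_rounds = 3,
                                       seed = 1234)

gbm_hyper_params = {'learn_rate': [0.01, 0.015, 0.025, 0.05, 0.1],
                    'max_depth': [3, 5, 7, 9, 12]}

# no max_runtime_secs here: the random search is driven only by max_models and the seed
gbm_search_criteria = {'strategy': "RandomDiscrete",
                       'max_models': 10,
                       'seed': 1}

gbm_grid = H2OGridSearch(model = gbm_cov,
                         hyper_params = gbm_hyper_params,
                         search_criteria = gbm_search_criteria)

gbm_grid.train(x = top_feature, y = y, training_frame = htrain)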