How do you view the coefficients or importances of the various base models in a stacked ensemble in H2O? For example, if I have a GBM, a GLM, and an RF, how do I know how important each one is within the stack? Is this possible?
For example, using Python code... here...
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/stacked-ensembles.html
Answer (score: 3)
The Stacked Ensemble algorithm in H2O uses a GLM as the metalearning algorithm, so you can interpret the magnitude of the metalearner GLM's coefficients as a measure of how "important" each base learner is to the ensemble's predictions.
In the simple example from the Stacked Ensemble documentation, we train a 2-model (GBM + RF) ensemble. Here is how you can inspect the coefficients of the metalearner GLM in Python:
import h2o
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator
h2o.init()
# Import a sample binary outcome train/test set into H2O
train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
test = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_test_5k.csv")
# Identify predictors and response
x = train.columns
y = "response"
x.remove(y)
# For binary classification, response should be a factor
train[y] = train[y].asfactor()
test[y] = test[y].asfactor()
# Number of CV folds (to generate level-one data for stacking)
nfolds = 5
# Generate a 2-model ensemble (GBM + RF)
# Train and cross-validate a GBM
my_gbm = H2OGradientBoostingEstimator(distribution="bernoulli",
                                      ntrees=10,
                                      max_depth=3,
                                      min_rows=2,
                                      learn_rate=0.2,
                                      nfolds=nfolds,
                                      fold_assignment="Modulo",
                                      keep_cross_validation_predictions=True,
                                      seed=1)
my_gbm.train(x=x, y=y, training_frame=train)
# Train and cross-validate a RF
my_rf = H2ORandomForestEstimator(ntrees=50,
                                 nfolds=nfolds,
                                 fold_assignment="Modulo",
                                 keep_cross_validation_predictions=True,
                                 seed=1)
my_rf.train(x=x, y=y, training_frame=train)
# Train a stacked ensemble using the GBM and RF above
ensemble = H2OStackedEnsembleEstimator(base_models=[my_gbm.model_id, my_rf.model_id])
ensemble.train(x=x, y=y, training_frame=train)
# Grab the metalearner GLM fit & print normalized coefficients
metafit = h2o.get_model(ensemble.metalearner()['name'])
metafit.coef_norm()
This will print the following:
{u'DRF_model_python_1502159734743_250': 0.6967886117663271,
u'GBM_model_python_1502159734743_1': 0.48518914691349374,
u'Intercept': 0.1466358030144971}
So, in this case, the Random Forest's predictions are contributing more to the ensemble's prediction than the GBM's.
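As a side note, coef_norm() returns the standardized coefficients, which are the values to compare across base learners. If you also want the metalearner's coefficients on the original scale, H2O GLM models expose a coef() method as well (standard GLM API, not part of the original example):
# Metalearner coefficients on the original (non-standardized) scale
metafit.coef()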
If you evaluate the base models on a test set, you can see that the Random Forest performs slightly better than the GBM, so it makes sense that the ensemble weights the RF's predictions slightly above the GBM's (although there will not always be a direct 1-1 correspondence between test set performance and metalearner variable importance like this).
my_gbm.model_performance(test).auc() # 0.7522498803447679
my_rf.model_performance(test).auc() # 0.7698039263004212
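As a quick sanity check (not in the original example), you can also score the stacked ensemble itself on the same test set; with this setup its AUC should typically be at least as good as the best base learner's:
# Evaluate the ensemble's performance on the held-out test set
perf_stack_test = ensemble.model_performance(test)
perf_stack_test.auc()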
There are plans to expose the metalearner as an argument, so that in the future users will be able to use any of H2O's supervised ML algorithms as the metalearner. In that case, you could get the same information by looking at the algorithm's variable importances, since all H2O algorithms compute variable importance.
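Once that argument is exposed, usage might look something like the sketch below. The argument name metalearner_algorithm is an assumption here (check the current H2O docs for the actual API); the rest reuses the models trained above:
# Hypothetical sketch: the same ensemble with a Random Forest metalearner
ensemble_drf = H2OStackedEnsembleEstimator(base_models=[my_gbm.model_id, my_rf.model_id],
                                           metalearner_algorithm="drf")
ensemble_drf.train(x=x, y=y, training_frame=train)
# A tree-based metalearner has no coefficients, so inspect its variable
# importances instead; varimp() is available on all H2O tree-based models
metafit_drf = h2o.get_model(ensemble_drf.metalearner()['name'])
metafit_drf.varimp()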