Maybe I'm missing something that I just can't figure out, and I need your help.
I'm training an xgb model with these lines:
XGB = xgb.XGBClassifier(objective ='multi:softprob',
learning_rate = 0.3,
max_depth = 1,
n_estimators = 3,
n_jobs = 5)
clf = XGB.fit(X_train, Y_train)
When I print the XGB model, it tells me that I did indeed train 3 trees:
XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=1, gamma=0,
learning_rate=0.3, max_delta_step=0, max_depth=1,
min_child_weight=1, missing=None, n_estimators=3, n_jobs=5,
nthread=None, objective='multi:softprob', random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
silent=None, subsample=1, verbosity=1)
But when I run this line to look at the feature split points,
dump_list = clf.get_booster().get_dump()
I get 9 lines:
['0:[f0<0.942677855] yes=1,no=2,missing=1\n\t1:leaf=-0.103566267\n\t2:leaf=0.43779847\n',
'0:[f0<0.954393268] yes=1,no=2,missing=1\n\t1:leaf=0.200365365\n\t2:leaf=-0.216199294\n',
'0:[f13<0.651464462] yes=1,no=2,missing=1\n\t1:leaf=0.276390254\n\t2:leaf=-0.219127133\n',
'0:[f0<0.917573214] yes=1,no=2,missing=1\n\t1:leaf=-0.110939182\n\t2:leaf=0.292450339\n',
'0:[f0<0.966108799] yes=1,no=2,missing=1\n\t1:leaf=0.135595635\n\t2:leaf=-0.194633663\n',
'0:[f11<0.6690377] yes=1,no=2,missing=1\n\t1:leaf=0.202725366\n\t2:leaf=-0.196870551\n',
'0:[f0<0.899163187] yes=1,no=2,missing=1\n\t1:leaf=-0.107093893\n\t2:leaf=0.230380043\n',
'0:[f0<0.974476993] yes=1,no=2,missing=1\n\t1:leaf=0.10007298\n\t2:leaf=-0.180789232\n',
'0:[f13<0.588702917] yes=1,no=2,missing=1\n\t1:leaf=0.235898077\n\t2:leaf=-0.177840069\n']
Does this mean that 9 trees were fitted?
I noticed that the number of lines I get here correlates with the number of classes in the dataset. Here I'm using a dataset with 3 classes; when I use a dataset with 2 classes, I get 6 lines. That suggests a relationship between the number of classes and the number of fitted trees, which doesn't really make sense to me. So my other question is: how should the output of clf.get_booster().get_dump() be interpreted?
Thanks.
Answer 0 (score: 0)
The format of `get_dump()` is a bit confusing, but I believe (though I'm not 100% sure) your trees can be read as:
tree_1 = ['0:[f0<0.942677855] yes=1,no=2,missing=1\n\t1:leaf=-0.103566267\n\t2:leaf=0.43779847\n',
'0:[f0<0.954393268] yes=1,no=2,missing=1\n\t1:leaf=0.200365365\n\t2:leaf=-0.216199294\n',
'0:[f13<0.651464462] yes=1,no=2,missing=1\n\t1:leaf=0.276390254\n\t2:leaf=-0.219127133\n',]
tree_2 = ['0:[f0<0.917573214] yes=1,no=2,missing=1\n\t1:leaf=-0.110939182\n\t2:leaf=0.292450339\n',
'0:[f0<0.966108799] yes=1,no=2,missing=1\n\t1:leaf=0.135595635\n\t2:leaf=-0.194633663\n',
'0:[f11<0.6690377] yes=1,no=2,missing=1\n\t1:leaf=0.202725366\n\t2:leaf=-0.196870551\n',]
tree_3 = ['0:[f0<0.899163187] yes=1,no=2,missing=1\n\t1:leaf=-0.107093893\n\t2:leaf=0.230380043\n',
'0:[f0<0.974476993] yes=1,no=2,missing=1\n\t1:leaf=0.10007298\n\t2:leaf=-0.180789232\n',
'0:[f13<0.588702917] yes=1,no=2,missing=1\n\t1:leaf=0.235898077\n\t2:leaf=-0.177840069\n']
Sorry, I'm on my phone; I'll try to clean this up as soon as I can.
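A minimal sketch of how that grouping could be reproduced from the dump, assuming the entries are ordered round by round with one tree per class inside each boosting round (the names num_class and rounds are just illustrative):
num_class = 3                        # classes in Y_train
n_rounds = XGB.n_estimators          # 3 boosting rounds in this example
dump_list = clf.get_booster().get_dump()
# group the flat dump into one list of trees per boosting round
rounds = [dump_list[i * num_class:(i + 1) * num_class]
          for i in range(n_rounds)]
# rounds[0] corresponds to tree_1 above, rounds[1] to tree_2, rounds[2] to tree_3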
Answer 1 (score: 0)
Ivan Libedinsky answered part of my question. The answer to the rest, given in the comments, is that xgboost grows its trees one-vs-rest, which is why every iteration of the algorithm appears to fit 3 trees: my dataset has 3 classes.
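To make that concrete, here is a quick sketch of the relationship, assuming the dump lists one tree per class in every boosting round, with the classes in a fixed order inside each round:
dump_list = clf.get_booster().get_dump()
num_class = len(set(Y_train))        # 3 classes in this dataset
print(len(dump_list))                # 9 == n_estimators * num_class
# under that ordering assumption, the trees for class k sit at a stride of num_class
k = 0
class_k_trees = dump_list[k::num_class]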