Why does xgboost's node gain output differ from my manual calculation?

Date: 2021-03-08 09:06:44

Tags: python machine-learning xgboost

We can get the xgboost tree structure from trees_to_dataframe():

import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.datasets import load_boston

data = load_boston()

X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)

model = xgb.XGBRegressor(random_state=1,
                         n_estimators=1,  # 只有一棵树
                         max_depth=2,
                         learning_rate=0.1
                         )
model.fit(X, y)

tree_frame = model._Booster.trees_to_dataframe()
tree_frame

[image: tree_frame output showing the fitted tree's nodes, with the Gain column for each split]

According to the SO thread How is xgboost quality calculated?, the gain should be computed as:

Gain = G_L^2 / (H_L + lambda) + G_R^2 / (H_R + lambda) - (G_L + G_R)^2 / (H_L + H_R + lambda)

where G_L, G_R are the sums of per-sample gradients and H_L, H_R the sums of per-sample hessians in the left and right child, and lambda is reg_lambda.

However, it does not match the result of this code:

def mse_obj(preds, labels):
    # Squared-error objective: per-sample gradient and hessian.
    # (The sign convention labels - preds is harmless here, since the
    # gain formula only uses squared gradient sums.)
    grad = labels - preds
    hess = np.ones_like(labels)
    return grad, hess

# Use the target mean as the initial prediction
Gain, Hessian = mse_obj(y.mean(), y)

L = X[tree_frame['Feature'][0]] < tree_frame['Split'][0]
R = X[tree_frame['Feature'][0]] >= tree_frame['Split'][0]

GL = Gain[L].sum()
GR = Gain[R].sum()
HL = Hessian[L].sum()
HR = Hessian[R].sum()

reg_lambda = 1.0
gain = (GL**2/(HL+reg_lambda)+GR**2/(HR+reg_lambda)-(GL+GR)**2/(HL+HR+reg_lambda))
gain # 18817.811191871013
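The same gain expression is repeated for each split below, so it can be factored into a small helper. This is a sketch; `split_gain` is a name introduced here, not part of xgboost:

```python
import numpy as np

def split_gain(g, h, left, right, reg_lambda=1.0):
    """Structure-score gain for a split.

    g, h        : per-sample gradient and hessian arrays
    left, right : boolean masks selecting the samples in each child
    """
    GL, GR = g[left].sum(), g[right].sum()
    HL, HR = h[left].sum(), h[right].sum()
    return (GL**2 / (HL + reg_lambda)
            + GR**2 / (HR + reg_lambda)
            - (GL + GR)**2 / (HL + HR + reg_lambda))
```

With the masks from above, `split_gain(Gain, Hessian, L, R)` reproduces each manually computed gain.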


L = (X[tree_frame['Feature'][0]] < tree_frame['Split'][0])&((X[tree_frame['Feature'][1]] < tree_frame['Split'][1]))
R = (X[tree_frame['Feature'][0]] < tree_frame['Split'][0])&((X[tree_frame['Feature'][1]] >= tree_frame['Split'][1]))

GL = Gain[L].sum()
GR = Gain[R].sum()
HL = Hessian[L].sum()
HR = Hessian[R].sum()

reg_lambda = 1.0
gain = (GL**2/(HL+reg_lambda)+GR**2/(HR+reg_lambda)-(GL+GR)**2/(HL+HR+reg_lambda))
gain # 7841.627971119211


L = (X[tree_frame['Feature'][0]] > tree_frame['Split'][0])&((X[tree_frame['Feature'][2]] < tree_frame['Split'][2]))
R = (X[tree_frame['Feature'][0]] > tree_frame['Split'][0])&((X[tree_frame['Feature'][2]] >= tree_frame['Split'][2]))

GL = Gain[L].sum()
GR = Gain[R].sum()
HL = Hessian[L].sum()
HR = Hessian[R].sum()

reg_lambda = 1.0
gain = (GL**2/(HL+reg_lambda)+GR**2/(HR+reg_lambda)-(GL+GR)**2/(HL+HR+reg_lambda))
gain # 2634.409414953051

Am I missing something?

1 answer:

Answer 0 (score: 1)

Finally I found my mistake. The default initial prediction defined by base_score is 0.5, and when computing each sample's gradient we should use base_score as the model's prediction before any tree is built.

Gain,Hessian = mse_obj(model.get_params()['base_score'], y)

After this, everything matches.
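To see why the starting prediction matters, here is a self-contained sketch with synthetic data (the data and split are illustrative, not the question's Boston run). Centering the gradients on `y.mean()` makes the total gradient sum to zero, while a fixed `base_score` of 0.5 does not, so the two starting points yield different gains:

```python
import numpy as np

# Synthetic regression data (illustrative only)
rng = np.random.default_rng(0)
y = rng.normal(loc=20.0, scale=5.0, size=100)
x = rng.normal(size=100)
left = x < 0.0   # some candidate split
right = ~left
reg_lambda = 1.0

def root_gain(base_pred):
    # Squared-error gradient/hessian around a constant base prediction,
    # same convention as mse_obj in the question
    g = y - base_pred
    h = np.ones_like(y)
    GL, GR = g[left].sum(), g[right].sum()
    HL, HR = h[left].sum(), h[right].sum()
    return (GL**2 / (HL + reg_lambda)
            + GR**2 / (HR + reg_lambda)
            - (GL + GR)**2 / (HL + HR + reg_lambda))

print(root_gain(y.mean()), root_gain(0.5))  # the two values differ
```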