Understanding xgb.dump

Date: 2016-09-20 17:49:33

Tags: r gradient-descent xgboost

I am trying to understand the intuition of what is going on in the xgb.dump of a binary classification with an interaction depth of 1. Specifically, how the same split (f38 < 2.5) is used twice in a row (lines [2] and [6] of the dump).

The resulting output is shown below:

 xgb.dump(model_2,with.stats=T) 
   [1] "booster[0]" 
   [2] "0:[f38<2.5] yes=1,no=2,missing=1,gain=173.793,cover=6317" 
   [3] "1:leaf=-0.0366182,cover=3279.75" 
   [4] "2:leaf=-0.0466305,cover=3037.25" 
   [5] "booster[1]" 
   [6] "0:[f38<2.5] yes=1,no=2,missing=1,gain=163.887,cover=6314.25" 
 [7] "1:leaf=-0.035532,cover=3278.65" 
   [8] "2:leaf=-0.0452568,cover=3035.6"

Is the difference between the first use of f38 and the second use of f38 simply residual fitting? At first it seemed odd to me, and I am trying to understand exactly what is going on here!

Thanks!

1 Answer:

Answer 0 (score: 1)

Is the difference between the first use of f38 and the second use of f38 simply residual fitting?

Most likely, yes: after the first round it updates the gradients, and in your example it then finds the same feature with the same split point again.
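To make the "residual fitting" concrete, here is a minimal sketch of what gets refit between rounds (it uses the same agaricus data as the reproducible example below; the names bst1, p, grad, and hess are mine). For binary:logistic the first-order gradient is p - y and the second-order gradient is p * (1 - p), and the next tree is fit to these updated values:

require(xgboost)
data(agaricus.train, package = 'xgboost')
dtrain <- xgb.DMatrix(data = agaricus.train$data, label = agaricus.train$label)

# train a single round, then look at the values the next round would fit
bst1 <- xgboost(data = dtrain, max_depth = 1, eta = 1, nrounds = 1,
                objective = "binary:logistic", verbose = 0)
p <- predict(bst1, dtrain)   # predicted probabilities after round 1
y <- getinfo(dtrain, "label")
grad <- p - y                # first-order gradient of the logistic loss
hess <- p * (1 - p)          # second-order gradient (the "cover" in the dump)

The second booster is built on these updated grad/hess values, so nothing stops it from choosing the same feature and split point again; only the leaf values and the gain/cover statistics change.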

Here is a reproducible example.

Note how in the second example, after I lower the learning rate, it finds the same features and the same split points again in all three rounds. In the first example it uses different features across all 3 rounds.

require(xgboost)
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(data = train$data, label=train$label)

# high learning rate (eta = 1): each tree uses a different root split feature (f28, f59, f101)
bst <- xgboost(data = train$data, label = train$label, max_depth = 2, eta = 1, nrounds = 3,nthread = 2, objective = "binary:logistic")
xgb.dump(model = bst)
# [1] "booster[0]"                                 "0:[f28<-9.53674e-07] yes=1,no=2,missing=1" 
# [3] "1:[f55<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=1.71218"                            
# [5] "4:leaf=-1.70044"                            "2:[f108<-9.53674e-07] yes=5,no=6,missing=5"
# [7] "5:leaf=-1.94071"                            "6:leaf=1.85965"                            
# [9] "booster[1]"                                 "0:[f59<-9.53674e-07] yes=1,no=2,missing=1" 
# [11] "1:[f28<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.784718"                           
# [13] "4:leaf=-0.96853"                            "2:leaf=-6.23624"                           
# [15] "booster[2]"                                 "0:[f101<-9.53674e-07] yes=1,no=2,missing=1"
# [17] "1:[f66<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.658725"                           
# [19] "4:leaf=5.77229"                             "2:[f110<-9.53674e-07] yes=5,no=6,missing=5"
# [21] "5:leaf=-0.791407"                           "6:leaf=-9.42142"      

# lower learning rate (eta = .01): every tree reuses the same splits (root f28, then f55 and f108)
bst2 <- xgboost(data = train$data, label = train$label, max_depth = 2, eta = .01, nrounds = 3,nthread = 2, objective = "binary:logistic")
xgb.dump(model = bst2)
# [1] "booster[0]"                                 "0:[f28<-9.53674e-07] yes=1,no=2,missing=1" 
# [3] "1:[f55<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.0171218"                          
# [5] "4:leaf=-0.0170044"                          "2:[f108<-9.53674e-07] yes=5,no=6,missing=5"
# [7] "5:leaf=-0.0194071"                          "6:leaf=0.0185965"                          
# [9] "booster[1]"                                 "0:[f28<-9.53674e-07] yes=1,no=2,missing=1" 
# [11] "1:[f55<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.016952"                           
# [13] "4:leaf=-0.0168371"                          "2:[f108<-9.53674e-07] yes=5,no=6,missing=5"
# [15] "5:leaf=-0.0192151"                          "6:leaf=0.0184251"                          
# [17] "booster[2]"                                 "0:[f28<-9.53674e-07] yes=1,no=2,missing=1" 
# [19] "1:[f55<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.0167863"                          
# [21] "4:leaf=-0.0166737"                          "2:[f108<-9.53674e-07] yes=5,no=6,missing=5"
# [23] "5:leaf=-0.0190286"                          "6:leaf=0.0182581"
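Two things are worth checking in these dumps. First, booster[0] of bst2 has exactly the same structure as booster[0] of bst, and its leaf values are exactly eta times the eta = 1 values (0.01 * 1.71218 = 0.0171218, and so on): the first tree is built from the same initial gradients regardless of eta, and eta only shrinks the leaf weights that get added. Second, the per-tree leaf values accumulate on the margin (log-odds) scale, which you can verify with a short sketch (this reuses bst2 and dtrain from above; outputmargin and ntreelimit are arguments of predict() in the R xgboost package, though newer versions replace ntreelimit with iterationrange):

# margin (log-odds) predictions after 1, 2, and 3 rounds
m1 <- predict(bst2, dtrain, outputmargin = TRUE, ntreelimit = 1)
m2 <- predict(bst2, dtrain, outputmargin = TRUE, ntreelimit = 2)
m3 <- predict(bst2, dtrain, outputmargin = TRUE, ntreelimit = 3)

# the differences recover the leaf values of booster[1] and booster[2] above
head(m2 - m1)
head(m3 - m2)

Each successive tree adds a small correction on top of the previous margins, which is why, with a small eta, the gradients barely change between rounds and the same feature and split point keep winning.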