Problem with the number of training rounds for a neural network with caret

时间:2018-01-30 02:20:05

标签: r r-caret xgboost mxnet ensemble-learning

I am creating a simple ensemble of two models, xgboost and mxnet. The data frame is A3n.df, with the class variable at A3n.df[,1]. Both models run fine on their own and reach believable accuracy. All data is normalized 0-1 and shuffled, and the class variable is converted to a factor (for caret). I have already run a grid search for the best hyperparameters, but I need to include a grid for caretEnsemble.

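For context, the shuffling and 0-1 normalization described above might look roughly like this (a hypothetical sketch; the actual code is not shown in the question, and the factor conversion happens further down):

#shuffle rows and min-max normalize the predictors to 0-1
A3n.df <- A3n.df[sample(nrow(A3n.df)), ]
A3n.df[,-1] <- lapply(A3n.df[,-1], function(col)
  (col - min(col)) / (max(col) - min(col)))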

XGBoost seems to train fine:

#training grid for xgboost
xgb_grid_A4 = expand.grid(
  nrounds = 1200,
  eta = 0.01,
  max_depth = 20,
  gamma = 1,
  colsample_bytree = 0.6,
  min_child_weight = 2,
  subsample = 0.8)

#training grid for mxnet
mxnet_grid_A4 = expand.grid(layer1 = 12,
                            layer2 = 2,
                            layer3 = 0,
                            learningrate = 0.001,
                            dropout = 0,
                            beta1 = 0.9,
                            beta2 = 0.999,
                            activation = 'relu')

#class variable and predictors; yEf must exist before createResample() below
yE = A4n.df[,1]
xE = data.matrix(A4n.df[,-1])
yEf <- ifelse(yE == 0, "no", "yes")  #caret needs valid R names for class levels
yEf <- factor(yEf)

Ensemble_control_A4 <- trainControl(
  method = "cv",
  number = 5,
  verboseIter = TRUE,
  returnData = TRUE,
  returnResamp = "all",
  classProbs = TRUE,
  summaryFunction = twoClassSummary,
  allowParallel = TRUE,
  sampling = "up",
  index=createResample(yEf, 20))

Ensemble_list_A4 <- caretList(
  x=xE,
  y=yEf,
  trControl=Ensemble_control_A4,
  metric="ROC",
  methodList=c("glm", "rpart"),
  tuneList=list(
    xgbA4=caretModelSpec(method="xgbTree", tuneGrid=xgb_grid_A4),
    mxA4=caretModelSpec(method="mxnetAdam", tuneGrid=mxnet_grid_A4)))
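Once caretList returns, the component models can be blended into a single ensemble. A minimal sketch, assuming the caretList call above succeeds (the trControl settings here are illustrative, not from the question):

#greedy glm blend of the component models (caretEnsemble package)
greedy_ensemble_A4 <- caretEnsemble(
  Ensemble_list_A4,
  metric = "ROC",
  trControl = trainControl(number = 5, classProbs = TRUE,
                           summaryFunction = twoClassSummary))
summary(greedy_ensemble_A4)  #per-model and blended ROC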

However, mxnet appears to run for only 10 rounds, when 1,000 or 2,000 would make more sense; a parameter seems to be missing:

+ Resample01: eta=0.01, max_depth=20, gamma=1, colsample_bytree=0.6, min_child_weight=2, subsample=0.8, nrounds=1200 
....
+ Resample20: eta=0.01, max_depth=20, gamma=1, colsample_bytree=0.6, min_child_weight=2, subsample=0.8, nrounds=1200 
- Resample20: eta=0.01, max_depth=20, gamma=1, colsample_bytree=0.6, min_child_weight=2, subsample=0.8, nrounds=1200 
Aggregating results
Selecting tuning parameters
Fitting nrounds = 1200, max_depth = 20, eta = 0.01, gamma = 1, colsample_bytree = 0.6, min_child_weight = 2, subsample = 0.8 on full training set

Warnings (1-40):

+ Resample01: layer1=12, layer2=2, layer3=0, learningrate=0.001, dropout=0, beta1=0.9, beta2=0.999, activation=relu 
Start training with 1 devices
[1] Train-accuracy=0.487651209677419
[2] Train-accuracy=0.624751984126984
[3] Train-accuracy=0.599082341269841
[4] Train-accuracy=0.651909722222222
[5] Train-accuracy=0.662202380952381
[6] Train-accuracy=0.671006944444444
[7] Train-accuracy=0.676463293650794
[8] Train-accuracy=0.683407738095238
[9] Train-accuracy=0.691964285714286
[10] Train-accuracy=0.698660714285714
- Resample01: layer1=12, layer2=2, layer3=0, learningrate=0.001, dropout=0, beta1=0.9, beta2=0.999, activation=relu

+ Resample01: parameter=none 
- Resample01: parameter=none 
+ Resample02: parameter=none 
Aggregating results
Selecting tuning parameters
Fitting cp = 0.0243 on full training set
There were 40 warnings (use warnings() to see them)

I expected mxnet to train for thousands of rounds, with training accuracy eventually reaching 60-70%, as it did in the pre-ensemble models. *On second thought, some of the 20 mxnet runs do reach 60-70%, but it seems inconsistent. Maybe it is working correctly?
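One way to check whether the run-to-run variation is a problem is to inspect the per-resample metrics that caret stores. A minimal sketch, assuming the caretList call above has completed (model names as defined in tuneList):

#per-resample ROC/Sens/Spec for the mxnet component model
Ensemble_list_A4$mxA4$resample

#spread of the resampled metrics across all component models
summary(resamples(Ensemble_list_A4))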

1 answer:

Answer 0 (score: 0):

There is a note in the caret documentation that num.round needs to be set by the user, outside of the tuning grid: http://topepo.github.io/caret/train-models-by-tag.html

Ensemble_list_A2 <- caretList(
  x=xE,
  y=yEf,
  trControl=Ensemble_control_A2,
  metric="ROC",
  methodList=c("glm", "rpart", "bayesglm"),
  tuneList=list(
    xgbA2=caretModelSpec(method="xgbTree", tuneGrid=xgb_grid_A2),
    mxA2=caretModelSpec(method="mxnetAdam", tuneGrid=mxnet_grid_A2,
                        num.round=1500, ctx=mx.gpu()),  #num.round set here, outside the tuning grid
    svmA2=caretModelSpec(method="svmLinear2", tuneGrid=svm_grid_A2),
    rfA2=caretModelSpec(method="rf", tuneGrid=rf_grid_A2)))
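To confirm the parameter is actually picked up, the mxnet model can also be trained on its own before building the full ensemble. A minimal sketch, assuming train() forwards extra arguments such as num.round to the mxnetAdam fit, and reusing xE, yEf, Ensemble_control_A4, and mxnet_grid_A4 from the question:

#standalone fit: with num.round passed through, the log should show
#rounds running up to 1500 instead of stopping at 10
mx_check <- train(x = xE, y = yEf,
                  method = "mxnetAdam",
                  metric = "ROC",
                  trControl = Ensemble_control_A4,
                  tuneGrid = mxnet_grid_A4,
                  num.round = 1500)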