What is the difference between eval_metric and feval in xgboost?

Asked: 2016-10-20 16:08:14

Tags: r xgboost kaggle

What is the difference between feval and eval_metric in xgb.train? Both parameters seem to be used only for evaluation purposes.

This Kaggle thread offers some insight:

https://www.kaggle.com/c/prudential-life-insurance-assessment/forums/t/18473/custom-objective-for-xgboost

2 answers:

Answer 0 (score: 6)

They both do essentially the same thing.

eval_metric can take either a string (which selects one of xgboost's built-in metrics) or a user-defined function.

feval only accepts a function.

As you noted, both serve evaluation purposes.

In the examples below you can see that they are used in very similar ways.
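The snippets below assume that a dtrain xgb.DMatrix and a watchlist already exist. A minimal setup sketch, using the agaricus example data that ships with the xgboost R package (this setup is not part of the original answer), could look like this:

## Setup sketch (assumption): build dtrain/dtest and a watchlist from the agaricus data
library(xgboost)
data(agaricus.train, package = "xgboost")
data(agaricus.test, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest  <- xgb.DMatrix(agaricus.test$data, label = agaricus.test$label)
watchlist <- list(train = dtrain, eval = dtest)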

## A simple xgb.train example:
param <- list(max_depth = 2, eta = 1, silent = 1, nthread = 2, 
              objective = "binary:logistic", eval_metric = "auc")
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist)


## An xgb.train example where custom objective and evaluation metric are used:
logregobj <- function(preds, dtrain) {
  # custom objective: logistic loss, returning gradient and hessian
  labels <- getinfo(dtrain, "label")
  preds <- 1 / (1 + exp(-preds))   # convert raw scores to probabilities
  grad <- preds - labels
  hess <- preds * (1 - preds)
  return(list(grad = grad, hess = hess))
}
evalerror <- function(preds, dtrain) {
  # custom evaluation metric: misclassification error computed on raw scores
  labels <- getinfo(dtrain, "label")
  err <- as.numeric(sum(labels != (preds > 0))) / length(labels)
  return(list(metric = "error", value = err))
}

# These functions could be used by passing them either:
#  as 'objective' and 'eval_metric' parameters in the params list:
param <- list(max_depth = 2, eta = 1, silent = 1, nthread = 2, 
              objective = logregobj, eval_metric = evalerror)
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist)

#  or through the ... arguments:
param <- list(max_depth = 2, eta = 1, silent = 1, nthread = 2)
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
                 objective = logregobj, eval_metric = evalerror)

#  or as dedicated 'obj' and 'feval' parameters of xgb.train:
bst <- xgb.train(param, dtrain, nrounds = 2, watchlist,
                 obj = logregobj, feval = evalerror)

https://github.com/dmlc/xgboost/blob/72451457120ac9d59573cf7580ccd2ad178ef908/R-package/R/xgb.train.R#L176

Answer 1 (score: 4)

feval is for creating your own custom evaluation metric, whereas eval_metric is for the built-in metrics that the xgboost package already implements.
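As a quick sketch of that distinction, reusing dtrain, watchlist, logregobj, and evalerror from the answer above (the object names bst_builtin and bst_custom are only illustrative):

## Built-in metric: eval_metric takes a string naming one of xgboost's own metrics
bst_builtin <- xgb.train(list(objective = "binary:logistic", eval_metric = "error"),
                         dtrain, nrounds = 2, watchlist)

## Custom metric: feval takes a function you wrote yourself
bst_custom <- xgb.train(list(max_depth = 2, eta = 1), dtrain, nrounds = 2, watchlist,
                        obj = logregobj, feval = evalerror)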