Why is a recipe 20x slower than hand-made preprocessing when training a caret model?

Date: 2019-04-25 06:04:29

Tags: r r-caret r-recipes

To build a stacked model, I trained many base models on the same dataset using different preprocessing methods. To keep track of how each design matrix is built, I use the recipes package and defined my own step. However, passing a recipe with the custom step to caret::train turns out to be about 20 times slower than applying the same preprocessing by hand and training the caret model on the resulting design matrix. Any idea why, and how to improve this?

I provide a reproducible example below:

# Loading libraries
packs <- c("tidyverse", "caret", "e1071", "wavelets", "recipes")
InstIfNec <- function(pack) {
    if (!do.call(require, as.list(pack))) {
        do.call(install.packages, as.list(pack))
    }
    do.call(require, as.list(pack))
}
lapply(packs, InstIfNec)

# Getting data
data(biomass)
biomass <- select(biomass,-dataset,-sample)

# Defining the custom pretreatment algorithm:
# row-wise Haar DWT, keeping the level-1 scaling (approximation) coefficients
HaarTransform <- function(DF1) {
    w <- function(k) {
        s1 <- dwt(k, filter = "haar")
        return(s1@V[[1]])
    }
    Smt <- as.matrix(DF1)
    Smt <- t(base::apply(Smt, 1, w))
    return(data.frame(Smt))
}

# Creating the custom step: constructor for the new step object
step_Haar_new <- function(terms, role, trained, skip, columns, id) {
    step(subclass = "Haar", terms = terms, role = role,
         trained = trained, skip = skip, columns = columns, id = id)
}

# User-facing function that adds the step to a recipe
step_Haar <- function(recipe, ..., role = "predictor", trained = FALSE, skip = FALSE,
                      columns = NULL, id = rand_id("Haar")) {
    terms <- ellipse_check(...)
    add_step(recipe, step_Haar_new(terms = terms, role = role, trained = trained,
                                   skip = skip, columns = columns, id = id))
}

# prep() only resolves the column names; nothing is estimated from the data
prep.step_Haar <- function(x, training, info = NULL, ...) {
    col_names <- terms_select(terms = x$terms, info = info)
    step_Haar_new(terms = x$terms, role = x$role, trained = TRUE,
                  skip = x$skip, columns = col_names, id = x$id)
}

# bake() replaces the selected columns with their Haar coefficients
bake.step_Haar <- function(object, new_data, ...) {
    predictors <- HaarTransform(dplyr::select(new_data, object$columns))
    new_data[, object$columns] <- NULL
    bind_cols(new_data, predictors)
}
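
# Optional sanity check (sketch, reusing the biomass data and step_Haar
# defined above): a single prep()/bake() of the custom step outside of
# train() is fast, since prep() estimates nothing from the data
check_baked <- recipe(carbon ~ ., data = biomass) %>%
    step_Haar(all_predictors()) %>%
    prep(training = biomass) %>%
    bake(new_data = biomass)
head(check_baked)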

# Fitting the caret model using the recipe
system.time({
    Haar_recipe <- recipe(carbon ~ ., biomass) %>%
        step_Haar(all_predictors())
    set.seed(1)
    fit <- caret::train(Haar_recipe, data = biomass, method = "svmLinear")
})


# Fitting the caret model with hand-made pretreatment
system.time({
    df <- HaarTransform(biomass[, -1])
    set.seed(1)
    fit2 <- caret::train(x = df, y = biomass[, 1], method = "svmLinear")
})

# Comparing results
fit; fit2

# Both ways give the same result, but the recipe route takes ~20 seconds
# while the hand-made pretreatment takes ~1.5 seconds

Profiling with profvis suggests that the recipes route makes many attempts (27, in fact) at the same work, through separate runs of the try() and eval() functions.
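For reference, the profiling observation can be reproduced roughly as follows (a sketch, reusing Haar_recipe and biomass from the example above; the flame graph shows where the extra time goes):

library(profvis)
profvis({
    set.seed(1)
    caret::train(Haar_recipe, data = biomass, method = "svmLinear")
})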

1 Answer:

Answer 0 (score: 0):

train does the preprocessing correctly by re-executing the recipe within each resample. That is required whenever the preprocessing method estimates parameters or statistics from the data in order to apply it. PCA, imputation, and similar methods have to be applied this way, otherwise you get very optimistic estimates of performance.
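This is also roughly consistent with the ~27 evaluations seen in profvis: caret's default is 25 bootstrap resamples plus the final model fit, and the recipe is prepped and baked for each of them. As an illustration only (a sketch, not a recommendation; the names ctrl and fit_cv are made up here), the number of recipe executions can be reduced by choosing a lighter resampling scheme via trainControl:

# Sketch: a lighter resampling scheme means fewer prep()/bake() cycles,
# at the price of a noisier performance estimate
ctrl <- caret::trainControl(method = "cv", number = 5)
set.seed(1)
fit_cv <- caret::train(Haar_recipe, data = biomass,
                       method = "svmLinear", trControl = ctrl)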

For some techniques, such as the spatial sign, there is nothing to estimate from the data, and they can be applied once before resampling. Otherwise, the preprocessing should happen inside the resampling loop (which is why we do it that way).
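Since the Haar transform in the question estimates nothing from the training data, it falls into that first category and can be applied once up front, as fit2 already does. A sketch along the same lines that keeps a (now trivial) recipe while moving the expensive transform outside the resampling loop (the names biomass_haar and fit3 are illustrative):

# Sketch: apply the data-independent transform once, before resampling
biomass_haar <- data.frame(carbon = biomass$carbon,
                           HaarTransform(biomass[, -1]))

set.seed(1)
fit3 <- caret::train(recipe(carbon ~ ., data = biomass_haar),
                     data = biomass_haar, method = "svmLinear")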