What is the difference between summary() and print() in caret (R)?

Time: 2020-04-07 13:01:26

Tags: r r-caret

When building a model with the caret package in R, what is the difference between the summary() and print() functions? And what exactly is the variance here: is it 28.52% or 21.4% for the model with 4 components?

> summary(model)
Data:   X dimension: 261 130 
    Y dimension: 261 1
Fit method: oscorespls
Number of components considered: 4
TRAINING: % variance explained
          1 comps  2 comps  3 comps  4 comps
X         90.1526    92.91    94.86    96.10
.outcome   0.8772    17.17    23.99    28.52

vs

> print(model)
Partial Least Squares 

261 samples
130 predictors

No pre-processing
Resampling: Cross-Validated (5 fold, repeated 50 times) 
Summary of sample sizes: 209, 209, 209, 208, 209, 209, ... 
Resampling results across tuning parameters:

  ncomp  RMSE      Rsquared    MAE     
  1      5.408986  0.03144022  4.129525
  2      5.124799  0.14263362  3.839493
  3      4.976591  0.19114791  3.809596
  4      4.935419  0.21415260  3.799365
  5      5.054086  0.19887704  3.886382

RMSE was used to select the optimal model using the smallest value.
The final value used for the model was ncomp = 4.

1 Answer:

Answer 0 (score: 1)

There are two parts to this. The first is the type of model you fit/train; since partial least squares regression is used here, summary(model) returns information about the best model that caret selected.

library(caret)
library(pls)

# Fit a partial least squares model with 5-fold cross-validation
model = train(mpg ~ ., data = mtcars,
              trControl = trainControl(method = "cv", number = 5),
              method = "pls")

Partial Least Squares 

32 samples
10 predictors

No pre-processing
Resampling: Cross-Validated (5 fold) 
Summary of sample sizes: 25, 27, 26, 24, 26 
Resampling results across tuning parameters:

  ncomp  RMSE      Rsquared   MAE     
  1      3.086051  0.8252487  2.571524
  2      3.129871  0.8122175  2.650973
  3      3.014511  0.8582197  2.519962

RMSE was used to select the optimal model using the smallest value.
The final value used for the model was ncomp = 3.

When you run print(model), you are looking at the results of training the model and selecting the best tuning parameter. With pls, the parameter caret tunes is the number of components; for other methods the output will look similar. Above, models with 1, 2 and 3 components were tested, and the model with 3 components was chosen because it had the smallest RMSE. The final model is stored under model$finalModel, and you can look at it:

class(model$finalModel)
[1] "mvr"

pls:::summary.mvr(model$finalModel)
Data:   X dimension: 32 10 
    Y dimension: 32 1
Fit method: oscorespls
Number of components considered: 3
TRAINING: % variance explained
          1 comps  2 comps  3 comps
X           92.73    99.98    99.99
.outcome    74.54    74.84    83.22
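The resampling results and the tuning parameter caret picked are also stored directly on the train object, so you do not have to re-read the printed output. A quick sketch, using the model fitted above:

model$bestTune   # the ncomp value caret selected (3 in this example)
model$results    # resampled RMSE, Rsquared and MAE for every ncomp tried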

As you can see from the pls:::summary.mvr output above, the summary function called from the pls package is specific to this type of model, and calling summary(model) below gives the same output:

summary(model)
Data:   X dimension: 32 10 
    Y dimension: 32 1
Fit method: oscorespls
Number of components considered: 3
TRAINING: % variance explained
          1 comps  2 comps  3 comps
X           92.73    99.98    99.99
.outcome    74.54    74.84    83.22
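If you want to check where this output comes from, you can list the S3 methods registered for the "train" class (plain base R; if a summary method appears in the list, that is the one forwarding to model$finalModel):

class(model)                # a caret "train" object
methods(class = "train")    # S3 methods caret registers for train objects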

Partial least squares regression is similar to principal component analysis, except that the decomposition (or dimension reduction) is done on transpose(X) * Y, and the components are called latent variables. So in the summary, what you are seeing is the proportion of variance in X (all your predictors) and in .outcome (your dependent variable) that is explained by the latent variables, measured on the training data. By contrast, the Rsquared column in print(model) is estimated from the cross-validation resamples, which is why the 21.4% there differs from the 28.52% training value reported by summary(model).
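Those percentages can also be computed from the fitted pls object itself with helpers from the pls package. A minimal sketch, assuming model is the caret object fitted above: explvar() gives the per-component % variance of X, and R2() with estimate = "train" gives the training R-squared for the outcome, which summary() reports as a percentage.

explvar(model$finalModel)                  # per-component % of X variance
cumsum(explvar(model$finalModel))          # cumulative, matching the "X" row of summary(model)
R2(model$finalModel, estimate = "train")   # training R-squared; times 100 this corresponds to the ".outcome" row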