Predict function in R's MLR yielding results inconsistent with predict

Date: 2015-07-31 20:11:54

Tags: r machine-learning predict mlr

I'm using the mlr package's framework to build an SVM model to predict land-cover classes in an image. I used the raster package's predict function, and I also converted the raster to a data frame and predicted on that data frame using the "learner.model" as input. Both of these methods gave me realistic results.

These work well:

> predict(raster, mod$learner.model)

or

> xy <- as.data.frame(raster, xy = T)

> C <- predict(mod$learner.model, xy)

However, if I predict on the data frame derived from the raster without specifying the learner.model, the results are not the same.

> C2 <- predict(mod, newdata=xy)

C2$data$response is not identical to C. Why?


Here is a reproducible example that demonstrates the problem:

> library(mlr)
> library(kernlab)
> x1 <- rnorm(50)
> x2 <- rnorm(50, 3)
> x3 <- rnorm(50, -20, 3)
> C <- sample(c("a","b","c"), 50, T)
> d <- data.frame(x1, x2, x3, C)
> classif <- makeClassifTask(id = "example", data = d, target = "C")
> lrn <- makeLearner("classif.ksvm", predict.type = "prob", fix.factors.prediction = T)
> t <- train(lrn, classif)

 Using automatic sigma estimation (sigest) for RBF or laplace kernel

 > res1 <- predict(t, newdata = data.frame(x2,x1,x3))
 > res1

 Prediction: 50 observations
 predict.type: prob
 threshold: a=0.33,b=0.33,c=0.33
 time: 0.01
      prob.a    prob.b    prob.c response
 1 0.2110131 0.3817773 0.4072095        c
 2 0.1551583 0.4066868 0.4381549        c
 3 0.4305353 0.3092737 0.2601910        a
 4 0.2160050 0.4142465 0.3697485        b
 5 0.1852491 0.3789849 0.4357659        c
 6 0.5879579 0.2269832 0.1850589        a

 > res2 <- predict(t$learner.model, data.frame(x2,x1,x3))
 > res2
  [1] c c a b c a b a c c b c b a c b c a a b c b c c a b b b a a b a c b a c c c
 [39] c a a b c b b b b a b b
 Levels: a b c
 > res1$data$response == res2
  [1]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE FALSE
 [13]  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE
 [25]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE
 [37]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
 [49]  TRUE FALSE

The predictions are not identical. Following mlr's tutorial page on prediction, I don't see why the results would differ. Thanks for your help.
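One quick way to see where the two prediction vectors disagree is to cross-tabulate them; this short sketch assumes the res1 and res2 objects from the example above:

```r
# Cross-tabulate the two response vectors; off-diagonal cells mark the
# rows where the mlr-wrapped prediction and the raw prediction disagree.
table(mlr = res1$data$response, raw = res2)

# Count the mismatches directly:
sum(res1$data$response != res2)
```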

-----

Update: When I do the same with a random forest model, the two vectors are equal. Is this because SVM is scale dependent and random forest is not?

 > library(randomForest)

 > classif <- makeClassifTask(id = "example", data = d, target = "C")
 > lrn <- makeLearner("classif.randomForest", predict.type = "prob", fix.factors.prediction = T)
 > t <- train(lrn, classif)
 >
 > res1 <- predict(t, newdata = data.frame(x2,x1,x3))
 > res1
 Prediction: 50 observations
 predict.type: prob
 threshold: a=0.33,b=0.33,c=0.33
 time: 0.00
   prob.a prob.b prob.c response
 1  0.654  0.228  0.118        a
 2  0.742  0.090  0.168        a
 3  0.152  0.094  0.754        c
 4  0.092  0.832  0.076        b
 5  0.748  0.100  0.152        a
 6  0.680  0.098  0.222        a
 >
 > res2 <- predict(t$learner.model, data.frame(x2,x1,x3))
 > res2
  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
  a  a  c  b  a  a  a  c  a  b  b  b  b  c  c  a  b  b  a  c  b  a  c  c  b  c
 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
  a  a  b  a  c  c  c  b  c  b  c  a  b  c  c  b  c  b  c  a  c  c  b  b
 Levels: a b c
 >
 > res1$data$response == res2
  [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
 [16] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
 [31] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
 [46] TRUE TRUE TRUE TRUE TRUE

----

Another update: If I change predict.type from "prob" to "response", the two SVM prediction vectors agree with each other. I'm going to look into the difference between these types; I had thought that "prob" gave the same response but also returned probabilities. Maybe that isn't the case?
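That observation can be checked directly. The sketch below reuses the classif task and d data frame from the example above and assumes mlr and kernlab are loaded:

```r
# Train the same SVM, but with predict.type = "response", so the wrapped
# predict uses kernlab's default prediction path rather than the
# separately fitted probability model.
lrn_resp <- makeLearner("classif.ksvm", predict.type = "response",
                        fix.factors.prediction = TRUE)
t_resp <- train(lrn_resp, classif)

res_wrapped <- predict(t_resp, newdata = d)
res_raw <- predict(t_resp$learner.model, d[, c("x1", "x2", "x3")])

# With both sides using the "response" path, the labels should agree.
all(res_wrapped$data$response == res_raw)
```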

2 Answers:

Answer 1 (score: 1):

As you have already worked out, the "source of the error" is that mlr and kernlab have different defaults for the prediction type.

mlr maintains quite a bit of internal "state" and checks the parameters of each learner and how training and testing are handled. You can use lrn$predict.type to get the type of prediction the learner will make; in your case mlr gives "prob". If you want all the gory details, have a look at the implementation of classif.ksvm.
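A small sketch of querying and changing that state on the learner object (assuming mlr is loaded):

```r
# The learner object records which prediction type the wrapped predict()
# will request from the underlying package.
lrn <- makeLearner("classif.ksvm", predict.type = "prob")
lrn$predict.type  # "prob": mlr will call kernlab with type = "probabilities"

# The type can be switched without rebuilding the learner:
lrn <- setPredictType(lrn, "response")
lrn$predict.type  # "response": mlr will use kernlab's default prediction path
```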

Mixing mlr-wrapped learners and "raw" learners the way this example does is not recommended and shouldn't be done. If you mix them, you run into exactly the kind of thing you found. So when using mlr, use only mlr constructs to train models, make predictions, and so on.

mlr does have tests to make sure that the "raw" and wrapped learners produce the same output; see for example the one for classif.ksvm.

Answer 2 (score: 0):

The answer is here:

Why are probabilities and response in ksvm in R not consistent?

In short, ksvm with type = "probabilities" gives different results than type = "response".

If I run

 > res2 <- predict(t$learner.model, data.frame(x2,x1,x3), type = "probabilities")
 > res2

then I get the same results as res1 above (type = "response" is the default for kernlab's predict).

Unfortunately, it seems that classifying the image based on the probabilities doesn't work as well as using the "response". Perhaps the probabilities are still the best way to estimate classification certainty, though?
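The discrepancy can be reproduced with kernlab alone: with prob.model = TRUE, ksvm fits the probability model (Platt scaling) separately from the decision function, so the class with the highest probability need not match the "response" label. A minimal sketch (the data here are made-up noise, chosen only so disagreements are likely):

```r
library(kernlab)

set.seed(42)
n <- 150
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n),
                C = factor(sample(c("a", "b", "c"), n, replace = TRUE)))

# prob.model = TRUE fits Platt-scaled probabilities on top of the SVM.
fit <- ksvm(C ~ ., data = d, prob.model = TRUE)

resp <- predict(fit, d, type = "response")
prob <- predict(fit, d, type = "probabilities")

# Label implied by the highest probability in each row:
prob_label <- colnames(prob)[max.col(prob)]

# Fraction of rows where the two prediction paths agree; with noisy,
# near-random classes this is typically below 1.
mean(prob_label == as.character(resp))
```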