Can't predict with mxnet 0.94 in R

Date: 2017-03-09 13:16:15

Tags: r neural-network predict mxnet

I have been able to predict values with traditional backprop networks using nnet and neuralnet, but for a number of reasons we have been struggling to do the same with MXNet and R.
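For reference, this is roughly the kind of neuralnet setup that works for us on the same file (a minimal sketch; the hidden-layer sizes here are illustrative rather than our exact configuration):

filedata <- read.csv("example.csv")

require(neuralnet)

# Build the formula from the CSV's own headers: column 4 as output, columns 1-3 as inputs
fml <- as.formula(paste(names(filedata)[4], "~",
                        paste(names(filedata)[1:3], collapse = " + ")))

nn <- neuralnet(fml, data = filedata, hidden = c(3, 3), linear.output = TRUE)
nnpreds <- compute(nn, filedata[, 1:3])$net.result

This gives predictions that actually track the target, which is what we cannot reproduce with MXNet below.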

Here is the file (a simple CSV with headers, columns already normalized): https://files.fm/u/cfhf3zka

Here is the code I am using:

filedata <- read.csv("example.csv")

require(mxnet)

datain <- filedata[,1:3]   # first three columns are the inputs
dataout <- filedata[,4]    # fourth column is the target

# convert to the numeric matrix / vector forms mxnet expects
lcinm <- data.matrix(datain, rownames.force = "NA")
lcoutm <- data.matrix(dataout, rownames.force = "NA")
lcouta <- as.numeric(lcoutm)

data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=3)
act1 <- mx.symbol.Activation(fc1, name="sigm1", act_type="sigmoid")
fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=3)
act2 <- mx.symbol.Activation(fc2, name="sigm2", act_type="sigmoid")
fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=3)
act3 <- mx.symbol.Activation(fc3, name="sigm3", act_type="sigmoid")
fc4 <- mx.symbol.FullyConnected(act3, name="fc4", num_hidden=1)
softmax <- mx.symbol.LogisticRegressionOutput(fc4, name="softmax")

mx.set.seed(0)
mxn <- mx.model.FeedForward.create(array.layout = "rowmajor",
                                   softmax,
                                   X = lcinm,
                                   y = lcouta,
                                   learning.rate=0.01,
                                   eval.metric=mx.metric.rmse)

preds <- predict(mxn, lcinm)

predsa <-array(preds)

predsa

The console output is:

Start training with 1 devices
[1] Train-rmse=0.0852988247858687
[2] Train-rmse=0.068769514264606
[3] Train-rmse=0.0687647380075881
[4] Train-rmse=0.0687647164103567
[5] Train-rmse=0.0687647161066822
[6] Train-rmse=0.0687647160828069
[7] Train-rmse=0.0687647161241598
[8] Train-rmse=0.0687647160882147
[9] Train-rmse=0.0687647160594508
[10] Train-rmse=0.068764716079949
> preds <- predict(mxn, lcinm)
Warning message:
In mx.model.select.layout.predict(X, model) :
  Auto detect layout of input matrix, use rowmajor..

> predsa <-array(preds)
> predsa
   [1] 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764
  [10] 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764 0.6776764

So it arrives at the "mean value" but cannot predict the individual values. I have tried other approaches and learning rates to avoid this, but I never get predictions that actually vary.
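A quick way to see the collapse is to compare the spread of the predictions with the spread of the target (a minimal check using the variables defined above):

# The predictions are essentially constant while the target clearly varies
sd(predsa)   # approximately 0
sd(lcouta)   # noticeably larger than 0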

1 answer:

Answer 0 (score: 2):

I tried your example, and it looks like you are trying to predict a continuous output with LogisticRegressionOutput. I believe you should use LinearRegressionOutput instead. You can see an example of this here, and a Julia example here. Also, since you are predicting a continuous output, it is better to use a different activation function, such as ReLU; see this question for some of the reasons.

With these changes, I came up with the following code:

data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=3)
act1 <- mx.symbol.Activation(fc1, name="sigm1", act_type="softrelu")
fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=3)
act2 <- mx.symbol.Activation(fc2, name="sigm2", act_type="softrelu")
fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=3)
act3 <- mx.symbol.Activation(fc3, name="sigm3", act_type="softrelu")
fc4 <- mx.symbol.FullyConnected(act3, name="fc4", num_hidden=1)
softmax <- mx.symbol.LinearRegressionOutput(fc4, name="softmax")  # regression output instead of LogisticRegressionOutput

mx.set.seed(0)
mxn <- mx.model.FeedForward.create(array.layout = "rowmajor",
                                   softmax,
                                   X = lcinm,
                                   y = lcouta,
                                   learning.rate=1,
                                   eval.metric=mx.metric.rmse,
                                   num.round = 100)

preds <- predict(mxn, lcinm)

predsa <-array(preds)
require(ggplot2)
qplot(x = dataout, y = predsa, geom = "point", alpha = 0.6) +
  geom_abline(slope = 1)

This gave me an error that kept decreasing:

Start training with 1 devices
[1] Train-rmse=0.0725415842873665
[2] Train-rmse=0.0692660343340093
[3] Train-rmse=0.0692562284995407
...
[97] Train-rmse=0.048629236911287
[98] Train-rmse=0.0486272021266279
[99] Train-rmse=0.0486251858007309
[100] Train-rmse=0.0486231872849457

The predicted output starts to agree with the actual output, as shown in this plot: [scatter plot of predicted vs. actual values with a y = x reference line]
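To quantify that agreement beyond the plot, the predictions can be compared to the targets directly (a minimal sketch using the variables defined above; the exact numbers depend on the run):

# Error and correlation between predicted and actual values
rmse <- sqrt(mean((predsa - dataout)^2))
rho  <- cor(predsa, dataout)
rmse
rho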