Error when using the neuralnet function

Time: 2016-07-16 09:53:07

Tags: r machine-learning statistics neural-network

I tried out a neural network on the Boston dataset in R.

data("Boston",package="MASS") 
data <- Boston

Keep only the variables we want to use:

keeps <- c("crim", "indus", "nox", "rm" , "age", "dis", "tax" ,"ptratio", "lstat" ,"medv" ) 
data <- data[keeps]

The formula is stored in an R object called f. The response variable medv will be "regressed" against the other nine attributes. I did it as follows:

f <- medv ~ crim + indus + nox + rm + age + dis + tax + ptratio + lstat

Collect a training sample of 400 of the 506 rows, without replacement, using sample:

set.seed(2016) 
n = nrow(data) 
train <- sample(1:n, 400, FALSE)

Fit the model with R's neuralnet function:

library(neuralnet)

fit <- neuralnet(f, data = data[train, ], hidden = c(10, 12, 20),
                 algorithm = "rprop+", err.fct = "sse", act.fct = "logistic",
                 threshold = 0.1, linear.output = TRUE)

But a warning message appears saying that the algorithm did not converge.

Warning message:
Algorithm did not converge in 1 of 1 repetition(s) within the stepmax.

Trying to make a prediction with compute,

 pred <- compute(fit,data[-train, 1:9])

produces the following error message:

Error in nrow[w] * ncol[w] : non-numeric argument to binary operator
In addition: Warning message:
In is.na(weights) : is.na() applied to non-(list or vector) of type 'NULL'

Why does this error occur, and how can I make predictions from this model? I want to use the neuralnet function on this dataset.

2 Answers:

Answer 0 (score: 3):

When neuralnet does not converge, the resulting neural network is incomplete. You can tell by calling attributes(fit)$names. When training has converged, it will look like this:

 [1] "call"                "response"            "covariate"           "model.list"          "err.fct"  
 [6] "act.fct"             "linear.output"       "data"                "net.result"          "weights"  
[11] "startweights"        "generalized.weights" "result.matrix"

If not, some attributes will be undefined:

[1] "call"          "response"      "covariate"     "model.list"    "err.fct"       "act.fct"       "linear.output"
[8] "data"   

This explains why compute does not work.
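For example, you can guard the call to compute with a quick check on the fitted object; a minimal sketch based on the attribute list above:

# Sketch: only call compute() if training actually produced weights
if ("weights" %in% attributes(fit)$names) {
  pred <- neuralnet::compute(fit, data[-train, 1:9])
} else {
  warning("neuralnet did not converge; 'weights' is missing, so compute() would fail")
}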

When training does not converge, a first possible solution is to increase stepmax (default 100000). You can also add lifesign = "full" to get a better view of the training process.
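For instance, the original call could be retried along these lines (a sketch; the stepmax value of 1e6 is just an illustrative choice, not a recommendation):

# Illustrative retry: raise stepmax and print progress with lifesign = "full"
fit <- neuralnet::neuralnet(f, data = data[train, ], hidden = c(10, 12, 20),
                            algorithm = "rprop+", err.fct = "sse",
                            act.fct = "logistic", threshold = 0.1,
                            linear.output = TRUE,
                            stepmax = 1e6,       # default is 1e5
                            lifesign = "full")   # report training progress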

Also, looking at your code, I would say that three hidden layers with 10, 12 and 20 neurons is too much. I would start with a single layer with as many neurons as there are inputs, 9 in your case.

Edit:

With scaling (remember to scale both the training and the test data, and to "scale back" the result of compute), it converges much faster. Also note that I reduced the number of layers and neurons, and still lowered the error threshold.

data("Boston",package="MASS") 
data <- Boston

keeps <- c("crim", "indus", "nox", "rm" , "age", "dis", "tax" ,"ptratio", "lstat" ,"medv" ) 
data <- data[keeps]

f <- medv ~ crim + indus + nox + rm + age + dis + tax + ptratio + lstat

set.seed(2016) 
n = nrow(data) 
train <- sample(1:n, 400, FALSE)

# Scale data. Scaling parameters are stored in this matrix for later.
scaledData <- scale(data)

fit<- neuralnet::neuralnet(f, data = scaledData[train ,], hidden=9, 
                algorithm = "rprop+", err.fct = "sse", act.fct = "logistic", 
                threshold = 0.01, linear.output=TRUE, lifesign = "full")

pred <- neuralnet::compute(fit,scaledData[-train, 1:9])

# Scale predictions back to the original medv units
# (note the trailing '+', so R treats both lines as one expression)
scaledResults <- pred$net.result * attr(scaledData, "scaled:scale")["medv"] +
                 attr(scaledData, "scaled:center")["medv"]

cleanOutput <- data.frame(Actual = data$medv[-train], 
                          Prediction = scaledResults, 
                          diff = abs(scaledResults - data$medv[-train]))

# Show some results
summary(cleanOutput)
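If a single summary figure is also wanted on top of summary(cleanOutput), something like the following could be appended (my addition, not part of the original answer):

# Root mean squared error on the original medv scale
rmse <- sqrt(mean((cleanOutput$Actual - cleanOutput$Prediction)^2))
rmse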

Answer 1 (score: 0):

The problem seems to be your argument linear.output = TRUE.

Using your data, but with the code changed slightly (not defining the formula separately and adding some explanatory comments):

library(neuralnet)
fit <- neuralnet(formula = medv ~ crim + indus + nox + rm + age + dis + tax + ptratio + lstat,
                 data = data[train,],
                 hidden = c(10, 12, 20),  # number of vertices (neurons) in each hidden layer
                 algorithm = "rprop+",    # resilient backpropagation with weight backtracking
                 err.fct = "sse",         # error based on the sum of squared errors
                 act.fct = "logistic",    # logistic activation applied to the weighted sums of the neurons
                 threshold = 0.1,         # stopping threshold for the partial derivatives of the error function
                 rep = 10,                # number of repetitions (matches the call shown below)
                 linear.output = FALSE)   # act.fct is also applied to the output neurons

print(fit)

Call: neuralnet(formula = medv ~ crim + indus + nox + rm + age + dis +     tax + ptratio + lstat, data = data[train, ], hidden = c(10,     12, 20), threshold = 0.1, rep = 10, algorithm = "rprop+",     err.fct = "sse", act.fct = "logistic", linear.output = FALSE)

10 repetitions were calculated.

         Error Reached Threshold Steps
1  108955.0318     0.03436116236     4
5  108955.0339     0.01391790099     8
3  108955.0341     0.02193379592     3
9  108955.0371     0.01705056758     6
8  108955.0398     0.01983134293     8
4  108955.0450     0.02500006437     5
6  108955.0569     0.03689097762     5
7  108955.0677     0.04765829189     5
2  108955.0705     0.05052776877     5
10 108955.1103     0.09031966778     7

# now compute will work
pred <- compute(fit, data[-train, 1:9])
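As a quick sanity check on the result, a few predictions could then be compared with the actual values, for example (my addition, not part of the original answer):

# Compare a few predictions with the actual medv values
head(data.frame(Actual = data$medv[-train], Prediction = pred$net.result))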