Training a neural network with a likelihood function

Asked: 2017-06-24 20:44:02

Tags: r neural-network backpropagation mle

I want to train a neural network with a likelihood function instead of the usual squared error. I wrote some code in RStudio with the following characteristics:

  • The network has no hidden units (just for simplicity), with a single input and a single output unit.
  • It is used to learn a simple linear equation (y = a + bx).
  • I assume the data are normally distributed (so that I can write down the likelihood function).
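For reference, under the normal assumption with mean a + bx and variance s, the quantity the code below minimizes is minus twice the log-likelihood; its gradients are what the `Da`, `Db`, `Ds` lines compute (my transcription, matching the code term by term):

```latex
-2 \log L(a, b, s) = n \log(2\pi) + n \log s + \frac{1}{s} \sum_{i=1}^{n} (y_i - a - b x_i)^2
```

```latex
\frac{\partial}{\partial a} = -\frac{2}{s} \sum_i (y_i - a - b x_i), \qquad
\frac{\partial}{\partial b} = -\frac{2}{s} \sum_i x_i (y_i - a - b x_i), \qquad
\frac{\partial}{\partial s} = \frac{n}{s} - \frac{1}{s^2} \sum_i (y_i - a - b x_i)^2
```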

Here is the code:

# This code implements the basic backpropagation-of-error
# learning algorithm. The network has no hidden
# neurons, just a linear output neuron.
# The purpose is to estimate a and b in the linear equation y = a + bx
# and also to estimate the variance component of the residuals.

#------LOAD DATA------------------------

x=rnorm(100,0,1)
y=10-3*x

# user-specified values
Size=1
Lrate=0.001
epochs = 1000

# number of data points (patterns)
n = length(x)

# ---------- set weights ---------------

# set initial random weights
# b is the coefficient, a is the constant and s is the variance
weight = runif(3)
a=weight[1]
b=weight[2]
s=var(x)


#--- Learning Starts Here! ---------


# error vectors
err=numeric(0)
L=numeric(0)

# do a number of epochs
for (iter in 1:epochs){

# --- calculate derivatives of -2*log(likelihood) with respect to a, b and s

  Da=-2*sum((y-a-b*x)/s)
  Db=-2*sum(x*(y-a-b*x)/s)
  Ds=-2*(sum((y-a-b*x)^2)/(2*s^2)-n/(2*s))

 # adjust weights
  a=a-Lrate*Da
  b=b-Lrate*Db
  s=s-Lrate*Ds

  #calculate overall network error at end of each epoch
  pred = a+b*x
  error = pred-y
  err[iter] = (sum(error^2))^0.5
  # -2 * log-likelihood (up to the constant n*log(2*pi) term)
  L[iter] = n*log(s) + sum((y-a-b*x)^2)/s

  #stop if error is small

  if (err[iter] < 0.00001) {
    cat('converged at epoch:',iter)
    break 
  }


  if (abs(L[iter]) < 0.00001) {
    cat('converged at epoch:',iter,"L=",L[iter])
    break
  }

}

#-----FINISHED--------- 

#display actual,predicted & error
plot(err,type="b")
plot(L,type="b")
cat('state after',iter,"\n")
# res=cbind(y,pred,pred-y)
plot(x,y)
ord = order(x)                      # sort by x so lines() draws a clean line
lines(x[ord], pred[ord], col="red")
cat("min Error:",min(err),"\n")
cat("a=",a,"b=",b,"Sigma2=",s,"\n")  
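As a sanity check on the estimates above (a sketch I added, not part of the original post), the same model can be fit in closed form with `lm()`. Since OLS coincides with the normal-likelihood MLE for a and b, the gradient-descent results should agree with it. Note that y = 10 - 3*x is generated without any noise term, so the ML estimate of the variance is essentially zero:

```r
# Closed-form check: OLS equals the normal MLE for a and b.
set.seed(1)
x <- rnorm(100, 0, 1)
y <- 10 - 3 * x                            # same noiseless data as above
fit <- lm(y ~ x)
coef(fit)                                  # approx. a = 10, b = -3
# ML estimate of the variance divides by n (not n - 2, as var() of
# residuals from the usual unbiased formula would):
s_ml <- sum(residuals(fit)^2) / length(y)
s_ml                                       # essentially 0: the data carry no noise
```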

It works, but I have some questions:

  • Is this algorithm correct?
  • I understand that the variance component (denoted by s) should be the variance of the residuals (here, the predictions minus y). But the two do not agree here; what is my mistake?

0 Answers:

There are no answers