Echo State Networks?

Asked: 2012-10-30 06:02:22

Tags: r neural-network time-series

I've recently heard several people say that echo state networks are good for time series modelling, so I figured they were worth a try.

http://en.wikipedia.org/wiki/Echo_state_network

It's a type of recurrent network in which only the weights of the output layer are learned; all the other weights are randomized.

How usable are the available R libraries/packages for creating echo state networks?

(Note: there is a question, Neural net package in R, which may be related, but it asks about "recursive" networks, whereas I am looking for "recurrent" or "echo state" networks.)

2 Answers:

Answer 0 (score: 8):

I know the question is old, but this might still be useful to others.

Here you can find a working demo: source code of a minimalistic Echo State Network in R. It is not a full-fledged library, but I hope it is easy to understand and to adapt to your application.

# A minimalistic Echo State Networks demo with Mackey-Glass (delay 17) data 
# in "plain" R.
# by Mantas Lukosevicius 2012
# http://minds.jacobs-university.de/mantas

# load the data
trainLen = 2000
testLen = 2000
initLen = 100

data = as.matrix(read.table('MackeyGlass_t17.txt'))

# plot some of it
while( dev.cur() != 1 ) dev.off() # close all previous plots
dev.new()
plot(data[1:1000],type='l')
title(main='A sample of data')

# generate the ESN reservoir
inSize = outSize = 1
resSize = 1000
a = 0.3 # leaking rate

set.seed(42)
Win = matrix(runif(resSize*(1+inSize),-0.5,0.5),resSize)
W = matrix(runif(resSize*resSize,-0.5,0.5),resSize)
# Option 1 - direct scaling (quick&dirty, reservoir-specific):
#W = W * 0.135 
# Option 2 - normalizing and setting spectral radius (correct, slow):
cat('Computing spectral radius...')
rhoW = abs(eigen(W,only.values=TRUE)$values[1])
cat('done.\n')
W = W * 1.25 / rhoW
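# (a spectral radius of 1.25 is a value tuned for this task; as a default
#  starting point, values at or slightly below 1 are commonly recommended)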

# allocate memory for the design (collected states) matrix
X = matrix(0,1+inSize+resSize,trainLen-initLen)
# set the corresponding target matrix directly
Yt = matrix(data[(initLen+2):(trainLen+1)],1)
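# (targets are data[t+1] for t = initLen+1 .. trainLen, i.e. one-step-ahead
#  prediction; the first initLen steps serve as washout and are discarded)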

# run the reservoir with the data and collect X
x = rep(0,resSize)
for (t in 1:trainLen){
    u = data[t]
    x = (1-a)*x + a*tanh( Win %*% rbind(1,u) + W %*% x )
    if (t > initLen)
        X[,t-initLen] = rbind(1,u,x)
}

# train the output
reg = 1e-8  # regularization coefficient
X_T = t(X)
Wout = Yt %*% X_T %*% solve( X %*% X_T + reg*diag(1+inSize+resSize) )
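# (an equivalent pseudoinverse formulation without explicit regularization
#  would be Wout = Yt %*% MASS::ginv(X))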

# run the trained ESN in a generative mode. no need to initialize here, 
# because x is initialized with training data and we continue from there.
Y = matrix(0,outSize,testLen)
u = data[trainLen+1]
for (t in 1:testLen){
    x = (1-a)*x + a*tanh( Win %*% rbind(1,u) + W %*% x )
    y = Wout %*% rbind(1,u,x)
    Y[,t] = y
    # generative mode:
    u = y
    ## this would be a predictive mode:
    #u = data[trainLen+t+1] 
}

# compute MSE for the first errorLen time steps
errorLen = 500
mse = ( sum( (data[(trainLen+2):(trainLen+errorLen+1)] - Y[1,1:errorLen])^2 )
    / errorLen )
print( paste( 'MSE = ', mse ) )

# plot some signals
dev.new() 
plot( data[(trainLen+1):(trainLen+testLen+1)], type='l', col='green' )
lines( c(Y), col='blue' )
title(main=expression(paste('Target and generated signals ', bold(y)(italic(n)), 
    ' starting at ', italic(n)==0 )))
legend('bottomleft',legend=c('Target signal', 'Free-running predicted signal'), 
    col=c('green','blue'), lty=1, bty='n' )

dev.new()
matplot( t(X[(1:20),(1:200)]), type='l' )
title(main=expression(paste('Some reservoir activations ', bold(x)(italic(n)))))

dev.new()
barplot( Wout )
title(main=expression(paste('Output weights ', bold(W)^{out})))
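
If you don't have the MackeyGlass_t17.txt data file, here is a quick sketch (my own addition, not part of the original demo) that generates a comparable Mackey-Glass (tau = 17) series by Euler integration, using the standard parameters beta = 0.2, gamma = 0.1, n = 10. The numbers will differ from the file the demo was written against, so expect a somewhat different MSE.

# Generate a Mackey-Glass (tau = 17) series by simple Euler integration.
mackey_glass = function(len, tau = 17, beta = 0.2, gamma = 0.1,
                        n = 10, dt = 0.1, x0 = 1.2) {
    spu   = round(1 / dt)              # Euler steps per unit of time
    total = (len + tau) * spu          # extra warm-up to fill the delay line
    x = numeric(total)
    x[1:(tau * spu)] = x0              # constant history on [0, tau]
    for (t in (tau * spu):(total - 1)) {
        xd = x[t - tau * spu + 1]      # delayed value x(t - tau)
        x[t + 1] = x[t] + dt * (beta * xd / (1 + xd^n) - gamma * x[t])
    }
    # keep one sample per time unit and drop the warm-up
    x[seq(tau * spu + 1, total, by = spu)]
}
data = as.matrix(mackey_glass(trainLen + testLen + 1))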

Answer 1 (score: 1):

Although this doesn't answer your question about R, I'm almost certain you could easily implement an ESN yourself (unless you need more advanced/esoteric features).

Take a look at the definition of the ESN given by Jaeger: all you need are equations (1) and (2) for the internal state and the output, and equation (3) or (4) for the learning. The implementation is straightforward: just matrix multiplications, norms and a pseudoinverse, and you're good to go.
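
To make that concrete, here is a bare-bones paraphrase of those equations in R (my own sketch, not code from Jaeger's paper; MASS::ginv supplies the Moore-Penrose pseudoinverse):

library(MASS)  # for ginv(), the pseudoinverse

set.seed(42)
resSize = 50; inSize = 1
Win = matrix(runif(resSize * inSize, -0.5, 0.5), resSize)
W = matrix(runif(resSize^2, -0.5, 0.5), resSize)
W = W * 0.9 / abs(eigen(W, only.values=TRUE)$values[1])  # spectral radius 0.9

u = sin(seq(0, 20, by = 0.1))  # toy input sequence
target = c(u[-1], 0)           # one-step-ahead prediction target

x = rep(0, resSize)
X = matrix(0, resSize, length(u))
for (t in seq_along(u)) {
    x = tanh(Win %*% u[t] + W %*% x)  # eq. (1)-style state update
    X[,t] = x
}
Wout = target %*% ginv(X)  # eq. (3)-style learning via pseudoinverse
y = Wout %*% X             # eq. (2)-style readout

It omits the bias input, leaky integration and washout that the demo above uses, but it is the whole algorithm.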

P.S. Actually, "recurrent" and "recursive" neural networks are not that different. The term "recursive" usually - but not always - refers to networks that operate on graphs, while "recurrent" networks operate on sequences/time series (which are a special case of graphs). Both "recurrent" and "recursive" networks have cycles in their hidden layers, so their internal state is defined recursively. The confusion is partly just terminology; the bottom line is that you may be able to take an existing library and adapt it to your needs.