Strange Theano error with a basic RNN

Time: 2015-11-17 00:59:51

Tags: python theano recurrent-neural-network

I am a beginner at Theano / RNN programming, and I made my first simple implementation based on the code here.

In my code I create a simple autoregressive process as the dataset, where each item of the output sequence is the sum of the previous 4 values of the input sequence (normally distributed values).
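(Just to make the target construction concrete, here is a tiny standalone check using small integer inputs instead of the normally distributed ones; the indexing is the same as in the full code below, so each target actually sums the current input plus up to REGR previous ones.)

import numpy as np

# small worked example of the target construction:
# y[i] sums x[max(i-4, 0) .. i], i.e. the current input plus up to 4 previous ones
x = np.arange(1.0, 8.0)   # [1, 2, 3, 4, 5, 6, 7]
y = np.asarray([x[max(i - 4, 0):i + 1].sum() for i in range(len(x))])
print y                   # [ 1.  3.  6. 10. 15. 20. 25.]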

The full code is

import numpy as np
import theano
import theano.tensor as TT
#theano.config.exception_verbosity = "high"
import matplotlib.pyplot as plt

SERIES_LEN = 1000
REGR = 4
LEARN_RATE = 0.01

# training series: each target sums the current input and up to REGR previous
# inputs; both series are rescaled by the maximum target value
x_train = np.random.randn(SERIES_LEN)
y_train = np.asarray([sum(x_train[range(max(i-REGR,0),i+1)]) for i in range(SERIES_LEN)])
mval = max(y_train)
y_train /= mval
x_train /= mval

# test series built in the same way
x_test = np.random.randn(SERIES_LEN)
y_test = np.asarray([sum(x_test[range(max(i-REGR,0),i+1)]) for i in range(SERIES_LEN)])
mval = max(y_test)
y_test /= mval
x_test /= mval


# number of hidden units
n = 10
# number of input units
nin = 1
# number of output units
nout = 1

# input (where first dimension is time)
u = TT.matrix()
# target (where first dimension is time)
t = TT.matrix()
# initial hidden state of the RNN
h0 = TT.vector()
# learning rate
lr = TT.scalar()
# recurrent weights as a shared variable
W = theano.shared(np.random.uniform(size=(n, n), low=-.01, high=.01))
# input to hidden layer weights
W_in = theano.shared(np.random.uniform(size=(nin, n), low=-.01, high=.01))
# hidden to output layer weights
W_out = theano.shared(np.random.uniform(size=(n, nout), low=-.01, high=.01))


# recurrent function (using tanh activation function) and linear output
# activation function
def step(u_t, h_tm1, W, W_in, W_out):
    h_t = TT.tanh(TT.dot(u_t, W_in) + TT.dot(h_tm1, W))
    y_t = TT.dot(h_t, W_out)
    return h_t, y_t

# the hidden state `h` for the entire sequence, and the output for the
# entire sequence `y` (first dimension is always time)
[h, y], _ = theano.scan(step,
                        sequences=u,
                        outputs_info=[dict(initial=TT.zeros(n)), None],
                        non_sequences=[W, W_in, W_out],
                        truncate_gradient=REGR)
# error between output and target
error = ((y - t) ** 2).sum()
# gradients on the weights using BPTT
gW, gW_in, gW_out = TT.grad(error, [W, W_in, W_out])
# training function, that computes the error and updates the weights using
# SGD.
fn = theano.function([u, t, lr],
                     error,
                     updates=((W, W - lr * gW),
                              (W_in, W_in - lr * gW_in),
                              (W_out, W_out - lr * gW_out)))


# one SGD step on the whole training sequence (this is the line that fails)
fn(np.atleast_2d(x_train).T, np.atleast_2d(y_train).T, LEARN_RATE)

# compile a prediction function and evaluate it on the test input
eval_output = theano.function([u], y)
pred_out = eval_output(x_test)

print pred_out
plt.plot(pred_out)
plt.plot(y_test)
plt.show()
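For clarity, the forward pass that theano.scan builds from step above should be equivalent to the following plain-NumPy recurrence (only a sketch with the same shapes as my code; truncate_gradient affects the backward pass only, not this forward computation):

import numpy as np

# sketch of the recurrence computed by the scan above:
#   h_t = tanh(u_t . W_in + h_{t-1} . W),   y_t = h_t . W_out
def forward(u_seq, W, W_in, W_out):
    h = np.zeros(W.shape[0])          # initial hidden state, zeros as in outputs_info
    ys = []
    for u_t in u_seq:                 # u_seq has shape (time, nin)
        h = np.tanh(np.dot(u_t, W_in) + np.dot(h, W))
        ys.append(np.dot(h, W_out))
    return np.asarray(ys)             # shape (time, nout)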

The modifications with respect to the basic example defined in the linked page are minimal. I get a strange error that prevents the program from executing any further instruction after the line

fn(np.atleast_2d(x_train).T, np.atleast_2d(y_train).T, LEARN_RATE)

The error I get is

/Library/Python/2.7/site-packages/theano/scan_module/scan_perform_ext.py:133: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility
  from scan_perform.scan_perform import *
BLAS error: Parameter incX passed to cblas_dgemv was 0, which is invalid.

I really do not understand how to fix this, and I have not found any information about this error.
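As far as I understand, incX is the element stride of the vector argument handed to cblas_dgemv, so the message seems to mean that a zero-stride array reached the BLAS routine. A diagnostic sketch I could use to inspect the inputs and the BLAS setup (only for inspection, not a fix; theano.config.blas.ldflags and np.__config__.show() just report which BLAS is in use):

import numpy as np
import theano

# diagnostic only: strides (in bytes) of the arrays passed to fn()
u_val = np.atleast_2d(x_train).T
t_val = np.atleast_2d(y_train).T
print u_val.shape, u_val.strides
print t_val.shape, t_val.strides

# which BLAS Theano and NumPy are linked against
print theano.config.blas.ldflags
np.__config__.show()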

0 Answers:

No answers