Scan function in Theano, recurrent neural network

Date: 2016-05-31 16:39:34

Tags: python theano recurrent-neural-network

I have been trying to implement an RNN using scan in Theano (this example is adapted from here: https://github.com/valentin012/conspeech/blob/master/rnn_theano.py):

def forward_prop_step(x_t, s_t_prev, U, V, W):
    u = T.dot(x_t, U)
    s_t = T.tanh(u + T.dot(s_t_prev, W))   # new hidden state
    o_t = T.nnet.softmax(T.dot(s_t, V))    # output distribution
    return [o_t[0], s_t]

Q = np.zeros(self.hidden_dim)
init = theano.shared(Q)                    # initial hidden state
[o, s], updates = theano.scan(
    forward_prop_step,
    sequences=x,
    outputs_info=[None, dict(initial=init)],
    non_sequences=[U, V, W],
    truncate_gradient=self.bptt_truncate,
    strict=False)
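
As I understand the Theano scan interface, the step function receives its arguments in a fixed order: the current slice of each entry in sequences, then the previous value of every recurrent output whose outputs_info entry is not None, then the non_sequences. A commented sketch of that mapping for the call above (added for context, not part of the original post):

# Argument mapping for the scan call above:
#
#   forward_prop_step(x_t,       # current slice of sequences=x
#                     s_t_prev,  # previous value of the 2nd output (outputs_info entry dict(initial=init))
#                     U, V, W)   # non_sequences, passed through unchanged
#
# The 1st output o has outputs_info=None, so it is not fed back into the step.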

Now, what I am trying to do is implement an RNN in which the output variables directly influence each other (o_{t-1} and o_t are linked through a weight matrix). I tried to implement it like this:

def forward_prop_step(x_t, s_t_prev, o_t_prev, U, V, W, Q):
    u = T.dot(x_t, U)
    s_t = T.tanh(u + T.dot(s_t_prev, W))                      # new hidden state
    o_t = T.nnet.softmax(T.dot(o_t_prev, Q) + T.dot(s_t, V))  # output now also depends on o_{t-1}
    return [o_t[0], s_t, o_t[0]]

R = np.zeros(self.hidden_dim)
init = theano.shared(R)      # initial hidden state
S = np.zeros(self.word_dim)
init_S = theano.shared(S)    # initial previous output
[o, s, op], updates = theano.scan(
    forward_prop_step,
    sequences=x,
    outputs_info=[None, dict(initial=init), dict(initial=init_S)],
    non_sequences=[U, V, W, Q],
    truncate_gradient=self.bptt_truncate,
    strict=False)

However, it does not work, and I do not know how to fix it.

The error message is:


File "theano/scan_module/scan_perform.pyx", line 397, in theano.scan_module.scan_perform.perform (/home/mertens/.theano/compiledir_Linux-3.2--amd64-x86_64-with-debian-7.6--2.7.9-64/scan_perform/mod.cpp:4193)
ValueError: Shape mismatch: A.shape[1] != x.shape[0]
Apply node that caused the error: CGemv{inplace}(AllocEmpty{dtype='float64'}.0, TensorConstant{1.0}, Q_copy.T, , TensorConstant{0.0})
Toposort index: 10

Edit: Here is the exact code:

word_dim=3
hidden_dim=4

U = np.random.uniform(-np.sqrt(1./word_dim), np.sqrt(1./word_dim), (word_dim, hidden_dim))       # (3, 4)
V = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (hidden_dim, word_dim))   # (4, 3)
W = np.random.uniform(-np.sqrt(1./hidden_dim), np.sqrt(1./hidden_dim), (hidden_dim, hidden_dim)) # (4, 4)
Q = np.random.uniform(-np.sqrt(1./word_dim), np.sqrt(1./word_dim), (word_dim, word_dim))         # (3, 3)

U = theano.shared(name='U', value=U.astype(theano.config.floatX))
V = theano.shared(name='V', value=V.astype(theano.config.floatX))
W = theano.shared(name='W', value=W.astype(theano.config.floatX))
Q = theano.shared(name='Q', value=W.astype(theano.config.floatX))

def forward_prop_step(x_t, o_t_prev, s_t_prev, U, V, W, Q):
    u = T.dot(x_t, U)
    s_t = T.tanh(u + T.dot(s_t_prev, W))
    m = T.dot(o_t_prev, Q)     # feedback term from the previous output
    mm = T.dot(s_t, V)
    SSS = mm                   # note: m is not added into the softmax here
    o_t = T.nnet.softmax(SSS)
    q_t = o_t[0]
    return [q_t, s_t, m]

R = np.zeros(self.hidden_dim)
init = theano.shared(R)
S = np.zeros(self.word_dim)
init_S = theano.shared(S)
[o,s,loorky], updates = theano.scan(
        forward_prop_step,
        sequences=x,
        outputs_info=[dict(initial=init_S),dict(initial=init),None],
        non_sequences=[U, V, W, Q],
        truncate_gradient=self.bptt_truncate,
        strict=False)

self.my_forward_propagation = theano.function([x], [o,s,loorky])
aaa = np.zeros((1,3))+1
print self.my_forward_propagation(aaa)

When I omit the output m from the return statement (and, correspondingly, the loorky variable plus the trailing None entry in outputs_info), everything works fine. As soon as I include it, I get the error message ValueError: Shape mismatch: A.shape[1] != x.shape[0].

1 Answer:

Answer 0 (score: 0)

From the implementation it is not clear what is wrong in your code. Could you check this line:

o_t = T.nnet.softmax(T.dot(o_t_prev,Q)+T.dot(s_t,V))

What is the dimension of Q, and does it work for the addition with the s_t term?
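
Judging from the exact code in the question, the likely culprit is that the shared variable Q is created from W's value (a hidden_dim x hidden_dim array) instead of the (word_dim, word_dim) array Q, so T.dot(o_t_prev, Q) multiplies a length-3 vector by a 4x4 matrix, which matches the reported shape mismatch. It would also explain why dropping m from the return value makes the error disappear: the unused dot product is then never part of the compiled graph. Below is a minimal, self-contained sketch of how a corrected version could look; this is my reconstruction, not code from the original post, and the shared_uniform helper is just a convenience for the sketch:

import numpy as np
import theano
import theano.tensor as T

word_dim, hidden_dim = 3, 4

def shared_uniform(shape, name):
    # helper for this sketch: a shared matrix with small random values
    bound = np.sqrt(1. / shape[0])
    values = np.random.uniform(-bound, bound, shape)
    return theano.shared(values.astype(theano.config.floatX), name=name)

U = shared_uniform((word_dim, hidden_dim), 'U')
V = shared_uniform((hidden_dim, word_dim), 'V')
W = shared_uniform((hidden_dim, hidden_dim), 'W')
Q = shared_uniform((word_dim, word_dim), 'Q')   # built from the (3, 3) shape, not from W's value

def forward_prop_step(x_t, o_t_prev, s_t_prev, U, V, W, Q):
    s_t = T.tanh(T.dot(x_t, U) + T.dot(s_t_prev, W))
    # Both terms are length-word_dim vectors, so the addition is shape-consistent.
    o_t = T.nnet.softmax(T.dot(o_t_prev, Q) + T.dot(s_t, V))
    return [o_t[0], s_t]

x = T.matrix('x')
init_o = theano.shared(np.zeros(word_dim, dtype=theano.config.floatX))    # previous output o_{t-1}
init_s = theano.shared(np.zeros(hidden_dim, dtype=theano.config.floatX))  # previous hidden state
[o, s], updates = theano.scan(
    forward_prop_step,
    sequences=x,
    outputs_info=[dict(initial=init_o), dict(initial=init_s)],
    non_sequences=[U, V, W, Q])

forward = theano.function([x], [o, s])
print(forward(np.ones((1, word_dim), dtype=theano.config.floatX)))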