I've been banging my head against this and can't figure out what (if anything) I'm doing wrong in implementing these RNNs. To get you up to speed: I can tell you that the two implementations compute identical outputs, so the forward phase is correct. The problem is in the backward phase.
Here is my Python backward code. It follows karpathy's neuraltalk style quite closely, but not exactly:
def backward(self, cache, target, c=leastsquares_cost, dc=leastsquares_dcost):
    '''
    cache is from forward pass
    c is a cost function
    dc is a function used as dc(output, target) which gives the gradient dc/doutput
    '''
    XdotW = cache['XdotW']  # num_time_steps x hidden_size
    Hin = cache['Hin']      # num_time_steps x hidden_size
    T = Hin.shape[0]
    Hout = cache['Hout']
    Xin = cache['Xin']
    Xout = cache['Xout']
    Oin = cache['Oin']      # num_time_steps x output_size
    Oout = cache['Oout']

    dcdOin = dc(Oout, target)  # this will be num_time_steps x num_outputs. these are dc/dO_j
    dcdWho = np.dot(Hout.transpose(), dcdOin)  # this is the sum of outer products for all time
    # bias term is added at the end with coefficient 1, hence the dot product is just the sum
    dcdbho = np.sum(dcdOin, axis=0, keepdims=True)  # this sums all the time steps
    dcdHout = np.dot(dcdOin, self.Who.transpose())  # dcdHout_ij should be the dot product of dcdOin and the i'th row of Who; this is only for the outputs

    # now go back in time
    dcdHin = np.zeros(dcdHout.shape)
    # for t=T we can ignore the other term (error from the next timestep). self.df is the derivative of the activation function (here, tanh):
    dcdHin[T-1] = self.df(Hin[T-1]) * dcdHout[T-1]  # because we don't need to worry about the next timestep, dcdHout is already correct for t=T
    for t in reversed(xrange(T-1)):
        # we need to add to dcdHout[t] the error from the next timestep
        dcdHout[t] += np.dot(dcdHin[t], self.Whh.transpose())
        # now we have the correct form for dcdHout[t]
        dcdHin[t] = self.df(Hin[t]) * dcdHout[t]

    # now we've gone through all t, and we can continue
    dcdWhh = np.zeros(self.Whh.shape)
    for t in range(T-1):  # skip the last step because there is no dcdHin[t+1] for it
        dcdWhh += np.outer(Hout[t], dcdHin[t+1])
    # and we can do the bias as well
    dcdbhh = np.sum(dcdHin, axis=0, keepdims=True)
    # now we need to go back to the embeddings
    dcdWxh = np.dot(Xout.transpose(), dcdHin)

    return {'dcdOout': dcdOin, 'dcdWxh': dcdWxh, 'dcdWhh': dcdWhh, 'dcdWho': dcdWho,
            'dcdbhh': dcdbhh, 'dcdbho': dcdbho, 'cost': c(Oout, target)}
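(In case it helps, here is a rough finite-difference sketch I could use to spot-check single entries of dcdWxh. It only assumes the rnn object from above, its forward() returning the same cache, and the leastsquares_cost function; the eps value is arbitrary.)

def numeric_dcdWxh(rnn, X, target, i, j, eps=1e-5):
    # nudge one weight up and down, rerun the forward pass, and take the slope of the cost
    orig = rnn.Wxh[i, j]
    rnn.Wxh[i, j] = orig + eps
    c_plus = leastsquares_cost(rnn.forward(X)['Oout'], target)
    rnn.Wxh[i, j] = orig - eps
    c_minus = leastsquares_cost(rnn.forward(X)['Oout'], target)
    rnn.Wxh[i, j] = orig  # restore the original weight
    return (c_plus - c_minus) / (2 * eps)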
And here is the theano code (mostly copied from another implementation I found online; I initialize the weights to the random weights of my pure-python RNN so that everything is the same):
# input (where first dimension is time)
u = TT.matrix()
# target (where first dimension is time)
t = TT.matrix()
# initial hidden state of the RNN
h0 = TT.vector()
# learning rate
lr = TT.scalar()
# recurrent weights as a shared variable
W = theano.shared(rnn.Whh)
# input to hidden layer weights
W_in = theano.shared(rnn.Wxh)
# hidden to output layer weights
W_out = theano.shared(rnn.Who)
# bias 1 (hidden)
b_h = theano.shared(rnn.bhh[0])
# bias 2 (output)
b_o = theano.shared(rnn.bho[0])

# recurrent function (using tanh activation function) and linear output
# activation function
def step(u_t, h_tm1, W, W_in, W_out):
    h_t = TT.tanh(TT.dot(u_t, W_in) + TT.dot(h_tm1, W) + b_h)
    y_t = TT.dot(h_t, W_out) + b_o
    return h_t, y_t

# the hidden state `h` for the entire sequence, and the output for the
# entire sequence `y` (first dimension is always time)
[h, y], _ = theano.scan(step,
                        sequences=u,
                        outputs_info=[h0, None],
                        non_sequences=[W, W_in, W_out])
# error between output and target
error = (.5 * (y - t) ** 2).sum()
# gradients on the weights using BPTT
gW, gW_in, gW_out, gb_h, gb_o = TT.grad(error, [W, W_in, W_out, b_h, b_o])
# training function, that computes the error and updates the weights using
# SGD.
Now, here's the crazy thing. If I run the following:
fn = theano.function([h0, u, t, lr],
                     [error, y, h, gW, gW_in, gW_out, gb_h, gb_o],
                     updates={W: W - lr * gW,
                              W_in: W_in - lr * gW_in,
                              W_out: W_out - lr * gW_out})

er, yout, hout, gWhh, gWhx, gWho, gbh, gbo = fn(numpy.zeros((n,)), numpy.eye(5), numpy.eye(5), .01)

cache = rnn.forward(np.eye(5))
bc = rnn.backward(cache, np.eye(5))

print "sum difference between gWho (theano) and bc['dcdWho'] (pure python):"
print np.sum(gWho - bc['dcdWho'])
print "sum difference between gWhh (theano) and bc['dcdWhh'] (pure python):"
print np.sum(gWhh - bc['dcdWhh'])
print "sum difference between gWhx (theano) and bc['dcdWxh'] (pure python):"
print np.sum(gWhx - bc['dcdWxh'])
print "sum difference between the last row of gWhx (theano) and the last row of bc['dcdWxh'] (pure python):"
print np.sum(gWhx[-1] - bc['dcdWxh'][-1])
I get the following output:
sum difference between gWho (theano) and bc['dcdWho'] (pure python):
-4.59268040265e-16
sum difference between gWhh (theano) and bc['dcdWhh'] (pure python):
0.120527063611
sum difference between gWhx (theano) and bc['dcdWxh'] (pure python):
-0.332613468652
sum difference between the last row of gWhx (theano) and the last row of bc['dcdWxh'] (pure python):
4.33680868994e-18
So, I get the derivative of the weight matrix between the hidden layer and the output right, but not the derivatives of the weight matrices hidden -> hidden or input -> hidden. But the crazy thing is that I ALWAYS get the last row of the input -> hidden weight matrix gradient correct. This is insanity to me; I have no idea what's happening here. Note that the last row of the input -> hidden weight matrix does not correspond to the last timestep or anything like that (which would be explained, for example, by my computing the derivatives correctly at the last timestep but failing to propagate back through time correctly). dcdWxh is the sum of dcdWxh over all timesteps -- so how can I get one row of it correct and none of the others???
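(To be concrete about what I mean by the sum over all timesteps: the single dot product in backward() is equivalent to adding up one outer product per timestep. Toy shapes below, purely for illustration.)

import numpy as np
T_demo, in_dim, hid = 5, 5, 3
Xout_demo = np.random.randn(T_demo, in_dim)
dcdHin_demo = np.random.randn(T_demo, hid)
as_dot = np.dot(Xout_demo.T, dcdHin_demo)                                    # what backward() computes
as_sum = sum(np.outer(Xout_demo[t], dcdHin_demo[t]) for t in range(T_demo))  # explicit sum over timesteps
print np.allclose(as_dot, as_sum)   # True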
Can anyone help? I'm all out of ideas here.
Answer 0 (score: 0):
You should compute the sum of the pointwise absolute values of the difference of the two matrices. The plain sum can come out close to zero either way, simply due to the specific learning task (are you emulating the zero function? :).
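For example, with the variables from your own snippet, something along the lines of:

print np.sum(np.abs(gWhh - bc['dcdWhh']))   # total elementwise discrepancy
print np.max(np.abs(gWhh - bc['dcdWhh']))   # largest single discrepancy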
The last row probably implements the weights coming from a constant-1 input to the neurons, i.e. the bias, so you -- it seems -- always get the bias right (but do check the sum of absolute values).
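To illustrate that guess with made-up shapes (not taken from your code): if the input carries an extra constant-1 column, the gradient of the last row of the input-to-hidden matrix is just the hidden error summed over time, which is exactly the kind of quantity both implementations already agree on:

import numpy as np
X = np.hstack([np.eye(5), np.ones((5, 1))])        # inputs plus a constant-1 column
Wxh_demo = np.random.randn(6, 3)
Hin_demo = np.dot(X, Wxh_demo)                     # the last row of Wxh_demo contributes like a bias
dcdHin_demo = np.random.randn(5, 3)                # some upstream error at the hidden layer
dcdWxh_demo = np.dot(X.T, dcdHin_demo)             # gradient of the cost w.r.t. Wxh_demo
print np.allclose(dcdWxh_demo[-1], dcdHin_demo.sum(axis=0))   # True: the last-row gradient is the bias gradient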
It also looks like the row-major and column-major notations of the matrices are mixed up, as in
gWhx - bc['dcdWxh']
which reads like the weights from "hidden to x" compared against the opposite, "x to hidden".
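If that is what is happening, then (assuming the shapes happen to allow it, e.g. a square weight matrix in this toy setup) comparing against the transpose should make the mismatch shrink:

if gWhx.shape == bc['dcdWxh'].T.shape:
    print np.sum(np.abs(gWhx - bc['dcdWxh'].T))   # small if the two codes just disagree on orientation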
I would rather have posted this as a comment, but I lack the reputation for that. Sorry!