Different loss values from the same data, the same initial state, and the same recurrent neural network

Date: 2018-08-13 02:58:23

Tags: python numpy neural-network lstm recurrent-neural-network

I am writing a recurrent neural network (specifically a ConvLSTM). I recently noticed an odd inconsistency that I cannot explain. I wrote the network from scratch with numpy (technically cupy, so it runs on the GPU), plus a few lines of Chainer (specifically for its F.convolution_2d function).
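For context, F.convolution_2d is the only Chainer call involved. A minimal sketch of how it is typically called on cupy arrays, with made-up shapes purely for illustration:

import cupy as cp
import chainer.functions as F

# input is (batch, in_channels, height, width); kernel is (out_channels, in_channels, kH, kW)
x = cp.ones((1, 3, 8, 8), dtype=cp.float32)
W = cp.ones((4, 3, 3, 3), dtype=cp.float32)

# same-padded convolution; the result is a chainer Variable whose .data is a cupy array
y = F.convolution_2d(x, W, b=None, pad=1)
print(y.shape)  # (1, 4, 8, 8)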

When I run the network twice, the losses are exactly identical for the first 4 or so training examples. Around the 5th training example, however, the loss values start to diverge between the two runs.

I have made sure that every run of this network reads from the same initial-state text file (so it starts from the same initial weights and biases). I have also made sure the data it is fed is exactly the same.

Is there some inconsistency in numpy that could be the source of this problem? The only thing I can think of that changes around the fourth training example is that gradient clipping kicks in for the first time. Is there an issue with numpy's linalg functions, or some rounding behaviour I am not aware of? I have scanned my code and there is no use of random numbers anywhere.
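One way to pin down where the two runs first diverge, rather than guessing, is to dump the arrays of interest (for example dLdW at every timestep) during the first run and compare them bit-for-bit during the second. A minimal sketch of such a check, assuming it is called from inside the training loop (the helper names and file naming are mine, purely illustrative):

import cupy as cp

def save_checkpoint(name, step, arr):
    # first run: write the raw array so a later run can be compared against it
    cp.save("run1_%s_%04d.npy" % (name, step), arr)

def compare_checkpoint(name, step, arr):
    # second run: load the saved array and report the first exact mismatch
    ref = cp.load("run1_%s_%04d.npy" % (name, step))
    if not cp.array_equal(ref, arr):
        print("divergence at %s, step %d, max abs diff %e" % (name, step, float(cp.max(cp.abs(ref - arr)))))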

I have included the backpropagation function below:

# imports implied by the question: cupy is the GPU drop-in for numpy,
# and Chainer is used only for F.convolution_2d
import cupy as cp
import chainer.functions as F

def bptt(x2, y2, iteration):
    x = cp.asarray(x2)
    y = cp.asarray(y2)

    global connected_weights
    global main_kernel
    global bias_i
    global bias_f
    global bias_c
    global bias_o
    global bias_y
    global learning_rate

    # Perform forward prop
    prediction, pre_sigmoid_prediction, hidden_prediction, i, f, a, c, o, h = forward_prop(x)
    loss = calculate_loss(prediction, y)
    print("LOSS BEFORE: ")
    print(loss)
    # Calculate loss with respect to final layer
    dLdy_2 = loss_derivative(prediction, y)
    # Calculate loss with respect to pre sigmoid layer
    dLdy_1 = cp.multiply(sigmoid_derivative(pre_sigmoid_prediction), dLdy_2)

    # Calculate loss with respect to last layer of lstm
    dLdh = cp.zeros([T + 1, channels_hidden, M, N])
    dLdh[T - 1] = cp.reshape(cp.matmul(cp.transpose(connected_weights), dLdy_1.reshape(1, M * N)), (channels_hidden, M, N)) # reshape dLdh to the appropriate size
    dLdw_0 = cp.matmul(dLdy_1.reshape(1, M*N), hidden_prediction.transpose(1,0))
    # Calculate loss with respect to bias y
    dLdb_y = dLdy_1

    #--------------------fully connected------------------
    bias_y = bias_y - learning_rate*dLdb_y
    connected_weights = connected_weights - learning_rate*dLdw_0

    # Initialize corresponding matrices
    dLdo = cp.zeros([T, channels_hidden, M, N])
    dLdc = cp.zeros([T + 1, channels_hidden, M, N])
    dLda = cp.zeros([T, channels_hidden, M, N])
    dLdf = cp.zeros([T, channels_hidden, M, N])
    dLdi = cp.zeros([T, channels_hidden, M, N])
    dLdI = cp.zeros([T, channels_hidden + channels_img, M, N])
    dLdW = cp.zeros([4*channels_hidden, channels_img + channels_hidden, kernel_dimension, kernel_dimension])

    # Initialize other stuff
    dLdo_hat = cp.zeros([T, channels_hidden, M, N])
    dLda_hat = cp.zeros([T, channels_hidden, M, N])
    dLdf_hat = cp.zeros([T, channels_hidden, M, N])
    dLdi_hat = cp.zeros([T, channels_hidden, M, N])

    # initialize biases
    dLdb_c = cp.empty([channels_hidden, M, N])
    dLdb_i = cp.empty([channels_hidden, M, N])
    dLdb_f = cp.empty([channels_hidden, M, N])
    dLdb_o = cp.empty([channels_hidden, M, N])

    for t in cp.arange(T - 1, -1, -1):
        dLdo[t] = cp.multiply(dLdh[t], tanh(c[t]))
        dLdc[t] += cp.multiply(cp.multiply(dLdh[t], o[t]), (cp.ones((channels_hidden, M, N)) - cp.multiply(tanh(c[t]), tanh(c[t]))))
        dLdi[t] = cp.multiply(dLdc[t], a[t])
        dLda[t] = cp.multiply(dLdc[t], i[t])
        dLdf[t] = cp.multiply(dLdc[t], c[t - 1])
        dLdc[t - 1] = cp.multiply(dLdc[t], f[t])

        dLda_hat[t] = cp.multiply(dLda[t], (cp.ones((channels_hidden, M, N)) - cp.multiply(a[t], a[t])))
        dLdi_hat[t] = cp.multiply(cp.multiply(dLdi[t], i[t]), cp.ones((channels_hidden, M, N)) - i[t])
        dLdf_hat[t] = cp.multiply(cp.multiply(dLdf[t], f[t]), cp.ones((channels_hidden, M, N)) - f[t])
        dLdo_hat[t] = cp.multiply(cp.multiply(dLdo[t], o[t]), cp.ones((channels_hidden, M, N)) - o[t])

        dLdb_c += dLda_hat[t]
        dLdb_i += dLdi_hat[t]
        dLdb_f += dLdf_hat[t]
        dLdb_o += dLdo_hat[t]

        # CONCATENATE Z IN THE RIGHT ORDER SAME ORDER AS THE WEIGHTS
        dLdz_hat = cp.concatenate((dLdi_hat[t], dLdf_hat[t], dLda_hat[t], dLdo_hat[t]), axis = 0)
        #determine convolution derivatives
        #here we will use the fact that in z = w * I, dLdW = dLdz * I
        temporary = cp.concatenate((x[t], h[t - 1]), axis=0).reshape(channels_hidden + channels_img, 1, M, N)
        dLdI[t] = cp.asarray(F.convolution_2d(dLdz_hat.reshape(1, 4*channels_hidden, M, N), main_kernel.transpose(1, 0, 2, 3), b=None, pad=1)[0].data) # reshape into flipped kernel dimensions
        dLdW += cp.asarray((F.convolution_2d(temporary, dLdz_hat.reshape(4*channels_hidden, 1, M, N), b=None, pad=1).data).transpose(1,0,2,3)) #reshape into kernel dimensions
        #gradient clipping
        if cp.amax(dLdW) > 1 or cp.amin(dLdW) < -1:
            dLdW = dLdW/cp.linalg.norm(dLdW)
        if cp.amax(dLdb_c) > 1 or cp.amin(dLdb_c) < -1:
            dLdb_c = dLdb_c/cp.linalg.norm(dLdb_c)
        if cp.amax(dLdb_i) > 1 or cp.amin(dLdb_i) < -1:
            dLdb_i = dLdb_i/cp.linalg.norm(dLdb_i)
        if cp.amax(dLdb_f) > 1 or cp.amin(dLdb_f) < -1:
            dLdb_f = dLdb_f/cp.linalg.norm(dLdb_f)
        if cp.amax(dLdb_o) > 1 or cp.amin(dLdb_o) < -1:
            dLdb_o = dLdb_o/cp.linalg.norm(dLdb_o)
        if cp.amax(dLdw_0) > 1 or cp.amin(dLdw_0) < -1:
            dLdw_0 = dLdw_0/cp.linalg.norm(dLdw_0)
        if cp.amax(dLdb_y) > 1 or cp.amin(dLdb_y) < -1:
            dLdb_y = dLdb_y/cp.linalg.norm(dLdb_y)

        print("dLdW on step: " + str(t) + " is this: " + str(dLdW[0][0][0][0]))
        #print("dLdw_0")
        #print("dLdW")
        #print(dLdW)
        #print(str(cp.amax(dLdw_0)) + " : " + str(cp.amin(dLdw_0)))
        #print("dLdW")
        #print(str(cp.amax(dLdW)) + " : " + str(cp.amin(dLdW)))
        #print("dLdb_c")
        #print(str(cp.amax(dLdb_c)) + " : " + str(cp.amin(dLdb_c)))

        dLdh[t-1] = dLdI[t][channels_img: channels_img+channels_hidden]
        #.reshape(4*channels_hidden, channels_hidden+channels_img, kernel_dimension, kernel_dimension)
        #update weights with convolution derivatives

    #----------------------------adam optimizer code-----------------------------------
    #---------------------update main kernel---------
    main_kernel = main_kernel - learning_rate*dLdW
    #--------------------update bias c-----------------------
    bias_c = bias_c - learning_rate*dLdb_c
    #--------------------update bias i-----------------------
    bias_i = bias_i - learning_rate*dLdb_i
    #--------------------update bias f-----------------------
    bias_f = bias_f - learning_rate*dLdb_f
    #--------------------update bias o-----------------------
    bias_o = bias_o - learning_rate*dLdb_o

    prediction2, pre_sigmoid_prediction2, hidden_prediction2, i2, f2, a2, c2, o2, h2 = forward_prop(x)

    print("dLdW is: " + str(dLdW[0][0][0][0]))
    loss2 = calculate_loss(prediction2, y)
    print("LOSS AFTER: ")
    print(loss2)

    print("backpropagation complete")

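For reference, the gradient clipping inside the loop is the same norm-rescaling pattern repeated once per parameter: whenever any element of a gradient leaves [-1, 1], the whole array is divided by its L2 norm. A minimal standalone sketch of that pattern (the helper name clip_by_norm is mine, not part of the original code):

import cupy as cp

def clip_by_norm(grad):
    # rescale the whole gradient by its L2 norm if any element falls outside [-1, 1]
    if cp.amax(grad) > 1 or cp.amin(grad) < -1:
        return grad / cp.linalg.norm(grad)
    return grad

# e.g. dLdW = clip_by_norm(dLdW)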
1 Answer:

Answer 0 (score: 1)

Wow, this took a while to find.

If you look at the backpropagation code, look closely at the following lines:

dLdb_c = cp.empty([channels_hidden, M, N])
dLdb_i = cp.empty([channels_hidden, M, N])
dLdb_f = cp.empty([channels_hidden, M, N])
dLdb_o = cp.empty([channels_hidden, M, N])

Notice, however, that the code goes on to use the += operator on these empty arrays. Simply changing cp.empty to cp.zeros makes the code produce consistent losses.
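To see why this matters: cp.empty, like numpy.empty, only allocates a buffer and does not initialize it, so whatever bytes happen to be in that memory become the starting values that += accumulates onto, and those leftover values can differ from run to run. A minimal sketch of the difference (shapes are made up):

import cupy as cp

# cp.empty returns uninitialized memory: the accumulated result depends on
# whatever garbage was already in the buffer, which can vary between runs
dLdb = cp.empty((4, 8, 8))
dLdb += 1.0

# cp.zeros guarantees a well-defined starting point, so the same inputs
# always produce the same accumulated gradient
dLdb = cp.zeros((4, 8, 8))
dLdb += 1.0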