Explaining variable reuse in an LSTM network in TensorFlow

Asked: 2017-09-25 13:03:51

Tags: python-3.x tensorflow lstm

I have written code to build a long short-term memory (LSTM) network in TensorFlow. After making many changes and reading some comments on this site, the code now runs. This is the important part of the code:

tf.reset_default_graph()
with tf.variable_scope("conv1"):
    cell = tf.contrib.rnn.LSTMCell(n_hidden_units, forget_bias=0.0,
                                   state_is_tuple=True,
                                   reuse=tf.get_variable_scope().reuse)
    outputs, final_state = tf.nn.dynamic_rnn(cell, X_in, initial_state=init_state,
                                             time_major=False, scope="conv1")
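For context, `reuse=tf.get_variable_scope().reuse` only has an effect when the same variable scope is entered more than once: on the first pass it evaluates to `False` and the variables are created; on later passes with reuse enabled, the same variables are shared. A minimal sketch of that pattern, assuming TensorFlow 1.x with `tf.contrib` (the `build_lstm` helper, the placeholders, and the sizes are hypothetical, for illustration only):

import tensorflow as tf

n_steps, n_inputs, n_hidden = 10, 147, 128  # assumed example sizes

def build_lstm(X_in):
    # Picks up the reuse flag of the enclosing scope: False on the first
    # call (variables are created), True on later calls (variables shared).
    cell = tf.contrib.rnn.LSTMCell(n_hidden, forget_bias=0.0,
                                   state_is_tuple=True,
                                   reuse=tf.get_variable_scope().reuse)
    init_state = cell.zero_state(tf.shape(X_in)[0], dtype=tf.float32)
    return tf.nn.dynamic_rnn(cell, X_in, initial_state=init_state,
                             time_major=False, scope="conv1")

x1 = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
x2 = tf.placeholder(tf.float32, [None, n_steps, n_inputs])

with tf.variable_scope("model"):              # first build: variables created
    out1, _ = build_lstm(x1)
with tf.variable_scope("model", reuse=True):  # second build: variables shared
    out2, _ = build_lstm(x2)

If the graph is built only once after `tf.reset_default_graph()`, as in the code above, the flag simply evaluates to `False` and the variables are created once.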

The loss curves always look like this: [image: training vs. validation loss]

At some point the training loss keeps decreasing while the validation loss increases, which I think should not happen. Is the variable reuse in my code correct? And do you know what is wrong with the loss curves?

Thanks in advance.

EDIT:

I think I should post my full code so it is easier to understand, since I have not found a solution:

tf.reset_default_graph()

x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_classes])

weights = {
    # (147, 128)
    'in': tf.get_variable('W_in', shape=[n_inputs, n_hidden_units],
                          initializer=tf.truncated_normal_initializer(stddev=0.5)),
    # (128, 5)
    'out': tf.get_variable('W_out', shape=[n_hidden_units, n_classes],
                           initializer=tf.truncated_normal_initializer(stddev=0.5))}
biases = {
    # (128, )
    'in': tf.get_variable('b_in', shape=[n_hidden_units, ],
                          initializer=tf.random_normal_initializer(stddev=0.5)),
    # (5, )
    'out': tf.get_variable('b_out', shape=[n_classes, ],
                           initializer=tf.random_normal_initializer(stddev=0.5))}

def lstm_cell():
    return tf.contrib.rnn.BasicLSTMCell(n_hidden_units, forget_bias=0.0,
                                        state_is_tuple=True,
                                        reuse=tf.get_variable_scope().reuse)

def RNN(X, weights, biases):
    # project (batch * n_steps, n_inputs) -> (batch, n_steps, n_hidden_units)
    X = tf.reshape(X, [-1, n_inputs])
    X_in = tf.nn.tanh(tf.matmul(X, weights['in']) + biases['in'])
    X_in = tf.nn.dropout(X_in, keep_prob=0.5)
    X_in = tf.reshape(X_in, [-1, n_steps, n_hidden_units])
    with tf.variable_scope("conv1"):
        cell = lstm_cell()
        cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.5)
        init_state = cell.zero_state(tf.shape(X_in)[0], dtype=tf.float32)
        outputs, final_state = tf.nn.dynamic_rnn(cell, X_in, initial_state=init_state,
                                                 time_major=False, scope="conv1")
    # classify from the output of the last time step
    outputs = tf.unstack(tf.transpose(outputs, [1, 0, 2]))
    results = tf.matmul(outputs[-1], weights['out']) + biases['out']

    return results

pred = RNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_op = tf.train.AdamOptimizer(0.0025).minimize(cost)
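For reference, a minimal training-loop sketch for the graph above (not part of the original question; the numpy arrays `train_X`, `train_y`, `valid_X`, `valid_y` are assumed to exist and to match the placeholder shapes):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(50):
        # one gradient step on the training data
        _, train_loss = sess.run([train_op, cost],
                                 feed_dict={x: train_X, y: train_y})
        # evaluate, but do not train on, the validation data; note that
        # keep_prob is hardcoded to 0.5 above, so dropout is also active here
        valid_loss = sess.run(cost, feed_dict={x: valid_X, y: valid_y})
        print(epoch, train_loss, valid_loss)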

1 Answer:

Answer 0 (score: 0)

Generally, if your validation loss starts to increase, it means your network is overfitting the training data. You can reduce this with regularization, dropout, or other methods. You should also try reducing the total number of neurons/layers in your network.
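As one concrete instance of the regularization suggestion (a sketch added here, not part of the original answer; `l2_scale` is an assumed hyperparameter, not a recommended value), an L2 penalty on the projection weights could be added to the cost defined in the question:

l2_scale = 1e-4  # assumed penalty strength; tune on validation data
l2_loss = l2_scale * (tf.nn.l2_loss(weights['in']) + tf.nn.l2_loss(weights['out']))
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y)) + l2_loss
train_op = tf.train.AdamOptimizer(0.0025).minimize(cost)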