TensorFlow: different loss values

Date: 2018-01-16 21:47:13

Tags: python tensorflow rnn

I am working through this RNN tutorial to learn how to write an RNN with the lower-level TensorFlow API. Although everything runs, I get different values for total_loss depending on how I evaluate it within the session.

What is the difference in how the losses below are computed? Why does running the train step together with the other nodes in the graph (i.e., in the same run statement) produce different loss values than running the train step and the other nodes separately (i.e., in different run statements)?
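To make the comparison concrete, this is roughly the difference between the two patterns (just a sketch; feed stands for the feed_dict used in the full sessions below):

# pattern 1: evaluate the loss and the train step in separate run calls
loss_before = sess.run(total_loss, feed_dict = feed)
sess.run(train_step, feed_dict = feed)

# pattern 2: evaluate the loss and the train step in a single run call
loss_with_step, _ = sess.run([total_loss, train_step], feed_dict = feed)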

Here is the graph:

# imports added for completeness; batch_size, num_steps, state_size,
# num_classes and learning_rate are assumed to be defined as in the tutorial
import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.int32, [batch_size, num_steps], name = 'X')
Y = tf.placeholder(tf.int32, [batch_size, num_steps], name = 'Y')
initial_state = tf.zeros([batch_size, state_size])

X_one_hot = tf.one_hot(X, num_classes)
rnn_inputs = tf.unstack(X_one_hot, axis = 1)

Y_one_hot = tf.one_hot(Y, num_classes)
Y_one_hot_list = tf.unstack(Y_one_hot, axis = 1)

with tf.variable_scope('RNN_cell'):
    W = tf.get_variable('W', [num_classes + state_size, state_size])
    b = tf.get_variable('b', [state_size], initializer = tf.constant_initializer(0.0))

tf.summary.histogram('RNN_cell/weights', W)

# define the RNN cell
def RNNCell(rnn_input, state, activation = tf.tanh):
    with tf.variable_scope('RNN_cell', reuse = True):
        W = tf.get_variable('W', [num_classes + state_size, state_size])
        b = tf.get_variable('b', [state_size], initializer = tf.constant_initializer(0))
        H = activation(tf.matmul(tf.concat([rnn_input, state], axis = 1), W) + b)
    return H

# add RNN cells to the computational graph
state = initial_state
rnn_outputs = []
for rnn_input in rnn_inputs:
    state = RNNCell(rnn_input, state, tf.tanh)
    rnn_outputs.append(state)
final_state = rnn_outputs[-1]

# set up the softmax output layer
with tf.variable_scope('softmax_output'):
    W = tf.get_variable('W', [state_size, num_classes])
    b = tf.get_variable('b', [num_classes], initializer = tf.constant_initializer(0.0))

tf.summary.histogram('softmax_output/weights', W)

logits = [tf.matmul(rnn_output, W) + b for rnn_output in rnn_outputs]
probabilities = [tf.nn.softmax(logit) for logit in logits]
predictions = [tf.argmax(logit, 1) for logit in logits]

# set up loss function
losses = [tf.nn.softmax_cross_entropy_with_logits(labels = label, logits = logit) for 
         logit, label in zip(logits, Y_one_hot_list)]
total_loss = tf.reduce_mean(losses)

# set up the optimizer
train_step = tf.train.AdamOptimizer(learning_rate).minimize(total_loss)

tf.summary.scalar('loss', total_loss)

This version of the session evaluates the training loss, takes the train step, and then evaluates the loss again.

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_writer = tf.summary.FileWriter( './RNN_Tutorial/temp1', sess.graph)
    summary = tf.summary.merge_all()

    for index, epoch in enumerate(gen_epochs(num_epochs, num_steps)):
        training_state = np.zeros((batch_size, state_size))
        for step, (x, y) in enumerate(epoch):
            # fetch the merged summary here too, so summary_str is defined for add_summary below
            training_loss1, summary_str = sess.run([total_loss, summary], feed_dict = {X: x, Y: y, initial_state: training_state})
            sess.run(train_step, feed_dict = {X: x, Y: y, initial_state: training_state})
            training_loss2 = sess.run(total_loss, feed_dict = {X: x, Y: y, initial_state: training_state})

            if step % 1 == 0:
                train_writer.add_summary(summary_str, global_step = step)
                print(step, training_loss1, training_loss2)

The output makes it look like the model isn't really learning. Here is (part of) the output, which doesn't really change over all 1000 iterations; it just sticks around 0.65-0.7:

0 0.6757775 0.66556937
1 0.6581067 0.6867344
2 0.70850086 0.66878074
3 0.67115635 0.68184483
4 0.67868954 0.6858209
5 0.6853568 0.66989964
6 0.672376 0.6554015
7 0.66563135 0.6655373
8 0.660332 0.6666234
9 0.6514224 0.6536864
10 0.65912485 0.6518013

And here is the session where I run total_loss, losses and final_state together with train_step:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_writer = tf.summary.FileWriter( './RNN_Tutorial/temp1', sess.graph)
    summary = tf.summary.merge_all()

    for index, epoch in enumerate(gen_epochs(num_epochs, num_steps)):
        training_state = np.zeros((batch_size, state_size))
        for step, (x, y) in enumerate(epoch):
            training_loss1 = sess.run(total_loss, feed_dict = {X: x, Y: y, initial_state: training_state})
            tr_losses, training_loss_, training_state, _, summary_str = \
            sess.run([losses,
                      total_loss,
                      final_state,
                      train_step,
                      summary], feed_dict={X:x, Y:y, initial_state:training_state})
            training_loss2 = sess.run(total_loss, feed_dict = {X: x, Y: y, initial_state: training_state})

            if step % 1 == 0:
                train_writer.add_summary(summary_str, global_step = step)
                print(step, training_loss1, training_loss_, training_loss2)

In this output, however, the total_loss computed before the train step and the total loss computed together with the train step decrease steadily and then level off around 0.53, while the loss computed after the train step (training_loss2) still fluctuates around 0.65-0.7, just like in the first session. Here is another partial output:

900 0.50464576 0.50464576 0.6973026
901 0.51603603 0.51603603 0.7115394
902 0.5465342 0.5465342 0.74994177
903 0.50591564 0.50591564 0.69172275
904 0.54837495 0.54837495 0.7333309
905 0.51697487 0.51697487 0.674438
906 0.5259896 0.5259896 0.70118546
907 0.5242365 0.5242365 0.71549624
908 0.50699174 0.50699174 0.7007787
909 0.5292892 0.5292892 0.7045353
910 0.49432433 0.49432433 0.73515224

I thought the training loss would be the same for both versions of the session block. Why does using sess.run(total_loss, ...) and sess.run(train_step, ...) separately (i.e., in the first version) result in different loss values than using sess.run([losses, total_loss, final_state, train_step], ...)?

1 Answer:

Answer 0 (score: 0)

Figured it out. The problem was running the session in the second for loop without fetching and updating training_state = final_state. Without that, the model cannot learn the longer-range dependencies built into the generated data.
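
For reference, here is a minimal sketch of the corrected inner loop (same placeholders and ops as above, with training_state initialized to np.zeros((batch_size, state_size)) at the start of each epoch as before; the only change is fetching final_state and feeding it back in as initial_state on the next step):

        for step, (x, y) in enumerate(epoch):
            # run the train step and fetch final_state in the same call,
            # then carry the state over to the next batch via the feed_dict
            training_loss_, training_state, _, summary_str = sess.run(
                [total_loss, final_state, train_step, summary],
                feed_dict = {X: x, Y: y, initial_state: training_state})

            if step % 1 == 0:
                train_writer.add_summary(summary_str, global_step = step)
                print(step, training_loss_)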