You must feed a value for placeholder tensor

Time: 2018-10-10 10:52:29

Tags: python tensorflow lstm tensorboard recurrent-neural-network

I am trying to implement an LSTM NN with TensorBoard, and I get the following error message: You must feed a value for placeholder tensor 'performance_1/loss_summary'.

I have searched a lot for an answer to this question, without results.

with tf.name_scope('performance'):
    loss = tf.placeholder(tf.float32, shape=None, name='loss_summary')
    tf_loss_summary = tf.summary.scalar('loss', loss)
    tf_accuracy_ph = tf.placeholder(tf.float32, shape=None, name='accuracy_summary')
    tf_accuracy_summary = tf.summary.scalar('accuracy', tf_accuracy_ph)

# Gradient norm summary
for g in gradients:
    for var in v:
        if 'hidden3' in var.name and 'w' in var.name:
            with tf.name_scope('Gradients'):
                tf_last_grad_norm = tf.sqrt(tf.reduce_mean(g**2))
                tf_gradnorm_summary = tf.summary.scalar('grad_norm', tf_last_grad_norm)
            break

# Merge all summaries together
performance_summaries = tf.summary.merge([tf_loss_summary,tf_accuracy_summary])

The other part of the code where I get the error is:

for ep in range(epochs):

    for step in range(train_seq_length//batch_size):

        u_data, u_labels = data_gen.unroll_batches()

        feed_dict = {}
        for ui, (dat, lbl) in enumerate(zip(u_data, u_labels)):
            feed_dict[train_inputs[ui]] = dat.reshape(-1, 1)
            feed_dict[train_outputs[ui]] = lbl.reshape(-1, 1)

        feed_dict.update({tf_learning_rate: 0.0001, tf_min_learning_rate: 0.000001})

        _, l = session.run([optimizer, loss], feed_dict=feed_dict)

        average_loss += l

    if (ep+1) % valid_summary == 0:

        average_loss = average_loss/(valid_summary*(train_seq_length//batch_size))

        # The average loss
        if (ep+1) % valid_summary == 0:
            print('Average loss at step %d: %f' % (ep+1, average_loss))

        train_mse_ot.append(average_loss)

        average_loss = 0  # reset loss

        predictions_seq = []

        mse_test_loss_seq = []

Thank you.

2 answers:

Answer 0 (score: 0)

loss is a placeholder, so you have to feed it a value. You probably did not notice that it shadows your actual loss function. Normally, summaries are not placeholders, so there is a misunderstanding in your variables and code flow.
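To illustrate the fix this answer suggests, here is a minimal sketch (using TensorFlow 1.x-style graph mode via tf.compat.v1; the names x and loss_op are illustrative stand-ins, not from the question's code): keep the real loss as a graph op, give the summary placeholder a different Python name, and feed the computed loss value into it only when writing the summary.

```python
# Minimal sketch, assuming TF 1.x graph mode (tf.compat.v1).
# `x` and `loss_op` are hypothetical stand-ins for the model's inputs and loss.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# The real training loss is an op in the graph, NOT a placeholder.
x = tf.placeholder(tf.float32, shape=None, name='x')
loss_op = tf.reduce_mean(tf.square(x))  # stand-in for the model's loss

with tf.name_scope('performance'):
    # Distinct Python name, so it no longer shadows `loss_op`.
    tf_loss_ph = tf.placeholder(tf.float32, shape=None, name='loss_summary')
    tf_loss_summary = tf.summary.scalar('loss', tf_loss_ph)

with tf.Session() as sess:
    # 1) Run the real loss op with the model's feeds.
    l = sess.run(loss_op, feed_dict={x: [1.0, 2.0, 3.0]})
    # 2) Feed that Python float into the summary placeholder separately.
    summ = sess.run(tf_loss_summary, feed_dict={tf_loss_ph: l})
```

The key point is that the summary placeholder and the loss op are run in two separate session.run calls, each with the feeds it actually needs.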

Answer 1 (score: 0)

loss is initialized as a placeholder. When you define something as a placeholder, you must feed its value whenever you run a part of the graph that depends on it.
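A tiny standalone example of this rule (again assuming TF 1.x graph mode via tf.compat.v1): running an op that depends on a placeholder succeeds only when the placeholder is fed, and otherwise raises the same kind of error as in the question.

```python
# Sketch of the placeholder feeding rule, assuming TF 1.x graph mode.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

ph = tf.placeholder(tf.float32, shape=None, name='loss_summary')
doubled = 2.0 * ph  # this op depends on the placeholder

with tf.Session() as sess:
    ok = sess.run(doubled, feed_dict={ph: 3.0})  # works: value supplied
    try:
        sess.run(doubled)  # fails: no value for 'loss_summary'
        msg = ''
    except tf.errors.InvalidArgumentError as e:
        msg = str(e)  # "You must feed a value for placeholder tensor ..."
```

This is exactly what happens in the question: session.run([optimizer, loss], ...) asks for the placeholder named loss, but feed_dict never supplies it.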