tensorflow InvalidArgumentError: "You must feed a value for placeholder tensor"

Time: 2017-09-15 20:20:17

Tags: machine-learning tensorflow deep-learning

Here is a simple piece of TensorFlow code that creates two models with shared parameters but different inputs (placeholders).

import tensorflow as tf
import numpy as np


class Test:
    def __init__(self):
        self.x = tf.placeholder(tf.float32, [None] + [64], name='states')

        self.y = tf.placeholder(tf.float32, [None] + [64],
                                name='y')
        self.x_test = tf.placeholder(tf.float32, [None] + [64],
                                     name='states_test')

        self.is_training = tf.placeholder(tf.bool, name='is_training')

        self.model()

    def network(self, x, reuse):
        with tf.variable_scope('test_network', reuse=reuse):
            h1 = tf.layers.dense(x, 64)
            bn1 = tf.layers.batch_normalization(h1, training=self.is_training)
            drp1 = tf.layers.dropout(tf.nn.relu(bn1), rate=.9, training=self.is_training,
                                     name='dropout')
            h2 = tf.layers.dense(drp1, 64)
            bn2 = tf.layers.batch_normalization(h2, training=self.is_training)
            out = tf.layers.dropout(tf.nn.relu(bn2), rate=.9, training=self.is_training,
                                    name='dropout')
            return out

    def model(self):
        self.out = self.network(self.x, False)
        self.out_test = self.network(self.x_test, True)

        self.loss = tf.losses.mean_squared_error(self.out, self.y)
        extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(extra_update_ops):
            self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)


def main(_):
    my_test = Test()
    sess = tf.Session()
    init = tf.global_variables_initializer()
    sess.run(init)

    batch_x = np.zeros((4, 64))
    batch_y = np.zeros((4, 64))
    for i in range(10):
        feed_dict = {my_test.x: batch_x, my_test.y: batch_y, my_test.is_training: True}
        _, loss = sess.run([my_test.train_step, my_test.loss], feed_dict)

if __name__ == '__main__':
    tf.app.run()

When I run the "train_step" node, I get this error:

    InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'states_test' with dtype float and shape [?,64]
         [[Node: states_test = Placeholder[dtype=DT_FLOAT, shape=[?,64], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
         [[Node: mean_squared_error/value/_77 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_2678_mean_squared_error/value", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

But the train_step node is not connected to the "states_test" placeholder, and it doesn't need it in order to run! So why do I have to feed it?

However, if I change the model function so that the second network is created after the optimizer, the code runs without any error, like this:

def model(self):
    self.out = self.network(self.x, False)

    self.loss = tf.losses.mean_squared_error(self.out, self.y)
    extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(extra_update_ops):
        self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)

    self.out_test = self.network(self.x_test, True)

Why does this happen, even though both versions of the code result in the same TensorFlow graph? Can anyone explain this behavior?

1 Answer:

Answer 0 (score: 2)

The problem is with the use of batch norm, i.e. these lines:

extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)

Note that you have two graphs that share variables: your train graph and your test graph. You create both of them first, and only then create the optimizer. However, you place a control dependency on extra_update_ops, which is the collection of all update ops. The problem is that each batch norm layer creates its own update ops (to keep track of the moving means/variances), so there is one set in your train graph and another in your test graph. By requesting this control dependency, you are telling TF that your train op can execute if and only if the batch norm statistics in both the train graph and the test graph are updated, and updating the latter requires feeding test samples.

So what should you do? Either change extra_update_ops so that it includes only the train-graph updates (by name scoping, manual filtering, or any other method), or call tf.get_collection before constructing the test graph, like this:

def model(self):
    self.out = self.network(self.x, False)
    # Note that at this point we only gather the train graph's batch norm updates
    extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

    self.out_test = self.network(self.x_test, True)

    self.loss = tf.losses.mean_squared_error(self.out, self.y)
    with tf.control_dependencies(extra_update_ops):
        self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)
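
Alternatively, here is a minimal sketch of the manual-filtering approach mentioned above. The 'test_network_1' prefix is an assumption: TF1 typically uniquifies the name scope of the second, reused call to network(), so print the op names first to verify what your graph actually uses:

all_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
for op in all_update_ops:
    print(op.name)  # inspect which graph each update op belongs to

# Keep only the train graph's updates; the test graph's update ops are
# assumed to live under the uniquified 'test_network_1' name scope.
train_update_ops = [op for op in all_update_ops
                    if not op.name.startswith('test_network_1')]
with tf.control_dependencies(train_update_ops):
    self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)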

You will probably also want to pass reuse=True to your batch norms.
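
For instance, inside network() each batch norm call could be given an explicit name along with the scope's reuse flag. A minimal sketch, assuming the layer name 'bn1' (which is not in the original code); tf.layers.batch_normalization does accept a reuse argument:

# Sketch: share batch norm variables between the train and test graphs.
# name='bn1' is an added assumption; reuse comes from network(self, x, reuse).
bn1 = tf.layers.batch_normalization(h1, training=self.is_training,
                                    name='bn1', reuse=reuse)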