TensorFlow: placeholder error when using tf.merge_all_summaries()

Asked: 2016-02-15 15:52:35

Tags: python neural-network tensorflow

I am getting a placeholder error.

I don't know what it means, because I am mapping the placeholders correctly in sess.run(..., {_y: y, _X: X})... I provide a fully working MWE here to reproduce the error:

import tensorflow as tf
import numpy as np
from sklearn.metrics import accuracy_score  # used at the bottom of the script

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

class NeuralNet:
    def __init__(self, hidden):
        self.hidden = hidden

    def __del__(self):
        self.sess.close()

    def fit(self, X, y):
        _X = tf.placeholder('float', [None, None])
        _y = tf.placeholder('float', [None, 1])

        w0 = init_weights([X.shape[1], self.hidden])
        b0 = tf.Variable(tf.zeros([self.hidden]))
        w1 = init_weights([self.hidden, 1])
        b1 = tf.Variable(tf.zeros([1]))

        self.sess = tf.Session()
        self.sess.run(tf.initialize_all_variables())

        h = tf.nn.sigmoid(tf.matmul(_X, w0) + b0)
        self.yp = tf.nn.sigmoid(tf.matmul(h, w1) + b1)

        C = tf.reduce_mean(tf.square(self.yp - y))
        o = tf.train.GradientDescentOptimizer(0.5).minimize(C)

        correct = tf.equal(tf.argmax(_y, 1), tf.argmax(self.yp, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, "float"))
        tf.scalar_summary("accuracy", accuracy)
        tf.scalar_summary("loss", C)

        merged = tf.merge_all_summaries()
        import shutil
        shutil.rmtree('logs')
        writer = tf.train.SummaryWriter('logs', self.sess.graph_def)

        for i in xrange(1000+1):
            if i % 100 == 0:
                res = self.sess.run([o, merged], feed_dict={_X: X, _y: y})
            else:
                self.sess.run(o, feed_dict={_X: X, _y: y})
        return self

    def predict(self, X):
        yp = self.sess.run(self.yp, feed_dict={_X: X})
        return (yp >= 0.5).astype(int)


X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1]])
y = np.array([[0],[1],[1],[0]])

m = NeuralNet(10)
m.fit(X, y)
yp = m.predict(X)[:, 0]
print accuracy_score(y, yp)

Error:

I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 8
I tensorflow/core/common_runtime/direct_session.cc:58] Direct session inter op parallelism threads: 8
0.847222222222
W tensorflow/core/common_runtime/executor.cc:1076] 0x2340f40 Compute status: Invalid argument: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float
     [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
W tensorflow/core/common_runtime/executor.cc:1076] 0x2340f40 Compute status: Invalid argument: You must feed a value for placeholder tensor 'Placeholder' with dtype float
     [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Traceback (most recent call last):
  File "neuralnet.py", line 64, in <module>
    m.fit(X[tr], y[tr, np.newaxis])
  File "neuralnet.py", line 44, in fit
    res = self.sess.run([o, merged], feed_dict={self._X: X, _y: y})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 368, in run
    results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 444, in _do_run
    e.code)
tensorflow.python.framework.errors.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float
     [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'Placeholder_1', defined at:
  File "neuralnet.py", line 64, in <module>
    m.fit(X[tr], y[tr, np.newaxis])
  File "neuralnet.py", line 16, in fit
    _y = tf.placeholder('float', [None, 1])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 673, in placeholder
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 463, in _placeholder
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 664, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1834, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1043, in __init__
    self._traceback = _extract_stack()

If I remove tf.merge_all_summaries(), or remove merged from self.sess.run([o, merged], ...), then it runs fine.

This looks similar to this post: Error when computing summaries in TensorFlow. However, I am not using iPython...

1 Answer:

Answer 0 (score: 18):

The tf.merge_all_summaries() function is convenient, but also somewhat dangerous: it merges all of the summaries in the default graph, which includes any summaries added by previous, apparently unconnected, invocations of code that also added summary nodes to the default graph. If those old summary nodes depend on old placeholders, you will get errors like the one you showed in your question (and like previous questions as well).
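For example, here is a minimal sketch of how this can happen with the NeuralNet class from the question (a hypothetical illustration, e.g. calling fit() twice as in a cross-validation loop; this is not code from the original post):

m1 = NeuralNet(10)
m1.fit(X, y)   # adds Placeholder/Placeholder_1 and two summary nodes to the default graph

m2 = NeuralNet(10)
m2.fit(X, y)   # tf.merge_all_summaries() here also merges the first call's
               # summaries, whose placeholders are not in this call's feed_dict -> error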

There are two independent workarounds for this:

  1. Make sure that you explicitly collect the summaries that you want to compute. In your example, this is as simple as using an explicit tf.merge_summary() op:

    accuracy_summary = tf.scalar_summary("accuracy", accuracy)
    loss_summary = tf.scalar_summary("loss", C)
    
    merged = tf.merge_summary([accuracy_summary, loss_summary])
    
  2. Make sure that each time you create a new set of summaries, you do so in a new graph. The recommended style is to use an explicit default graph:

    with tf.Graph().as_default():
      # Build model and create session in this scope.
      #
      # Only summary nodes created in this scope will be returned by a call to
      # `tf.merge_all_summaries()`
    

    Alternatively, if you are using the latest open-source version of TensorFlow (or the upcoming 0.7.0 release), you can call tf.reset_default_graph() to reset the graph's state and remove any stale summary nodes, for example:
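
    A minimal sketch of that alternative, assuming a TensorFlow version that provides tf.reset_default_graph() and reusing the hypothetical two-call scenario from above:

    tf.reset_default_graph()   # discard the stale placeholders and summary nodes
    m = NeuralNet(10)
    m.fit(X, y)                # tf.merge_all_summaries() now only sees this call's summaries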