InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float

Asked: 2017-10-05 03:11:49

Tags: python tensorflow

Here is my code:

import numpy as np
import tensorflow as tf

input_dim=8
layer1_dim=6

learning_rate=0.01

train_data=np.loadtxt("data.txt",dtype=float)
train_target=train_data[:,-1]
train_feature=train_data[:,0:-1]
test_data=np.loadtxt("data.txt",dtype=float)
test_target=test_data[:,-1]
test_feature=test_data[:,0:-1]


x=tf.placeholder(tf.float32)
y=tf.placeholder(tf.float32)

w1=tf.Variable(tf.random_normal([input_dim,layer1_dim]))


b1=tf.Variable(tf.random_normal([1,layer1_dim]))


layer_1 = tf.nn.tanh(tf.add(tf.matmul(x, w1), b1))


loss=tf.reduce_mean(tf.square(layer_1-y))

train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

init = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(init)

    for i in range(10):
        print(session.run(train_op, feed_dict={x: train_feature, y: train_target}))
        print(layer_1)
        print(loss.eval())

Here is my error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float
 [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=<unknown>, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Process finished with exit code 1

The data is just an ordinary matrix, i.e. 6x8 features and a 6x1 target. The print of sess.run is None. If I don't print the loss, there is no error, but then sess.run still just prints None.
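For reference, the shapes described above can be checked with plain NumPy (a sketch using a hypothetical 6x9 matrix in place of data.txt):

```python
import numpy as np

# Hypothetical stand-in for np.loadtxt("data.txt"): 8 feature columns + 1 target column.
data = np.arange(54, dtype=np.float64).reshape(6, 9)

feature = data[:, 0:-1]  # first 8 columns
target = data[:, -1]     # last column; note this is 1-D with shape (6,), not (6, 1)

print(feature.shape)                           # (6, 8)
print(target.shape)                            # (6,)
print(np.expand_dims(target, axis=-1).shape)   # (6, 1)
```

Slicing with `[:, -1]` drops a dimension, so a "6x1 target" sliced this way is really shape (6,); np.expand_dims restores the column axis.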

1 Answer:

Answer 0 (score: 0)

You should double-check your inputs to make sure they are really what you want. The following snippet works:

import numpy as np
import tensorflow as tf

input_dim = 8
layer1_dim = 6
learning_rate = 0.01

train_data = np.random.randn(6, 9).astype(np.float32)
train_target = np.expand_dims(train_data[:, -1], axis=-1)
train_feature = train_data[:, 0:-1]

assert train_feature.dtype == np.float32
assert train_target.dtype == np.float32
assert train_feature.shape == (6, 8)
assert train_target.shape == (6, 1)


x = tf.placeholder(tf.float32, name='plhdr_X')
y = tf.placeholder(tf.float32, name='pldhr_Y')

w1 = tf.Variable(tf.random_normal([input_dim, layer1_dim]))
b1 = tf.Variable(tf.random_normal([1, layer1_dim]))

layer_1 = tf.nn.tanh(tf.add(tf.matmul(x, w1), b1))
loss = tf.reduce_mean(tf.square(layer_1 - y))

train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

init = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(init)
    for i in range(10):
        _, err = session.run([train_op, loss], feed_dict={
                             x: train_feature, y: train_target})
        print(err)

If you give each placeholder a name, the error message becomes much more informative. Note also that in your original code, loss.eval() runs the graph without a feed_dict; that is exactly what triggers the placeholder error. Fetching loss together with train_op in a single session.run call, as above, avoids it.
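As an aside, np.loadtxt(..., dtype=float) returns float64 arrays. TensorFlow will cast them when they are fed into a float32 placeholder, but an explicit conversion makes the dtypes line up with the assertions in the snippet above (a NumPy-only sketch; the ones array stands in for the loaded file):

```python
import numpy as np

# Stand-in for np.loadtxt("data.txt", dtype=float), which yields float64.
raw = np.ones((6, 9))
print(raw.dtype)  # float64

# Explicit cast so the fed arrays match the float32 placeholders.
converted = raw.astype(np.float32)
print(converted.dtype)  # float32
```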