Cannot run prediction due to trouble with tf.placeholder

时间:2017-07-15 12:57:24

标签: tensorflow neural-network perceptron

Apologies, I am new to TensorFlow. I am working on a simple onelayer_perceptron script that just takes the init parameters and trains a neural network using TensorFlow:

My compiler complains:

You must feed a value for placeholder tensor 'input' with dtype float

The error points to this line:

input_tensor = tf.placeholder(tf.float32, [None, n_input], name="input")

Please take a look at what I have done so far:

1) I initialize my input values

n_input = 10  # Number of input neurons
n_hidden_1 = 10  # Number of neurons in the hidden layer
n_classes = 3  # Number of output classes

weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

2) Initialize the placeholders:

input_tensor = tf.placeholder(tf.float32, [None, n_input], name="input")
output_tensor = tf.placeholder(tf.float32, [None, n_classes], name="output")

3) Train the NN

# Construct model
prediction = onelayer_perceptron(input_tensor, weights, biases)

init = tf.global_variables_initializer() 

4) This is my onelayer_perceptron function, which just does the typical NN computation: matmul of the layer input and the weights, add the bias, and apply a sigmoid activation

def onelayer_perceptron(input_tensor, weights, biases):
    layer_1_multiplication = tf.matmul(input_tensor, weights['h1'])
    layer_1_addition = tf.add(layer_1_multiplication, biases['b1'])
    layer_1_activation = tf.nn.sigmoid(layer_1_addition)

    out_layer_multiplication = tf.matmul(layer_1_activation, weights['out'])
    out_layer_addition = out_layer_multiplication + biases['out']

    return out_layer_addition

5) Run my script

with tf.Session() as sess:
   sess.run(init)

   i = sess.run(input_tensor)
   print(i)

1 Answer:

Answer 0 (score: 1)

You are not feeding your input to the placeholder; you do that with feed_dict.

You should do something like:

out = session.run(tensor(s)_you_want_to_evaluate, feed_dict={input_tensor: input of shape [batch_size, n_input], output_tensor: output of shape [batch_size, n_classes]})
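
For illustration, here is a minimal runnable sketch of that idea using dummy NumPy data (the batch size of 4 and the random inputs are assumptions, not part of the original code); since prediction does not depend on output_tensor, only input_tensor needs to be fed to evaluate it:

import numpy as np

with tf.Session() as sess:
    sess.run(init)

    # Dummy batch of 4 samples, assumed here purely for illustration
    x_batch = np.random.rand(4, n_input).astype(np.float32)

    # Feed the placeholder through feed_dict and evaluate the prediction op
    pred = sess.run(prediction, feed_dict={input_tensor: x_batch})
    print(pred)  # shape: (4, n_classes)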