How to fix "ValueError: Cannot feed value of shape (5941,) for Tensor 'Placeholder_1:0', which has shape '(?, 1)'"

Time: 2018-12-20 14:50:05

Tags: python tensorflow

I am training a linear regression model on the inputs X_train and y_train, and will later evaluate it with X_val, y_val and X_test, y_test. Right now, though, TensorFlow raises an error and I cannot feed my X_train, which has shape (5941, 2). That should be compatible with my X placeholder, since its shape is [None, 2]. I don't know what is going wrong.
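Note that the error message names Tensor 'Placeholder_1:0' with shape '(?, 1)', which is the Y placeholder in the code below, not X, and that the offending value has shape (5941,), i.e. it is 1-D. A minimal sketch of one likely fix, assuming y_train really is a 1-D NumPy array as the error suggests (y_train is the name from the question):

import numpy as np

# The Y placeholder is declared as [None, 1] (rank 2), but a plain 1-D
# target array has shape (5941,). Reshaping it into a column vector
# makes the shapes compatible before it is fed to the session.
y_train = np.asarray(y_train, dtype=np.float32).reshape(-1, 1)
print(y_train.shape)  # -> (5941, 1)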

import numpy as np
import tensorflow as tf

# Assign placeholders
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 2])
Y = tf.placeholder(tf.float32, [None, 1])
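# Note: both placeholders are rank-2, so any fed array must match in rank
# as well as in its trailing dimension; feeding a 1-D array of shape
# (5941,) to Y raises exactly the ValueError quoted in the title.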

# Set parameters for layers
num_in = 2
num_hidden = 10
num_out = 1
keep_prob = tf.placeholder_with_default(0.7, shape=())  # dropout keep probability

# Layer 1
W1 = tf.get_variable("weight1", shape=[num_in, num_hidden], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b1 = tf.get_variable("bias1", shape=[num_hidden], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
_L1 = tf.nn.leaky_relu(tf.matmul(X, W1) + b1)
L1 = tf.nn.dropout(_L1, keep_prob)

# Layer 2
W2 = tf.get_variable("weight2", shape=[num_hidden, num_hidden], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b2 = tf.get_variable("bias2", shape=[num_hidden], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
_L2 = tf.nn.leaky_relu(tf.matmul(L1, W2) + b2)
L2 = tf.nn.dropout(_L2, keep_prob)

# Layer 3
W3 = tf.get_variable("weight3", shape=[num_hidden, num_hidden], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b3 = tf.get_variable("bias3", shape=[num_hidden], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
_L3 = tf.nn.leaky_relu(tf.matmul(L2, W3) + b3)
L3 = tf.nn.dropout(_L3, keep_prob)

# Layer 4
W4 = tf.get_variable("weight4", shape=[num_hidden, num_out], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b4 = tf.get_variable("bias4", shape=[num_out], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
hypothesis = tf.add(tf.matmul(L3, W4), b4)
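# hypothesis has shape (batch, 1): the matmul of L3 (batch, num_hidden)
# with W4 (num_hidden, 1), plus the bias; it lines up with Y's [None, 1].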

# Hyperparameters (keep_prob is already defined above, before the dropout
# ops that use it; redefining it after the graph is built has no effect)
learning_rate = 0.0001
epochs = 10000

# Cost function, optimizer and cost history list for visualization
cost = tf.reduce_mean(tf.square(hypothesis - Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(cost)
cost_history = np.empty(shape=[0], dtype=float)  # starts empty; np.empty([1]) would prepend one garbage value

# Initializer
init = tf.global_variables_initializer()

# Close the existing session
if 'session' in locals() and session is not None:
    print('Close interactive session')
    session.close()

# Launch the Graph
with tf.Session() as sess:
    # Initialize TensorFlow variables
    sess.run(init)
    # Starts the optimization
    print()
    print("Starts learning...")
    print()
    for step in range(epochs + 1):
        sess.run(train, feed_dict={X: X_train, Y: y_train})
        step_cost = sess.run(cost, feed_dict={X: X_train, Y: y_train})
        cost_history = np.append(cost_history, step_cost)
        if step % 100 == 0:
            print("step", step, "cost", step_cost)
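
For completeness, a hedged sketch of what a later validation step could look like, placed inside the same session block once the targets are column vectors (X_val and y_val are the names from the question; keep_prob is the placeholder_with_default defined above, so feeding 1.0 overrides its 0.7 default and disables dropout at evaluation time):

    # Hypothetical validation pass; dropout is turned off while the
    # cost is evaluated by overriding keep_prob's default.
    val_cost = sess.run(cost, feed_dict={X: X_val,
                                         Y: y_val.reshape(-1, 1),
                                         keep_prob: 1.0})
    print("validation cost:", val_cost)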

0 Answers:

No answers yet.