I've been working through Andrew Ng's course recently, and I thought I'd try implementing what I learned in another language (which happens to be Python for me), but I've hit a wall. Here is my code:
train_x = [[1,2,3,4], [5,6,7,8]]
train_y = [24, 1680]
train_x = np.asarray(train_x)
train_y = np.asarray(train_y)
m = train_x.shape[0]
n = train_x.shape[1]
X = tf.placeholder(tf.float32, [None, n])
Y = tf.placeholder(tf.float32, [None, n])
W = tf.Variable(tf.zeros(n, 1))
b = tf.Variable(tf.zeros(1, 1))
model = tf.add(tf.multiply(X, W), b)
cost = tf.reduce_sum(tf.pow(model-Y, 2)) / (2*m)
I then train with a GradientDescentOptimizer using:
for i in range(1000):
    for x, y in zip(train_x, train_y):
        sess.run(optimizer, feed_dict={X: x, Y: y})
The error I get (on the last line) is:
ValueError: Cannot feed value of shape (4,) for Tensor 'Placeholder:0', which has shape '(?, 4)'
Any help is much appreciated. An explanation would be even better.
Answer (score: 1):
You need to reshape the input x to (some_number, 4). You should also fix the Y placeholder:
train_x = [[1, 2, 3, 4], [5, 6, 7, 8]]
train_y = [24, 1680]
train_x = np.asarray(train_x)
train_y = np.asarray(train_y)
m = train_x.shape[0]
n = train_x.shape[1]
X = tf.placeholder(tf.float32, [None, n])
Y = tf.placeholder(tf.float32, [None, 1])
W = tf.Variable(tf.random_normal((n, 1)))
b = tf.Variable(tf.zeros((1, 1)))
model = tf.add(tf.matmul(X, W), b)
cost = tf.reduce_sum(tf.pow(model - Y, 2)) / (2 * m)
...
for i in range(1000):
    for x, y in zip(train_x, train_y):
        x = np.reshape(x, (-1, 4))
        y = np.reshape(y, (-1, 1))
        sess.run(optimizer, feed_dict={X: x, Y: y})
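To see why the reshape fixes the ValueError, here is a small NumPy-only sketch (no TensorFlow session needed) of the shapes involved: iterating over train_x yields 1-D rows of shape (4,), while the X placeholder expects a 2-D batch of shape (?, 4), so each row needs a leading batch dimension:

```python
import numpy as np

train_x = np.asarray([[1, 2, 3, 4], [5, 6, 7, 8]])
train_y = np.asarray([24, 1680])

for x, y in zip(train_x, train_y):
    # A single row comes out of the loop as a 1-D vector of shape (4,),
    # which cannot be fed to a placeholder declared with shape (?, 4).
    print(x.shape)           # (4,)
    # np.reshape with -1 infers the batch dimension, giving a batch of one.
    x = np.reshape(x, (-1, 4))
    y = np.reshape(y, (-1, 1))
    print(x.shape, y.shape)  # (1, 4) (1, 1)
```

The same -1 trick works for feeding several rows at once: np.reshape on an array of k samples produces shape (k, 4), which also satisfies the (?, 4) placeholder.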