Traceback when plotting a 2D gradient descent trajectory with TensorFlow

Date: 2019-05-07 11:57:50

Tags: python-3.x tensorflow

I want to plot the trajectory of a 2D gradient descent. Unfortunately, I get the following traceback:

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,1]

I am using TensorFlow v1.13.1 on Google Colab with Python v3.6.7.

From the code below, I found that the tensor target is of type <tf.Tensor 'Placeholder_1:0' shape=(?, 1) dtype=float32>. I tried feeding it with feed_dict={features: x, target: y}, but the traceback stays the same.

Here is the code I am using for this task:

## BLOCK 1

import tensorflow as tf
import numpy as np
from matplotlib import animation, rc
import matplotlib_utils
from IPython.display import HTML, display_html
import matplotlib.pyplot as plt
%matplotlib inline
## BLOCK 2

tf.reset_default_graph()

# generate model data
N = 1000
D = 3
x = np.random.random((N, D))
w = np.random.random((D, 1))
y = x @ w + np.random.randn(N, 1) * 0.20

## Deep Learning steps:
# 1. Get input (features) and true output (target)
features = tf.placeholder(tf.float32, shape=(None, D))
target = tf.placeholder(tf.float32, shape=(None, 1))

weights = tf.get_variable("weights", shape=(D, 1), dtype=tf.float32)

# 2. Compute the "guess" (predictions) based on the features and weights
predictions = features @ weights

# 3. Compute the loss based on the difference between the predictions and the target
loss = tf.reduce_mean((target - predictions) ** 2)

# 4. Update the weights (parameters) based on the gradient descent of the loss
optimizer = tf.train.GradientDescentOptimizer(0.1)
step = optimizer.minimize(loss)

s = tf.Session()
s.run(tf.global_variables_initializer())
_, curr_loss, curr_weights = s.run([step, loss, weights], 
                             feed_dict={features: x, target: y})
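As a sanity check (not part of the original code), the single training step in BLOCK 2 can be reproduced in plain NumPy; the seed and initial weights below are illustrative assumptions, and the gradient of the MSE loss is written out by hand:

```python
import numpy as np

np.random.seed(0)
N, D = 1000, 3
x = np.random.random((N, D))
w_true = np.random.random((D, 1))
y = x @ w_true + np.random.randn(N, 1) * 0.20

# random initial weights, standing in for tf.get_variable's initializer
weights = np.random.randn(D, 1)

loss_before = np.mean((y - x @ weights) ** 2)

# gradient of mean((y - x @ w)**2) with respect to w
grad = -2.0 / N * x.T @ (y - x @ weights)
weights = weights - 0.1 * grad   # learning rate 0.1, as in the TF code

loss_after = np.mean((y - x @ weights) ** 2)
```

A single step at this learning rate strictly decreases the loss, which is what one s.run([step, loss, weights], feed_dict=...) call computes in the TF version.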

I expected the following code to run without errors (running it raises the traceback above):

## BLOCK 3

# nice figure settings

fig, ax = plt.subplots()
y_true_value = s.run(target)
level_x = np.arange(0, 2, 0.02)
level_y = np.arange(0, 3, 0.02)
X, Y = np.meshgrid(level_x, level_y)
Z = (X - y_true_value[0])**2 + (Y - y_true_value[1])**2
ax.set_xlim(-0.02, 2)
ax.set_ylim(-0.02, 3)
s.run(tf.global_variables_initializer())
ax.scatter(*s.run(target), c='red')
contour = ax.contour(X, Y, Z, 10)
ax.clabel(contour, inline=1, fontsize=10)
line, = ax.plot([], [], lw=2)

# start animation with empty trajectory
def init():
    line.set_data([], [])
    return (line,)

trajectory = [s.run(predictions)]

# one animation step (make one GD step)
def animate(i):
    s.run(step)
    trajectory.append(s.run(predictions))
    line.set_data(*zip(*trajectory))
    return (line,)

anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=100, interval=20, blit=True)

Note: the matplotlib_utils library can be found here


Example

Here is an example where the code runs perfectly.

If I run the following code instead of the second code block, it displays a nice 2D gradient descent:

y_guess = tf.Variable(np.zeros(2, dtype='float32'))
y_true = tf.range(1, 3, dtype='float32')

loss = tf.reduce_mean((y_guess - y_true + 0.5*tf.random_normal([2]))**2) 

optimizer = tf.train.RMSPropOptimizer(0.03, 0.5)
step = optimizer.minimize(loss, var_list=y_guess)

The trajectory looks like this:

trajectory of gradient descent in 2D
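For intuition, the RMSProp update behind this working example can be sketched in plain NumPy. This is a hedged analogue, not TensorFlow's implementation: the noise term is dropped for determinism, and the epsilon value is an assumption.

```python
import numpy as np

y_true = np.array([1.0, 2.0])        # tf.range(1, 3, dtype='float32')
y_guess = np.zeros(2)                # tf.Variable(np.zeros(2, ...))

lr, decay, eps = 0.03, 0.5, 1e-10    # RMSPropOptimizer(0.03, 0.5); eps assumed
cache = np.zeros(2)

for _ in range(500):
    grad = y_guess - y_true          # gradient of mean((y_guess - y_true)**2)
    cache = decay * cache + (1 - decay) * grad ** 2
    y_guess = y_guess - lr * grad / (np.sqrt(cache) + eps)
```

After a few hundred steps, y_guess hovers near y_true = [1, 2], which is the convergence the animation visualizes.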

Adding this block of code then renders a nicely auto-generated animation of the trajectory:

## BLOCK 4

try:
    display_html(HTML(anim.to_html5_video()))
except (RuntimeError, KeyError):
    # In case the built-in renderers are unavailable, fall back to
    # a custom one that doesn't require external libraries
    anim.save(None, writer=matplotlib_utils.SimpleMovieWriter(0.001))

Now I want to plot the 2D gradient descent trajectory using my own code (the second code block).

1 Answer:

Answer 0 (score: 0)

Pass the feed_dict argument to tf.Session.run. Example:

s.run([step, loss, weights], feed_dict={features: x, target: y})

Explanation: when an operation on the computation graph depends on a placeholder, that placeholder must be fed. An operation like s.run(tf.global_variables_initializer()) does not depend on any placeholder, so running it without feeding one raises no error.
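Once feed_dict={features: x, target: y} is passed to every s.run call that touches step, loss, or predictions, the animation loop is just plain gradient descent on the weights with the same data fed at every step. A hedged NumPy analogue of that loop (illustrative only, not the asker's TF code), recording the first two weight coordinates for a 2D plot:

```python
import numpy as np

np.random.seed(1)
N, D = 1000, 3
x = np.random.random((N, D))
w_true = np.random.random((D, 1))
y = x @ w_true + np.random.randn(N, 1) * 0.20

weights = np.zeros((D, 1))
initial_loss = float(np.mean((y - x @ weights) ** 2))
trajectory = [weights[:2, 0].copy()]   # first two coordinates, for the 2D plot

for _ in range(100):
    # the same x and y are used at every step,
    # mirroring feed_dict={features: x, target: y} in each s.run(step, ...)
    grad = -2.0 / N * x.T @ (y - x @ weights)
    weights = weights - 0.1 * grad
    trajectory.append(weights[:2, 0].copy())

trajectory = np.array(trajectory)      # shape (101, 2), ready for line.set_data
final_loss = float(np.mean((y - x @ weights) ** 2))
```

Each row of trajectory plays the role of one frame in the FuncAnimation loop.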