ValueError: No variables to optimize

Asked: 2017-09-15 19:45:40

Tags: python machine-learning tensorflow gradient-descent

I am trying to compute the l2_loss between two images and get the gradient. My code snippet is given below:

with tf.name_scope("train"):

    X = tf.placeholder(tf.float32, [1, None, None, None], name='X')
    y = tf.placeholder(tf.float32, [1, None, None, None], name='y')
    Z = tf.nn.l2_loss(X - y, name="loss")
    step_loss = tf.reduce_mean(Z)
    optimizer = tf.train.AdamOptimizer()
    training_op = optimizer.minimize(step_loss)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    init.run()
    content = tf.gfile.FastGFile('cat.0.jpg', 'rb').read()
    noise = tf.gfile.FastGFile('color_img.jpg', 'rb').read()
    loss_append = []
    for epoch in range(10):
        for layer in layers:
            c = sess.run(layer, feed_dict={input_img: content})
            n = sess.run(layer, feed_dict={input_img: noise})
            sess.run(training_op, feed_dict={X: c, y: n})

But it raises the following error:

    Traceback (most recent call last):
      File "/home/noise_image.py", line 68, in <module>
        training_op = optimizer.minimize(lossss)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 315, in minimize
        grad_loss=grad_loss)
      File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 380, in compute_gradients
        raise ValueError("No variables to optimize.")
    ValueError: No variables to optimize.

How do I get rid of it?

2 answers:

Answer 0 (score: 1)

The values of X and y come from feed_dict, and Z is a function of those values, so TensorFlow cannot train them.

Instead of making X a placeholder, assign it the tensor value directly (the layer output). Do the same for y.

Your final training loop should then look like this:

for epoch in range(10):
    sess.run(training_op, feed_dict={input_image_content: content,
                                     input_image_noise: noise})
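For intuition, here is a minimal NumPy sketch (a hypothetical stand-in, not the asker's TensorFlow code): once the noise image x is the trainable quantity, gradient descent on the loss 0.5 * sum((x - c)**2) simply pulls x toward the fixed content features c.

```python
import numpy as np

# Hypothetical sketch: x is the trainable "noise image", c the fixed content.
rng = np.random.default_rng(0)
c = rng.normal(size=(4, 4))   # stand-in for the content features
x = rng.normal(size=(4, 4))   # stand-in for the noise image (trainable)

lr = 0.1
for _ in range(200):
    grad = x - c              # d/dx of 0.5 * sum((x - c)**2)
    x -= lr * grad            # gradient-descent update on the variable

print(np.abs(x - c).max())    # near zero: x has converged toward c
```

This is exactly the role a tf.Variable plays in the fix above: it is the tensor the optimizer is allowed to move, while the values fed through feed_dict stay fixed.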

Answer 1 (score: 0)

The graph you have built does not contain any variable nodes, yet you are trying to minimize a loss function without any variables.

Minimization means finding a set of values for the variables of a mathematical function (the cost function) that, when substituted into the function, yields the smallest possible value (or at least a local minimum, which is usually what we settle for with non-convex functions).
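As a toy illustration of that definition (not part of this answer's TensorFlow code), plain gradient descent on the convex cost f(x) = (x - 3)**2 drives the variable x toward the minimizer x = 3:

```python
# Toy gradient descent: minimize f(x) = (x - 3)**2 over the variable x.
# f'(x) = 2 * (x - 3), so every step moves x toward 3.
def minimize(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        grad = 2 * (x - 3)   # analytic gradient of the cost
        x -= lr * grad       # gradient-descent update
    return x

print(minimize(10.0))  # converges close to 3.0
```

Without a variable x to update there is nothing for such a procedure to do, which is precisely what the error message says.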

So when you run the code, TensorFlow complains that there are no variables in your cost function. As a clarification: a placeholder is an object used to supply values to the graph's various inputs at run time, whereas a variable holds state that the optimizer is allowed to update.

To solve this, you have to rethink the graph you want to build. You must define variables, as shown below (parts of the code not relevant to this issue are omitted):

with tf.name_scope("train"):
    # Placeholders carry the input images; the optimizer treats them as constants.
    X = tf.placeholder(tf.float32, [1, 224, 224, 3], name='X')
    y = tf.placeholder(tf.float32, [1, 224, 224, 3], name='y')

    # Variables are the only nodes the optimizer can update.
    X_var = tf.get_variable('X_var', dtype=tf.float32,
                            initializer=tf.random_normal((1, 224, 224, 3)))
    y_var = tf.get_variable('y_var', dtype=tf.float32,
                            initializer=tf.random_normal((1, 224, 224, 3)))
    Z = tf.nn.l2_loss((X_var - X) ** 2 + (y_var - y) ** 2, name="loss")

    step_loss = tf.reduce_mean(Z)
    optimizer = tf.train.AdamOptimizer()
    training_op = optimizer.minimize(step_loss)

...
with tf.Session() as sess:
    ....
    sess.run(training_op, feed_dict={X: c, y: n})