Setting variables for the optimizer in TensorFlow eager execution

Date: 2018-09-02 12:51:51

Tags: tensorflow

x=tfe.Variable(np.random.uniform(size=[166,]), name='x')

optimizer = tf.train.AdamOptimizer()
optimizer.minimize(lambda: compute_cost(normed_data[:10], x))

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-28-9ff2a070e305> in <module>()
     23 
     24 optimizer = tf.train.AdamOptimizer()
---> 25 optimizer.minimize(lambda: compute_cost(normed_data[:10], x))

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py in minimize(self, loss, global_step, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, name, grad_loss)
    398         aggregation_method=aggregation_method,
    399         colocate_gradients_with_ops=colocate_gradients_with_ops,
--> 400         grad_loss=grad_loss)
    401 
    402     vars_with_grad = [v for g, v in grads_and_vars if g is not None]

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py in compute_gradients(self, loss, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, grad_loss)
    471       if var_list is None:
    472         var_list = tape.watched_variables()
--> 473       grads = tape.gradient(loss_value, var_list, grad_loss)
    474       return list(zip(grads, var_list))
    475 

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py in gradient(self, target, sources, output_gradients)
    856     flat_grad = imperative_grad.imperative_grad(
    857         _default_vspace, self._tape, nest.flatten(target), flat_sources,
--> 858         output_gradients=output_gradients)
    859 
    860     if not self._persistent:

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/eager/imperative_grad.py in imperative_grad(vspace, tape, target, sources, output_gradients)
     61   """
     62   return pywrap_tensorflow.TFE_Py_TapeGradient(
---> 63       tape._tape, vspace, target, sources, output_gradients)  # pylint: disable=protected-access

AttributeError: 'numpy.ndarray' object has no attribute '_id'

Can someone explain why I am getting this error? `x` is the only stateful variable/weight of my model/loss function (it is an MLE of a joint pdf). `compute_cost` works fine in its own unit tests.

1 answer:

Answer 0 (score: 0)

My guess is that your compute_cost function contains some operations that use numpy ops instead of TensorFlow ops, and TensorFlow cannot differentiate through those.

For example, consider the following:

import tensorflow as tf
import numpy as np
tf.enable_eager_execution()

v = tf.contrib.eager.Variable(2.0)

# x * v^2
def f(x):
  return np.multiply(x, np.multiply(v, v))

with tf.GradientTape() as tape:
  y = f(10.0)

# This next line will raise an error similar to what you observed,
# because np.multiply pulled the computation out of TensorFlow's
# graph/tape: y is a plain numpy.ndarray, not a watched tensor.
print(tape.gradient(y, v))

# However, replacing the `np` operations with their equivalent
# `tf` operations will allow things to complete

def f(x):
  return tf.multiply(x, tf.multiply(v, v))

with tf.GradientTape() as tape:
  y = f(10.0)
print(tape.gradient(y, v)) # Correctly prints 40.0

So it is quite likely that your compute_cost function uses numpy operations in a similar way.

That said, this error message certainly has room for improvement, so you might consider filing a bug asking for a clearer message.

Hope that helps.