optimizer.applyGradient not working in tensorflow

Asked: 2017-03-14 03:42:03

Tags: python-2.7 optimization tensorflow gradient

I am trying to modify the gradients in tensorflow and then update the variables with the apply_gradients() function. Here is my code, which does not work:

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))
optimizer = tf.train.AdamOptimizer(LEARNING_RATE)
for j in range(n_rounds):
    sample = np.random.randint(row, size=int(batch_size))
    batch_xs = temp[sample][:]
    batch_ys = output[sample][:]
    vars_with_grads = sess.run(optimizer.compute_gradients(cross_entropy), feed_dict={x: batch_xs, y_: batch_ys})
    noiseAddedGradient = []
    print(vars_with_grads)
    for var in vars_with_grads:
        gaussianNoise = [np.random.normal(MEAN_FOR_AUTOENCODER, SCALE_FOR_AUTOENCODER, var[0].shape) for i in range(int(batch_size))]
        totalGaussianNoise = [sum(x) for x in zip(*gaussianNoise)]
        averageGaussianNoise = [x1 / float(batch_size) for x1 in totalGaussianNoise]
        averageGaussianNoiseList = np.array(averageGaussianNoise).flatten().reshape(var[0].shape)
        noiseAddedGradient.append((tf.Variable(np.add(var[0], averageGaussianNoiseList), dtype=np.float32), var[1]))
    appliedGradient = sess.run(optimizer.apply_gradients(noiseAddedGradient))

It returns the following error:

Traceback (most recent call last):
  File "/home/Downloads/objectPerturbation.py", line 214, in <module>
    appliedGradient = sess.run(optimizer.apply_gradients(noiseAddedGradient))
  File "/home/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 384, in apply_gradients
    p = _get_processor(v)
  File "/home/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 98, in _get_processor
    if v.op.type == "ReadVariableOp":
AttributeError: 'numpy.ndarray' object has no attribute 'op'
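The root cause of this traceback: running compute_gradients through sess.run() materializes the gradients and variables as plain numpy.ndarray objects, and apply_gradients then receives those arrays instead of the graph's (gradient tensor, tf.Variable) pairs. A numpy array has no .op attribute, which is exactly what the optimizer's _get_processor checks. A minimal illustration of the failing attribute access (pure NumPy, no TensorFlow needed):

```python
import numpy as np

# A gradient value fetched via sess.run() is just a numpy array.
# It lacks the .op attribute that apply_gradients expects on a
# graph variable, which triggers the AttributeError above.
grad_value = np.zeros((3, 3), dtype=np.float32)
print(hasattr(grad_value, "op"))  # False
```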

Can you help me?

1 answer:

Answer 0 (score: 1)

Try restructuring it like this. The gradients should be computed and applied inside the graph: build the apply_gradients op once with placeholders, then feed the modified gradient values into it at every step.

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))
optimizer = tf.train.AdamOptimizer(LEARNING_RATE)
grads = optimizer.compute_gradients(cross_entropy)
# one placeholder per gradient, paired with the variable it updates
grad_placeholder = [(tf.placeholder("float", shape=grad[1].get_shape()), grad[1]) for grad in grads]
apply_placeholder_op = optimizer.apply_gradients(grad_placeholder)

# separate function to make it more general, so you can do whatever you want with the grads
def add_gaussian_noise_fn(x):
    return x + np.random.normal(size=x.shape)

# added in case you don't do this
sess.run(tf.initialize_all_variables())

for j in range(n_rounds):
    sample = np.random.randint(row, size=int(batch_size))
    batch_xs = temp[sample][:]
    batch_ys = output[sample][:]
    # each element of grad_vals is a (gradient value, variable value) pair
    grad_vals = sess.run(grads, feed_dict={x: batch_xs, y_: batch_ys})
    # add gaussian noise to the gradient values, then feed them to the apply op
    feed_dict = {}
    for i in range(len(grad_placeholder)):
        feed_dict[grad_placeholder[i][0]] = add_gaussian_noise_fn(grad_vals[i][0])
    sess.run(apply_placeholder_op, feed_dict=feed_dict)
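As an aside, the question's averaged-noise construction (drawing batch_size Gaussian samples per gradient and averaging them element-wise with zip/sum/reshape) can be done in one vectorized NumPy call. A sketch, with mean, scale, and batch_size standing in for MEAN_FOR_AUTOENCODER, SCALE_FOR_AUTOENCODER, and the question's batch size:

```python
import numpy as np

def averaged_gaussian_noise(shape, mean, scale, batch_size):
    # Draw batch_size independent samples of the given shape and
    # average them along the first axis -- equivalent to the
    # loop / zip / sum / reshape version in the question.
    samples = np.random.normal(mean, scale, (batch_size,) + tuple(shape))
    return samples.mean(axis=0)

noise = averaged_gaussian_noise((4, 2), mean=0.0, scale=0.1, batch_size=64)
print(noise.shape)  # (4, 2)
```

Averaging 64 draws shrinks the effective standard deviation by a factor of 8 (sqrt(64)), so the noise added per gradient element is much smaller than the per-sample scale.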

The idea is similar to this earlier post: Efficiently grab gradients from TensorFlow?