ValueError: No gradients provided for any variable in the TensorFlow research model dp_sgd

Asked: 2018-04-08 07:34:26

Tags: tensorflow python-3.5

I am trying to run the dp_sgd model from https://github.com/tensorflow/models/tree/master/research/differential_privacy. After following the steps in README.md, I get the following error message on my Mac.

lizhuzhende-MacBook-Air:dp janicelee$ bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist     --training_data_path=data/mnist_train.tfrecord     --eval_data_path=data/mnist_test.tfrecord     --save_path=./tmp/mnist_dir

Traceback (most recent call last):
  File "/Users/janicelee/sd/ve/dp/bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist.runfiles/__main__/differential_privacy/dp_sgd/dp_mnist/dp_mnist.py", line 507, in <module>
    tf.app.run()
  File "/Users/janicelee/sd/ve/privacy/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv))
  File "/Users/janicelee/sd/ve/dp/bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist.runfiles/__main__/differential_privacy/dp_sgd/dp_mnist/dp_mnist.py", line 503, in main
    eval_steps=FLAGS.eval_steps)
  File "/Users/janicelee/sd/ve/dp/bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist.runfiles/__main__/differential_privacy/dp_sgd/dp_mnist/dp_mnist.py", line 337, in Train
    cost, global_step=global_step)
  File "/Users/janicelee/sd/ve/dp/bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist.runfiles/__main__/differential_privacy/dp_sgd/dp_optimizer/dp_optimizer.py", line 145, in minimize
    global_step=global_step, name=name)
  File "/Users/janicelee/sd/ve/privacy/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 298, in apply_gradients
    (grads_and_vars,))
ValueError: No gradients provided for any variable: ()

The error occurs when minimize() is called in dp_optimizer.py; the empty tuple in the message suggests that grads_and_vars is already empty by the time apply_gradients checks it:

   def minimize(self, loss, global_step=None, var_list=None,
               name=None):
    """Minimize using sanitized gradients.

    This gets a var_list which is the list of trainable variables.
    For each var in var_list, we defined a grad_accumulator variable
    during init. When batches_per_lot > 1, we accumulate the gradient
    update in those. At the end of each lot, we apply the update back to
    the variable. This has the effect that for each lot we compute
    gradients at the point at the beginning of the lot, and then apply one
    update at the end of the lot. In other words, semantically, we are doing
    SGD with one lot being the equivalent of one usual batch of size
    batch_size * batches_per_lot.
    This allows us to simulate larger batches than our memory size would permit.

    The lr and the num_steps are in the lot world.

    Args:
      loss: the loss tensor.
      global_step: the optional global step.
      var_list: the optional variables.
      name: the optional name.
    Returns:
      the operation that runs one step of DP gradient descent.
    """

    # First validate the var_list

    if var_list is None:
      var_list = tf.trainable_variables()
    for var in var_list:
      if not isinstance(var, tf.Variable):
        raise TypeError("Argument is not a variable.Variable: %s" % var)

    # Modification: apply gradient once every batches_per_lot many steps.
    # This may lead to smaller error

    if self._batches_per_lot == 1:
      sanitized_grads = self.compute_sanitized_gradients(
          loss, var_list=var_list)

      grads_and_vars = zip(sanitized_grads, var_list)
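      # Note: under Python 3, zip() returns a single-use iterator; the list
      # comprehension below consumes it, so apply_gradients receives an
      # already-exhausted grads_and_vars.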
      self._assert_valid_dtypes([v for g, v in grads_and_vars if g is not None])


      apply_grads = self.apply_gradients(grads_and_vars,
                                         global_step=global_step, name=name)

      return apply_grads

    # Condition for deciding whether to accumulate the gradient
    # or actually apply it.
    # we use a private self_batch_count to keep track of number of batches.
    # global step will count number of lots processed.

    update_cond = tf.equal(tf.constant(0),
                           tf.mod(self._batch_count,
                                  tf.constant(self._batches_per_lot)))

    # Things to do for batches other than last of the lot.
    # Add non-noisy clipped grads to shadow variables.

My Python version is 3.5.3, my TensorFlow version is 0.10.0, and my Bazel version is 0.3.1. What is the cause of this error, and how can I fix it?

Thanks!

1 Answer:

Answer 0 (score: 0):

I had a similar issue that I was able to fix by using models/research/slim/download_and_convert_data.py, which creates the TFRecord files in the correct format, as described here: https://github.com/tensorflow/models/issues/2605
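
For reference, a minimal sketch of how that conversion script is typically invoked for MNIST (the --dataset_name/--dataset_dir flags follow the slim README; the output directory below is just an example):

    python download_and_convert_data.py \
        --dataset_name=mnist \
        --dataset_dir=data

This should leave mnist_train.tfrecord and mnist_test.tfrecord in data/, matching the --training_data_path and --eval_data_path arguments passed to dp_mnist above.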