TensorFlow error: No gradients provided for any variable, check your graph for ops that do not support gradients

Asked: 2019-06-18 06:58:46

Tags: python tensorflow backpropagation mnist

I am trying to use a derived class of TensorFlow's FIFOQueue. I have overridden its enqueue method: it takes a batch of images, runs them through a small CNN, and enqueues the output of the final dense layer. I then dequeue that output tensor, compute a cost from it, and try to minimize the cost with the Adam optimizer.

The code works fine when I compute the cost and minimize it inside the enqueue method itself. But as soon as I move loss_op (i.e. my cost) outside the derived class, I get the error: "No gradients provided for any variable, check your graph for ops that do not support gradients".
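A stripped-down repro of the same behaviour (illustrative only, not my actual model; the toy shapes are made up):

import numpy as np
import tensorflow as tf

x = tf.constant(np.random.rand(4, 8), dtype=tf.float32)
labels = tf.one_hot([0, 1, 2, 3], depth=10)

logits = tf.layers.dense(x, 10)  # creates trainable variables

# Variant 1: loss attached to the live tensor -- minimize() works.
loss_inside = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
train_ok = tf.train.AdamOptimizer(0.001).minimize(loss_inside)

# Variant 2: loss attached to the dequeued copy -- minimize() fails,
# even though the enqueue op is in the graph.
q = tf.FIFOQueue(capacity=1, dtypes=[tf.float32], shapes=[[4, 10]])
enqueue_op = q.enqueue(logits)
loss_outside = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=q.dequeue(), labels=labels))
try:
    tf.train.AdamOptimizer(0.001).minimize(loss_outside)
except ValueError as e:
    print(e)  # "No gradients provided for any variable ..."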

Imports

from tensorflow.python.ops.data_flow_ops import FIFOQueue
import tensorflow as tf
from tensorflow.python.framework import dtypes as _dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_data_flow_ops

Reading the data

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
Y = mnist.train.labels
X = mnist.train.images

Derived queue

class MyQueue(FIFOQueue):
    def enqueue(self, x, Y, name=None):

        # Reshape flat input to NHWC
        x = tf.reshape(x, shape=[-1, 28, 28, 1])
        # 1st conv2d layer
        conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu, name='Q1_c1')
        # 1st max-pool layer
        conv1_mp = tf.layers.max_pooling2d(conv1, 2, 2, name='Q1_mp1')
        # 2nd conv2d layer
        conv2 = tf.layers.conv2d(conv1_mp, 64, 3, activation=tf.nn.relu, name='Q1_c2')
        # 2nd max-pool layer
        conv2_mp = tf.layers.max_pooling2d(conv2, 2, 2, name='Q1_mp2')
        # Flatten to [-1, 5*5*64] = [-1, 1600]
        flat = tf.contrib.layers.flatten(conv2_mp)
        # Dense 1
        dense_1 = tf.layers.dense(tf.reshape(flat, [-1, 1600]), 1024, name='Q2_D1')
        # Dropout (rate = 0.8)
        drop = tf.layers.dropout(dense_1, rate=0.8, training=True, name='Q2_Dp')
        # Output layer, n_classes = 10
        out = tf.layers.dense(drop, n_classes, name='Q2_Op')

        # Update vals so that "out" is what goes into the queue
        vals = out


        # Rest of the original enqueue implementation, unchanged
        with ops.name_scope(name, "%s_enqueue" % self._name,
                            self._scope_vals(vals)) as scope:
            vals = self._check_enqueue_dtypes(vals)
            # NOTE(mrry): Not using a shape function because
            # we need access to the `QueueBase` object.
            for val, shape in zip(vals, self._shapes):
                val.get_shape().assert_is_compatible_with(shape)

            if self._queue_ref.dtype == _dtypes.resource:
                return gen_data_flow_ops.queue_enqueue_v2(
                    self._queue_ref, vals, name=scope)
            else:
                return gen_data_flow_ops.queue_enqueue(
                    self._queue_ref, vals, name=scope)

Main

q_pred = MyQueue(capacity=1, dtypes=tf.float32)
enqueue_op = q_pred.enqueue(X, Y)
data_pred = q_pred.dequeue()

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    sess.run(enqueue_op)

    out = data_pred

    # Calculating the cost
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        logits=out, labels=Y), name='Q2_loss')

    # Adam optimizer
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)

    # Write the graph for TensorBoard
    writer = tf.summary.FileWriter("logs/MyDerivedQueue", sess.graph)

    ####### ERROR LINE ###################
    # Minimizing the cost
    train_op = optimizer.minimize(cost)

    correct_pred = tf.equal(tf.argmax(out, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

1 Answer:

Answer 0 (score: 0)

After a lot of trial and error, I concluded that this approach cannot work, because backpropagation is out of our control here: queue operations have no registered gradient, so once the activations pass through enqueue/dequeue there is no differentiable path in the graph from the cost back to the conv/dense variables. Likewise, in a multi-GPU setup each GPU could feed forward through the queue, but during the backward pass there would be no way to know which weights/parameters should be updated.
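To make the failure concrete, here is a minimal sketch (the layer name 'probe' is just for the demo) showing that tf.gradients finds a path to the layer's kernel through the live tensor but not through the dequeued one:

import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0]])
h = tf.layers.dense(x, 2, name='probe')  # creates trainable kernel + bias

q = tf.FIFOQueue(capacity=1, dtypes=[tf.float32], shapes=[[1, 2]])
enqueue_op = q.enqueue(h)  # h goes into the queue...
deq = q.dequeue()          # ...and comes back out as a plain value

kernel = tf.trainable_variables('probe')[0]
print(tf.gradients(h, kernel))    # [<Tensor ...>] -- differentiable path exists
print(tf.gradients(deq, kernel))  # [None]         -- the queue severed the path

This is also why the usual pattern is the reverse of what the question does: the queue (or tf.data) carries the raw input batches, and the conv/dense layers are built after the dequeue, so they sit on a differentiable path between the cost and the variables. Computing the loss inside enqueue worked for the same reason: there the loss was attached to out before it ever entered the queue.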