How to fix "RuntimeError: The Session graph is empty. Add operations to the graph before calling run()"

Time: 2019-08-19 18:55:53

Tags: python tensorflow

I am new to tensorflow, and I have a class that describes my neural network along with all the methods the network needs. I have a specific method for supplying values to the neurons. I initially declared the neurons as placeholders, intending to feed them values through a get_neuron method whenever the values become available. At runtime I get a runtime error saying RuntimeError: The Session graph is empty. Add operations to the graph before calling run(). Can anyone help me solve this? What exactly does it mean? I don't understand what "add operations" refers to here, because I am already feeding through feed_dict and running a session. Also, is there any downside to running a session for every tensor computation, as opposed to running the session only once at the end when I have the final output? I ask because I want to know whether this affects TensorFlow's effectiveness, since it follows the graph when optimizing with an optimizer.

def get_neuron(self, a, b):
        # a is the neuron (placeholder) and b is the value to feed into it
        with tf.Session() as sess_n:
            sess_n.run(a, feed_dict={a: b})
        return
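
For reference, here is a minimal sketch of the usual TF 1.x feeding pattern, in which the placeholder and at least one operation that consumes it are added to the same default graph before any session is opened. The names neuron and doubled below are purely illustrative and not part of the original class:

    import numpy as np
    import tensorflow as tf

    # Graph construction first: a placeholder plus an operation that consumes it.
    neuron = tf.compat.v1.placeholder(tf.float32, shape=(8, 1), name="neuron_input")
    doubled = tf.multiply(neuron, 2.0, name="doubled")  # any op using the placeholder

    # Only afterwards open a session and evaluate an op, feeding the placeholder.
    with tf.compat.v1.Session() as sess:
        value = np.ones((8, 1), dtype=np.float32)
        result = sess.run(doubled, feed_dict={neuron: value})

Note that running sess.run on the placeholder itself only echoes back the fed value; useful results come from running the operations built on top of it.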

Edit: This is how I call the function above: knowledge_out = knowledge.run(carollis_inp), where knowledge is an object created from the knowledge-transfer class that has the method run, and the first line of that method is self.get_neuron(self.neuron_input, carollis_input). The exact error shown is

[ERROR] [1566241992.292524, 15.300000]: bad callback: <function joint_callback at 0x7f42221982f0>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 748, in _invoke_callback
    cb(msg, cb_args)
  File "/home/microbot/catkin_ws/src/spider/spider_control/control1.py", line 60, in joint_callback
    knowledge_out = knowledge.run(carollis_inp)
  File "/home/microbot/catkin_ws/src/spider/spider_control/knowledge_transfer.py", line 99, in run
    self.get_neuron(self.neuron_input, carollis_input)
  File "/home/microbot/catkin_ws/src/spider/spider_control/knowledge_transfer.py", line 81, in get_neuron
    sess_n.run(a, feed_dict={a: b})
  File "/home/microbot/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/microbot/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1098, in _run
    raise RuntimeError('The Session graph is empty.  Add operations to the '
RuntimeError: The Session graph is empty.  Add operations to the graph before calling run().

EDIT2: I have a neural network called Knowledge_transfer that takes 8 inputs and produces 4 outputs, which are then soft-maxed, somewhat like a multi-class classification problem. The class has a method run that takes an input array of size 8 and returns an output array of size 4. Inside the run method, the given input argument is fed into an input placeholder of size 8, some processing is done with the layers and weights, and the output is finally produced.
Example: Given that the first argument is a placeholder and both arguments have the same size, I want to create a class member method that takes two inputs and feed_dicts the second input argument into the first one.
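
As a rough sketch of the shape described in EDIT2 (an 8-element input reduced to 4 soft-maxed outputs); the variable names and the single dense transform below are assumptions made only for illustration:

    import tensorflow as tf

    # 8-element column-vector input, as described above (shape assumed to be (8, 1)).
    x = tf.compat.v1.placeholder(tf.float32, shape=(8, 1), name="kt_input")

    # Illustrative dense mapping from 8 values down to 4 logits, followed by softmax.
    w = tf.compat.v1.get_variable("kt_weights", shape=[4, 8], dtype=tf.float32)
    b = tf.compat.v1.get_variable("kt_bias", shape=[4, 1], dtype=tf.float32)
    probs = tf.nn.softmax(tf.matmul(w, x) + b, axis=0, name="kt_softmax")

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        out = sess.run(probs, feed_dict={x: [[0.1]] * 8})  # out has shape (4, 1)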

Edit3: I feed the input to the input layer and compute the input layer's output using its input function; that output goes into the hidden layer's input function, whose result is passed through the hidden layer's leaky rectified linear activation; the hidden layer's output then goes into the output layer's input function, whose result is passed through a softmax function, and that softmax output is taken as the output of the NN.

The code is as follows:

        self.neuron_input = tf.compat.v1.placeholder(tf.float32, shape=(self.neurons, 1))
        self.weight_in = tf.get_variable(name="Weight_input", dtype=tf.float32, shape=[self.neurons, 1], initializer=self.weight_initer)
        self.neuron_hid = tf.compat.v1.placeholder(tf.float32, shape=(int(self.neurons/2), 1))
        self.weight_initer1 = tf.truncated_normal_initializer(mean=1.0, stddev=0.01)
        self.weight_hid = tf.get_variable(name="Weight_hidden", dtype=tf.float32, shape=[self.neurons, 1], initializer=self.weight_initer1)
        self.neuron_out = tf.compat.v1.placeholder(tf.float32, shape=(4, 1))
        self.weight_initer2 = tf.truncated_normal_initializer(mean=2.0, stddev=0.01)
        self.weight_out = tf.get_variable(name="Weight_output", dtype=tf.float32, shape=[4, 2], initializer=self.weight_initer2)
        self.bias_initer =tf.truncated_normal_initializer(mean=0-1, stddev=0.01)
        self.bias_in  =tf.get_variable(name="Bias_input", dtype=tf.float32, shape=[self.neurons, 1], initializer=self.bias_initer)
        self.bias_initer1 =tf.truncated_normal_initializer(mean=0-2, stddev=0.01)
        self.bias_hid = tf.get_variable(name="Bias_hidden", dtype=tf.float32, shape=[self.neurons, 1], initializer=self.bias_initer1)
        self.bias_initer2 =tf.truncated_normal_initializer(mean=0-3, stddev=0.01)
        self.bias_out = tf.get_variable(name="Bias_output", dtype=tf.float32, shape=[4, 1], initializer=self.bias_initer2)

and then the run() function is as follows:

 def run(self, carollis_input):
        self.normalization(carollis_input)
        #'finding the output of the input layer'
        knowledge_input = tf.add(tf.multiply(self.neuron_input, self.weight_in), self.bias_in)

        #'calculating the input for the hidden layer'
        knowledge_hidden = tf.add(tf.multiply(knowledge_input, self.weight_in), self.bias_hid)
        #'calculating the output of hidden layer'
        knowledge_hidden_output = 3.14*(tf.add(tf.multiply(knowledge_hidden, self.weight_hid), self.bias_hid))#input function of hidden layer
        knowledge_hidden_out = tf.nn.leaky_relu(self.neuron_hid, alpha=0.01, name='leaky_relu')
        with tf.Session() as sess1_2:
            sess1_2.run(knowledge_hidden_out, feed_dict={self.neuron_input: carollis_input, self.neuron_hid: knowledge_hidden_output})
        #'calculating the input of output layer'
        tf.reshape(knowledge_hidden_out, [4, 2])#for quadrant method
        in_out = tf.add(tf.multiply(knowledge_hidden_out, self.weight_out), self.bias_out)
        with tf.Session as s:
            s.run(in_out)
        #'finding the softmax output of the neurons'
        softmax_output = np.array(4)
        softmax_output = self.out_softmax(in_out)  # this gives the softmax output and stores it in the newly created array
        return softmax_output

The error is as follows:

    sess1_2.run(knowledge_hidden_out, feed_dict={self.neuron_input: carollis_input, self.neuron_hid: knowledge_hidden_output})
  File "/home/microbot/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/microbot/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1098, in _run
    raise RuntimeError('The Session graph is empty.  Add operations to the '
RuntimeError: The Session graph is empty.  Add operations to the graph before calling run().
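
For comparison, here is a minimal illustrative sketch (not the actual class code; every name is made up) of the forward pass described in Edit3, built as graph operations once and evaluated with a single sess.run at the end instead of a session per intermediate tensor:

    import numpy as np
    import tensorflow as tf

    # Build the whole chain once: input -> hidden (leaky ReLU) -> output (softmax).
    x_in = tf.compat.v1.placeholder(tf.float32, shape=(8, 1), name="x_in")
    w_hid = tf.compat.v1.get_variable("w_hid_demo", shape=[4, 8], dtype=tf.float32)
    b_hid = tf.compat.v1.get_variable("b_hid_demo", shape=[4, 1], dtype=tf.float32)
    hidden = tf.nn.leaky_relu(tf.matmul(w_hid, x_in) + b_hid, alpha=0.01)

    w_out = tf.compat.v1.get_variable("w_out_demo", shape=[4, 4], dtype=tf.float32)
    b_out = tf.compat.v1.get_variable("b_out_demo", shape=[4, 1], dtype=tf.float32)
    output = tf.nn.softmax(tf.matmul(w_out, hidden) + b_out, axis=0)

    # One session evaluates the entire chain from the fed input.
    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        result = sess.run(output, feed_dict={x_in: np.ones((8, 1), dtype=np.float32)})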

0 Answers:

There are no answers yet.