"The operation's graph is different from the session's graph" error

Time: 2019-09-16 05:28:17

Tags: tensorflow graph

I'm new to TensorFlow, and I got this error message when trying to create a graph and then run some operations on it:

ValueError                                Traceback (most recent call last)
<ipython-input-136-9e5ed7cede4c> in <module>()
----> 1 get_ipython().run_cell_magic('time', '', '\nn_epochs = 20\nbatch_size = 5\n\ninit = tf.global_variables_initializer()\n\nwith tf.Session(graph=graph) as sess:\n    print(0)\n    init.run()\n    print(1)\n    # init = tf.global_variables_initializer()\n    # saver = tf.train.Saver()\n    print(2)\n    for epoch in range(n_epochs):\n        for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):\n            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})\n        mse_batch = loss.eval(feed_dict={X: X_batch, y: y_batch})\n        mse_valid = loss.eval(feed_dict={X: X_valid, y: y_valid})\n        print(epoch, "Batch mse:", mse_batch, "Validation mse:", mse_valid)\n\n    # save_path = saver.save(sess, "./my_model_final.ckpt")')

4 frames
</usr/local/lib/python3.6/dist-packages/decorator.py:decorator-gen-60> in time(self, line, cell, local_ns)

<timed exec> in <module>()

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _run_using_default_session(operation, feed_dict, graph, session)
   5603                        "`run(session=sess)`")
   5604     if session.graph is not graph:
-> 5605       raise ValueError("Cannot use the default session to execute operation: "
   5606                        "the operation's graph is different from the "
   5607                        "session's graph. Pass an explicit session to "

ValueError: Cannot use the default session to execute operation: the operation's graph is different from the session's graph. Pass an explicit session to run(session=sess)

I'm trying to find a way to build multiple graphs for hyperparameter tuning (number of neurons per layer, number of hidden layers, etc.). The code is largely based on the examples at https://github.com/ageron/handson-ml. I don't think I'm building and using the graphs correctly.

import tensorflow as tf
import numpy as np
import pandas as pd


def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)


def create_graph(n_inputs, n_outputs, n_hidden_layers=2, n_neurons_per_layer=100, activation_function=tf.nn.relu, learning_rate=0.01, 
                 optimize_method='nesterov'):

  g = tf.Graph()


  with g.as_default():
    X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
    y = tf.placeholder(tf.int32, shape=(None), name="y")

    # create layers
    with tf.name_scope("dnn"):
      hidden_layers = {}
      for i in range(n_hidden_layers):
        if i == 0:
          hidden_layers['hidden_1'] = tf.layers.dense(X, n_neurons_per_layer, activation=activation_function, name="hidden_1")
        else:
          name_last_layer = 'hidden_' + str(i)
          name_this_layer = 'hidden_' + str(i+1)
          hidden_layers[name_this_layer] = tf.layers.dense(hidden_layers[name_last_layer], n_neurons_per_layer, 
                                                           activation=activation_function, name="hidden_{}".format(i+1))
      name_last_hidden = 'hidden_' + str(n_hidden_layers)
      logits = tf.layers.dense(hidden_layers[name_last_hidden], n_outputs, name="logits")

    with tf.name_scope("loss"):
        xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
        loss = tf.reduce_mean(xentropy, name="loss")
        y_proba = tf.nn.softmax(logits)

    with tf.name_scope("eval"):
        correct = tf.nn.in_top_k(logits, y, 1)
        accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

    with tf.name_scope("train"):
      if optimize_method == 'nesterov': # nesterov optimizer
        optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9, use_nesterov=True)
      elif optimize_method == 'adam': # adam optimizer
        optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
      else: # momentum optimizer
        optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9)
      training_op = optimizer.minimize(loss)

  return g, loss, training_op, X, y


def shuffle_batch(X, y, batch_size):
    rnd_idx = np.random.permutation(len(X))
    n_batches = len(X) // batch_size
    for batch_idx in np.array_split(rnd_idx, n_batches):
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        yield X_batch, y_batch

graph, loss, training_op, X, y = create_graph(100, 10)

n_epochs = 20
batch_size = 5

init = tf.global_variables_initializer()
# saver = tf.train.Saver()

with tf.Session(graph=graph) as sess:
    init.run()
    print(2)
    for epoch in range(n_epochs):
        for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        mse_batch = loss.eval(feed_dict={X: X_batch, y: y_batch})
        mse_valid = loss.eval(feed_dict={X: X_valid, y: y_valid})
        print(epoch, "Batch mse:", mse_batch, "Validation mse:", mse_valid)

    # save_path = saver.save(sess, "./my_model_final.ckpt")

1 Answer:

Answer 0 (score: 1)

So the problem here is that you assign most of your operations to a graph object other than the default one (defined as g inside the function and graph outside it), but the operation tf.global_variables_initializer() gets added to the default graph instead of graph.

Operations are added to a graph object when they are declared, not when they are invoked through a session. So even though you specified

with tf.Session(graph=graph) as sess:

the init operation was defined on a graph different from the one you specified, and therefore cannot be invoked in this session.
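A minimal sketch of this behavior (using the tf.compat.v1 API so it also runs under TensorFlow 2.x): an op is bound to whichever graph is the default at the moment it is *created*, not the one the session later uses.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    v = tf.Variable(0, name="v")  # created while g is the default graph

# Created OUTSIDE the `with` block, so this op lands on the
# implicit global default graph, not on g.
init = tf.global_variables_initializer()

print(v.graph is g)     # True
print(init.graph is g)  # False -- this mismatch is exactly what the session rejects
```

This is why the error message says the operation's graph differs from the session's graph: the session is bound to g, but init belongs to the default graph.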

Changing the declaration of the init operation to

with graph.as_default():
    init = tf.global_variables_initializer()

will fix it.

Alternatively, you can drop the declaration of init altogether and call tf.global_variables_initializer() directly inside the session, like this:

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    print(2)
    for epoch in range(n_epochs):
        ...

and it will automatically initialize the variables on the graph associated with sess.
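Putting it together, one tidy pattern for the multi-graph hyperparameter setup in the question is to create the initializer inside create_graph and return it along with the other handles, so every op (including init) is guaranteed to live on the same graph. A condensed sketch, using the tf.compat.v1 API and a hypothetical single-layer model plus random toy data in place of the real training set:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

def create_graph(n_inputs, n_outputs, learning_rate=0.01):
    g = tf.Graph()
    with g.as_default():
        X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
        y = tf.placeholder(tf.int32, shape=(None,), name="y")
        logits = tf.layers.dense(X, n_outputs, name="logits")
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits),
            name="loss")
        training_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
        init = tf.global_variables_initializer()  # created INSIDE g
    return g, loss, training_op, X, y, init

graph, loss, training_op, X, y, init = create_graph(100, 10)

with tf.Session(graph=graph) as sess:
    init.run()  # works: init lives on `graph`, the session's graph
    X_batch = np.random.rand(5, 100).astype(np.float32)  # toy stand-in data
    y_batch = np.random.randint(0, 10, size=5)
    sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
    print("loss:", loss.eval(feed_dict={X: X_batch, y: y_batch}))
```

Returning init from the factory function means each graph you build for a hyperparameter configuration carries its own initializer, so sessions for different graphs never pick up ops from the wrong one.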