TensorFlow: AttributeError: 'Session' object has no attribute '_session'

Asked: 2018-03-16 09:38:34

Tags: python tensorflow compiler-errors

I am trying to run my code with TensorFlow.

init = tf.global_variables_initializer()

loaded_graph = tf.Graph()
saver = tf.train.Saver()

with tf.Session(loaded_graph) as sess:
    sess.run(init)
    ...

But I get this error:

  File "C:\Users\K451LN\My Documents\LiClipse Workspace\neuralnet\FFNN.py", line 68, in <module>
    with tf.Session(loaded_graph) as sess:
AttributeError: 'Session' object has no attribute '_session'

Is there something wrong with the tf.Graph()?

Here is my code:

for i in range(num_networks):
    print("Neural network: {0}".format(i))

    X = tf.placeholder(tf.float32)
    Y = tf.placeholder(tf.float32)

    W1 = tf.Variable(tf.random_uniform([n_input, n_hidden], -1.0, 1.0), name='W1')
    W2 = tf.Variable(tf.random_uniform([n_hidden, n_output], -1.0, 1.0), name='W2')

    b1 = tf.Variable(tf.zeros([n_hidden]), name="Bias1")
    b2 = tf.Variable(tf.zeros([n_output]), name="Bias2")

    L2 = tf.sigmoid(tf.matmul(X, W1) + b1)
    hy = tf.sigmoid(tf.matmul(L2, W2) + b2, name="op_to_restore")

    cost = tf.reduce_mean(-Y*tf.log(hy) - (1-Y)*tf.log(1-hy))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

    init = tf.global_variables_initializer()

    loaded_graph = tf.Graph()
    saver = tf.train.Saver()

    with tf.Session(loaded_graph) as sess:
        sess.run(init)
        ...

I added this tf.Graph() to work around the error ValueError: At least two variables have the same name: Bias2.

1 Answer:

Answer 0 (score: 1)

Passing loaded_graph to tf.Session() means you can only run operations that were created in that graph. What you have done is create a graph named loaded_graph without adding anything to it, so when you try to execute sess.run(init) you get this error, because the init op is not in loaded_graph's graph.
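To make the mismatch concrete, here is a minimal sketch of the same situation. The tf.compat.v1 import, the disable_eager_execution() call, and the variable names are my additions so the snippet also runs under TensorFlow 2.x; on TensorFlow 1.x you would just use import tensorflow as tf:

```python
import tensorflow.compat.v1 as tf  # TF 1.x-style API; on TF 1.x: import tensorflow as tf
tf.disable_eager_execution()

v = tf.Variable(0, name="v")               # created in the *default* graph
init = tf.global_variables_initializer()   # so init lives in the default graph too

loaded_graph = tf.Graph()                  # a brand-new, empty graph
print(init.graph is loaded_graph)          # False: init is not an op of loaded_graph

with tf.Session(graph=loaded_graph) as sess:
    try:
        sess.run(init)                     # fails: init belongs to a different graph
        ran = True
    except Exception:
        ran = False
```

In other words, a session is bound to exactly one graph, and every fetch you pass to sess.run() must be an element of that graph.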

I suspect the cause of your original Bias2 error is the for loop: each iteration adds another set of variables to the same default graph. If you remove the for loop and do not create/pass loaded_graph, there will be no error.

If you want to keep the for loop, you probably need to create a new graph in each iteration:

g_1 = tf.Graph()
with g_1.as_default():
    ...

So your code would look like:

for i in range(num_networks):
    g_1 = tf.Graph()
    with g_1.as_default():
        print("Neural network: {0}".format(i))

        X = tf.placeholder(tf.float32)
        Y = tf.placeholder(tf.float32)

        W1 = tf.Variable(tf.random_uniform([n_input, n_hidden], -1.0, 1.0), name='W1')
        W2 = tf.Variable(tf.random_uniform([n_hidden, n_output], -1.0, 1.0), name='W2')

        b1 = tf.Variable(tf.zeros([n_hidden]), name="Bias1")
        b2 = tf.Variable(tf.zeros([n_output]), name="Bias2")

        L2 = tf.sigmoid(tf.matmul(X, W1) + b1)
        hy = tf.sigmoid(tf.matmul(L2, W2) + b2, name="op_to_restore")

        cost = tf.reduce_mean(-Y*tf.log(hy) - (1-Y)*tf.log(1-hy))
        optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

        init = tf.global_variables_initializer()

        saver = tf.train.Saver()

        with tf.Session(graph=g_1) as sess:
            sess.run(init)
            ...
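An alternative I would add (not part of the original answer) is tf.reset_default_graph(), which clears the default graph at the top of each iteration, so variable names like Bias2 never collide between networks and tf.train.Saver() sees only the current network's variables. A minimal sketch, again using tf.compat.v1 so it also runs under TensorFlow 2.x:

```python
import tensorflow.compat.v1 as tf  # TF 1.x-style API; on TF 1.x: import tensorflow as tf
tf.disable_eager_execution()

for i in range(3):
    tf.reset_default_graph()                 # start each network from an empty default graph
    b2 = tf.Variable(tf.zeros([2]), name="Bias2")
    saver = tf.train.Saver()                 # no duplicate-name error across iterations
    with tf.Session() as sess:               # the session picks up the fresh default graph
        sess.run(tf.global_variables_initializer())
        print(i, sess.run(b2))
```

Note that tf.reset_default_graph() must not be called while a session on the old graph is still open, which is why the session is created after the reset inside each iteration.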