I am using TensorFlow together with Python multiprocessing in one of my projects. I noticed that if I initialize a session before the multiprocessing starts, the pool seems to get stuck somewhere.
My code looks like this:
import tensorflow as tf
from multiprocessing.pool import Pool
graph = tf.Graph()
with graph.as_default():
    X = tf.Variable([10, 1])
    init = tf.initialize_all_variables()
    graph.finalize()

def run(i):
    sess = tf.Session(graph=graph)
    sess.run(init)
    print sess.run(X)
#uncomment for the bug
#sess = tf.Session(graph=graph)
#sess.close()
p = Pool(4)
res = p.map(run, [1,2,3])
The message I get when I interrupt it:
Process PoolWorker-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
File "/usr/lib/python2.7/multiprocessing/queues.py", line 374, in get
racquire()
KeyboardInterrupt
Answer 0 (score: 2):
What are you trying to achieve? Why do you want to create multiple sessions for the same graph and run them in parallel? As @fabrizioM mentioned, TensorFlow takes care of distributing the computation across CPUs and GPUs on its own, provided it is configured correctly. So the way you are trying to use TF is not supported.
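For illustration, here is a minimal sketch of that idea, using the same TF 0.x-era API as the question: a single session runs the graph and is told how to parallelize via its config, instead of one session per worker process. The thread counts below are arbitrary placeholder values, not recommendations.

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    X = tf.Variable([10, 1])
    init = tf.initialize_all_variables()

# One session for the whole graph; TF schedules ops across the
# available cores/devices according to this config.
config = tf.ConfigProto(
    intra_op_parallelism_threads=4,   # threads used inside a single op
    inter_op_parallelism_threads=4,   # threads used to run independent ops
    log_device_placement=True)        # log which device each op runs on

sess = tf.Session(graph=graph, config=config)
sess.run(init)
print sess.run(X)
sess.close()

With this setup a process pool is not needed for parallelism: the single session already uses multiple threads and, if available, the GPU.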