I am trying to fix a problem that occurs on our cluster, which runs TensorFlow v1.0.1 with GPU support, TORQUE v6.1.0 and MOAB as the job scheduler.
The error occurs when the executed Python script tries to start a new session:
[...]
with tf.Session() as sess:
[...]
Error message:
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
E tensorflow/core/common_runtime/direct_session.cc:137] Internal: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_INVALID_DEVICE
Load Data...
input: (12956, 128, 128, 1)
output: (12956, 64, 64, 16)
Initiliaze training
Traceback (most recent call last):
File "[...]/train.py", line 154, in <module>
tf.app.run()
File "[...]/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "[...]/train.py", line 150, in main
training()
File "[...]/train.py", line 72, in training
with tf.Session() as sess:
File "[...]/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1176, in __init__
super(Session, self).__init__(target, graph, config=config)
File "[...]/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 552, in __init__
self._session = tf_session.TF_NewDeprecatedSession(opts, status)
File "[...]/python/3.5.1/lib/python3.5/contextlib.py", line 66, in __exit__
next(self.gen)
File "[...]/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InternalError: Failed to create session.
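For context, a minimal sketch of how such a session could be opened with restricted GPU visibility and on-demand memory growth; the CUDA_VISIBLE_DEVICES setting and the config options are my additions as a possible workaround, not part of the original train.py:

# Hypothetical sketch, not the original script: expose only one of the
# allocated GPUs to TensorFlow and let it allocate memory on demand.
# Both settings are sometimes suggested when session creation fails
# under a batch scheduler.
import os
import tensorflow as tf

os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # show only the first allocated GPU

config = tf.ConfigProto()
config.gpu_options.allow_growth = True     # do not grab all GPU memory up front

with tf.Session(config=config) as sess:
    print(sess.run(tf.constant('session created')))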
To reproduce the problem, I executed the script directly on an offline GPU node (so TORQUE was not involved), and no error occurred. I therefore assume the problem is related to TORQUE, but I have not found a solution yet.
TORQUE parameters:
#PBS -l nodes=1:ppn=2:gpus=4:exclusive_process
#PBS -l mem=25gb
I tried once without exclusive_process, but the job was not executed. I assume our scheduler requires this flag whenever GPUs are involved.
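To see which compute mode the GPUs allocated to the job are actually in, something like the following could be run inside the job script; this helper is my assumption (it only wraps nvidia-smi, and the query field names should be checked against the local driver with nvidia-smi --help-query-gpu):

# Hypothetical helper: print the compute mode of each visible GPU, e.g. to
# verify whether exclusive_process is in effect inside the TORQUE job.
import subprocess

def gpu_compute_modes():
    out = subprocess.check_output(
        ['nvidia-smi', '--query-gpu=index,name,compute_mode',
         '--format=csv,noheader'])
    return out.decode().strip().splitlines()

if __name__ == '__main__':
    for line in gpu_compute_modes():
        print(line)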
Answer 0 (score: 0)
I think I found a way to get the job running by changing the compute mode from 'exclusive_process' to 'shared'.
Now the job starts and seems to compute something. But judging from the output of nvidia-smi, I am wondering whether all four GPUs are actually being used: why are all GPUs working on the same process?
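A possible explanation, sketched below as my own assumption rather than code from the question: TensorFlow 1.x initializes a CUDA context on every visible GPU, which is why nvidia-smi shows a single process attached to all four devices, but ops only run on /gpu:0 unless they are placed explicitly. Explicit placement looks roughly like this:

# Sketch: place one "tower" of work on each of the four allocated GPUs and
# log device placement to confirm where the ops actually run.
import tensorflow as tf

towers = []
for i in range(4):
    with tf.device('/gpu:%d' % i):
        a = tf.random_normal([1024, 1024])
        towers.append(tf.reduce_sum(tf.matmul(a, a)))

total = tf.add_n(towers)

config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(total))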