Training the Quick Draw model with CudnnLSTM results in CUDNN_STATUS_EXECUTION_FAILED

Asked: 2019-02-01 07:37:55

Tags: tensorflow cudnn quickdraw

System setup:
Ubuntu 16.04, Tesla V100 on AWS p3.2xlarge, Nvidia driver 396.54, CUDA 9.0.176_384.81, cuDNN for CUDA 9.0
TensorFlow GPU 1.9.0, Python 3.6 using pyenv

I was curious about the Google Quickdraw game and was looking into how they train the model.

I followed this file:

https://github.com/tensorflow/models/blob/master/tutorials/rnn/quickdraw/train_model.py

and ran the following command:

python train_model.py \
--training_data train_data \
--eval_data eval_data \
--model_dir /tmp/quickdraw_model/ \
--cell_type cudnn_lstm

The training and evaluation data were generated with

https://github.com/tensorflow/models/blob/master/tutorials/rnn/quickdraw/create_dataset.py

using the files from: https://console.cloud.google.com/storage/browser/quickdraw_dataset/full/simplified
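For completeness, the conversion was run roughly along these lines; the flag names are my reconstruction from the tutorial README and the paths are placeholders, so check create_dataset.py --help for the exact options:

python create_dataset.py \
--ndjson_path rnn_tutorial_data \
--output_path rnn_tutorial_data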

The train_model.py run then stops after the following error:

2019-02-01 06:41:15.770071: E tensorflow/stream_executor/cuda/cuda_dnn.cc:943] CUDNN_STATUS_EXECUTION_FAILED Failed to set dropout descriptor with state memory size: 3932160 bytes.
2019-02-01 06:41:15.770123: W tensorflow/core/framework/op_kernel.cc:1318] OP_REQUIRES failed at cudnn_rnn_ops.cc:1214 : Unknown: CUDNN_STATUS_EXECUTION_FAILED Failed to set dropout descriptor with state memory size: 3932160 bytes.
Traceback (most recent call last):
File "/home/ubuntu/.pyenv/versions/abc/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "/home/ubuntu/.pyenv/versions/abc/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/ubuntu/.pyenv/versions/abc/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: CUDNN_STATUS_EXECUTION_FAILED Failed to set dropout descriptor with state memory size: 3932160 bytes.
[[Node: cudnn_lstm/CudnnRNN = CudnnRNN[T=DT_FLOAT, direction="bidirectional", dropout=0.3, input_mode="linear_input", is_training=true, rnn_mode="lstm", seed=0, seed2=0, _device="/job:localhost/replica:0/task:0/device:GPU:0"](transpose, cudnn_lstm/zeros, cudnn_lstm/zeros, cudnn_lstm/opaque_kernel/read)]]
[[Node: OptimizeLoss/clip_by_global_norm/mul_1/_239 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_354_OptimizeLoss/clip_by_global_norm/mul_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

After some research, it seems the error is raised by the call to the function cudnnSetDropoutDescriptor:

https://github.com/tensorflow/tensorflow/blob/r1.9/tensorflow/stream_executor/cuda/cuda_dnn.cc#L932

After checking the API documentation, it seems CUDNN_STATUS_EXECUTION_FAILED can be caused by a library bug or a broken installation.
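To narrow it down outside the full estimator, a minimal sketch along these lines should exercise the same CudnnRNN / dropout-descriptor path (the layer sizes and dropout mirror what I understand the train_model.py defaults to be, and the shapes are placeholders, so treat it as a sketch rather than tutorial code):

import numpy as np
import tensorflow as tf

# Bidirectional CudnnLSTM with dropout, roughly matching the cudnn_lstm
# branch of train_model.py: 3 layers, 128 units, dropout 0.3.
lstm = tf.contrib.cudnn_rnn.CudnnLSTM(
    num_layers=3, num_units=128, direction="bidirectional", dropout=0.3)

# Time-major input: [max_time, batch_size, features]; 3 features stands in
# for the (dx, dy, pen_state) ink representation.
inks = tf.placeholder(tf.float32, [None, 8, 3])
outputs, _ = lstm(inks, training=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(50, 8, 3).astype(np.float32)
    print(sess.run(outputs, feed_dict={inks: batch}).shape)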

I checked the installation by running the MNIST test, and it passed.
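Besides the MNIST test, a quick generic check (not from the tutorial) is to list the devices TensorFlow itself sees and confirm the V100 shows up:

from tensorflow.python.client import device_lib

# Print every device TensorFlow can use; the GPU entry should show the
# Tesla V100 and its memory limit if the CUDA/cuDNN stack is set up correctly.
for dev in device_lib.list_local_devices():
    print(dev.name, dev.device_type, dev.physical_device_desc)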

Incidentally, I also tried running the command above without the cell_type argument, which means it will run on the CPU; it was able to run without any problem. I also tried running the same program with the following setup, and it produced the same error.

Ubuntu 18.04, Tesla V100 on AWS p3.2xlarge, Nvidia driver 410.79, CUDA 10.0.130_410.48, cuDNN for CUDA 10.0,
TensorFlow GPU 12.0/10.0, Python 3.6 using pyenv

Has anyone tried this and run into a similar issue?

0 Answers:

No answers yet.