Warning when running TensorFlow on the GPU

Asked: 2018-02-09 05:15:56

Tags: tensorflow

When I run my code on the GPU, I get a warning at every step, as shown below:

2018-02-09 12:59:58.635500: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\stream_executor\cuda\cuda_dnn.cc:3100] 

This warning did not appear until I added batch-normalization (BN) layers to the model. I don't know the cause, and searching Google turned up no answer.

At the start of my terminal output there are also some other messages:

totalMemory: 11.00GiB freeMemory: 10.71GiB
2018-02-09 12:46:34.030887: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
Begin Training!!!
0
pickle 0 load finished
2018-02-09 12:47:47.429015: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.25GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-09 12:47:47.429015: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-09 12:47:47.475816: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.02GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-09 12:47:47.475816: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 34.84MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-09 12:47:47.491416: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-09 12:47:47.569416: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 85.05MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
[step 000000] loss 4484.4204     lr 0.00500
[step 000001] loss 14606.2910    lr 0.00500
2018-02-09 12:47:54.823428: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 256.0KiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-09 12:47:54.823428: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\stream_executor\cuda\cuda_dnn.cc:3100] 
[step 000002] loss 10781.2871    lr 0.00500
2018-02-09 12:47:55.931030: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 256.0KiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
2018-02-09 12:47:55.946630: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\stream_executor\cuda\cuda_dnn.cc:3100] 
[step 000003] loss 2583.5710     lr 0.00500
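For context, the `bfc_allocator` messages above are warnings rather than errors: cuDNN may probe for large scratch buffers and fall back to a slower algorithm when they cannot be granted. A common way to make GPU memory allocation more predictable in TensorFlow 1.x is to configure the session's `gpu_options`. This is a minimal sketch of that configuration, not code from the original post:

```python
import tensorflow as tf

# Session configuration sketch (TensorFlow 1.x API).
config = tf.ConfigProto()

# Grow GPU memory usage on demand instead of grabbing
# nearly all free memory up front.
config.gpu_options.allow_growth = True

# Alternatively, cap the fraction of GPU memory TensorFlow
# may use (uncomment to use a fixed cap instead of growth):
# config.gpu_options.per_process_gpu_memory_fraction = 0.9

sess = tf.Session(config=config)
```

Whether this silences the cuda_dnn.cc warning depends on the workload; it mainly changes how the BFC allocator competes for the card's memory.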

0 Answers:
