My code works fine when run from an iPython terminal, but it fails with an out-of-memory error, as shown below.
/home/abigail/anaconda3/envs/tf_gpuenv/bin/python -Xms1280m -Xmx4g /home/abigail/PycharmProjects/MLNN/src/test.py
Using TensorFlow backend.
Epoch 1/150
2019-01-19 22:12:39.539156: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-01-19 22:12:39.588899: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-01-19 22:12:39.589541: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 750 Ti major: 5 minor: 0 memoryClockRate(GHz): 1.0845
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 59.69MiB
2019-01-19 22:12:39.589552: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
Traceback (most recent call last):
  File "/home/abigail/PycharmProjects/MLNN/src/test.py", line 20, in <module>
    model.fit(X, Y, epochs=150, batch_size=10)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/keras/engine/training.py", line 1039, in fit
    validation_steps=validation_steps)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/keras/engine/training_arrays.py", line 199, in fit_loop
    outs = f(ins_batch)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2697, in __call__
    if hasattr(get_session(), '_make_callable_from_options'):
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 186, in get_session
    _SESSION = tf.Session(config=config)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1551, in __init__
    super(Session, self).__init__(target, graph, config=config)
  File "/home/abigail/anaconda3/envs/tf_gpuenv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 676, in __init__
    self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
Process finished with exit code 1
In PyCharm, I first edited "Help -> Edit Custom VM Options":
-Xms1280m
-Xmx4g
This did not solve the problem. I then edited "Run -> Edit Configurations -> Interpreter options":
-Xms1280m -Xmx4g
It still gives the same error. My Linux desktop has plenty of memory (64 GB). How can I fix this?
By the way, in PyCharm, if I do not use the GPU, there is no error.
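For reference, by "not using the GPU" I mean hiding it from TensorFlow before Keras creates its session. A minimal sketch (assuming the standard CUDA_VISIBLE_DEVICES mechanism, not my exact code):

import os

# Hide all CUDA devices so TensorFlow falls back to the CPU.
# This must be set before TensorFlow/Keras is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import keras  # Keras (TensorFlow backend) now sees no GPU and runs on the CPU.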
EDIT:
In [5]: exit
(tf_gpuenv) abigail@abigail-XPS-8910:~/nlp/MLMastery/DLwithPython/code/chapter_07$ nvidia-smi
Sun Jan 20 00:41:49 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 415.25 Driver Version: 415.25 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 750 Ti Off | 00000000:01:00.0 On | N/A |
| 38% 54C P0 2W / 38W | 1707MiB / 1993MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 770 G /usr/bin/akonadi_archivemail_agent 2MiB |
| 0 772 G /usr/bin/akonadi_sendlater_agent 2MiB |
| 0 774 G /usr/bin/akonadi_mailfilter_agent 2MiB |
| 0 1088 G /usr/lib/xorg/Xorg 166MiB |
| 0 1440 G kwin_x11 60MiB |
| 0 1446 G /usr/bin/krunner 1MiB |
| 0 1449 G /usr/bin/plasmashell 60MiB |
| 0 1665 G ...quest-channel-token=3687002912233960986 137MiB |
| 0 20728 C ...ail/anaconda3/envs/tf_gpuenv/bin/python 1255MiB |
+-----------------------------------------------------------------------------+
Answer 0 (score: 0)
To summarize our conversation from the comments: I don't think you can allocate GPU memory, or desktop memory, to the GPU in the way you are trying. When you have only one GPU, Tensorflow-GPU will in most cases allocate roughly 95% of the available GPU memory to the task it is running. In your case, something is already taking up all of the available GPU memory, and that is the main reason your program cannot run. You need to check the GPU's memory usage and free some of it (I can't help thinking you have another instance of Python running Tensorflow-GPU in the background, or some other GPU-intensive program). On Linux, the nvidia-smi command will tell you what is using the GPU.
Here is an example:
Sun Jan 20 18:23:35 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130 Driver Version: 384.130 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 970 Off | 00000000:01:00.0 Off | N/A |
| 32% 63C P2 69W / 163W | 3823MiB / 4035MiB | 40% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 3019 C ...e/scarter/anaconda3/envs/tf1/bin/python 3812MiB |
+-----------------------------------------------------------------------------+
In my case, you can see that the card in my server has 4035 MiB of RAM, of which 3823 MiB is in use. Also, look at the GPU processes at the bottom: process PID 3019 is taking up 3812 MiB of the 4035 MiB available on the card. If we wanted to run another Python script with tensorflow, I would have two main options: install a second GPU and run on that, or, if no GPU is available, run on the CPU. Someone more knowledgeable than me might say you can allocate half the memory to each task (a rough sketch of how that could be configured follows below), but 2 GiB is already very little memory for tensorflow training; a card with more memory (6 GiB or more) is usually recommended for this kind of task.
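For what it is worth, a minimal sketch of how such a cap could be set with the TF 1.x API (the 0.5 fraction is purely illustrative, not taken from the question):

import tensorflow as tf
from keras import backend as K

# Ask TensorFlow for at most ~50% of the GPU's memory instead of the
# default ~95%, and only grab memory as it is actually needed.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5,
                            allow_growth=True)
config = tf.ConfigProto(gpu_options=gpu_options)
K.set_session(tf.Session(config=config))

# Build and fit the Keras model as usual after this point.

Note that this only helps when the memory is actually free; it cannot reclaim memory that another process is already holding.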
In the end, you need to find out what is consuming all of your video card's memory and end that task. I believe that will solve your problem.
Answer 1 (score: -1)
(screenshot) These python2 and python3 processes are taking up all of my resources, and I cannot figure out what these processes are.