I am starting to build a neural network for NLP, with an Embedding layer that uses pre-trained embeddings. But when I declare the Embedding layer in Keras (TensorFlow backend), I get a ResourceExhaustedError:
ResourceExhaustedError: OOM when allocating tensor with shape[137043,300] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node embedding_4/random_uniform/RandomUniform}} = RandomUniform[T=DT_INT32, dtype=DT_FLOAT, seed=87654321, seed2=9524682, _device="/job:localhost/replica:0/task:0/device:GPU:0"](embedding_4/random_uniform/shape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
I have already searched Google: most ResourceExhaustedErrors happen during training, because the GPU does not have enough RAM, and they can be fixed by reducing the batch size.
But in my case, I have not even started training! This is the offending line:
q1 = Embedding(nb_words + 1,
               param['embed_dim'].value,
               weights=[word_embedding_matrix],
               input_length=param['sentence_max_len'].value)(question1)
Here, word_embedding_matrix is a matrix of shape (137043, 300) holding the pre-trained embeddings.
As far as I can tell, this should not use an excessive amount of memory (unlike here):
137043 * 300 * 4 bytes ≈ 164 MB
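A quick back-of-the-envelope check of that figure, assuming the weights are stored as float32 (4 bytes per value):

rows, dims, bytes_per_float = 137043, 300, 4
print(rows * dims * bytes_per_float / 1024**2)  # ~156.8 MiB, i.e. roughly 164 MB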
Here are the GPUs being used (nvidia-smi output):
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26                 Driver Version: 396.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:02:00.0 Off |                  N/A |
| 23%   32C    P8    16W / 250W |   6956MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:03:00.0 Off |                  N/A |
| 23%   30C    P8    16W / 250W |    530MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:82:00.0 Off |                  N/A |
| 23%   34C    P8    16W / 250W |    333MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX 108...  Off  | 00000000:83:00.0 Off |                  N/A |
| 24%   46C    P2    58W / 250W |   4090MiB / 11178MiB |     23%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1087      C   uwsgi                                       1331MiB |
|    0      1088      C   uwsgi                                       1331MiB |
|    0      1089      C   uwsgi                                       1331MiB |
|    0      1090      C   uwsgi                                       1331MiB |
|    0      1091      C   uwsgi                                       1331MiB |
|    0      4176      C   /usr/bin/python3                             289MiB |
|    1      2631      C   ...e92/venvs/wordintent_venv/bin/python3.6    207MiB |
|    1      4176      C   /usr/bin/python3                             313MiB |
|    2      4176      C   /usr/bin/python3                             323MiB |
|    3      4176      C   /usr/bin/python3                             347MiB |
|    3     10113      C   python                                      1695MiB |
|    3     13614      C   python3                                     1347MiB |
|    3     14116      C   python                                       689MiB |
+-----------------------------------------------------------------------------+
Does anyone know why I am getting this exception?
Answer 0 (score: 0)
Following this link, configuring TensorFlow not to grab all of the GPU memory up front seems to solve the problem.
Running the following before declaring the model layers fixes it:
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                     # allocate GPU memory on demand
config.gpu_options.per_process_gpu_memory_fraction = 0.3   # cap this process at 30% of the GPU
session = tf.Session(config=config)
K.set_session(session)                                     # make Keras use this session
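For what it's worth, tf.ConfigProto and tf.Session are the TensorFlow 1.x API; on TensorFlow 2.x the same "allocate GPU memory on demand" setting would look roughly like this sketch:

import tensorflow as tf

# TF 2.x equivalent of allow_growth: request on-demand allocation for each GPU.
# This must run before the GPUs are initialized, i.e. before building any model.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)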
I will wait a while before accepting my own answer, to see whether other answers come in.