When I run the following code in an IPython notebook:
_x = np.concatenate([_batches.next() for i in range(_batches.samples)])
I get this error message:
---------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-14-313ecf2ea184> in <module>()
----> 1 _x = np.concatenate([_batches.next() for i in range(_batches.samples)])
MemoryError:
The iterator has 9200 elements.
_batches.next() returns an np.array of shape (1, 400, 400, 3).
I have 30 GB of RAM and a 16 GB GPU.
I run into a similar problem when I use predict_generator() in Keras. I run the following code:
bottleneck_features_train = bottleneck_model.predict_generator(batches, len(batches), verbose=1)
With verbose=1 I can see the progress indicator run all the way to the end, but then I get the following error:
2300/2300 [==============================] - 177s 77ms/step
---------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-19-d0e463f64f5a> in <module>()
----> 1 bottleneck_features_train = bottleneck_model.predict_generator(batches, len(batches), verbose=1)
~/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
85 warnings.warn('Update your `' + object_name +
86 '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 87 return func(*args, **kwargs)
88 wrapper._original_function = func
89 return wrapper
~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in predict_generator(self, generator, steps, max_queue_size, workers, use_multiprocessing, verbose)
2345 return all_outs[0][0]
2346 else:
-> 2347 return np.concatenate(all_outs[0])
2348 if steps_done == 1:
2349 return [out for out in all_outs]
MemoryError:
Is there a solution to this memory problem? Thanks!
Answer (score: 2):
For the first error: the data is simply too large. Assuming the dtype is int64 or float64 (8 bytes per element), the total comes to 9200 * 400 * 400 * 3 * 8 bytes, i.e. about 35 GB. All of that data is first collected as a list of blocks and then copied into one big array by the concatenation, so for a moment both the list and the concatenated copy have to fit in memory.
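A quick back-of-the-envelope check of that figure (assuming float64, i.e. 8 bytes per element; the variable names below are just for illustration):

import numpy as np

n_samples = 9200                                   # elements in the iterator
itemsize = np.dtype(np.float64).itemsize           # 8 bytes per element
total_bytes = n_samples * 400 * 400 * 3 * itemsize
print(total_bytes / 1e9)                           # ~35.3 GB, more than the 30 GB of RAM available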
You could pre-allocate the array instead; that might work:
x_ = np.empty((9200, 400, 400, 3))    # pre-allocate the full output array once (float64 by default)
for i in range(9200):
    x_[i] = batches.next()[0]         # each batch has shape (1, 400, 400, 3); store the single image
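Note that np.empty defaults to float64, so this pre-allocated buffer is itself about 35 GB. If your data can tolerate lower precision (an assumption about your pipeline, e.g. float32, or uint8 for raw pixel values in 0-255), passing an explicit dtype, for example x_ = np.empty((9200, 400, 400, 3), dtype=np.float32), halves the footprint to roughly 17.7 GB, which does fit in 30 GB of RAM.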