I'm getting a MemoryError in Python. I'm on a machine with 4GB of RAM and a fresh Ubuntu install with plenty of disk space, running this machine learning script: https://github.com/mrmotallebi/synthesizing_obama_network_training
I've confirmed that I'm using the 64-bit version of Python. I'm new to machine learning — should I just set up a VM with significantly more RAM?
Traceback (most recent call last):
  File "run.py", line 225, in <module>
    main()
  File "run.py", line 222, in main
    s = Speech()
  File "run.py", line 42, in __init__
    self.loadData()
  File "/home/gabriel/Downloads/synthesizing_obama_network_training-master/util.py", line 129, in loadData
    meani, stdi, meano, stdo = self.normalize(inps, outps)
  File "/home/gabriel/Downloads/synthesizing_obama_network_training-master/util.py", line 103, in normalize
    meani, stdi = normalizeData(inps["training"], "save/" + self.args.save_dir, "statinput", ["fea%02d" % x for x in range(inps["training"][0].shape[1])], normalize=self.args.normalizeinput)
  File "/home/gabriel/Downloads/synthesizing_obama_network_training-master/util.py", line 48, in normalizeData
    std = np.std(allstrokes, 0)
  File "/home/gabriel/.local/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 3242, in std
    **kwargs)
  File "/home/gabriel/.local/lib/python2.7/site-packages/numpy/core/_methods.py", line 140, in _std
    keepdims=keepdims)
  File "/home/gabriel/.local/lib/python2.7/site-packages/numpy/core/_methods.py", line 117, in _var
    x = asanyarray(arr - arrmean)
MemoryError
Answer 0 (score: 0)
If the dataset you are working with takes up more than 4GB, it cannot all be allocated in RAM at once. In that case you should use a technique called batching, which reads the data from the hard drive in batches and keeps only one batch in RAM at a time.
Hope this helps!
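As a concrete illustration of batching for exactly the step that fails here (computing mean and std over the whole dataset at once): the sketch below streams a `.npy` file from disk in chunks via `np.load(..., mmap_mode="r")` and accumulates running sums, so only one batch is ever held in RAM. This is a minimal sketch, not the repository's own code — the file path, function name, and batch size are hypothetical.

```python
import numpy as np

def batched_mean_std(path, batch_size=10000):
    """Per-column mean and std of a 2-D array stored in a .npy file,
    computed in batches so the full array never has to fit in RAM.
    (Hypothetical helper; adapt the loading step to your data format.)"""
    data = np.load(path, mmap_mode="r")  # memory-mapped: stays on disk
    n = 0
    total = np.zeros(data.shape[1])
    total_sq = np.zeros(data.shape[1])
    for start in range(0, data.shape[0], batch_size):
        batch = np.asarray(data[start:start + batch_size], dtype=np.float64)
        n += batch.shape[0]
        total += batch.sum(axis=0)
        total_sq += (batch ** 2).sum(axis=0)
    mean = total / n
    # Population std, matching np.std's default ddof=0
    std = np.sqrt(total_sq / n - mean ** 2)
    return mean, std
```

The results match `np.mean(data, 0)` and `np.std(data, 0)` up to floating-point rounding, but peak memory is bounded by the batch size instead of the dataset size. (For very large value ranges, a numerically safer alternative is Welford's online algorithm.)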