How can I work around the memory limit in this script?

Asked: 2016-10-06 09:56:00

Tags: python

I am trying to normalize my dataset, which is 1.7 GB. I have 14 GB of RAM, and I hit the limit very quickly.

This happens while computing the mean/std of the training data. Once loaded into RAM, the training data takes up most of the memory (13.8 GB), so the mean gets computed, but when the script reaches the next line to compute the std, it crashes.

Here is the script:

import caffe
import leveldb
import numpy as np
from caffe.proto import caffe_pb2
import cv2
import sys
import time

direct = 'examples/svhn/'
db_train = leveldb.LevelDB(direct+'svhn_train_leveldb')
db_test = leveldb.LevelDB(direct+'svhn_test_leveldb')
datum = caffe_pb2.Datum()

#using the whole dataset for training which is 604,388
size_train = 604388 #normal training set is 73257
size_test = 26032
data_train = np.zeros((size_train, 3, 32, 32))
label_train = np.zeros(size_train, dtype=int)

print 'Reading training data...'
i = -1
for key, value in db_train.RangeIter():
    i = i + 1
    if i % 1000 == 0:
        print i
    if i == size_train:
        break
    datum.ParseFromString(value)
    label = datum.label
    data = caffe.io.datum_to_array(datum)
    data_train[i] = data
    label_train[i] = label

print 'Computing statistics...'
print 'calculating mean...'
mean = np.mean(data_train, axis=(0,2,3))
print 'calculating std...'
std = np.std(data_train, axis=(0,2,3))

#np.savetxt('mean_svhn.txt', mean)
#np.savetxt('std_svhn.txt', std)

print 'Normalizing training'
for i in range(3):
    print i
    data_train[:, i, :, :] = data_train[:, i, :, :] - mean[i]
    data_train[:, i, :, :] = data_train[:, i, :, :] / std[i]


print 'Outputting training data'
leveldb_file = direct + 'svhn_train_leveldb_normalized'
batch_size = size_train

# create the leveldb file
db = leveldb.LevelDB(leveldb_file)
batch = leveldb.WriteBatch()
datum = caffe_pb2.Datum()

for i in range(size_train):
    if i % 1000 == 0:
        print i

    # save in datum
    datum = caffe.io.array_to_datum(data_train[i], label_train[i])
    keystr = '{:0>5d}'.format(i)
    batch.Put( keystr, datum.SerializeToString() )

    # write batch
    if (i + 1) % batch_size == 0:
        db.Write(batch, sync=True)
        batch = leveldb.WriteBatch()
        print (i + 1)

# write last batch
if (i+1) % batch_size != 0:
    db.Write(batch, sync=True)
    print 'last batch'
    print (i + 1)
#explicitly freeing memory to avoid hitting the limit!
#del data_train
#del label_train

print 'Reading test data...'
data_test = np.zeros((size_test, 3, 32, 32))
label_test = np.zeros(size_test, dtype=int)
i = -1
for key, value in db_test.RangeIter():
    i = i + 1
    if i % 1000 == 0:
        print i
    if i == size_test:
        break
    datum.ParseFromString(value)
    label = datum.label
    data = caffe.io.datum_to_array(datum)
    data_test[i] = data
    label_test[i] = label

print 'Normalizing test'
for i in range(3):
    print i
    data_test[:, i, :, :] = data_test[:, i, :, :] - mean[i]
    data_test[:, i, :, :] = data_test[:, i, :, :] / std[i]

#Zero Padding
#print 'Padding...'
#npad = ((0,0), (0,0), (4,4), (4,4))
#data_train = np.pad(data_train, pad_width=npad, mode='constant', constant_values=0)
#data_test = np.pad(data_test, pad_width=npad, mode='constant', constant_values=0)

print 'Outputting test data'
leveldb_file = direct + 'svhn_test_leveldb_normalized'
batch_size = size_test

# create the leveldb file
db = leveldb.LevelDB(leveldb_file)
batch = leveldb.WriteBatch()
datum = caffe_pb2.Datum()

for i in range(size_test):
    # save in datum
    datum = caffe.io.array_to_datum(data_test[i], label_test[i])
    keystr = '{:0>5d}'.format(i)
    batch.Put( keystr, datum.SerializeToString() )

    # write batch
    if (i + 1) % batch_size == 0:
        db.Write(batch, sync=True)
        batch = leveldb.WriteBatch()
        print (i + 1)

# write last batch
if (i+1) % batch_size != 0:
    db.Write(batch, sync=True)
    print 'last batch'
    print (i + 1)

How can I make it consume less memory so that I can run the script to completion?

2 Answers:

Answer 0: (Score: 1)

Why not compute the statistics on a subset of the original data? For example, here we compute the mean and std for just 100 points:

sample_size = 100
data_train = np.random.rand(1000, 20, 10, 10)

# Take a subset of the training data
idxs = np.random.choice(data_train.shape[0], sample_size)
data_train_subset = data_train[idxs]

# Compute stats
mean = np.mean(data_train_subset, axis=(0,2,3))
std = np.std(data_train_subset, axis=(0,2,3))

If your data is 1.7 GB, it is very unlikely that you need all of it to get an accurate estimate of the mean and std.

In addition, could you get away with using fewer bits in your datatype? I'm not sure what datatype caffe.io.datum_to_array returns, but you could do:

data = caffe.io.datum_to_array(datum).astype(np.float32)

to make sure the data is in float32 format. (If the data is currently float64, this will save you half the space.)
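The same idea applies to the big buffers in the script above. A minimal sketch of the allocation change, assuming the LevelDB datums hold 8-bit pixel values that fit into float32 without loss:

import numpy as np

size_train = 604388

# float32 instead of the default float64 halves the footprint:
# 604388 * 3 * 32 * 32 * 4 bytes ~= 6.9 GB instead of ~13.8 GB
data_train = np.zeros((size_train, 3, 32, 32), dtype=np.float32)
label_train = np.zeros(size_train, dtype=int)

The mean/std computation then also runs in float32, which is typically accurate enough for normalization.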

Answer 1: (Score: 0)

The culprit behind so many problems and the constant out-of-memory crashes was the batch size being set to the size of the whole training set:

print 'Outputting training data'
leveldb_file = direct + 'svhn_train_leveldb_normalized'
batch_size = size_train

This apparently was the cause: nothing would get committed and saved to disk until the whole dataset had been read and loaded into one huge transaction. It was also why using np.float32, as suggested by @BillCheatham, did not work properly on its own.
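A minimal sketch of a corrected write loop, committing every 1,000 datums instead of once at the end (the batch size here is an arbitrary choice, and db, data_train, label_train and size_train are the variables defined earlier in the script):

batch_size = 1000  # commit every 1000 datums instead of all 604,388 at once

batch = leveldb.WriteBatch()
for i in range(size_train):
    datum = caffe.io.array_to_datum(data_train[i], label_train[i])
    keystr = '{:0>6d}'.format(i)  # 6 digits keeps all 604,388 keys in order
    batch.Put(keystr, datum.SerializeToString())

    # flush this batch to disk and start a fresh transaction
    if (i + 1) % batch_size == 0:
        db.Write(batch, sync=True)
        batch = leveldb.WriteBatch()

# write whatever is left over
if (i + 1) % batch_size != 0:
    db.Write(batch, sync=True)

This way at most one batch of serialized datums sits in the WriteBatch at any time, instead of the entire dataset.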

For some reason the memory-mapping solution would not work for me, so I used the solution mentioned above.

PS: Later on, I switched completely to float32, fixed the batch_size and ran everything together; that is how I can say my earlier solution (divide the data and add the fractions together) works and gives exactly the same numbers up to 2 decimal places.
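For reference, a rough sketch of that "divide and add the fractions together" idea: accumulate per-channel sums chunk by chunk so the statistics never need a full float64 copy of the data (the chunk size and accumulator names are illustrative, not from the original script):

import numpy as np

chunk = 10000  # rows promoted to float64 at a time (a few hundred MB per chunk)

# Accumulate per-channel count, sum and sum of squares over chunks,
# then combine the fractions at the end.
n = 0
s = np.zeros(3, dtype=np.float64)
sq = np.zeros(3, dtype=np.float64)
for start in range(0, size_train, chunk):
    block = data_train[start:start + chunk].astype(np.float64)
    n += block.shape[0] * block.shape[2] * block.shape[3]
    s += block.sum(axis=(0, 2, 3))
    sq += (block ** 2).sum(axis=(0, 2, 3))

mean = s / n                       # per-channel mean
std = np.sqrt(sq / n - mean ** 2)  # per-channel std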