I created a dictionary in Python and pickled it. It came to 300MB in size. Now I want to load that same pickle.
output = open('myfile.pkl', 'rb')
mydict = pickle.load(output)
Loading this pickle takes 15 seconds. How can I reduce this time?
Hardware specs: Ubuntu 14.04, 4GB RAM
The code below shows the time taken to dump or load a file using json, pickle, and cPickle. After dumping, the file size is around 300MB.
import json, pickle, cPickle
import os, timeit

mydict = {...}  # all values to be added

def dump_json():
    output = open('myfile1.json', 'wb')
    json.dump(mydict, output)
    output.close()

def dump_pickle():
    output = open('myfile2.pkl', 'wb')
    pickle.dump(mydict, output, protocol=cPickle.HIGHEST_PROTOCOL)
    output.close()

def dump_cpickle():
    output = open('myfile3.pkl', 'wb')
    cPickle.dump(mydict, output, protocol=cPickle.HIGHEST_PROTOCOL)
    output.close()

def load_json():
    output = open('myfile1.json', 'rb')
    mydict = json.load(output)
    output.close()

def load_pickle():
    output = open('myfile2.pkl', 'rb')
    mydict = pickle.load(output)
    output.close()

def load_cpickle():
    output = open('myfile3.pkl', 'rb')
    mydict = cPickle.load(output)  # was pickle.load, which skewed the cPickle timing
    output.close()
if __name__ == '__main__':
    print "Json dump: "
    t = timeit.Timer(stmt="pickle_wr.dump_json()", setup="import pickle_wr")
    print t.timeit(1), '\n'

    print "Pickle dump: "
    t = timeit.Timer(stmt="pickle_wr.dump_pickle()", setup="import pickle_wr")
    print t.timeit(1), '\n'

    print "cPickle dump: "
    t = timeit.Timer(stmt="pickle_wr.dump_cpickle()", setup="import pickle_wr")
    print t.timeit(1), '\n'

    print "Json load: "
    t = timeit.Timer(stmt="pickle_wr.load_json()", setup="import pickle_wr")
    print t.timeit(1), '\n'

    print "pickle load: "
    t = timeit.Timer(stmt="pickle_wr.load_pickle()", setup="import pickle_wr")
    print t.timeit(1), '\n'

    print "cPickle load: "
    t = timeit.Timer(stmt="pickle_wr.load_cpickle()", setup="import pickle_wr")
    print t.timeit(1), '\n'
Output:
Json dump:
42.5809804916
Pickle dump:
52.87407804489
cPickle dump:
1.1903790187836
Json load:
12.240660209656
pickle load:
24.48748306274
cPickle load:
24.4888298893
I have seen that cPickle takes less time to dump and load, but loading the file still takes a long time.
Answer 0 (score: 23)
Try the json library instead of pickle. This should be an option in your case, because you're dealing with a dictionary of relatively simple objects.
According to this website, JSON is 25 times faster in reading (load) and 15 times faster in writing (dump).
Also see this question: What is faster - Loading a pickled dictionary object or Loading a JSON file - to a dictionary?
Upgrading Python or using the marshal module with a fixed Python version also helps boost speed (code adapted from here):
try:
    import cPickle
except ImportError:
    import pickle as cPickle
import pickle
import json, marshal, random
from time import time
from hashlib import md5

test_runs = 1000

if __name__ == "__main__":
    payload = {
        "float": [(random.randrange(0, 99) + random.random()) for i in range(1000)],
        "int": [random.randrange(0, 9999) for i in range(1000)],
        "str": [md5(str(random.random()).encode('utf8')).hexdigest() for i in range(1000)]
    }
    modules = [json, pickle, cPickle, marshal]
    for payload_type in payload:
        data = payload[payload_type]
        for module in modules:
            start = time()
            if module.__name__ in ['pickle', 'cPickle']:
                for i in range(test_runs):
                    serialized = module.dumps(data, protocol=-1)
            else:
                for i in range(test_runs):
                    serialized = module.dumps(data)
            w = time() - start
            start = time()
            for i in range(test_runs):
                unserialized = module.loads(serialized)
            r = time() - start
            print("%s %s W %.3f R %.3f" % (module.__name__, payload_type, w, r))
Results:
C:\Python27\python.exe -u "serialization_benchmark.py"
json int W 0.125 R 0.156
pickle int W 2.808 R 1.139
cPickle int W 0.047 R 0.046
marshal int W 0.016 R 0.031
json float W 1.981 R 0.624
pickle float W 2.607 R 1.092
cPickle float W 0.063 R 0.062
marshal float W 0.047 R 0.031
json str W 0.172 R 0.437
pickle str W 5.149 R 2.309
cPickle str W 0.281 R 0.156
marshal str W 0.109 R 0.047
C:\pypy-1.6\pypy-c -u "serialization_benchmark.py"
json int W 0.515 R 0.452
pickle int W 0.546 R 0.219
cPickle int W 0.577 R 0.171
marshal int W 0.032 R 0.031
json float W 2.390 R 1.341
pickle float W 0.656 R 0.436
cPickle float W 0.593 R 0.406
marshal float W 0.327 R 0.203
json str W 1.141 R 1.186
pickle str W 0.702 R 0.546
cPickle str W 0.828 R 0.562
marshal str W 0.265 R 0.078
c:\Python34\python -u "serialization_benchmark.py"
json int W 0.203 R 0.140
pickle int W 0.047 R 0.062
pickle int W 0.031 R 0.062
marshal int W 0.031 R 0.047
json float W 1.935 R 0.749
pickle float W 0.047 R 0.062
pickle float W 0.047 R 0.062
marshal float W 0.047 R 0.047
json str W 0.281 R 0.187
pickle str W 0.125 R 0.140
pickle str W 0.125 R 0.140
marshal str W 0.094 R 0.078
Python 3.4 uses pickle protocol 3 as default, which made no difference compared to protocol 4. Python 2 has protocol 2 as its highest pickle protocol (selected if a negative value is passed to dump), which is twice as slow as protocol 3.
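As a minimal sketch of the protocol selection discussed above (Python 3 shown; on Python 2 the same `protocol=-1` call tops out at protocol 2):

```python
import pickle

data = {"ints": list(range(1000))}

# protocol=-1 selects the highest protocol available to the running
# interpreter -- this is what gave cPickle its edge in the benchmark,
# since the binary protocols are far faster than protocol 0.
blob = pickle.dumps(data, protocol=-1)
restored = pickle.loads(blob)
```

The same `protocol` argument works with `pickle.dump` when writing to a file opened in binary mode.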
Answer 1 (score: 7)
I've had very good results reading huge files (e.g. an ~750 MB igraph object in a binary pickle file) with cPickle itself, simply by wrapping up the pickle load call as mentioned here.
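The referenced snippet isn't shown above; one widely cited version of this trick (an assumption here, not necessarily the exact call the answer means) is to pause the cyclic garbage collector while unpickling, since the millions of objects created during a large load otherwise trigger repeated collection passes:

```python
import gc
import pickle  # on Python 2, cPickle would be used instead


def load_pickle_fast(path):
    """Load a pickle with the cyclic garbage collector paused.

    Unpickling a huge dictionary allocates many container objects, each
    of which can trigger a GC pass; disabling collection for the
    duration of the load avoids that overhead.
    """
    with open(path, 'rb') as f:
        gc.disable()
        try:
            return pickle.load(f)
        finally:
            gc.enable()  # always restore GC, even if loading fails
```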
Of course, there may be more suitable ways to accomplish the task, but this workaround does drastically cut the time required. (For me, it went from 843.04s down to 41.28s, roughly 20x.)
Answer 2 (score: 3)
If you are trying to store the dictionary in a single file, it's the load time for the large file that is slowing you down. One of the easiest things you can do is write the dictionary to a directory on disk, with each dictionary entry being an individual file. Then you can have the files pickled and unpickled in multiple threads (or using multiprocessing). For a very large dictionary, this should be much faster than reading from a single file, regardless of the serializer you choose. There are packages like klepto and joblib that already do much (if not all) of this for you. I'd check out those packages. (Note: I am the klepto author. See https://github.com/uqfoundation/klepto.)