Based on this comment and the referenced documentation, pickle protocol 4 in Python 3.4+ should be able to pickle byte objects larger than 4 GB.
However, using Python 3.4.3 or Python 3.5.0b2 on Mac OS X 10.10.4, I get an error when I try to pickle a large byte array:
>>> import pickle
>>> x = bytearray(8 * 1000 * 1000 * 1000)
>>> fp = open("x.dat", "wb")
>>> pickle.dump(x, fp, protocol = 4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 22] Invalid argument
Is there an error in my code, or am I misunderstanding the documentation?
Answer 0 (score: 28)
Here is a simple workaround for issue 24658: use pickle.loads or pickle.dumps and split the bytes object into chunks of at most 2**31 - 1 bytes when writing it to or reading it from the file.
import pickle
import os.path

file_path = "pkl.pkl"
n_bytes = 2**31
max_bytes = 2**31 - 1
data = bytearray(n_bytes)

## write: serialize fully in memory, then write in chunks below the 2 GB limit
bytes_out = pickle.dumps(data)
with open(file_path, 'wb') as f_out:
    for idx in range(0, len(bytes_out), max_bytes):
        f_out.write(bytes_out[idx:idx + max_bytes])

## read: pull the file back in chunks, then unpickle the reassembled bytes
bytes_in = bytearray(0)
input_size = os.path.getsize(file_path)
with open(file_path, 'rb') as f_in:
    for _ in range(0, input_size, max_bytes):
        bytes_in += f_in.read(max_bytes)
data2 = pickle.loads(bytes_in)

assert(data == data2)
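Note that this workaround still builds the entire pickle in memory with pickle.dumps before writing it out, so the peak memory use is roughly twice the payload size; the MacOSFile wrapper in the answers below streams the chunks through ordinary pickle.dump and pickle.load instead.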
Answer 1 (score: 16)
To summarize what was answered in the comments:
Yes, Python can pickle byte objects larger than 4 GB. The observed error is caused by a bug in the implementation (see Issue 24658).
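A minimal sketch of what that means in practice (assuming an affected macOS build; the 8 GB payload and file name are just examples): serializing in memory with pickle.dumps succeeds, and only the single oversized write to the file object raises the OSError.

import pickle

data = bytearray(8 * 1000 * 1000 * 1000)  # ~8 GB payload
blob = pickle.dumps(data, protocol=4)     # works: pickling itself handles > 4 GB
with open("x.dat", "wb") as fp:
    pickle.dump(data, fp, protocol=4)     # fails with OSError: [Errno 22] on affected builds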
Answer 2 (score: 12)
Here is the full workaround, although it seems that pickle.load no longer tries to read a huge file in one go (I am on Python 3.5.2), so strictly speaking only the dump side (pickle.dump) needs this in order to work properly:
import pickle


class MacOSFile(object):

    def __init__(self, f):
        self.f = f

    def __getattr__(self, item):
        return getattr(self.f, item)

    def read(self, n):
        # print("reading total_bytes=%s" % n, flush=True)
        if n >= (1 << 31):
            buffer = bytearray(n)
            idx = 0
            while idx < n:
                # (1 << 31) - 1: the parentheses matter, since - binds tighter than <<
                batch_size = min(n - idx, (1 << 31) - 1)
                # print("reading bytes [%s,%s)..." % (idx, idx + batch_size), end="", flush=True)
                buffer[idx:idx + batch_size] = self.f.read(batch_size)
                # print("done.", flush=True)
                idx += batch_size
            return buffer
        return self.f.read(n)

    def write(self, buffer):
        n = len(buffer)
        print("writing total_bytes=%s..." % n, flush=True)
        idx = 0
        while idx < n:
            batch_size = min(n - idx, (1 << 31) - 1)
            print("writing bytes [%s, %s)... " % (idx, idx + batch_size), end="", flush=True)
            self.f.write(buffer[idx:idx + batch_size])
            print("done.", flush=True)
            idx += batch_size


def pickle_dump(obj, file_path):
    with open(file_path, "wb") as f:
        return pickle.dump(obj, MacOSFile(f), protocol=pickle.HIGHEST_PROTOCOL)


def pickle_load(file_path):
    with open(file_path, "rb") as f:
        return pickle.load(MacOSFile(f))
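A quick usage sketch for the helpers above (the payload size and file name are just examples):

big = bytearray(2**31)            # anything this large hits the broken code path
pickle_dump(big, "big.pkl")       # writes through MacOSFile in sub-2 GB batches
restored = pickle_load("big.pkl")
assert big == restored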
Answer 3 (score: 2)
Reading the file in 2 GB chunks needs twice as much memory as necessary if bytes concatenation is performed; my approach to loading pickles is based on bytearray instead:
class MacOSFile(object):

    def __init__(self, f):
        self.f = f

    def __getattr__(self, item):
        return getattr(self.f, item)

    def read(self, n):
        if n >= (1 << 31):
            buffer = bytearray(n)  # pre-allocate once instead of concatenating bytes
            pos = 0
            while pos < n:
                size = min(n - pos, (1 << 31) - 1)
                chunk = self.f.read(size)
                buffer[pos:pos + size] = chunk
                pos += size
            return buffer
        return self.f.read(n)
Usage:
with open("/path", "rb") as fin:
    obj = pickle.load(MacOSFile(fin))
Answer 4 (score: 1)
You can specify the protocol for the dump. If you do pickle.dump(obj, file, protocol=4), it should work.
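For reference, a short sketch of that call (the object and file name here are placeholders): in Python 3.4 through 3.7 the default protocol is lower than 4, so protocol 4 has to be requested explicitly for objects larger than 4 GB.

import pickle

obj = bytearray(10)  # placeholder; in practice the > 4 GB object to be dumped
with open("obj.pkl", "wb") as f:
    pickle.dump(obj, f, protocol=4)  # or protocol=pickle.HIGHEST_PROTOCOL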
Answer 5 (score: 0)
I ran into this problem as well. To work around it, I split the work into several iterations. Suppose in this case I have 50,000 documents for which I have to compute the tf-idf and run a kNN classification. Running it over all 50,000 at once gives me "that error", so instead I process them in batches:
tokenized_documents = self.load_tokenized_preprocessing_documents()
idf = self.load_idf_41227()
doc_length = len(documents)

for iteration in range(0, 9):
    tfidf_documents = []
    for index in range(iteration, 4000):
        doc_tfidf = []
        for term in idf.keys():
            tf = self.term_frequency(term, tokenized_documents[index])
            doc_tfidf.append(tf * idf[term])
        doc = documents[index]
        tfidf = [doc_tfidf, doc[0], doc[1]]
        tfidf_documents.append(tfidf)
        print("{} from {} document {}".format(index, doc_length, doc[0]))
    self.save_tfidf_41227(tfidf_documents, iteration)
Answer 6 (score: 0)
Had the same problem, and solved it by upgrading to Python 3.6.8.