I'm trying to process multiple files at once, where each file generates chunks of data to feed simultaneously into a queue of limited size. For example, if there are 5 files, each containing a million elements, I want to pull 100 elements from each file and feed them to another generator that yields 500 elements at a time.
Here is what I have been trying so far, but I'm running into a `can't pickle generator` error:
import os
from itertools import islice
import multiprocessing as mp
import numpy as np
class File(object):
    def __init__(self, data_params):
        data_len = 100000
        self.large_data = np.array([data_params + str(i)
                                    for i in np.arange(0, data_len)])

    def __iter__(self):
        for i in self.large_data:
            yield i
def parse_file(file_path):
    # different file paths yield different data, obviously;
    # here we just emulate that with something silly
    if file_path == 'elephant_file':
        p = File(data_params='elephant')
    if file_path == 'number_file':
        p = File(data_params='number')
    if file_path == 'horse_file':
        p = File(data_params='horse')
    yield from p
def parse_dir(user_given_dir, chunksize=10):
    pool = mp.Pool(4)
    paths = ['elephant_file', 'number_file', 'horse_file']  # [os.path.join(user_given_dir, p) for p in os.listdir(user_given_dir)]

    # Works, but not simultaneously on all paths
    # for path in paths:
    #     data_gen = parse_file(path)
    #     parsed_data_batch = True
    #     while parsed_data_batch:
    #         parsed_data_batch = list(islice(data_gen, chunksize))
    #         yield parsed_data_batch

    # Doesn't work
    for objs in pool.imap(parse_file, paths, chunksize=chunksize):
        for o in objs:
            yield o
it = parse_dir('.')
for ix, o in enumerate(it):
    print(o)  # hopefully just prints 10 elephants, horses and numbers
    if ix > 2:
        break
Does anyone know how to get the desired behavior?
Answer 0 (score: 0)
For the pickle error: `parse_file` is a generator, not a regular function, because it uses `yield` internally.
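You can see the problem in isolation with a minimal sketch, independent of the code above (the exact message varies by Python version):

import pickle

def gen():
    yield 1

# generator objects carry live frame state and cannot be serialized;
# raises TypeError: cannot pickle 'generator' object
pickle.dumps(gen())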
`multiprocessing` needs a plain function to execute as a task, so you should replace `yield from p` with `return p` in `parse_file()`.
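Applied to the question's code, the change is just the last line; the rest of `parse_file` stays as it was:

def parse_file(file_path):
    if file_path == 'elephant_file':
        p = File(data_params='elephant')
    if file_path == 'number_file':
        p = File(data_params='number')
    if file_path == 'horse_file':
        p = File(data_params='horse')
    return p  # return the picklable File object instead of yielding from it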
If you want to read records chunk by chunk from all files at once, try using `zip` in `parse_dir()`:
iterators = [
    iter(e) for e in pool.imap(parse_file, paths, chunksize=chunksize)
]
while True:
    batch = [
        o for i in iterators
        for _, o in zip(range(100), i)  # e.g., 100 items per file per round
    ]
    if batch:
        yield batch
    else:
        return
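Putting the two changes together, `parse_dir` might look like the following sketch. The per-file count of 100 matches the figure in the question, and `per_file` is a parameter name introduced here for illustration:

def parse_dir(user_given_dir, chunksize=10, per_file=100):
    pool = mp.Pool(4)
    paths = ['elephant_file', 'number_file', 'horse_file']
    # each worker returns a picklable File object; iterate over them locally
    iterators = [
        iter(e) for e in pool.imap(parse_file, paths, chunksize=chunksize)
    ]
    while True:
        # take up to per_file items from every file in each round
        batch = [o for i in iterators for _, o in zip(range(per_file), i)]
        if batch:
            yield batch
        else:
            return

for ix, batch in enumerate(parse_dir('.')):
    print(len(batch))  # 300 while all three files still have data
    if ix > 2:
        break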