I run a simulation in a "Simulation" class, and a "DataRecorder" class is responsible for saving the data to disk (through several operations). Here is a simplified model:
class DataRecorder(object):
    """
    Fill in an internal buffer, and flush it on disk when it reaches
    a given data amount.
    """
    _f = r'n:\99-tmp\test_async\toto.txt'
    _events_buffer = []
    _events_buffer_limit = 10
    flushing_cpt = 0

    def __init__(self):
        with open(self._f, 'w') as fh:
            fh.write('new sim')

    def save_event(self, ix, val):
        """ append data to the internal buffer, and flush it when the limit is reached """
        if len(self._events_buffer) > self._events_buffer_limit:
            self._flush_events_buffer()
            self.flushing_cpt += 1
        self._events_buffer.append((ix, val))

    def _flush_events_buffer(self):
        """ write a bunch of data to disk """
        # here, in reality, deal with numpy arrays and an HDF5 file
        buf = [str(i) for i in self._events_buffer]
        _s = '\n'.join(buf)
        with open(self._f, 'a') as fh:
            fh.write(_s)
        self._events_buffer = []

    def stop_records(self):
        self._flush_events_buffer()
class Simulation(object):
    def __init__(self):
        self.dr = DataRecorder()

    def run(self, nb=10000):
        """ long-term simulation (could be 10 min of calculations generating about 1 GB of data) """
        for ix in range(nb):
            sol = ix * 3.14
            self.dr.save_event(ix, sol)
        self.dr.stop_records()

if __name__ == '__main__':
    sim = Simulation()
    sim.run()
While this works well, disk I/O is currently my bottleneck: the simulation is stalled for as long as it takes DataRecorder to dump the data to disk (an HDF5 file) every time the buffer is full.
My goal is to turn DataRecorder into an asynchronous class that writes to disk in the background, so the simulation can keep running while the data buffer fills up.
I am not (by far) a multiprocessing superhero, and here is my first failed attempt using a pool.
I took my inspiration from Write data to disk in Python as a background process,
and I also tried a Queue, following
Solving embarassingly parallel problems using Python multiprocessing:
import multiprocessing as mp

class MPDataRecorder(object):
    _f = r'n:\99-tmp\test_async\toto_mp.txt'
    _events_buffer = []
    _events_buffer_limit = 10
    flushing_cpt = 0
    numprocs = mp.cpu_count()

    def __init__(self):
        with open(self._f, 'w') as fh:
            fh.write('new sim')
        self.record = True
        self.pool = mp.Pool()
        self._watch_buffer()

    def save_event(self, ix, val):
        """ append data to the internal buffer, and flush it when the limit is reached """
        self._events_buffer.append((ix, val))

    def _flush_events_buffer(self):
        """ write a bunch of data to disk """
        # here, in reality, deal with numpy arrays and an HDF5 file
        buf = [str(i) for i in self._events_buffer]
        _s = '\n'.join(buf)
        with open(self._f, 'a') as fh:
            fh.write(_s)
        self._events_buffer = []

    def _watch_buffer(self):
        # here, in reality, deal with numpy arrays and an HDF5 file
        while self.record:
            self.pool.apply_async(self._flush_events_buffer)

    def stop_records(self):
        self.record = False
        self.pool.close()
        self.pool.join()
This results in the following traceback, followed by a MemoryError:
PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
Is there any chance of encapsulating such an asynchronous data-writer feature in a generic class?
Answer (score: 0)
If your disk I/O is the bottleneck, no amount of clever buffering will save you from having to keep the whole output in memory: if the disk writer cannot keep up, how is it ever going to "catch up" with your simulation process?
However, if this is only a problem during some intensive "peaks", buffering may well solve your problem. Before trying anything fancier, I would suggest starting with at least a very simple solution: use two separate processes and pipe the output between them. The easiest way to do this in Python is the subprocess module. A prettier solution might be to use a framework around it, such as Parallel Python (but I cannot vouch for it, since I have never done anything with it beyond toying around).
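To make the two-process idea concrete, here is a minimal sketch using `multiprocessing.Process` and a `multiprocessing.Queue` instead of `subprocess` (the names `AsyncDataRecorder` and `_writer` are hypothetical, not from the original post). Because only plain tuples cross the process boundary, nothing unpicklable, such as the bound method that triggered the `PicklingError` above, is ever sent to the child process:

```python
import multiprocessing as mp

def _writer(path, queue):
    """Background process: drain the queue and append each event to disk."""
    with open(path, 'a') as fh:
        while True:
            item = queue.get()
            if item is None:            # sentinel: recording is finished
                break
            ix, val = item
            fh.write('(%s, %s)\n' % (ix, val))

class AsyncDataRecorder(object):
    """Hypothetical async recorder: save_event() never blocks on disk I/O."""
    def __init__(self, path):
        with open(path, 'w') as fh:
            fh.write('new sim\n')
        self._queue = mp.Queue()
        self._proc = mp.Process(target=_writer, args=(path, self._queue))
        self._proc.start()

    def save_event(self, ix, val):
        self._queue.put((ix, val))      # returns immediately; the writer catches up

    def stop_records(self):
        self._queue.put(None)           # ask the writer to flush and exit
        self._proc.join()
```

A `Simulation` could use this exactly like the original `DataRecorder`. Note that the queue itself becomes the buffer: if the writer really cannot keep up over the whole run, the queue grows without bound, which is the memory problem described above, so this only helps with temporary peaks.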