So let's imagine we want to parse a large XML document (file size > 100 MB) with cElementTree.iterparse.
But all those cores Intel promised us should be good for something, so how do we use them? Here is what I want:
from itertools import islice
from xml.etree import ElementTree as etree

tree_iter = etree.iterparse(open("large_file.xml", encoding="utf-8"))
first = islice(tree_iter, 0, 10000)
second = islice(tree_iter, 10000)

parse_first()   # process the first slice (pseudocode)
parse_second()  # process the second slice (pseudocode)
There seem to be several problems with this, not least that the iterator returned by iterparse() resists slicing.
Is there any way to split the parsing workload for a large XML document into two or four separate tasks (without loading the whole document into memory)? The goal is then to execute those tasks on different processors.
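A minimal sketch of the problem (using a small in-memory document as a stand-in for large_file.xml): islice does carve up the event stream, but both slices still drain the same single parser sequentially, so this alone parallelizes nothing.

```python
import io
from itertools import islice
from xml.etree import ElementTree as etree

# Small stand-in document: 20 <item/> elements under one root.
xml = b"<root>" + b"<item/>" * 20 + b"</root>"
tree_iter = etree.iterparse(io.BytesIO(xml))  # yields (event, elem) pairs

first = list(islice(tree_iter, 10))   # consumes the first 10 "end" events
second = list(tree_iter)              # everything that remains (10 items + root)
print(len(first), len(second))        # prints: 10 11
```

Both lists come out of one underlying parser, one event at a time; to use several cores, the work done *per element* has to be handed off to workers instead.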
Answer (score: 0)
I think you need a good thread pool with a task queue. I found (and use) this very good one (it's written for Python 3, but shouldn't be too hard to convert to 2.x):
# http://code.activestate.com/recipes/577187-python-thread-pool/
from queue import Queue
from threading import Thread

class Worker(Thread):
    """Thread executing tasks from a given tasks queue."""
    def __init__(self, tasks):
        Thread.__init__(self)
        self.tasks = tasks
        self.daemon = True
        self.start()

    def run(self):
        while True:
            func, args, kargs = self.tasks.get()
            try:
                func(*args, **kargs)
            except Exception as exception:
                print(exception)
            self.tasks.task_done()

class ThreadPool:
    """Pool of threads consuming tasks from a queue."""
    def __init__(self, num_threads):
        self.tasks = Queue(num_threads)
        for _ in range(num_threads):
            Worker(self.tasks)

    def add_task(self, func, *args, **kargs):
        self.tasks.put((func, args, kargs))

    def wait_completion(self):
        self.tasks.join()
Now you can run your loop over iterparse and let the thread pool divide the work for you. Using it is simple:
def executetask(arg):
    print(arg)

workers = ThreadPool(4)  # 4 is the number of threads
for i in range(100):
    workers.add_task(executetask, i)
workers.wait_completion()  # optional: only needed if you must be certain all work is done before continuing
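For comparison, Python 3's standard library ships this same pattern as concurrent.futures.ThreadPoolExecutor. A minimal sketch feeding it elements from iterparse (the handle function and the tiny in-memory document are stand-ins for real per-element work on large_file.xml); note that CPython's GIL means threads mostly help with I/O-bound per-element work, not CPU-bound parsing itself:

```python
import io
from concurrent.futures import ThreadPoolExecutor
from xml.etree import ElementTree as etree

def handle(tag):
    # stand-in for real per-element processing
    return tag.upper()

# stand-in for large_file.xml
xml = b"<root><a/><b/><c/></root>"

with ThreadPoolExecutor(max_workers=4) as pool:
    # iterparse stays sequential; only the per-element work is farmed out
    futures = [pool.submit(handle, elem.tag)
               for event, elem in etree.iterparse(io.BytesIO(xml))]
    results = [f.result() for f in futures]

print(sorted(results))  # prints: ['A', 'B', 'C', 'ROOT']
```

For truly CPU-bound work on separate cores, the same code works with ProcessPoolExecutor, provided the submitted function and its arguments are picklable.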