Parallelizing a function in a for loop

Date: 2019-02-14 21:14:31

Tags: python for-loop parallel-processing multiprocessing

I have a function that I want to parallelize.

import re
import multiprocessing as mp
from pathos.multiprocessing import ProcessingPool as Pool

cores = mp.cpu_count()

# create the multiprocessing pool
pool = Pool(cores)

def clean_preprocess(text):
    """
    Given a string of text, the function:
    1. Removes all punctuation and numbers and converts the text to lower case
    2. Handles the negation words defined above.
    3. Tokenizes words that are more than one character long
    """
    cores = mp.cpu_count()
    pool = Pool(cores)
    lower = re.sub(r'[^a-zA-Z\s\']', "", text).lower()
    lower_neg_handled = n_pattern.sub(lambda x: n_dict[x.group()], lower)
    letters_only = re.sub(r'[^a-zA-Z\s]', "", lower_neg_handled)
    words = [i for i in tok.tokenize(letters_only) if len(i) > 1]  ## parallelize this?
    return ' '.join(words)

I have been reading the documentation on multiprocessing, but I am still a bit confused about how to parallelize my function properly. I would appreciate it if anyone could point me in the right direction for parallelizing this function.

1 Answer:

Answer 0 (score: 1)

For your function, you could parallelize by splitting the text into several sub-parts, applying the tokenization to each sub-part, and then merging the results.

Something like this:

text0 = text[:len(text) // 2]
text1 = text[len(text) // 2:]

Then apply the processing to these two parts with:

# here, I suppose that clean_preprocess is the sequential version,
# and we manage the pool outside of it
with Pool(2) as p:
    words0, words1 = p.map(clean_preprocess, [text0, text1])
words = words0 + words1
# or continue with words0 and words1 to save the cost of joining the lists

However, your function appears to be memory-bound, so you should not expect a dramatic speedup (a factor of 2 is typically the best one can hope for on a standard machine these days); see for example How much does parallelization help the performance if the program is memory-bound? and What do the terms "CPU bound" and "I/O bound" mean?

So you can try splitting the text into more than two parts, but it may not be any faster. You may even see disappointing performance, since splitting the text can cost more than processing it.
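As a minimal sketch of the split-into-more-than-two-parts idea, here is a hypothetical helper (`n_chunks` and `tokenize_chunk` are illustrative names, not from the original code; `tokenize_chunk` is a simplified stand-in for the sequential clean_preprocess):

```python
from multiprocessing import Pool

def n_chunks(text, n):
    # Naive split into n pieces of roughly equal length;
    # note this can cut a word in half at a chunk boundary.
    step = len(text) // n
    chunks = [text[i * step:(i + 1) * step] for i in range(n - 1)]
    chunks.append(text[(n - 1) * step:])
    return chunks

def tokenize_chunk(chunk):
    # Stand-in for the sequential preprocessing:
    # keep whitespace-separated tokens longer than one character.
    return [w for w in chunk.split() if len(w) > 1]

if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog " * 100
    parts = n_chunks(text, 4)
    with Pool(4) as p:
        results = p.map(tokenize_chunk, parts)
    # flatten the per-chunk token lists back into one list
    words = [w for sub in results for w in sub]
```

Whether 4 (or more) workers beat 2 depends entirely on how the splitting and joining cost compares to the per-chunk work, so it is worth timing both variants on your actual data.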