How to parallelize iteration over a range, using StdLib and Python 3?

Time: 2018-10-03 21:34:56

Tags: python python-3.x parallel-processing multiprocessing range

I've been searching for an answer for days now, to no avail. I'm probably just not understanding the pieces floating around out there, and the Python documentation on the multiprocessing module is rather large and not clear to me.

Say you have the following for loop:

import timeit


numbers = []

start = timeit.default_timer()

for num in range(100000000):
    numbers.append(num)

end = timeit.default_timer()

print('TIME: {} seconds'.format(end - start))
print('SUM:', sum(numbers))

Output:

TIME: 23.965870224497916 seconds
SUM: 4999999950000000
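
As a sanity check, the expected total has a closed form, so any parallel rewrite can be verified against it:

# sum(range(N)) == N * (N - 1) // 2 (sum of the integers 0 .. N-1)
N = 100000000
assert N * (N - 1) // 2 == 4999999950000000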

Given a 4-core processor in this example, is there a way to create 4 processes in total, each running on a separate CPU core, so that it finishes roughly 4 times faster, i.e. 24 s / 4 processes = ~6 seconds?

Some way to divide the for loop into 4 equal chunks, and then have the 4 chunks added to the numbers list so they produce the same sum? There was this stackoverflow thread: Parallel Simple For Loop, but I don't get it. Thank you all.
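
For reference, a minimal sketch of that splitting idea using only the standard library; the helper name chunk_sum and the hard-coded worker count are illustrative, and it assumes the range divides evenly:

from multiprocessing import Pool


def chunk_sum(bounds):
    # Sum one half-open slice [lo, hi) of the overall range.
    lo, hi = bounds
    return sum(range(lo, hi))


if __name__ == '__main__':
    n, workers = 100000000, 4
    step = n // workers  # assumes n % workers == 0
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(chunk_sum, chunks))
    print(total)  # 4999999950000000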

3 Answers:

Answer 0 (score: 2):

Yes, that's doable. Your computation isn't dependent on intermediate results, so you can easily divide the task into chunks and distribute it over multiple processes. This is what's called an

embarrassingly parallel problem.

The only tricky part here might be dividing the range into fairly equal parts in the first place. Straight out of my personal lib, two functions to deal with this:

# mp_utils.py

from itertools import accumulate

def calc_batch_sizes(n_tasks: int, n_workers: int) -> list:
    """Divide `n_tasks` optimally between n_workers to get batch_sizes.

    Guarantees batch sizes won't differ by more than 1.

    Example:
    # >>>calc_batch_sizes(23, 4)
    # Out: [6, 6, 6, 5]

    In case you're going to use numpy anyway, use np.array_split:
    [len(a) for a in np.array_split(np.arange(23), 4)]
    # Out: [6, 6, 6, 5]
    """
    x = int(n_tasks / n_workers)
    y = n_tasks % n_workers
    batch_sizes = [x + (y > 0)] * y + [x] * (n_workers - y)

    return batch_sizes


def build_batch_ranges(batch_sizes: list) -> list:
    """Build batch_ranges from list of batch_sizes.

    Example:
    # batch_sizes [6, 6, 6, 5]
    # >>>build_batch_ranges(batch_sizes)
    # Out: [range(0, 6), range(6, 12), range(12, 18), range(18, 23)]
    """
    upper_bounds = [*accumulate(batch_sizes)]
    lower_bounds = [0] + upper_bounds[:-1]
    batch_ranges = [range(l, u) for l, u in zip(lower_bounds, upper_bounds)]

    return batch_ranges
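
A quick check of the two helpers, mirroring the docstring examples:

from mp_utils import calc_batch_sizes, build_batch_ranges

sizes = calc_batch_sizes(23, 4)
print(sizes)                      # [6, 6, 6, 5]
print(build_batch_ranges(sizes))  # [range(0, 6), range(6, 12), range(12, 18), range(18, 23)]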

Your main script would then look like this:

import time
from multiprocessing import Pool
from mp_utils import calc_batch_sizes, build_batch_ranges


def target_foo(batch_range):
    return sum(batch_range)  # ~ 6x faster than target_foo1


def target_foo1(batch_range):
    numbers = []
    for num in batch_range:
        numbers.append(num)
    return sum(numbers)


if __name__ == '__main__':

    N = 100000000
    N_CORES = 4

    batch_sizes = calc_batch_sizes(N, n_workers=N_CORES)
    batch_ranges = build_batch_ranges(batch_sizes)

    start = time.perf_counter()
    with Pool(N_CORES) as pool:
        result = pool.map(target_foo, batch_ranges)
        r_sum = sum(result)
    print(r_sum)
    print(f'elapsed: {time.perf_counter() - start:.2f} s')

Note that I also switched the for loop to a simple sum over the range object, since that offers much better performance. If you can't do that in your real application, a list comprehension is still about 60% faster than filling the list by hand like in your example.
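
For reference, a list-comprehension version of target_foo1 would look like this (the ~60% figure is the answer's own measurement and will vary by machine):

def target_foo2(batch_range):
    # A comprehension avoids the repeated numbers.append
    # attribute lookup of the explicit loop.
    numbers = [num for num in batch_range]
    return sum(numbers)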

Example output:

4999999950000000
elapsed: 0.51 s

Process finished with exit code 0

Answer 1 (score: 0):


So

import timeit
from multiprocessing import Pool


def appendNumber(x):
    return x


start = timeit.default_timer()

with Pool(4) as p:
    numbers = p.map(appendNumber, range(100000000))

end = timeit.default_timer()

print('TIME: {} seconds'.format(end - start))
print('SUM:', sum(numbers))

Pool.map is analogous to the builtin map function. It takes a function and an iterable and produces a list of the results of calling that function on every element of the iterable. Here, since we don't actually want to change the elements of the range we iterate over, we just return the argument.

The key is that Pool.map splits the provided iterable (here range(100000000)) into chunks and sends them to the number of processes it has (here defined as 4 in Pool(4)), then joins the results back into a single list.
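
This chunking can also be tuned directly through Pool.map's optional chunksize argument; a small sketch (the value 25000000 is just an illustration, the default is a heuristic derived from the iterable's length and the worker count):

from multiprocessing import Pool


def identity(x):
    return x


if __name__ == '__main__':
    with Pool(4) as p:
        # One chunk per worker keeps inter-process messaging to a minimum.
        numbers = p.map(identity, range(100000000), chunksize=25000000)
    print(sum(numbers))  # 4999999950000000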

Running the script above, the output I get is

TIME: … seconds
SUM: 4999999950000000

Answer 2 (score: 0):

I did a comparison; sometimes splitting up the task can take longer:

File multiprocessing_summation.py:

def summation(lst):
    sum = 0
    for x in range(lst[0], lst[1]):
        sum += x
    return sum

File multiprocessing_summation_master.py:

%%file ./examples/multiprocessing_summation_master.py
import multiprocessing as mp
import timeit
import os
import sys
import multiprocessing_summation as mps

if __name__ == "__main__":
    if len(sys.argv) == 1:
        print(f'{sys.argv[0]} <number1 ...>')
        sys.exit(1)
    else:
        args = [int(x) for x in sys.argv[1:]]

    nBegin = 1
    nCore = os.cpu_count()

    for nEnd in args:
        ### Approach 1 ####
        ####################
        start = timeit.default_timer()
        answer1 = mps.summation((nBegin, nEnd+1))
        end = timeit.default_timer()
        print(f'Answer1 = {answer1}')
        print(f'Time taken = {end - start}')

        ### Approach 2 ####
        ####################
        start = timeit.default_timer()
        lst = []
        for x in range(nBegin, nEnd, int((nEnd-nBegin+1)/nCore)):
            lst.append(x)
        lst.append(nEnd+1)

        lst2 = []
        for x in range(1, len(lst)):
            lst2.append((lst[x-1], lst[x]))

        with mp.Pool(processes=nCore) as pool:
            answer2 = pool.map(mps.summation, lst2)
        end = timeit.default_timer()
        print(f'Answer2 = {sum(answer2)}')
        print(f'Time taken = {end - start}')

Run the second script:

python multiprocessing_summation_master.py 1000 100000 10000000 1000000000

The outputs are:

…