Python - multithreading runs sequentially

Time: 2019-11-28 05:37:05

Tags: python multithreading python-multithreading

I can't see why this runs as if it were sequential processing.

from queue import Queue, Empty
from concurrent.futures import ThreadPoolExecutor
import threading
import time
import random

pool = ThreadPoolExecutor(max_workers=3)
to_crawl = Queue()

#Import urls
for i in range(100):
    to_crawl.put(str(i))

def scraping(random_sleep):
    time.sleep(random_sleep)
    return

def post_scrape(url):
    print('URL %s finished' % url)

def my_crawler():
    while True:
        try:
            target_url = to_crawl.get()
            random_sleep = random.randint(1, 5)
            print("Current URL: %s, sleep: %s" % (format(target_url), random_sleep))
            executor = pool.submit(scraping(random_sleep))
            executor.add_done_callback(post_scrape(target_url))
        except Empty:
            return
        except Exception as e:
            print(e)
            continue

if __name__ == '__main__':
    my_crawler()

Expected output:

Current URL: 0, sleep: 5
Current URL: 1, sleep: 1
Current URL: 2, sleep: 2
URL 1 finished
URL 2 finished
URL 0 finished

Actual output:

Current URL: 0, sleep: 5
URL 0 finished
Current URL: 1, sleep: 1
URL 1 finished
Current URL: 2, sleep: 2
URL 2 finished

1 answer:

Answer 0 (score: 1)

The problem is in the way you call pool.submit:

pool.submit(scraping(random_sleep))

This submits the *result* of scraping(random_sleep) to the pool; in fact, I'm surprised it doesn't raise an error. What you want is to submit the scraping function with the argument random_sleep, which you can do like this:

pool.submit(scraping, random_sleep)

Similarly, the next line should be:

executor.add_done_callback(post_scrape)

and the callback should be declared as:

def post_scrape(executor):

Here executor is the future itself (the name executor just comes from your existing code). Note that there is no simple way to attach a user argument to this callback, so you could instead do the following and drop add_done_callback entirely:
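As a side illustration (not part of the original answer): a minimal runnable sketch of the corrected callback form, showing that the callback receives the Future object rather than a user argument:

```python
from concurrent.futures import ThreadPoolExecutor
import time

pool = ThreadPoolExecutor(max_workers=3)

def scraping(random_sleep):
    time.sleep(random_sleep)
    return random_sleep

def post_scrape(future):
    # `future` is the concurrent.futures.Future for the finished task;
    # its return value is available via future.result().
    print('task finished, result: %s' % future.result())

f = pool.submit(scraping, 0.1)      # function and argument passed separately
f.add_done_callback(post_scrape)    # the function object, not a call
pool.shutdown(wait=True)            # wait for the task (and callback) to run
```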

def scraping(random_sleep, url):
    time.sleep(random_sleep)
    print('URL %s finished' % url)
    return

#...

pool.submit(scraping, random_sleep, target_url)
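Putting it together, here is a minimal runnable version of the corrected crawler. This is a sketch, not the answer's exact code: the sleeps are shortened for demonstration, and the queue is drained with a non-blocking get, since the question's bare get() blocks forever and its Empty branch can never fire.

```python
from queue import Queue, Empty
from concurrent.futures import ThreadPoolExecutor, wait
import random
import time

pool = ThreadPoolExecutor(max_workers=3)
to_crawl = Queue()

for i in range(10):
    to_crawl.put(str(i))

def scraping(random_sleep, url):
    # The URL is passed into the worker directly, so no done-callback
    # is needed to report completion.
    time.sleep(random_sleep)
    print('URL %s finished' % url)

def my_crawler():
    futures = []
    while True:
        try:
            # Non-blocking get: raises Empty once the queue is drained.
            target_url = to_crawl.get(block=False)
        except Empty:
            break
        random_sleep = random.uniform(0.1, 0.5)  # shortened for the demo
        print('Current URL: %s, sleep: %.2f' % (target_url, random_sleep))
        futures.append(pool.submit(scraping, random_sleep, target_url))
    wait(futures)  # let all submitted tasks finish before returning

if __name__ == '__main__':
    my_crawler()
```

With three workers, the "URL ... finished" lines now interleave with the "Current URL" lines instead of strictly alternating, which is the concurrent behavior the question expected.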