Python threading - unexpected output

Date: 2014-02-11 11:37:42

Tags: python multithreading

I am new to Python and have written the threaded script below. It takes each line of a file and passes it to the get_result function, which should output the URL and its status code (if it is 200 or 301).

The code is as follows:

import requests
import Queue
import threading
import re
import time

start_time = int(time.time())
regex_to_use = re.compile(r"^")


def get_result(q, partial_url):
    partial_url = regex_to_use.sub("%s" % "http://www.domain.com/", partial_url)
    r = requests.get(partial_url)
    status = r.status_code
    #result = "nothing"
    if status == 200 or status == 301:
        result = str(status) + " " + partial_url
        print(result)


#need list of urls from file
file_list = [line.strip() for line in open('/home/shares/inbound/seo/feb-404s/list.csv', 'r')]
q = Queue.Queue()
for url in file_list:
    #for each partial. send to the processing function get_result
    t = threading.Thread(target=get_result, args=(q, url))
    t.start()

end_time = int(time.time())
exec_time = end_time - start_time
print("execution time was " + str(exec_time))

I used Queue and threading, but what is happening is that the "execution time was x" line is printed before the threads have finished outputting their data.

That is, typical output looks like:

200 www.domain.com/ok-url
200 www.domain.com/ok-url-1
200 www.domain.com/ok-url-2
execution time was 3
200 www.domain.com/ok-url-4
200 www.domain.com/ok-ur-5
200 www.domain.com/ok-url-6

What is going on here, and how can I make the execution time print at the very end of the script, i.e. once all the URLs have been processed and output?

Thanks to utdemir's answer below, here is the updated code with the threads joined.

import requests
import Queue
import threading
import re
import time

start_time = int(time.time())
regex_to_use = re.compile(r"^")


def get_result(q, partial_url):
    partial_url = regex_to_use.sub("%s" % "http://www.domain.com/", partial_url)
    r = requests.get(partial_url)
    status = r.status_code
    #result = "nothing"
    if status == 200 or status == 301:
        result = str(status) + " " + partial_url
        print(result)


#need list of urls from file
file_list = [line.strip() for line in open('/home/shares/inbound/seo/feb-404s/list.csv', 'r')]
q = Queue.Queue()
threads_list = []

for url in file_list:
    #for each partial. send to the processing function get_result
    t = threading.Thread(target=get_result, args=(q, url))
    threads_list.append(t)
    t.start()

for thread in threads_list:
    thread.join()


end_time = int(time.time())
exec_time = end_time - start_time
print("execution time was " + str(exec_time))

1 Answer:

Answer 0 (score: 3)

You should join the threads to wait for them; otherwise, they will keep executing in the background while the main thread carries on.

Like this:

threads = []
for url in file_list:
    ...
    threads.append(t)

for thread in threads:
    thread.join() # Wait until each thread terminates

end_time = int(time.time())
...
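
As an aside, on Python 3 (or Python 2 with the futures backport) concurrent.futures.ThreadPoolExecutor does the waiting for you: leaving the with block only returns once every submitted task has finished. A rough sketch of the same check, with the names carried over from the question, might look like this:

import re
import time
import requests
from concurrent.futures import ThreadPoolExecutor

start_time = int(time.time())
regex_to_use = re.compile(r"^")


def get_result(partial_url):
    # same check as in the question: prepend the domain, report 200/301
    full_url = regex_to_use.sub("http://www.domain.com/", partial_url)
    status = requests.get(full_url).status_code
    if status in (200, 301):
        print(str(status) + " " + full_url)


file_list = [line.strip() for line in open('/home/shares/inbound/seo/feb-404s/list.csv', 'r')]

# the with block waits for every submitted task before continuing
with ThreadPoolExecutor(max_workers=10) as executor:
    for url in file_list:
        executor.submit(get_result, url)

print("execution time was " + str(int(time.time()) - start_time))

max_workers also caps the number of simultaneous requests, which the one-thread-per-URL approach above does not.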