I spent a whole day looking for the simplest multithreaded URL fetcher in Python, but most of the scripts I found use queues, multiprocessing, or complex libraries.
In the end I wrote one myself, which I'm posting as an answer. Please feel free to suggest improvements.
I figure other people may have been looking for something similar.
Answer 0 (score: 39)
Simplifying your original version as far as possible:
import threading
import urllib2
import time

start = time.time()

urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]

def fetch_url(url):
    urlHandler = urllib2.urlopen(url)
    html = urlHandler.read()
    print "'%s' fetched in %ss" % (url, (time.time() - start))

# Keep a reference to every thread so they can all be joined later.
threads = [threading.Thread(target=fetch_url, args=(url,)) for url in urls]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

print "Elapsed Time: %s" % (time.time() - start)
The only new tricks here are:
- Keep track of the threads you create.
- Don't bother with a counter of threads if you just want to know when they're all done; join already tells you that.
- If you don't need any state or an external API, you don't need a Thread subclass, just a target function.
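For anyone on Python 3, here is a minimal sketch of the same pattern; the only real change is swapping urllib2 for urllib.request (the URL list is the one from above):

import threading
import time
from urllib.request import urlopen

start = time.time()
urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]

def fetch_url(url):
    # urlopen returns a response object; read() yields the body bytes
    html = urlopen(url).read()
    print("%r fetched in %ss" % (url, time.time() - start))

threads = [threading.Thread(target=fetch_url, args=(url,)) for url in urls]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

print("Elapsed Time: %s" % (time.time() - start))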
Answer 1 (score: 26)
multiprocessing has a thread pool that doesn't start other processes:
#!/usr/bin/env python
from multiprocessing.pool import ThreadPool
from time import time as timer
from urllib2 import urlopen

urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]

def fetch_url(url):
    try:
        response = urlopen(url)
        return url, response.read(), None
    except Exception as e:
        return url, None, e

start = timer()
# 20 worker threads; imap_unordered yields results as they complete
results = ThreadPool(20).imap_unordered(fetch_url, urls)
for url, html, error in results:
    if error is None:
        print("%r fetched in %ss" % (url, timer() - start))
    else:
        print("error fetching %r: %s" % (url, error))
print("Elapsed Time: %s" % (timer() - start,))
Advantages compared to the Thread-based solution:
- ThreadPool allows you to limit the maximum number of concurrent connections (20 in the code example).
- Errors are returned to the main loop instead of crashing a worker thread.
- The code works on both Python 2 and Python 3 without changes (on Python 3, use from urllib.request import urlopen).
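One detail the snippet above glosses over is that the pool is never shut down. Here is a sketch of the same loop with explicit cleanup, using ThreadPool's standard close()/join() methods and continuing the script above (same fetch_url, urls, and start):

pool = ThreadPool(20)
try:
    for url, html, error in pool.imap_unordered(fetch_url, urls):
        if error is None:
            print("%r fetched in %ss" % (url, timer() - start))
        else:
            print("error fetching %r: %s" % (url, error))
finally:
    pool.close()  # no more tasks will be submitted
    pool.join()   # wait for the worker threads to exit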
Answer 2 (score: 12)
The main example in the concurrent.futures docs does everything you want, more simply. Plus, it can handle huge numbers of URLs by only doing 5 at a time, and it handles errors much more nicely.
Of course this module is only built into Python 3.2 or later... but if you're using 2.5-3.1, you can install the backport, futures, from PyPI. All you need to change from the example code is to search-and-replace concurrent.futures with futures and, for 2.x, urllib.request with urllib2.
Here is the example backported to 2.x, modified to use your URL list and to add the timing:
import concurrent.futures
import urllib2
import time

start = time.time()
urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]

# Retrieve a single page and report the url and contents
def load_url(url, timeout):
    conn = urllib2.urlopen(url, timeout=timeout)
    return conn.read()  # urllib2 responses have read(), not readall()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in urls}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print '%r generated an exception: %s' % (url, exc)
        else:
            print '"%s" fetched in %ss' % (url, (time.time() - start))

print "Elapsed Time: %ss" % (time.time() - start)
But you can make this even simpler. Really, all you need is:
def load_url(url):
    conn = urllib2.urlopen(url, timeout=60)  # the original passed an undefined name "timeout"; 60 matches the submit() version above
    data = conn.read()
    print '"%s" fetched in %ss' % (url, (time.time() - start))
    return data

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    pages = executor.map(load_url, urls)

print "Elapsed Time: %ss" % (time.time() - start)
Answer 3 (score: 1)
I am now publishing a different solution. It makes the worker threads non-daemon and joins them to the main thread (which blocks the main thread until all worker threads have finished), instead of having each worker thread notify the end of its execution with a callback to a global function (as I did in the previous answer), since some comments pointed out that that approach is not thread-safe.
import threading
import urllib2
import time

start = time.time()
urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]

class FetchUrl(threading.Thread):
    def __init__(self, url):
        threading.Thread.__init__(self)
        self.url = url

    def run(self):
        urlHandler = urllib2.urlopen(self.url)
        html = urlHandler.read()
        print "'%s' fetched in %ss" % (self.url, (time.time() - start))

for url in urls:
    FetchUrl(url).start()

# Join all existing threads to main thread.
for thread in threading.enumerate():
    if thread is not threading.currentThread():
        thread.join()

print "Elapsed Time: %s" % (time.time() - start)
Answer 4 (score: -1)
This script fetches the content from a set of URLs defined in an array. It spawns a thread for each URL to fetch, so it is meant to be used for a limited set of URLs.
Instead of using a queue object, each thread notifies the end of its execution with a callback to a global function, which keeps a count of the number of threads still running.
import threading
import urllib2
import time

start = time.time()
urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]
left_to_fetch = len(urls)

class FetchUrl(threading.Thread):
    def __init__(self, url):
        threading.Thread.__init__(self)
        # NOTE: the original had "self.setDaemon = True", which merely shadows the
        # setDaemon method and leaves the thread non-daemon; that accident is what
        # keeps the process alive until all fetches finish, so the line is dropped here.
        self.url = url

    def run(self):
        urlHandler = urllib2.urlopen(self.url)
        html = urlHandler.read()
        finished_fetch_url(self.url)

def finished_fetch_url(url):
    "callback function called when a FetchUrl thread ends"
    print "\"%s\" fetched in %ss" % (url, (time.time() - start))
    global left_to_fetch
    left_to_fetch -= 1  # unsynchronized decrement: this is the thread-safety problem noted in answer 3
    if left_to_fetch == 0:
        # all urls have been fetched
        print "Elapsed Time: %ss" % (time.time() - start)

# spawn a FetchUrl thread for each url to fetch
for url in urls:
    FetchUrl(url).start()
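Since the comments called out the unsynchronized counter, here is a sketch of making the callback thread-safe with a threading.Lock, keeping everything else as above (the fetch_lock name and the rewritten callback are the only additions):

fetch_lock = threading.Lock()

def finished_fetch_url(url):
    "thread-safe callback: the shared counter is only touched while holding the lock"
    print "\"%s\" fetched in %ss" % (url, (time.time() - start))
    global left_to_fetch
    with fetch_lock:
        left_to_fetch -= 1
        all_done = (left_to_fetch == 0)
    if all_done:
        print "Elapsed Time: %ss" % (time.time() - start)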