I am trying to find a way to download multiple files asynchronously in Python (2.6), preferably via the requests module. Gevent and Twisted would also be acceptable, since I plan to learn them in the near future.
My application needs to download 40+ files over a short period of time, and I would like to download 4 at a time: each time one download finishes, another should start, so there are always 4 running. Is this possible?
Answer 0 (score: 12)
You don't need any external library or framework for such a simple task: put the list of URLs in a queue, start 4 threads, and have each thread take an item from the queue and download it.
Something like this:
import sys
import os
import urllib
import threading
from Queue import Queue

class DownloadThread(threading.Thread):
    def __init__(self, queue, destfolder):
        super(DownloadThread, self).__init__()
        self.queue = queue
        self.destfolder = destfolder
        self.daemon = True  # let the program exit even if workers are still blocked on the queue

    def run(self):
        # keep pulling URLs off the shared queue until the main thread exits
        while True:
            url = self.queue.get()
            try:
                self.download_url(url)
            except Exception as e:
                print "   Error: %s" % e
            self.queue.task_done()

    def download_url(self, url):
        # derive the destination filename from the URL; change it to a different way if you require
        name = url.split('/')[-1]
        dest = os.path.join(self.destfolder, name)
        print "[%s] Downloading %s -> %s" % (self.ident, url, dest)
        urllib.urlretrieve(url, dest)

def download(urls, destfolder, numthreads=4):
    queue = Queue()
    for url in urls:
        queue.put(url)

    # start the worker threads; they all consume from the same queue
    for i in range(numthreads):
        t = DownloadThread(queue, destfolder)
        t.start()

    # block until task_done() has been called for every queued URL
    queue.join()

if __name__ == "__main__":
    download(sys.argv[1:], "/tmp")
Usage:
$ python download.py http://en.wikipedia.org/wiki/1 http://en.wikipedia.org/wiki/2 http://en.wikipedia.org/wiki/3 http://en.wikipedia.org/wiki/4
[4456497152] Downloading http://en.wikipedia.org/wiki/1 -> /tmp/1
[4457033728] Downloading http://en.wikipedia.org/wiki/2 -> /tmp/2
[4457701376] Downloading http://en.wikipedia.org/wiki/3 -> /tmp/3
[4458258432] Downloading http://en.wikipedia.org/wiki/4 -> /tmp/4
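Since the question prefers the requests module and mentions gevent, here is a minimal sketch of the same "4 at a time" idea using a gevent pool together with requests. It is an alternative to the threaded answer above, not part of it; the URL list and the /tmp destination are just placeholders, and it assumes gevent and requests are installed.
import os

from gevent import monkey
monkey.patch_all()           # make standard sockets (and hence requests) cooperative
from gevent.pool import Pool
import requests

def fetch(url, destfolder="/tmp"):
    # stream the response to disk so large files are not held entirely in memory
    name = url.split('/')[-1]
    dest = os.path.join(destfolder, name)
    r = requests.get(url, stream=True)
    with open(dest, "wb") as f:
        for chunk in r.iter_content(8192):
            f.write(chunk)
    return dest

if __name__ == "__main__":
    # placeholder URLs matching the example above
    urls = ["http://en.wikipedia.org/wiki/%d" % i for i in range(1, 5)]
    pool = Pool(4)            # at most 4 downloads in flight at any time
    pool.map(fetch, urls)     # blocks until every download has finished
Pool(4) enforces the concurrency limit the question asks for: as soon as one greenlet finishes its download, the pool starts the next one, so there are never more than 4 running.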