I'm downloading a large file with the Python Requests library, e.g.:
r = requests.get("http://bigfile.com/bigfile.bin")
content = r.content
The file downloads at roughly ±30 KB/s, which is a bit slow. Each connection to the bigfile server is throttled, so I'd like to open several connections at once.
Is there a way to download a single file over multiple simultaneous connections?
Answer 0 (score: 20)
You can use the HTTP Range header to fetch only part of the file (already covered for python here).
Just start several threads, fetch a different range with each, and you're done ;)
import threading
import urllib2

url = "http://bigfile.com/bigfile.bin"
chunk_size = 1 << 20  # bytes requested per thread
parts = {}

def download(url, start):
    req = urllib2.Request(url)
    req.headers['Range'] = 'bytes=%s-%s' % (start, start + chunk_size - 1)
    f = urllib2.urlopen(req)
    parts[start] = f.read()

threads = []
# Initialize threads, each fetching a different byte range
for i in range(0, 10):
    t = threading.Thread(target=download, args=(url, i * chunk_size))
    t.start()
    threads.append(t)

# Join threads back (order doesn't matter, you just want them all)
for t in threads:
    t.join()

# Sort parts by offset and concatenate; you're done
result = ''.join(parts[i] for i in sorted(parts.keys()))
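The question uses requests rather than urllib2; the same idea carries over directly. Below is a minimal sketch of a requests-based version (the URL, chunk size, and thread count are placeholder choices, and it assumes the server honors Range requests):
import threading
import requests

url = "http://bigfile.com/bigfile.bin"  # placeholder URL from the question
chunk_size = 1 << 20                    # 1 MiB per request (arbitrary choice)
num_threads = 10
parts = {}

def download_range(start):
    headers = {'Range': 'bytes=%d-%d' % (start, start + chunk_size - 1)}
    r = requests.get(url, headers=headers)
    parts[start] = r.content

threads = [threading.Thread(target=download_range, args=(i * chunk_size,))
           for i in range(num_threads)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Reassemble the chunks in offset order
result = b''.join(parts[start] for start in sorted(parts))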
Also note that not every server supports the Range header (servers where a PHP script is responsible for data fetching, in particular, often don't implement handling for it).
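If you're unsure whether a server supports ranges, one quick check (not from the original answer) is a HEAD request: servers that allow partial downloads usually advertise it with an Accept-Ranges: bytes header. A small sketch with requests:
import requests

url = "http://bigfile.com/bigfile.bin"  # placeholder URL
head = requests.head(url)
# A value of "bytes" means byte-range requests should work; "none" or a
# missing header means you'll likely get the whole file back instead.
supports_ranges = head.headers.get('Accept-Ranges', '').lower() == 'bytes'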
Answer 1 (score: 6)
Here is a Python script that saves the given URL to a file and uses several threads to download it:
#!/usr/bin/env python
import sys
from functools import partial
from itertools import count, izip
from multiprocessing.dummy import Pool  # use threads
from urllib2 import HTTPError, Request, urlopen

def download_chunk(url, byterange):
    req = Request(url, headers=dict(Range='bytes=%d-%d' % byterange))
    try:
        return urlopen(req).read()
    except HTTPError as e:
        return b'' if e.code == 416 else None  # treat range error as EOF
    except EnvironmentError:
        return None

def main():
    url, filename = sys.argv[1:]
    pool = Pool(4)  # define number of concurrent connections
    chunksize = 1 << 16
    ranges = izip(count(0, chunksize), count(chunksize - 1, chunksize))
    with open(filename, 'wb') as file:
        for s in pool.imap(partial(download_chunk, url), ranges):
            if not s:
                break  # error or EOF
            file.write(s)
            if len(s) != chunksize:
                break  # EOF (servers with no Range support end up here)

if __name__ == "__main__":
    main()
End of file is detected when the server returns an empty body or a 416 HTTP code, or when the response size is not exactly chunksize.
It supports servers that don't understand the Range header (in that case everything is downloaded in a single request; to support large files there, change download_chunk() to save to a temporary file and return the filename to be read in the main thread, rather than the file content itself), as sketched below.
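A minimal sketch of that variant (not part of the original script; the helper name and the temp-file handling are made-up illustrations): the worker streams its chunk into a temporary file and returns the path, so the main thread can copy it into the output file and check its size for the same EOF conditions.
import os
import shutil
import tempfile
from urllib2 import HTTPError, Request, urlopen

def download_chunk_to_file(url, byterange):
    # Hypothetical replacement for download_chunk(): stream the chunk to a
    # temp file and return its path instead of the chunk bytes themselves.
    req = Request(url, headers=dict(Range='bytes=%d-%d' % byterange))
    try:
        response = urlopen(req)
    except HTTPError as e:
        return '' if e.code == 416 else None  # keep the same EOF/error convention
    except EnvironmentError:
        return None
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, 'wb') as tmp:
        shutil.copyfileobj(response, tmp)
    return path
# The main loop would then open each returned path, append its contents to the
# output file, delete it, and use the temp file's size in place of len(s).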
It lets you change the number of concurrent connections (the pool size) and the number of bytes requested in a single HTTP request independently of each other.
To use multiple processes instead of threads, change the import:
from multiprocessing.pool import Pool # use processes (other code unchanged)
Answer 2 (score: 1)
This solution requires the Linux utility named aria2c, but it has the advantage of easily resuming downloads.
It also assumes that all the files you want to download are listed in an HTTP directory listing at the location MY_HTTP_LOC. I tested this script against an instance of the lighttpd/1.4.26 HTTP server, but you can easily modify it to work with other setups.
#!/usr/bin/python
import os
import urllib
import re
import subprocess

MY_HTTP_LOC = "http://AAA.BBB.CCC.DDD/"

# retrieve webpage source code
f = urllib.urlopen(MY_HTTP_LOC)
page = f.read()
f.close()

# extract relevant URL segments from source code
rgxp = '(\<td\ class="n"\>\<a\ href=")([0-9a-zA-Z\(\)\-\_\.]+)(")'
results = re.findall(rgxp, str(page))
files = []
for match in results:
    files.append(match[1])

# download (using aria2c) files
for afile in files:
    if os.path.exists(afile) and not os.path.exists(afile + '.aria2'):
        print 'Skipping already-retrieved file: ' + afile
    else:
        print 'Downloading file: ' + afile
        # -x 16: up to 16 connections per server, -s 20: split the download into 20 pieces
        subprocess.Popen(["aria2c", "-x", "16", "-s", "20", MY_HTTP_LOC + str(afile)]).wait()