Download large file in Python with requests

Asked: 2013-05-22 14:47:37

Tags: python download stream python-requests

Requests is a really nice library. I'd like to use it to download big files (> 1GB). The problem is that it's not possible to keep the whole file in memory; I need to read it in chunks. And this is a problem with the following code:

import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024): 
        if chunk: # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return 

For some reason it doesn't work this way: it still loads the response into memory before saving it to a file.

UPDATE

If you need a small client (Python 2.x/3.x) which can download big files from FTP, you can find it here. It supports multithreading and reconnects (it does monitor connections), and it also tunes socket parameters for the download task.

6 Answers:

Answer 0 (Score: 535)

With the following streaming code, Python memory usage is restricted regardless of the size of the downloaded file:

def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter below
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192): 
                if chunk: # filter out keep-alive new chunks
                    f.write(chunk)
                    # f.flush()
    return local_filename

Note that the number of bytes returned by iter_content is not exactly the chunk_size; it should be treated as a number that is often bigger, and it is expected to differ in every iteration.

For further details, see http://docs.python-requests.org/en/latest/user/advanced/#body-content-workflow
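
For instance, a minimal sketch (the URL is hypothetical) that prints the actual size of each chunk illustrates this:

import requests

# Hypothetical URL; the printed sizes typically differ from chunk_size
# and can vary between iterations.
with requests.get('http://example.com/large.bin', stream=True) as r:
    r.raise_for_status()
    for chunk in r.iter_content(chunk_size=8192):
        print(len(chunk))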

Answer 1 (Score: 165)

It's much easier if you use Response.raw and shutil.copyfileobj():

import requests
import shutil

def download_file(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        shutil.copyfileobj(r.raw, f)

    return local_filename

This streams the file to disk without using excessive memory, and the code is simple.
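
One caveat worth adding (my own note, not part of the original answer): Response.raw is the raw byte stream, so it does not decode Content-Encoding such as gzip or deflate. If the server compresses the transfer, you can ask urllib3 to decode on the fly; a sketch:

import functools
import shutil
import requests

def download_file(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url, stream=True)
    # Force urllib3 to decode gzip/deflate while copying; otherwise
    # Response.raw yields the bytes exactly as sent on the wire.
    r.raw.read = functools.partial(r.raw.read, decode_content=True)
    with open(local_filename, 'wb') as f:
        shutil.copyfileobj(r.raw, f)
    return local_filename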

Answer 2 (Score: 40)

Your chunk size could be too large; have you tried dropping that, maybe to 1024 bytes at a time? (Also, you could use with to tidy up the syntax:)

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024): 
            if chunk: # filter out keep-alive new chunks
                f.write(chunk)
    return 

Incidentally, how are you deducing that the response has been loaded into memory?

It sounds as if Python is not flushing the data to the file. Based on other SO questions, you could try f.flush() and os.fsync() (note that os.fsync requires import os) to force the file write and free the memory:

    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024): 
            if chunk: # filter out keep-alive new chunks
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())

Answer 3 (Score: 38)

Not exactly what OP was asking, but... it is ridiculously easy to do this with urllib:

from urllib.request import urlretrieve
url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
dst = 'ubuntu-16.04.2-desktop-amd64.iso'
urlretrieve(url, dst)

Or this way, if you want to save it to a temporary file:

from urllib.request import urlopen
from shutil import copyfileobj
from tempfile import NamedTemporaryFile
url = 'http://mirror.pnl.gov/releases/16.04.2/ubuntu-16.04.2-desktop-amd64.iso'
with urlopen(url) as fsrc, NamedTemporaryFile(delete=False) as fdst:
    copyfileobj(fsrc, fdst)

I watched the process with:

watch 'ps -p 18647 -o pid,ppid,pmem,rsz,vsz,comm,args; ls -al *.iso'

I saw the file growing, but memory usage stayed at 17 MB. Am I missing something?
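
As a side note, urlretrieve also accepts a reporthook callback, which can be used to track progress; a minimal sketch (url and dst as in the snippet above; total_size is -1 when the server sends no Content-Length):

from urllib.request import urlretrieve

def progress(block_num, block_size, total_size):
    # Called by urlretrieve after each block is fetched.
    if total_size > 0:
        done = min(block_num * block_size, total_size)
        print(f'\r{done * 100 // total_size}%', end='')

urlretrieve(url, dst, reporthook=progress)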

Answer 4 (Score: 4)

Based on Roman's most upvoted comment above, here is my implementation, including a "download as" and "retries" mechanism:

import os
import time
import logging
import requests
from urllib.parse import urlparse

logger = logging.getLogger(__name__)

def download(url: str, file_path='', attempts=2):
    """Downloads a URL content into a file (with large file support by streaming)

    :param url: URL to download
    :param file_path: Local file name to contain the data downloaded
    :param attempts: Number of attempts
    :return: New file path. Empty string if the download failed
    """
    if not file_path:
        file_path = os.path.realpath(os.path.basename(url))
    logger.info(f'Downloading {url} content to {file_path}')
    url_sections = urlparse(url)
    if not url_sections.scheme:
        logger.debug('The given url is missing a scheme. Adding http scheme')
        url = f'http://{url}'
        logger.debug(f'New url: {url}')
    for attempt in range(1, attempts+1):
        try:
            if attempt > 1:
                time.sleep(10)  # 10 seconds wait time between downloads
            with requests.get(url, stream=True) as response:
                response.raise_for_status()
                with open(file_path, 'wb') as out_file:
                    for chunk in response.iter_content(chunk_size=1024*1024):  # 1MB chunks
                        out_file.write(chunk)
                logger.info('Download finished successfully')
                return file_path
        except Exception as ex:
            logger.error(f'Attempt #{attempt} failed with error: {ex}')
    return ''
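
A usage sketch (the URL is hypothetical; the original answer does not show the logging setup):

logging.basicConfig(level=logging.INFO)

saved = download('http://example.com/large.iso', attempts=3)
print(saved if saved else 'download failed')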

Answer 5 (Score: 2)

Use Python's wget module instead. Here is a snippet:

import wget
wget.download(url)
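
A slightly fuller sketch, assuming the out parameter of the wget package (the URL is hypothetical; wget prints a progress bar to stdout by default):

import wget

# out= sets the destination filename; download() returns the saved path.
filename = wget.download('http://example.com/large.iso', out='large.iso')
print(filename)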