ConnectionResetError: An existing connection was forcibly closed by the remote host

Date: 2016-12-12 22:08:28

Tags: python python-3.x

I am writing a script to download a batch of files. I got that part working successfully. Now I am trying to add a dynamic printout of the download progress.

For small downloads (these are .mp4 files, by the way), e.g. 5 MB, the progress display works fine and the file is closed successfully, yielding a complete and valid .mp4. For larger files, 250 MB and up, it does not work and I get the following error:

(screenshot of the traceback: ConnectionResetError: An existing connection was forcibly closed by the remote host)

Here is my code:

import urllib.request
import shutil
import os
import sys
import io

script_dir = os.path.dirname('C:/Users/Kenny/Desktop/')
rel_path = 'stupid_folder/video.mp4'
abs_file_path = os.path.join(script_dir, rel_path)
url = 'https://archive.org/download/SF145/SF145_512kb.mp4'
# Download the file from `url` and save it locally under `file_name`:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    eventID = 123456

    resp = urllib.request.urlopen(url)
    length = resp.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096, length//100)
    else:
        blocksize = 1000000 # just made something up

    # print(length, blocksize)

    buf = io.BytesIO()
    size = 0
    while True:
        buf1 = resp.read(blocksize)
        if not buf1:
            break
        buf.write(buf1)
        size += len(buf1)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')
    print()

    shutil.copyfileobj(response, out_file)

This works for small files, but for larger files I get the error above. Now, if I comment out the progress-indicator code, I don't get the error for the larger files:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    # eventID = 123456
    # 
    # resp = urllib.request.urlopen(url)
    # length = resp.getheader('content-length')
    # if length:
    #     length = int(length)
    #     blocksize = max(4096, length//100)
    # else:
    #     blocksize = 1000000 # just made something up
    # 
    # # print(length, blocksize)
    # 
    # buf = io.BytesIO()
    # size = 0
    # while True:
    #     buf1 = resp.read(blocksize)
    #     if not buf1:
    #         break
    #     buf.write(buf1)
    #     size += len(buf1)
    #     if length:
    #         print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')
    # print()

    shutil.copyfileobj(response, out_file)

Does anyone have any ideas? This is the last part of my project, and I would really like to be able to see the progress. Once again, this is Python 3.5. Thanks for any help!

1 Answer:

Answer 0 (score: 3):

You open the URL twice, once as response and once as resp. The progress-bar loop consumes the data, so by the time copyfileobj copies the file there is nothing left to copy (that may not be exactly accurate, since it works for small files, but you are opening two connections and doing everything twice here, and on large downloads one of those connections sits idle long enough for the remote host to reset it, which is likely the cause of your problem).

To get both a progress bar and a valid file, do the following:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    eventID = 123456

    length = response.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096, length//100)
    else:
        blocksize = 1000000 # just made something up


    size = 0
    while True:
        buf1 = response.read(blocksize)
        if not buf1:
            break
        out_file.write(buf1)
        size += len(buf1)
        if length:
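            # hint (not in the original answer): add flush=True to the print() call
            # below if the \r-overwritten progress line updates sluggishly on your console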
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')
    print()

Simplifications made to your code:

  • only a single urlopen, used as response
  • no BytesIO; the data is written directly to out_file
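
A side note beyond the original answer: the standard library's urllib.request.urlretrieve can do the same job, since its reporthook callback is invoked after every block and lends itself to a progress display. A minimal sketch, reusing the url and abs_file_path variables from the question:

import urllib.request

def reporthook(blocknum, blocksize, totalsize):
    # urlretrieve calls this after each block; totalsize is -1 when unknown
    if totalsize > 0:
        percent = min(blocknum * blocksize / totalsize * 100, 100)
        print('\r[{:.1f}%] Downloading'.format(percent), end='', flush=True)

urllib.request.urlretrieve(url, abs_file_path, reporthook)
print()

Note that urlretrieve is documented as a legacy interface that may be deprecated, so the explicit read/write loop above remains the more future-proof choice.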