Different results when running the uploader script

Date: 2017-02-17 02:36:25

Tags: python amazon-s3 boto

I've put together a script that uploads data to S3. If a file is smaller than 5MB it is uploaded in a single piece, but for larger files the script performs a multipart upload. I know the threshold is small at the moment; I'm only testing the script for now. When I run it from a Python session by importing each function and calling it that way, everything works as expected. I know the code needs cleaning up, since it isn't finished yet. However, when I run the script from the command line, I get this error:

Traceback (most recent call last):
  File "upload_files_to_s3.py", line 106, in <module>
    main()
  File "upload_files_to_s3.py", line 103, in main
    check_if_mp_needed(conn, input_file, mb, bucket_name, sub_directory)
  File "upload_files_to_s3.py", line 71, in check_if_mp_needed
    multipart_upload(conn, input_file, mb, bucket_name, sub_directory)
  File "upload_files_to_s3.py", line 65, in multipart_upload
    mp.complete_upload()
  File "/usr/local/lib/python2.7/site-packages/boto/s3/multipart.py", line 304, in complete_upload
    self.id, xml)
  File "/usr/local/lib/python2.7/site-packages/boto/s3/bucket.py", line 1571, in complete_multipart_upload
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request

>The XML you provided was not well-formed or did not validate against our published schema

Here is the code:

import sys
import boto
from boto.s3.key import Key
import os
import math
from filechunkio import FileChunkIO


KEY = os.environ['AWS_ACCESS_KEY_ID']
SECRET = os.environ['AWS_SECRET_ACCESS_KEY']

def start_connection():
    key = KEY
    secret = SECRET
    return boto.connect_s3(key, secret)

def get_bucket_key(conn, bucket_name):
    bucket = conn.get_bucket(bucket_name)
    k = Key(bucket)
    return k

def get_key_name(sub_directory, input_file):
    full_key_name = os.path.join(sub_directory, os.path.basename(input_file))
    return full_key_name

def get_file_info(input_file):
    source_size = os.stat(input_file).st_size
    return source_size

def multipart_request(conn, input_file, bucket_name, sub_directory):
    bucket = conn.get_bucket(bucket_name)
    mp = bucket.initiate_multipart_upload(get_key_name(sub_directory, input_file))
    return mp

def get_chunk_size(mb):
    chunk_size = mb * 1048576
    return chunk_size

def get_chunk_count(input_file, mb):
    chunk_count = int(math.ceil(get_file_info(input_file)/float(get_chunk_size(mb))))
    return chunk_count

def regular_upload(conn, input_file, bucket_name, sub_directory):
    k = get_bucket_key(conn, bucket_name)
    k.key = get_key_name(sub_directory, input_file)
    k.set_contents_from_filename(input_file)


def multipart_upload(conn, input_file, mb, bucket_name, sub_directory):
    chunk_size = get_chunk_size(mb)
    chunks = get_chunk_count(input_file, mb)
    source_size = get_file_info(input_file)
    mp = multipart_request(conn, input_file, bucket_name, sub_directory)
    for i in range(chunks):
        offset = chunk_size * i
        b = min(chunk_size, source_size - offset)
        with FileChunkIO(input_file, 'r', offset = offset, bytes = b) as fp:
            mp.upload_part_from_file(fp, part_num = i + 1)
    mp.complete_upload()

def check_if_mp_needed(conn, input_file, mb, bucket_name, sub_directory):
    if get_file_info(input_file) <= 5242880:
        regular_upload(conn, input_file, bucket_name, sub_directory)
    else:
        multipart_upload(conn, input_file, mb, bucket_name, sub_directory)

def main():
    input_file = sys.argv[1]
    mb = sys.argv[2]
    bucket_name = sys.argv[3]
    sub_directory = sys.argv[4]
    conn = start_connection()
    check_if_mp_needed(conn, input_file, mb, bucket_name, sub_directory)

if __name__ == '__main__':
    main()

Thanks!

1 answer:

Answer (score: 0)

You have a version mismatch between your two scenarios. When the older version of boto is used, it sends the wrong AWS schema, which is why you see the error.

In a bit more detail: when running inside IPython (with the virtualenv) you have boto version 2.45.0, but when running from the command line you get boto version 2.8.0. Given that 2.8.0 dates back to 2013, a schema error is not surprising.
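
As a quick sanity check (not part of the original answer, just a common diagnostic), you can print the boto version and module path from each environment to confirm which installation is actually being picked up:

import boto

# Run this once inside IPython/the virtualenv and once from the plain command line.
print(boto.__version__)  # e.g. 2.45.0 vs 2.8.0
print(boto.__file__)     # shows which site-packages directory the module was loaded from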

The fix is either to upgrade the system-wide boto (the one your script currently picks up) by running pip install -U boto, or to convert the script to use the virtual environment. For advice on the latter, see this other answer on SO: Running python script from inside virtualenv bin is not working
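
If you keep the script on the system interpreter, one minimal sketch of a defensive check you could drop near the top of upload_files_to_s3.py is shown below (MIN_BOTO is a made-up name for illustration, not anything from boto itself); it simply fails fast with a clear message whenever an old boto gets imported:

import boto

MIN_BOTO = (2, 45, 0)  # assumed minimum version; adjust to whatever you actually need
loaded = tuple(int(part) for part in boto.__version__.split('.')[:3])
if loaded < MIN_BOTO:
    raise RuntimeError(
        'boto %s loaded from %s is too old; run "pip install -U boto" or '
        'invoke the script with the virtualenv interpreter'
        % (boto.__version__, boto.__file__)
    )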