Streaming a large string to S3 using boto3

Date: 2018-10-15 22:13:09

Tags: python-3.x amazon-s3 boto3

I am downloading files from S3, transforming the data in them, and then creating a new file to upload to S3. The files I download are less than 2 GB, but because I am enriching the data, the file I upload gets very large (200 GB+).

Currently, you can imagine my code as:

files = list_files_in_s3()              # placeholder: returns the S3 keys to process
new_file = open('new_file', 'w')
for file in files:
    file_data = fetch_object_from_s3(file)  # placeholder: yields the object's records
    str_out = ''
    for data in file_data:
        str_out += transform_data(data)     # placeholder: the enrichment step
    new_file.write(str_out)
new_file.close()                        # flush to disk before uploading
s3.upload_file('new_file', 'bucket', 'key')  # 's3' is a boto3 client created elsewhere

The problem is that 'new_file' is so large that it sometimes cannot fit on disk. I would therefore like to use boto3's upload_fileobj to upload the data as a stream, so that I never need a temporary file on disk at all.

Can someone provide an example? The Python approach seems completely different from the Java I am familiar with.
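
For reference, the call I am looking at is upload_fileobj, which accepts any file-like object with a read() method; a minimal sketch of what I mean (the bucket and key names here are placeholders):

import io
import boto3

s3 = boto3.client('s3')

# upload_fileobj streams from any file-like object that has a read() method;
# for large payloads boto3 splits it into a managed multipart upload internally
data = io.BytesIO(b'the transformed data')
s3.upload_fileobj(data, 'bucket', 'key')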

1 Answer:

Answer 0 (score: 3)

You can use the amt parameter of the read function, documented here: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/response.html
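
For illustration (the bucket and key are placeholders), each read(amt=...) call on a StreamingBody returns at most amt bytes, and an empty bytes object once the stream is exhausted:

import boto3

s3 = boto3.client('s3')
body = s3.get_object(Bucket='bucket', Key='key')['Body']  # a botocore StreamingBody
chunk = body.read(amt=1024 * 1024)  # at most 1 MB; returns b'' when the stream is exhausted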

Then upload the file piece by piece using MultipartUpload, documented here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#multipartupload

https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html

You should have a rule in place that deletes incomplete multipart uploads:

https://aws.amazon.com/es/blogs/aws/s3-lifecycle-management-update-support-for-multipart-uploads-and-delete-markers/

Otherwise, you may end up paying for incomplete data parts stored in S3.
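
A minimal sketch of creating such a rule with boto3 (the bucket name and the 7-day window are assumptions, not from the original answer; the same rule can also be set in the S3 console):

import boto3

s3 = boto3.client('s3')
# abort multipart uploads that have been incomplete for more than 7 days
s3.put_bucket_lifecycle_configuration(
    Bucket='your-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'abort-incomplete-multipart-uploads',
            'Status': 'Enabled',
            'Filter': {'Prefix': ''},  # apply to the whole bucket
            'AbortIncompleteMultipartUpload': {'DaysAfterInitiation': 7},
        }]
    },
)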

To do this, I copy-pasted parts of my own script. It shows the whole flow, from download to upload, in case you have memory limitations to consider. You could also change it to store the file locally before uploading.

Either way, you have to use MultipartUpload, because S3 limits how large a file you can upload in a single action: https://aws.amazon.com/s3/faqs/

"The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability."

Here is a code sample (I have not tested this code):

import boto3

amt = 1024*1024*10  # 10 MB at a time (every part except the last must be at least 5 MB)
session = boto3.Session(profile_name='yourprofile')
s3res = session.resource('s3')
source_s3file = "yourfile.file"
target_s3file = "yourfile.file"
source_s3obj = s3res.Object("your-bucket", source_s3file)
target_s3obj = s3res.Object("your-bucket", target_s3file)

# initiate the MultipartUpload
mpu = target_s3obj.initiate_multipart_upload()
partNr = 0
parts = []

body = source_s3obj.get()["Body"]
# get the initial chunk; this is where you use the amt parameter
# (note: decode can fail if a multi-byte character spans a chunk boundary)
chunk = body.read(amt=amt).decode("utf-8")
# every call to the read function returns the next chunk of data, until it is empty

while len(chunk) > 0:
    # do something with the chunk, then upload it as a part
    partNr += 1
    part = mpu.Part(partNr)
    response = part.upload(Body=chunk)
    parts.append({
        "PartNumber": partNr,
        "ETag": response["ETag"]
    })
    # there may be more data; get the next chunk
    chunk = body.read(amt=amt).decode("utf-8")

# no more chunks, complete the upload
part_info = {}
part_info["Parts"] = parts
mpu_result = mpu.complete(MultipartUpload=part_info)
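
Two caveats not spelled out in the original answer: each part except the last must be at least 5 MB, and a multipart upload may have at most 10,000 parts, so choose amt accordingly. It is also worth aborting the upload explicitly if anything fails mid-stream; a defensive sketch reusing target_s3obj and the loop above:

# abort the multipart upload on any failure so its incomplete parts
# are not left behind in S3 (and billed)
mpu = target_s3obj.initiate_multipart_upload()
try:
    # ... read, transform, and upload parts exactly as in the loop above ...
    mpu.complete(MultipartUpload={"Parts": parts})
except Exception:
    mpu.abort()
    raise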