I know the AWS S3 API has a limit on uploading files larger than 5 GB in a single request; in boto3 I should use a multipart upload instead. I am trying to configure the s3fs S3File object to do the same, but I cannot figure out how.
I am using (as an example that reproduces the error) very basic code:
import s3fs
s3 = s3fs.S3FileSystem()
with s3.open("s3://bucket/huge_file.csv", "w") as s3_obj:
    with open("huge_file.csv") as local_file:
        s3_obj.write(local_file.read())
where huge_file.csv is larger than 5 GB.
The error I get is:
...
  File ".../s3fs/core.py", line 1487, in __exit__
    self.close()
  File ".../s3fs/core.py", line 1454, in close
...
botocore.exceptions.ClientError: An error occurred (EntityTooLarge) when calling the PutObject operation: Your proposed upload exceeds the maximum allowed size
So the question is: how (if it is possible at all) do I set up s3fs to upload files larger than 5 GB, i.e. how do I configure it to use a multipart upload?
Answer 0 (score: 2)
I think this GitHub thread should address the issues you are running into and make your life easier; it covers exactly what you want:
import boto3
from boto3.s3.transfer import TransferConfig
# Get the service client
s3 = boto3.client('s3')
GB = 1024 ** 3
# Ensure that multipart uploads only happen if the size of a transfer
# is larger than S3's size limit for nonmultipart uploads, which is 5 GB.
config = TransferConfig(multipart_threshold=5 * GB)
# Upload tmp.txt to bucket-name at key-name
s3.upload_file("tmp.txt", "bucket-name", "key-name", Config=config)
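For completeness, on the s3fs side you can also try streaming the local file in fixed-size chunks instead of a single read() of the whole 5 GB, so the data never has to sit in memory at once. Whether s3fs then switches to a multipart upload under the hood depends on the s3fs version and its block size, so treat the following as a sketch under that assumption (the bucket and key names are placeholders):
import shutil
import s3fs
s3 = s3fs.S3FileSystem()
# Copy in 64 MiB chunks; s3fs buffers what is written and flushes it to S3 in blocks.
with open("huge_file.csv", "rb") as local_file:
    with s3.open("s3://bucket/huge_file.csv", "wb") as s3_obj:
        shutil.copyfileobj(local_file, s3_obj, length=64 * 1024 * 1024)
If that still fails with EntityTooLarge, the boto3 TransferConfig route above is the safer fallback, since it lets you state explicitly when a multipart upload must be used.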