I have a server application written in Python/Django (a REST API) that accepts file uploads from a client application. I want the uploaded files to be stored in AWS S3, and I want the client to upload them as multipart/form-data. How can I achieve this? Any sample code would help me understand how it should be done. Please assist.
class FileUploadView(APIView):
    parser_classes = (FileUploadParser,)

    def put(self, request, filename, format=None):
        file_obj = request.data['file']
        self.handle_uploaded_file(file_obj)
        return self.get_response("", True, "", {})

    def handle_uploaded_file(self, f):
        destination = open('<path>', 'wb+')
        for chunk in f.chunks():
            destination.write(chunk)
        destination.close()
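For the client side, a multipart/form-data request can be built by hand with only the standard library. This is a sketch: the endpoint URL and the `file` field name are assumptions matching the view above. Also note that DRF's `FileUploadParser` expects the raw file as the request body; to parse multipart/form-data, the view would use `MultiPartParser` instead.

```python
import uuid
import urllib.request


def build_multipart_body(field_name, filename, content, content_type):
    """Build a multipart/form-data body by hand (stdlib only)."""
    boundary = uuid.uuid4().hex
    body = (
        f'--{boundary}\r\n'
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f'Content-Type: {content_type}\r\n'
        f'\r\n'
    ).encode() + content + f'\r\n--{boundary}--\r\n'.encode()
    return body, boundary


body, boundary = build_multipart_body('file', 'photo.jpg', b'<raw jpeg bytes>', 'image/jpeg')
request = urllib.request.Request(
    'http://localhost:8000/upload/photo.jpg',  # hypothetical endpoint URL
    data=body,
    method='PUT',
    headers={'Content-Type': f'multipart/form-data; boundary={boundary}'},
)
# urllib.request.urlopen(request)  # send once the server is running
```

In practice a library like `requests` (with its `files=` argument) does the same body construction for you; building it manually just makes the wire format explicit.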
Thanks in advance.
Answer 0 (score: 3)
If you want uploads to go directly to AWS S3, you can use django-storages and configure Django's file storage backend to use AWS S3. This lets your Django project handle storage to S3 transparently, without you having to manually re-upload files to S3 after they arrive.
Storage settings

At a minimum, you need to add these settings to your Django settings:
# default remote file storage
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
# aws access keys
AWS_ACCESS_KEY_ID = 'YOUR-ACCESS-KEY'
AWS_SECRET_ACCESS_KEY = 'YOUR-SECRET-ACCESS-KEY'
AWS_BUCKET_NAME = 'your-bucket-name'
AWS_STORAGE_BUCKET_NAME = AWS_BUCKET_NAME
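Hard-coding credentials in settings is risky. One common pattern (a sketch, not part of the original answer) is to read them from environment variables so they stay out of source control:

```python
import os

# read AWS credentials from the environment instead of hard-coding them
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID', '')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY', '')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME', 'your-bucket-name')
```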
Sample code that saves an upload to remote storage

Here is a modified version of the view, whose handle_uploaded_file method saves the uploaded file to the remote destination through Django's storage backend (using django-storages).

Note: be sure to define DEFAULT_FILE_STORAGE and the AWS keys in settings so that django-storages can access your bucket.
from django.core.files.storage import default_storage

# file I/O chunk size, chosen to maximize throughput
FILE_IO_CHUNK_SIZE = 128 * 2**10


class FileUploadView(APIView):
    parser_classes = (FileUploadParser,)

    def put(self, request, filename, format=None):
        file_obj = request.data['file']
        self.handle_uploaded_file(file_obj)
        return self.get_response("", True, "", {})

    def handle_uploaded_file(self, f):
        """
        Write the uploaded file to its destination via the default storage
        backend (in this case, remote storage on AWS S3 via django-storages).
        """
        # relative path inside your bucket where you want the upload to end up
        fkey = 'sub-path-in-your-bucket-to-store-the-file'

        # stream the upload straight through to the storage backend;
        # f is already a Django File object, so it can be read in chunks.
        # Its MIME type is available as f.content_type if you need it
        # (e.g. to set the Content-Type stored on S3).
        destination = default_storage.open(fkey, 'w')
        for chunk in f.chunks(chunk_size=FILE_IO_CHUNK_SIZE):
            destination.write(chunk)
        destination.close()
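If you want to record the upload's MIME type, a small helper (a sketch; the function name is my own) can prefer the client-supplied Content-Type and fall back to guessing from the filename:

```python
import mimetypes


def guess_content_type(uploaded_file, filename):
    """Prefer the Content-Type the client sent; fall back to the filename."""
    # Django's UploadedFile exposes the client-supplied type as .content_type
    client_type = getattr(uploaded_file, 'content_type', None)
    if client_type:
        return client_type
    guessed, _ = mimetypes.guess_type(filename)
    return guessed or 'application/octet-stream'


print(guess_content_type(None, 'photo.jpg'))  # → image/jpeg
```

Keep in mind the client-supplied Content-Type is untrusted input; validate it if the value matters for security.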
See the django-storages documentation for more explanation and examples of how to access remote storage.
Answer 1 (score: -1)

Check out the boto package, which provides an API for AWS:
from boto.s3.connection import S3Connection

s3 = S3Connection(access_key, secret_key)
b = s3.get_bucket('<bucket>')
mp = b.initiate_multipart_upload('<object>')
for i in range(1, <parts> + 1):
    io = <receive-image-part>  # e.g. StringIO
    mp.upload_part_from_file(io, part_num=i)
mp.complete_upload()
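One detail worth knowing before filling in `<parts>`: S3 rejects multipart parts smaller than 5 MiB, except for the last part. A small sketch (my own helper, not part of the answer) that plans part offsets and sizes for a given object:

```python
MIN_PART_SIZE = 5 * 1024 * 1024  # S3 minimum for every part except the last


def plan_parts(total_size, part_size=MIN_PART_SIZE):
    """Split total_size into (offset, length) tuples for a multipart upload."""
    if part_size < MIN_PART_SIZE:
        raise ValueError('S3 rejects parts smaller than 5 MiB (except the last)')
    parts = []
    offset = 0
    while offset < total_size:
        length = min(part_size, total_size - offset)
        parts.append((offset, length))
        offset += length
    return parts


# a 12 MiB object splits into 5 MiB + 5 MiB + 2 MiB parts
print(plan_parts(12 * 1024 * 1024))
```

Each `(offset, length)` pair maps to one `upload_part_from_file` call, with `part_num` running from 1.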