How do I optimize images (file uploads) in Django before they are added to the storage backend?

Asked: 2015-08-03 17:05:13

Tags: django amazon-s3 boto

We are moving our Django project's backend storage from local disk to an Amazon S3 bucket. Currently we add the images, optimize them, and later rsync them to our CDN. We control each of those steps, so I simply run the optimization after the upload and before the rsync.

With the move to Amazon S3, I now want to optimize images before they are uploaded to the S3 bucket, mainly so we don't upload to S3, download again to optimize, and finally re-upload. Why make three trips when one will do?

My question is: how do we intercept the upload and optimize the file before it is pushed to the storage backend (in this case, Amazon S3)?

If it helps, I am using Amazon's boto library and django-storages-redux.

1 Answer:

Answer 0 (score: 0)

I wrote this question up as a draft and realized I never posted it. I couldn't find a solution on Stack Overflow, so I figured I'd add it as a Q&A post.

The solution is to override Django's TemporaryFileUploadHandler class. I also set the maximum in-memory upload size to zero, so that every upload lands on disk and none stays in memory, though that may not be strictly necessary.

    # encoding: utf-8
    from image_diet import squeeze
    import shutil
    import uuid

    from django.core.files import File
    from django.core.files.uploadhandler import TemporaryFileUploadHandler


    class CompressImageUploadHandler(TemporaryFileUploadHandler):
        """
        Run image squeeze on our temporary file before upload to S3.
        """

        def __init__(self, *args, **kwargs):
            self.image_types = ('image/jpeg', 'image/png')
            self.file_limit = 200000
            self.overlay_fields = (
                'attribute_name',  
            )
            self.skip_compress_fields = (
                'attribute_name',  
            )
            super(CompressImageUploadHandler, self).__init__(*args, **kwargs)

        def compress_image(self):
            """
            For image files we need to compress them, but we need to do some
            trickery along the way. We need to close the file, pass it to
            image_diet.squeeze, then reopen the file with the same file name
            """

            # if it's an image and small enough. Squeeze.
            if (self.file.size < self.file_limit and
                    self.field_name not in self.skip_compress_fields):
                # the beginning is a good place to start.
                self.file.seek(0)
                # let's squeeze this image. 
                # first, make a copy.
                file_name = self.file.name
                file_content_type = self.file.content_type
                copy_path = u"{}{}".format(
                    self.file.temporary_file_path(),
                    str(uuid.uuid4())[:8]
                )
                shutil.copyfile(
                    self.file.temporary_file_path(),
                    copy_path
                )
                # closed please. image_squeeze updates on an open file
                self.file.close()
                squeeze(copy_path)
                squeezed_file = open(copy_path, 'rb')  # binary mode: this is image data
                self.file = File(squeezed_file)
                # now reset some of the original values
                self.file.name = file_name
                self.file.content_type = file_content_type

        def screenshot_overlay(self):
            """
            Apply the guarantee_image_overlay method on screenshots
            """
            if self.field_name in self.overlay_fields:
                # this is a custom method that adds an overlay to the upload image if it's in the tuple of overlay_fields
                guarantee_image_overlay(self.file.temporary_file_path())
                # we have manipulated file, back to zero
                self.file.seek(0)

        def file_complete(self, file_size):
            """
            Run the overlay and squeeze steps against the completed file,
            then hand it back. This happens before the file object is
            uploaded to Amazon S3, whereas the pre_save hook fires after
            the Amazon upload.
            """
            self.file.seek(0)
            self.file.size = file_size

            if self.content_type in self.image_types:
                # see if we apply the screenshot overlay.
                self.screenshot_overlay()
                self.compress_image()

            return super(CompressImageUploadHandler, self).file_complete(file_size)
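The answer mentions registering the handler and forcing uploads onto disk but doesn't show the settings. A minimal sketch of what that looks like, using Django's standard FILE_UPLOAD_HANDLERS and FILE_UPLOAD_MAX_MEMORY_SIZE settings; the dotted path `myproject.uploadhandlers` is an assumption, adjust it to wherever the class actually lives:

```python
# settings.py -- sketch only; 'myproject.uploadhandlers' is a
# hypothetical module path for the handler defined above.

# Replace the default handlers so every upload goes through the
# compressing, disk-backed handler.
FILE_UPLOAD_HANDLERS = [
    'myproject.uploadhandlers.CompressImageUploadHandler',
]

# With the threshold at 0 bytes, no upload is held in memory, so
# temporary_file_path() is always available on the uploaded file.
FILE_UPLOAD_MAX_MEMORY_SIZE = 0
```

Alternatively, the handler can be swapped in for a single view with `request.upload_handlers.insert(0, CompressImageUploadHandler(request))` before the request body is accessed, rather than applying it project-wide.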