Create a zip file on S3 from files on S3 using Java

Asked: 2019-07-02 06:52:38

Tags: java amazon-web-services amazon-s3 java-stream aws-sdk

I have a lot of files on S3 that I need to zip and then serve as a single zip via S3. Currently I zip them from streams into a local file and then upload that file again. This consumes a lot of disk space: each file is roughly 3–10 MB, and I may have to zip up to 100,000 files, so a single zip can exceed 1 TB. I am therefore looking for a solution along the lines of:

Create a zip file on S3 from files on S3 using Lambda Node

There, the zip is created directly on S3 without taking up local disk space. But I am just not smart enough to transfer that solution to Java. I have also found conflicting information about the Java AWS SDK, saying they planned to change the streaming behavior in 2017.

Not sure whether this helps, but here is what I have been doing so far (Upload is a local model holding the S3 information). I stripped out logging and other things for readability. I believe I am not taking up download space, since the InputStreams are written straight into the zip. But as I said, I would also like to avoid the local zip file and create it directly on S3. That would probably require creating the ZipOutputStream with S3 as the target instead of a FileOutputStream, and I am not sure how to do that.

public File zipUploadsToNewTemp(List<Upload> uploads) {
    List<String> names = new ArrayList<>();

    byte[] buffer = new byte[1024];
    File tempZipFile;
    try {
      tempZipFile = File.createTempFile(UUID.randomUUID().toString(), ".zip");
    } catch (Exception e) {
      throw new ApiException(e, BaseErrorCode.FILE_ERROR, "Could not create Zip file");
    }
    try (
        FileOutputStream fileOutputStream = new FileOutputStream(tempZipFile);
        ZipOutputStream zipOutputStream = new ZipOutputStream(fileOutputStream)) {

      for (Upload upload : uploads) {
        try (InputStream inputStream = getStreamFromS3(upload)) {
          zipOutputStream.putNextEntry(new ZipEntry(upload.getFileName()));
          writeStreamToZip(buffer, zipOutputStream, inputStream);
          // close each entry inside the loop, not once after all entries
          zipOutputStream.closeEntry();
        }
      }
      return tempZipFile;
    } catch (IOException e) {
      logError(type, e);
      if (tempZipFile.exists()) {
        FileUtils.delete(tempZipFile);
      }
      throw new ApiException(e, BaseErrorCode.IO_ERROR,
          "Error zipping files: " + e.getMessage());
    }
}

  // I am not even sure, but I think this takes up memory and not disk space
private InputStream getStreamFromS3(Upload upload) {
    try {
      String filename = upload.getId() + "." + upload.getFileType();
      InputStream inputStream = s3FileService
          .getObject(upload.getBucketName(), filename, upload.getPath());
      return inputStream;
    } catch (ApiException e) {
      throw e;
    } catch (Exception e) {
      logError(type, e);
      throw new ApiException(e, BaseErrorCode.UNKOWN_ERROR,
          "Unknown error communicating with S3 for file: " + upload.getFileName());
    }
}


private void writeStreamToZip(byte[] buffer, ZipOutputStream zipOutputStream,
      InputStream inputStream) {
    try {
      int len;
      while ((len = inputStream.read(buffer)) > 0) {
        zipOutputStream.write(buffer, 0, len);
      }
    } catch (IOException e) {
      throw new ApiException(e, BaseErrorCode.IO_ERROR, "Could not write stream to zip");
    }
}
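
The zipping loop above is only tied to a local file by the `FileOutputStream` passed into the `ZipOutputStream`; the same loop works against any `OutputStream`, including a `PipedOutputStream` that feeds an S3 upload. Here is a minimal, self-contained sketch of that idea using only `java.util.zip` (the class and method names `StreamZipper` and `zipStreams` are made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class StreamZipper {

  // Zip the named streams into the given destination. The destination can be
  // anything: a FileOutputStream, a ByteArrayOutputStream, or a
  // PipedOutputStream whose other end is being uploaded to S3.
  public static void zipStreams(Map<String, InputStream> sources,
      OutputStream destination) throws IOException {
    byte[] buffer = new byte[8192];
    try (ZipOutputStream zip = new ZipOutputStream(destination)) {
      for (Map.Entry<String, InputStream> source : sources.entrySet()) {
        zip.putNextEntry(new ZipEntry(source.getKey()));
        int len;
        while ((len = source.getValue().read(buffer)) != -1) {
          zip.write(buffer, 0, len);
        }
        zip.closeEntry();
        source.getValue().close();
      }
    }
  }

  public static void main(String[] args) throws IOException {
    Map<String, InputStream> sources = new LinkedHashMap<>();
    sources.put("a.txt", new ByteArrayInputStream("hello".getBytes()));
    sources.put("b.txt", new ByteArrayInputStream("world".getBytes()));
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    zipStreams(sources, out);
    System.out.println(out.size());
  }
}
```

In the real case, each `InputStream` would come from `getStreamFromS3(upload)`; the only change from the code above is the choice of destination stream.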

Finally, the upload source code. The input stream here is created from the temp zip file.

public PutObjectResult upload(InputStream inputStream, String bucketName, String filename, String folder) {
    String uploadKey = StringUtils.isEmpty(folder) ? "" : (folder + "/");
    uploadKey += filename;

    ObjectMetadata metaData = new ObjectMetadata();

    // Note: this buffers the entire stream in memory just to determine the
    // content length, which will not work for very large zips.
    byte[] bytes;
    try {
      bytes = IOUtils.toByteArray(inputStream);
    } catch (IOException e) {
      throw new ApiException(e, BaseErrorCode.IO_ERROR, e.getMessage());
    }
    metaData.setContentLength(bytes.length);
    ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(bytes);

    PutObjectRequest putObjectRequest = new PutObjectRequest(bucketPrefix + bucketName, uploadKey, byteArrayInputStream, metaData);
    putObjectRequest.setCannedAcl(CannedAccessControlList.PublicRead);

    try {
      return getS3Client().putObject(putObjectRequest);
    } catch (SdkClientException se) {
      throw s3Exception(se);
    } finally {
      IOUtils.closeQuietly(inputStream);
    }
  }

I also found a question similar to what I need, likewise without an answer:

Upload ZipOutputStream to S3 without saving zip file (large) temporary to disk using AWS S3 Java

2 Answers:

Answer 0 (score: 1)

You can get an input stream from your S3 data, then zip that batch of bytes and stream it back to S3:

        long numBytes;  // length of data to send in bytes..somehow you know it before processing the entire stream
        PipedOutputStream os = new PipedOutputStream();
        PipedInputStream is = new PipedInputStream(os);
        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentLength(numBytes);

        new Thread(() -> {
            /* Write to os here; make sure to close it when you're done */
            try (ZipOutputStream zipOutputStream = new ZipOutputStream(os)) {
                ZipEntry zipEntry = new ZipEntry("myKey");
                zipOutputStream.putNextEntry(zipEntry);
                
                S3ObjectInputStream objectContent = amazonS3Client.getObject("myBucket", "myKey").getObjectContent();
                byte[] bytes = new byte[1024];
                int length;
                while ((length = objectContent.read(bytes)) >= 0) {
                    zipOutputStream.write(bytes, 0, length);
                }
                objectContent.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();
        amazonS3Client.putObject("myBucket", "myKey", is, meta);
        is.close();  // always close your streams
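
The piped-stream pattern in this answer can be exercised without AWS at all. The self-contained sketch below keeps the exact same structure, but swaps the S3 download for an in-memory payload and the `putObject` call for a byte-array sink, so the producer/consumer handoff is the only thing being shown (`PipeDemo` and `zipThroughPipe` are made-up names):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class PipeDemo {

  // Producer thread zips data into the pipe while the consumer (here a byte
  // array; in the real case s3Client.putObject) reads from the other end
  // concurrently, so the full archive is never held on disk.
  public static byte[] zipThroughPipe(byte[] payload, String entryName)
      throws IOException, InterruptedException {
    PipedOutputStream os = new PipedOutputStream();
    PipedInputStream is = new PipedInputStream(os);

    Thread producer = new Thread(() -> {
      // closing the ZipOutputStream closes os, which signals EOF to the pipe
      try (ZipOutputStream zip = new ZipOutputStream(os)) {
        zip.putNextEntry(new ZipEntry(entryName));
        zip.write(payload);
        zip.closeEntry();
      } catch (IOException e) {
        e.printStackTrace();
      }
    });
    producer.start();

    // Consumer side: drain the pipe (stand-in for the S3 upload).
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024];
    int len;
    while ((len = is.read(buffer)) != -1) {
      sink.write(buffer, 0, len);
    }
    producer.join();
    is.close();
    return sink.toByteArray();
  }
}
```

One caveat with the real S3 version: `putObject` wants the content length up front, which you generally do not know for a zip being produced on the fly; a multipart upload sidesteps that requirement.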

Answer 1 (score: 0)

I would recommend using an Amazon EC2 instance (as low as 1c/hour, or you could even use a Spot Instance to get it cheaper). Smaller instance types cost less but have limited bandwidth, so experiment with sizes to get your preferred performance.

Write a script that loops through the files and then:

  • Download
  • Zip
  • Upload
  • Delete the local files

All the zip magic happens on local disk. There is no need to use streams. Just use the Amazon S3 download_file() and upload_file() calls.

There is no data transfer charge if the EC2 instance is in the same region as Amazon S3.
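
The middle steps of that loop (zip locally, then delete the originals to reclaim disk space) can be sketched in Java with only the standard library; the S3 download before and upload after are left out, and `LocalZipStep`/`zipAndClean` are hypothetical names:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class LocalZipStep {

  // Zip every regular file in sourceDir into zipFile, deleting each original
  // as soon as it is archived so local disk usage stays low. Returns the
  // number of files zipped.
  public static int zipAndClean(Path sourceDir, Path zipFile) throws IOException {
    int count = 0;
    try (ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(zipFile));
         DirectoryStream<Path> files = Files.newDirectoryStream(sourceDir)) {
      for (Path file : files) {
        if (!Files.isRegularFile(file)) {
          continue;
        }
        zip.putNextEntry(new ZipEntry(file.getFileName().toString()));
        Files.copy(file, zip);  // stream the file straight into the zip entry
        zip.closeEntry();
        Files.delete(file);     // reclaim local disk space immediately
        count++;
      }
    }
    return count;
  }
}
```

Note that the zip file itself still needs as much free disk space as the compressed archive, which is why the instance's EBS volume has to be sized for the largest expected zip.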