I'm confused about single object upload versus multipart upload. Both take the same amount of time. My code is as follows:
File file = new File("D:\\AmazonS3\\aws-java-sdk-1.8.3\\lib\\aws-java-sdk-1.8.3-javadoc.jar");
FileInputStream fis = new FileInputStream(file);
String keyName = System.currentTimeMillis() + "_aws-java-sdk-1.8.3-javadoc.jar";
ObjectMetadata metaData = new ObjectMetadata();
metaData.addUserMetadata("test", "TEST");

// Single object upload
PutObjectRequest putObjectRequest = new PutObjectRequest(BUCKET_NAME, keyName, fis, metaData);
putObjectRequest.setMetadata(metaData);
s3client.putObject(putObjectRequest);

// Multipart upload via TransferManager
TransferManagerConfiguration configuration = new TransferManagerConfiguration();
configuration.setMultipartUploadThreshold(5 * com.amazonaws.services.s3.internal.Constants.MB);
TransferManager transferManager = new TransferManager(s3client);
transferManager.setConfiguration(configuration);
Upload upload = transferManager.upload(BUCKET_NAME, keyName, fis, metaData);
upload.waitForCompletion();
transferManager.shutdownNow();
Please help me: is there something wrong with my code?
Answer 0 (score: 1)
I ran into the same problem and found (by inspecting the SDK source) that the TransferManager only uploads parts in parallel when you pass it a File, not an InputStream.
Here is the decision code from the SDK (version 1.8.9):
if (TransferManagerUtils.isUploadParallelizable(putObjectRequest, isUsingEncryption)) {
    captureUploadStateIfPossible();
    uploadPartsInParallel(requestFactory, multipartUploadId);
    return null;
} else {
    return uploadPartsInSeries(requestFactory);
}
and here is isUploadParallelizable:
// Each uploaded part in an encrypted upload depends on the encryption context
// from the previous upload, so we cannot parallelize encrypted upload parts.
if (isUsingEncryption) return false;
// Otherwise, if there's a file, we can process the uploads concurrently.
return (getRequestFile(putObjectRequest) != null);
So if you want the benefit of parallel part uploads, pass a File to the TransferManager.
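A minimal sketch of the File-based call, reusing `s3client` and `BUCKET_NAME` from the question (the wrapper method and class name here are hypothetical glue, not part of the question's code):

```java
import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class ParallelUploadSketch {

    // 5 MB, matching the multipart threshold used in the question.
    static final long MULTIPART_THRESHOLD = 5L * 1024 * 1024;

    static void uploadFile(AmazonS3 s3client, String bucketName, String keyName, File file)
            throws InterruptedException {
        TransferManager transferManager = new TransferManager(s3client);
        // Passing the File itself (not a FileInputStream) lets the SDK know the
        // size up front and re-open the file for each part, which is what makes
        // isUploadParallelizable return true.
        Upload upload = transferManager.upload(bucketName, keyName, file);
        upload.waitForCompletion();
        transferManager.shutdownNow();
    }
}
```

With an InputStream the SDK cannot seek back to re-read a part, so it falls back to uploading the parts in series, which is why both of your uploads took the same time.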
Answer 1 (score: 1)
When uploading data from a stream, the caller must supply the size of the data in the stream through the content length field of the ObjectMetadata parameter. If no content length is specified for the input stream, the TransferManager will attempt to buffer the entire stream contents in memory and upload it as a traditional single-part upload. Because the entire stream contents must be buffered in memory, this can be very expensive and should be avoided whenever possible.
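If you do need to upload from a stream, a sketch of supplying the content length up front looks like this (reusing the question's file path, `transferManager`, `BUCKET_NAME`, and `keyName`; the surrounding method is hypothetical glue):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class StreamUploadSketch {

    static void uploadStream(TransferManager transferManager, String bucketName, String keyName)
            throws IOException, InterruptedException {
        File file = new File("D:\\AmazonS3\\aws-java-sdk-1.8.3\\lib\\aws-java-sdk-1.8.3-javadoc.jar");
        try (FileInputStream fis = new FileInputStream(file)) {
            ObjectMetadata metaData = new ObjectMetadata();
            // Tell the TransferManager how many bytes are coming, so it does not
            // buffer the whole stream in memory before deciding how to upload.
            metaData.setContentLength(file.length());
            Upload upload = transferManager.upload(bucketName, keyName, fis, metaData);
            upload.waitForCompletion();
        }
    }
}
```

Note that even with the content length set, a stream-based upload still runs its parts in series (per the decision code in Answer 0); only a File-based upload is parallelized.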