I am uploading multiple files to Amazon S3 using the following code.
MultipartHttpServletRequest multipartRequest = (MultipartHttpServletRequest) request;
MultiValueMap<String, MultipartFile> map = multipartRequest.getMultiFileMap();
try {
    if (map != null) {
        for (String filename : map.keySet()) {
            List<MultipartFile> fileList = map.get(filename);
            incrPercentge = 100 / fileList.size();
            request.getSession().setAttribute("incrPercentge", incrPercentge);
            for (MultipartFile mpf : fileList) {
                /*
                 * custom input stream wrapped around the original input stream
                 * to get the progress
                 */
                ProgressInputStream inputStream = new ProgressInputStream(
                        "test", mpf.getInputStream(), mpf.getBytes().length);
                ObjectMetadata metadata = new ObjectMetadata();
                metadata.setContentType(mpf.getContentType());
                String key = Util.getLoginUserName() + "/" + mpf.getOriginalFilename();
                PutObjectRequest putObjectRequest = new PutObjectRequest(
                        Constants.S3_BUCKET_NAME, key, inputStream, metadata)
                        .withStorageClass(StorageClass.ReducedRedundancy);
                PutObjectResult response = s3Client.putObject(putObjectRequest);
            }
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
I created a custom input stream to track the number of bytes consumed by Amazon S3. I got the idea from this question: Upload file or InputStream to S3 with a progress callback
My ProgressInputStream class is given below:
package com.spectralnetworks.net.util;
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.vfs.FileContent;
import org.apache.commons.vfs.FileSystemException;

public class ProgressInputStream extends InputStream {

    private final long size;
    private long progress, lastUpdate = 0;
    private final InputStream inputStream;
    private final String name;
    private boolean closed = false;

    public ProgressInputStream(String name, InputStream inputStream, long size) {
        this.size = size;
        this.inputStream = inputStream;
        this.name = name;
    }

    public ProgressInputStream(String name, FileContent content)
            throws FileSystemException {
        this.size = content.getSize();
        this.name = name;
        this.inputStream = content.getInputStream();
    }

    @Override
    public void close() throws IOException {
        super.close();
        if (closed) throw new IOException("already closed");
        closed = true;
    }

    @Override
    public int read() throws IOException {
        int count = inputStream.read();
        if (count > 0)
            progress += count;
        lastUpdate = maybeUpdateDisplay(name, progress, lastUpdate, size);
        return count;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int count = inputStream.read(b, off, len);
        if (count > 0)
            progress += count;
        lastUpdate = maybeUpdateDisplay(name, progress, lastUpdate, size);
        return count;
    }

    /**
     * This is research code to show a progress bar.
     *
     * @param name
     * @param progress
     * @param lastUpdate
     * @param size
     * @return
     */
    static long maybeUpdateDisplay(String name, long progress, long lastUpdate, long size) {
        /* if (Config.isInUnitTests()) return lastUpdate;
        if (size < B_IN_MB / 10) return lastUpdate;
        if (progress - lastUpdate > 1024 * 10) {
            lastUpdate = progress;
            int hashes = (int) (((double) progress / (double) size) * 40);
            if (hashes > 40) hashes = 40;
            String bar = StringUtils.repeat("#", hashes);
            bar = StringUtils.rightPad(bar, 40);
            System.out.format("%s [%s] %.2fMB/%.2fMB\r",
                    name, bar, progress / B_IN_MB, size / B_IN_MB);
            System.out.flush();
        } */
        System.out.println("name " + name + " progress " + progress
                + " lastUpdate " + lastUpdate + " sie " + size);
        return lastUpdate;
    }
}
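A minimal way to exercise this wrapper on its own, outside of S3, is to drain it locally; this is only a sketch, and the file path and buffer size are placeholders:

// minimal local sketch: drain the wrapper so maybeUpdateDisplay fires for every
// 4K read from the underlying stream (placeholder file path)
java.io.File file = new java.io.File("/tmp/sample.bin");
try (java.io.FileInputStream fis = new java.io.FileInputStream(file)) {
    ProgressInputStream in = new ProgressInputStream("local-test", fis, file.length());
    byte[] buffer = new byte[4096];
    while (in.read(buffer, 0, buffer.length) != -1) {
        // discard the data; only the printed progress matters here
    }
} catch (java.io.IOException e) {
    e.printStackTrace();
}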
But during the actual S3 upload this does not work properly: the progress prints race up to the file size almost immediately, like this:
name test progress 4096 lastUpdate 0 sie 30489
name test progress 8192 lastUpdate 0 sie 30489
name test progress 12288 lastUpdate 0 sie 30489
name test progress 16384 lastUpdate 0 sie 30489
name test progress 20480 lastUpdate 0 sie 30489
name test progress 24576 lastUpdate 0 sie 30489
name test progress 28672 lastUpdate 0 sie 30489
name test progress 30489 lastUpdate 0 sie 30489
name test progress 30489 lastUpdate 0 sie 30489
But the actual upload takes much longer (more than ten times longer than it takes to print these lines). What can I do to get the real upload status? Please help.
Answer 0 (score: 8)
I found the answer to my question: the best way to get the real progress status is with the code below.
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType(mpf.getContentType());
String key = Util.getLoginUserName() + "/" + mpf.getOriginalFilename();
metadata.setContentLength(mpf.getSize());
PutObjectRequest putObjectRequest = new PutObjectRequest(
        Constants.S3_BUCKET_NAME, key, mpf.getInputStream(), metadata)
        .withStorageClass(StorageClass.ReducedRedundancy);
putObjectRequest.setProgressListener(new ProgressListener() {
    @Override
    public void progressChanged(ProgressEvent progressEvent) {
        System.out.println(progressEvent.getBytesTransfered()
                + ">> Number of byte transfered " + new Date());
        progressEvent.getBytesTransfered();
        double totalByteRead = request.getSession()
                .getAttribute(Constants.TOTAL_BYTE_READ) != null
                        ? (Double) request.getSession().getAttribute(Constants.TOTAL_BYTE_READ)
                        : 0;
        totalByteRead += progressEvent.getBytesTransfered();
        request.getSession().setAttribute(Constants.TOTAL_BYTE_READ, totalByteRead);
        System.out.println("total Byte read " + totalByteRead);
        request.getSession().setAttribute(Constants.TOTAL_PROGRESS, (totalByteRead / size) * 100);
        System.out.println("percentage completed >>>" + (totalByteRead / size) * 100);
        if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
            System.out.println("completed ******");
        }
    }
});
s3Client.putObject(putObjectRequest);
The problem with my earlier code was that I was not setting the content length in the metadata, so I was not getting the real progress status. The following lines are copied from the PutObjectRequest class API documentation:
Constructs a new PutObjectRequest object to upload a stream of data to the specified bucket and key. After constructing the request, users may optionally specify object metadata or a canned ACL as well.
The content length for the data stream must be specified in the object metadata parameter; Amazon S3 requires it to be passed in before the data is uploaded. Failure to specify a content length will cause the entire contents of the input stream to be buffered locally in memory so that the content length can be calculated, which can result in negative performance problems.
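As a side note, if the upload source is a local file rather than a request stream, the SDK can determine the content length on its own. A minimal sketch, reusing the same s3Client and bucket constant from above; the local path and object key are placeholders:

// minimal sketch, separate from the multipart-request code above: when a
// java.io.File is passed instead of an InputStream, the SDK reads the length
// from the file itself, so no content length has to be set in ObjectMetadata
java.io.File localFile = new java.io.File("/tmp/example.pdf"); // placeholder path
PutObjectRequest fileRequest = new PutObjectRequest(
        Constants.S3_BUCKET_NAME, "some-user/example.pdf", localFile) // placeholder key
        .withStorageClass(StorageClass.ReducedRedundancy);
s3Client.putObject(fileRequest);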
Answer 1 (score: 1)
I will assume that you are using the AWS SDK for Java.
Your code is working as it should: it shows that read is being called for every 4K that is read. Your idea (in the updated post) is also correct: the AWS SDK provides a ProgressListener as the way to notify the application of upload progress.
The "problem" is in the implementation of the AWS SDK: it buffers more than your file's roughly 30K (I would assume it buffers 64K), so you don't get any useful progress reports.
Try uploading a bigger file (say 1 MB) and you will see that both approaches give you better results; with today's network speeds, reporting progress on a 30K file is arguably not even worth it.
If you want finer control, you could implement the upload yourself using the S3 REST interface (which is what the AWS Java SDK ultimately uses). It is not very difficult, but it is a bit of work. If you want to go that route, I recommend finding an example for computing the session authorization token rather than doing it yourself (sorry, my search-fu is not currently strong enough to link to actual example code). However, once you have gone to all that trouble, you will find that you really want a 64K buffer on the socket stream to get maximum throughput on a fast network (which is probably why the AWS Java SDK behaves the way it does).
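For orientation only, here is a rough sketch of what the signing step looks like for the older Signature Version 2 flavour of the S3 REST API (newer APIs use Signature Version 4, and real code must also canonicalize any x-amz-* headers); the keys, bucket, and object name are placeholders, and this is not production signing code:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.text.SimpleDateFormat;
import java.util.Base64;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class S3RestSigner {

    // Rough sketch of an AWS Signature Version 2 Authorization header for a PUT.
    // Access key, secret key, bucket and key are placeholders; Content-MD5 and
    // x-amz-* headers are omitted for brevity.
    static String buildAuthorizationHeader(String accessKey, String secretKey,
            String bucket, String objectKey, String contentType, String date) throws Exception {
        String stringToSign = "PUT\n"             // HTTP verb
                + "\n"                            // Content-MD5 (empty here)
                + contentType + "\n"              // Content-Type header value
                + date + "\n"                     // Date header value
                + "/" + bucket + "/" + objectKey; // canonicalized resource (path style)
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(hmac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
        return "AWS " + accessKey + ":" + signature;
    }

    // Date header in RFC 1123 format, as expected alongside the signature
    static String rfc1123Now() {
        SimpleDateFormat fmt = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss z", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(new Date());
    }
}

The point of owning the HTTP connection like this is that you control the stream going to the socket yourself, which is what makes exact byte-level progress reporting possible.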