I am currently using Spring WebFlux, and I am trying to upload a large file (70 MB) with it.
My controller:
@RequestMapping(method = RequestMethod.POST, consumes = MediaType.MULTIPART_FORM_DATA_VALUE, produces = MediaType.APPLICATION_JSON_VALUE)
public Flux<String> uploadHandler(@RequestBody Flux<Part> fluxParts,
                                  @RequestParam(value = "categoryType") String categoryType,
                                  @PathVariable(value = "traceabilityReportUuid") String traceabilityUuid) {
    return documentHandler.upload(fluxParts, UUID.fromString(traceabilityUuid), categoryType);
}
My service:
public Flux<String> upload(Flux<Part> fluxParts, UUID traceabilityUuid, String categoryType) {
    return fluxParts
        .filter(part -> part instanceof FilePart)
        .ofType(FilePart.class)
        .flatMap(p -> this.upload(p, traceabilityUuid, categoryType));
}

private Mono<String> upload(FilePart filePart, UUID traceabilityUuid, String categoryType) {
    return filePart.content()
        .collect(InputStreamCollector::new,
                 (t, dataBuffer) -> t.collectInputStream(dataBuffer.asInputStream()))
        .flatMap(inputStreamCollector -> {
            upload(traceabilityUuid, inputStreamCollector.getInputStream(), filePart.filename(), categoryType);
            return Mono.just("OK");
        });
}
My collector:
public class InputStreamCollector {
    private InputStream is;

    public void collectInputStream(InputStream is) {
        if (this.is == null) this.is = is;
        this.is = new SequenceInputStream(this.is, is);
    }

    public InputStream getInputStream() {
        return this.is;
    }
}
Finally, I retrieve the complete InputStream with inputStreamCollector.getInputStream() and pass it to my object, which I then send to an S3 bucket.
But before sending it to S3 I have to convert it to a file (with the Apache tools), and there I get a StackOverflowError:
java.lang.StackOverflowError: null
at java.base/java.io.SequenceInputStream.read(SequenceInputStream.java:156)
at java.base/java.io.SequenceInputStream.read(SequenceInputStream.java:156)
at java.base/java.io.SequenceInputStream.read(SequenceInputStream.java:156)
at java.base/java.io.SequenceInputStream.read(SequenceInputStream.java:156)
at java.base/java.io.SequenceInputStream.read(SequenceInputStream.java:156)
at java.base/java.io.SequenceInputStream.read(SequenceInputStream.java:156)
at java.base/java.io.SequenceInputStream.read(SequenceInputStream.java:156)
at java.base/java.io.SequenceInputStream.read(SequenceInputStream.java:156)
It works fine with a small file (7 MB or so).
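For reference, the failure can be reproduced with the JDK alone: the collector wraps each new chunk together with the previous stream in another SequenceInputStream, so every read() delegates through one stack frame per chunk, and a 70 MB upload produces many thousands of chunks. A minimal sketch (the chunk count here is arbitrary, just large enough to exhaust the stack):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;

public class NestedSequenceDemo {
    public static void main(String[] args) throws IOException {
        // Same pairwise nesting as the collector above: each chunk wraps
        // the previous stream in a new SequenceInputStream.
        InputStream is = new ByteArrayInputStream(new byte[] {1});
        for (int i = 0; i < 300_000; i++) {
            is = new SequenceInputStream(is, new ByteArrayInputStream(new byte[] {1}));
        }
        try {
            is.read(); // every read() traverses all 300 000 layers
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError, as in the question");
        }
    }
}
```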
Do you have any idea how to solve my problem?
Thanks
Answer 0 (score: 1)
I finally found the solution!
I modified my code to return an InputStream instead, and it now works fine with large files too ;-)
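The answer does not show the modified code, but one common way to build a single InputStream from many chunks, without the pairwise nesting used by the question's collector, is to keep the chunks in a list and concatenate them once at the end. A hypothetical sketch (not necessarily the author's actual code):

```java
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical replacement for the question's InputStreamCollector:
// remember the chunks and build ONE flat SequenceInputStream at the end,
// so read() delegates through a single layer instead of one per chunk.
public class FlatInputStreamCollector {
    private final List<InputStream> parts = new ArrayList<>();

    public void collectInputStream(InputStream is) {
        parts.add(is); // no nesting here
    }

    public InputStream getInputStream() {
        return new SequenceInputStream(Collections.enumeration(parts));
    }
}
```

With the flat variant the call depth no longer grows with the number of DataBuffer chunks, which is what made the nested version blow the stack on a 70 MB upload.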
Answer 1 (score: 0)
To convert a DataBuffer to a String or a List<String>, you can use Apache Commons IOUtils. In this example I return a Flux, and to avoid a try/catch I wrap the call in Mono.fromCallable.
protected Flux<String> getLines(final DataBuffer dataBuffer) {
    return Mono.fromCallable(() -> IOUtils.readLines(dataBuffer.asInputStream(), Charsets.UTF_8))
        .flatMapMany(Flux::fromIterable);
}
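If Apache commons-io is not on the classpath, the same line-splitting can be done with the JDK alone. A sketch (it accepts any InputStream, such as the one returned by dataBuffer.asInputStream()):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.stream.Collectors;

public class LineReader {
    // Plain-JDK equivalent of IOUtils.readLines(in, UTF_8)
    public static List<String> readLines(InputStream in) throws IOException {
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.toList());
        }
    }
}
```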
Answer 2 (score: 0)
This example shows how to load the data from a FilePart:
public static Mono<String> readBase64Content(FilePart filePart) {
    // Join all DataBuffer chunks before encoding; encoding each chunk on
    // its own and keeping only the last result would drop data for files
    // larger than a single buffer.
    return DataBufferUtils.join(filePart.content())
        .map(dataBuffer -> {
            byte[] bytes = new byte[dataBuffer.readableByteCount()];
            dataBuffer.read(bytes);
            DataBufferUtils.release(dataBuffer); // free the pooled buffer
            return Base64.getEncoder().encodeToString(bytes);
        });
}
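Joining the chunks before encoding matters: base64-encoding each DataBuffer separately and keeping only the last result is not the same as encoding the whole payload. A plain-JDK illustration:

```java
import java.util.Base64;

public class Base64JoinDemo {
    public static void main(String[] args) {
        byte[] whole = "hello world".getBytes();

        // Encoding the full, joined payload:
        String joined = Base64.getEncoder().encodeToString(whole);

        // Encoding only the last chunk of the same payload split in two:
        String lastChunkOnly = Base64.getEncoder().encodeToString("world".getBytes());

        // Only the joined encoding decodes back to the original bytes.
        System.out.println(new String(Base64.getDecoder().decode(joined))); // hello world
        System.out.println(joined.equals(lastChunkOnly)); // false
    }
}
```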
The REST method:
@PostMapping(value = "/person/{personId}/photo", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
Mono<String> uploadPhoto(@PathVariable Long personId, @RequestPart("photo") Mono<FilePart> photo) {
    return photo.ofType(FilePart.class).flatMap(StringUtil::readBase64Content);
}