Spring Boot: export a large database table to CSV through a REST endpoint

Date: 2019-07-06 05:23:35

Tags: java spring spring-boot spring-data spring-jdbc

I need to build a Spring Boot application that exposes a REST endpoint for exporting a huge database table as a CSV file, filtered by various request parameters. I am trying to find an efficient solution to this problem.

Currently I query the table with spring-data-jpa, which returns a list of POJOs, and then write that list to the HttpServletResponse as a CSV file using Apache Commons CSV. This approach has two problems: it loads all of the data into memory, and it is slow.
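For reference, this is roughly what the current approach looks like (a minimal sketch, not my actual code; the entity, repository, endpoint, and column names are placeholders):

@GetMapping(path = "/export", produces = "text/csv")
public void exportCsv(@RequestParam String filter, HttpServletResponse response) throws IOException {
    response.setHeader(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=export.csv");

    // Problem: findByFilter() materializes the whole result set as entities in memory.
    List<MyEntity> rows = repository.findByFilter(filter);

    try (CSVPrinter printer = new CSVPrinter(response.getWriter(),
            CSVFormat.DEFAULT.withHeader("id", "name"))) {
        for (MyEntity row : rows) {
            printer.printRecord(row.getId(), row.getName());
        }
    }
}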

I do not apply any business logic to the data, so do I even need JPA and entities (POJOs) in this case? I feel that this is where the problem comes from.
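For comparison, a plain spring-jdbc version that streams rows straight to the response, without any entities, might look like this (again only a sketch; the table, columns, and endpoint are placeholders):

@GetMapping(path = "/export", produces = "text/csv")
public void exportCsv(HttpServletResponse response) throws IOException {
    response.setHeader(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=export.csv");
    PrintWriter out = response.getWriter();
    out.println("id,name");
    // RowCallbackHandler processes one row at a time, so nothing is accumulated in memory.
    // (For very large tables the JdbcTemplate fetch size may also need tuning.)
    jdbcTemplate.query("SELECT id, name FROM my_table", rs ->
            out.println(rs.getLong("id") + "," + rs.getString("name")));
    out.flush();
}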

2 answers:

Answer 0: (score: 0)

You can try the new Spring WebFlux introduced in Spring 5: https://www.baeldung.com/spring-webflux
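For context, a minimal sketch of what such a reactive endpoint could look like (the reactive repository and its item type are assumptions for illustration, not part of the linked article):

@GetMapping(path = "/export", produces = "text/csv")
public Flux<String> exportCsv() {
    // Each element of the Flux is written to the response as it is emitted,
    // so the full result set never has to sit in memory at once.
    return reactiveRepository.findAll()
            .map(item -> item.getId() + "," + item.getName() + "\n");
}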

Answer 1: (score: 0)

First, create a Flux of DataBuffer in the controller:


import java.util.Date;

import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.http.HttpHeaders;
import org.springframework.http.server.reactive.ServerHttpResponse;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestParam;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@GetMapping(path = "/report/detailReportFile/{uid}", produces = "text/csv")
public Mono<Void> getWorkDoneReportDetailSofkianoFile(@PathVariable(name = "uid") String uid,
                                                      @RequestParam(name = "startDate", required = false, defaultValue = "0") long start,
                                                      @RequestParam(name = "endDate", required = false, defaultValue = "0") long end,
                                                      ServerHttpResponse response) {

    // Fall back to a project-wide default start date / the current time when no bounds are given.
    var startDate = start == 0 ? GenericData.GENERIC_DATE : new Date(start);
    var endDate = end == 0 ? new Date() : new Date(end);

    response.getHeaders().set(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=" + uid + ".csv");
    response.getHeaders().add("Accept-Ranges", "bytes");

    // Note: collectList() buffers every row in memory before the CSV is written.
    Flux<DataBuffer> df = queryWorkDoneUseCase.findWorkDoneByIdSofkianoAndDateBetween(uid, startDate, endDate)
            .collectList()
            .flatMapMany(WriteCsvToResponse::writeWorkDone);

    return response.writeWith(df);
}

Now, in my case, I had to create the DataBuffer myself; I build it with opencsv, writing into a StringWriter:

import java.io.StringWriter;
import java.nio.charset.StandardCharsets;
import java.util.List;

import com.opencsv.CSVWriter;
import com.opencsv.bean.ColumnPositionMappingStrategy;
import com.opencsv.bean.StatefulBeanToCsv;
import com.opencsv.bean.StatefulBeanToCsvBuilder;
import com.opencsv.exceptions.CsvException;

import io.netty.buffer.ByteBufAllocator;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.NettyDataBufferFactory;

import reactor.core.publisher.Flux;

public static Flux<DataBuffer> writeWorkDone(List<WorkDone> workDoneList) {
    try {
        StringWriter writer = new StringWriter();

        // Map WorkDone properties to fixed CSV column positions.
        ColumnPositionMappingStrategy<WorkDone> mapStrategy = new ColumnPositionMappingStrategy<>();
        mapStrategy.setType(WorkDone.class);

        String[] columns = new String[]{"idSofkiano", "nameSofkiano", "idProject", "nameProject",
                "description", "hours", "minutes", "type"};
        mapStrategy.setColumnMapping(columns);

        StatefulBeanToCsv<WorkDone> btcsv = new StatefulBeanToCsvBuilder<WorkDone>(writer)
                .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
                .withMappingStrategy(mapStrategy)
                .withSeparator(',')
                .build();

        btcsv.write(workDoneList);

        // The whole CSV is rendered into one String and wrapped in a single DataBuffer.
        return Flux.just(stringBuffer(writer.getBuffer().toString()));

    } catch (CsvException ex) {
        // Propagate the CsvException itself; ex.getCause() may be null.
        return Flux.error(ex);
    }
}

private static DataBuffer stringBuffer(String value) {
    byte[] bytes = value.getBytes(StandardCharsets.UTF_8);

    NettyDataBufferFactory nettyDataBufferFactory = new NettyDataBufferFactory(ByteBufAllocator.DEFAULT);
    DataBuffer buffer = nettyDataBufferFactory.allocateBuffer(bytes.length);
    buffer.write(bytes);
    return buffer;
}
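One caveat: collectList() in the controller still materializes every row before a single byte is written. If the use case can return a Flux<WorkDone> directly, a variant that encodes each row as it arrives avoids that buffer (a sketch under that assumption; toCsvLine is a hypothetical per-row formatter you would have to write):

public static Flux<DataBuffer> writeWorkDoneStreaming(Flux<WorkDone> workDone) {
    // Emit the header once, then one CSV line per row as the database produces it.
    Flux<DataBuffer> header = Flux.just(stringBuffer(
            "idSofkiano,nameSofkiano,idProject,nameProject,description,hours,minutes,type\n"));
    Flux<DataBuffer> rows = workDone.map(w -> stringBuffer(toCsvLine(w)));
    return Flux.concat(header, rows);
}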