Apache Camel: how to use a "done" file to signal that a file written with DB records is finished and can be moved

Date: 2018-03-13 09:07:12

Tags: apache-camel

As the title says, I want to move a file into another folder after the DB records have been written to it. I have already looked at a related question: Apache camel file with doneFileName

But my problem is a bit different, because I use split, streaming and parallelProcessing to fetch the DB records and write them to the file. I cannot tell when and how to create the done file with parallelProcessing in play. Here are the code snippets:

My route that fetches the records and writes them to the file:

from(<ROUTE_FETCH_RECORDS_AND_WRITE>)
        .setHeader(Exchange.FILE_PATH, constant("<path to temp folder>"))
        .setHeader(Exchange.FILE_NAME, constant("<filename>.txt"))
        .setBody(constant("<sql to fetch records>&outputType=StreamList"))
        .to("jdbc:<endpoint>")
        .split(body(), <aggregation>).streaming().parallelProcessing()
            .<some processors>
            .aggregate(header(Exchange.FILE_NAME), (o, n) -> {
                <file aggregation>
                return o;
            }).completionInterval(<some time interval>)
                .toD("file://<to the temp file>")
            .end()
        .end()
        .to("file:"+<path to temp folder>+"?doneFileName=${file:header."+Exchange.FILE_NAME+"}.done"); //this line is just for trying out done filename 

In my aggregation strategy for the splitter, the code basically counts the processed records and prepares the response that will be sent back to the caller. In the other, outer aggregation I have the code that aggregates the DB rows and writes that text to the file.
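For reference, the counting part of such a splitter aggregation strategy boils down to the usual Camel `(oldExchange, newExchange) -> oldExchange` contract, where the first call receives `null` as the old value. A minimal plain-Java sketch of that logic, without the Camel `Exchange` types (class and method names here are made up for illustration):

```java
// Simplified stand-in for AggregationStrategy.aggregate(oldExchange, newExchange):
// the first record starts the accumulator, later records increment it.
public class CountingAggregator {

    // Mirrors the (old, new) -> old contract of the splitter's aggregation strategy.
    public static int[] aggregate(int[] oldCount, String newRecord) {
        if (oldCount == null) {
            return new int[] {1};   // first record: start the count
        }
        oldCount[0]++;              // subsequent records: just increment
        return oldCount;
    }

    // Drives the strategy the way the splitter would, one record at a time.
    public static int countAll(String[] records) {
        int[] acc = null;
        for (String r : records) {
            acc = aggregate(acc, r);
        }
        return acc == null ? 0 : acc[0];
    }
}
```

In the real route the count would be kept on the aggregated `Exchange` (for example in a header or property) so it can be returned to the caller.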

And this is the file listener that moves the file:

from("file://<path to temp folder>?delete=true&include=<filename>.*.TXT&doneFileName=done")
    .to("file://<final filename with path>?fileExist=Append");

Doing it like this gives me this error:

     Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot store file: <folder-path>/filename.TXT] org.apache.camel.component.file.GenericFileOperationFailedException: Cannot store file: <folder-path>/filename.TXT
    at org.apache.camel.component.file.FileOperations.storeFile(FileOperations.java:292)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.file.GenericFileProducer.writeFile(GenericFileProducer.java:277)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.file.GenericFileProducer.processExchange(GenericFileProducer.java:165)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.file.GenericFileProducer.process(GenericFileProducer.java:79)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:121)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:83)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.seda.SedaConsumer.sendToConsumers(SedaConsumer.java:298)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.seda.SedaConsumer.doRun(SedaConsumer.java:207)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.seda.SedaConsumer.run(SedaConsumer.java:154)[209:org.apache.camel.camel-core:2.16.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)[:1.8.0_144]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)[:1.8.0_144]
    at java.lang.Thread.run(Thread.java:748)[:1.8.0_144]
Caused by: org.apache.camel.InvalidPayloadException: No body available of type: java.io.InputStream but has value: Total number of records discovered: 5

What am I doing wrong? Any input would be helpful.

PS: I am new to Apache Camel.

1 Answer:

Answer 0 (score: 1)

I guess the error comes from .toD("file://<to the temp file>") trying to write a file but finding the wrong body type (the String Total number of records discovered: 5 instead of an InputStream).

I do not understand why you have one file destination inside the splitter and another one outside of it.

As @claus-ibsen suggested, try to remove the extra .aggregate(...) in your route. To split and re-aggregate it is sufficient to reference an aggregation strategy in the splitter. Claus also pointed to an example in the Camel docs:

from(<ROUTE_FETCH_RECORDS_AND_WRITE>)
    .setHeader(Exchange.FILE_PATH, constant("<path to temp folder>"))
    .setHeader(Exchange.FILE_NAME, constant("<filename>.txt"))
    .setBody(constant("<sql to fetch records>&outputType=StreamList"))
    .to("jdbc:<endpoint>")
    .split(body(), <aggregationStrategy>)
        .streaming().parallelProcessing()
        // the processors below get individual parts 
        .<some processors>
    .end()
    // The end statement above ends split-and-aggregate. From here 
    // you get the re-aggregated result of the splitter.
    // So you can simply write it to a file and also write the done-file
    .to(...);
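Concretely, that final `.to(...)` could be where the done file from the question gets written; a sketch, reusing the temp-folder placeholder from the question (`${file:name}.done` names the marker after the data file):

```java
// After .end() the splitter has re-aggregated, so the body is the full result.
// One file endpoint can then write the data file and, once the write has
// completed, its companion done file:
.to("file://<path to temp folder>?doneFileName=${file:name}.done")
```

The file consumer route can then match that marker with a corresponding doneFileName option instead of the static doneFileName=done.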

However, if you need to control the aggregation size, you have to combine a splitter and an aggregator. That looks roughly like this:

from(<ROUTE_FETCH_RECORDS_AND_WRITE>)
    .setHeader(Exchange.FILE_PATH, constant("<path to temp folder>"))
    .setHeader(Exchange.FILE_NAME, constant("<filename>.txt"))
    .setBody(constant("<sql to fetch records>&outputType=StreamList"))
    .to("jdbc:<endpoint>")
    // No aggregationStrategy here so it is a standard splitter
    .split(body())
        .streaming().parallelProcessing()
        // the processors below get individual parts 
        .<some processors>
    .end()
    // The end statement above ends split. From here 
    // you still got individual records from the splitter.
    .to("seda:aggregate");

// new route to do the controlled aggregation
from("seda:aggregate")
    // constant(true) is the correlation predicate => collect all messages in 1 aggregation
    .aggregate(constant(true), new YourAggregationStrategy())
        .completionSize(500)
    // not sure if this 'end' is needed
    .end()
    // write files with 500 aggregated records here
    .to("...");
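A minimal plain-Java sketch of what YourAggregationStrategy plus completionSize amount to, without the Camel types (the batch size and the string bodies are illustrative only; in the route above the batch size would be 500 and flushing would be the file write):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java illustration of aggregate + completionSize: record bodies are
// appended into one batch, and each time the batch reaches the completion
// size it is flushed as one aggregated message.
public class BatchAggregator {

    public static List<String> batch(List<String> records, int completionSize) {
        List<String> flushed = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        int count = 0;
        for (String record : records) {
            // corresponds to YourAggregationStrategy appending the new body
            if (count > 0) {
                current.append('\n');
            }
            current.append(record);
            count++;
            if (count == completionSize) {  // completionSize(...) fires here
                flushed.add(current.toString());
                current.setLength(0);
                count = 0;
            }
        }
        if (count > 0) {                    // leftover partial batch
            flushed.add(current.toString());
        }
        return flushed;
    }
}
```

Note that in Camel the leftover partial batch is only flushed if you also configure a completion timeout or interval; otherwise records that never fill a full batch stay pending.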