Need some help! Can someone point me in the right direction?
Below is a snippet of my code and logs.
DataStream<ObjectNode> stream = env.addSource(KafkaConsumer.getKafkaConsumer());
DataStream<MyDataObject> dataStream = stream.flatMap(new DataTransformation());
I am using a FlatMapFunction to process my input object and emit multiple objects from it.
Here is the stack trace:
java.lang.RuntimeException: Buffer pool is destroyed.
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:75) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:39) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:797) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:775) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
at com.data.transformation.DataTransformation.flatMap(DataTransformation.java:68) [eventproducer.jar:na]
at com.data.transformation.DataTransformation.flatMap(DataTransformation.java:23) [eventproducer.jar:na]
at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:47) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:422) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:407) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:797) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:775) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.collectWithTimestamp(StreamSourceContexts.java:272) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:261) [flink-connector-kafka-base_2.10-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:88) [flink-connector-kafka-0.10_2.10-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:157) [flink-connector-kafka-0.9_2.10-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:255) [flink-connector-kafka-base_2.10-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:55) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272) [flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) [flink-dist_2.11-1.2.0.jar:1.2.0]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
Caused by: java.lang.IllegalStateException: Buffer pool is destroyed.
at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBuffer(LocalBufferPool.java:149) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBlocking(LocalBufferPool.java:138) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:131) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:88) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.runtime.io.StreamRecordWriter.emit(StreamRecordWriter.java:86) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:72) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
... 22 common frames omitted
Edit: Just to add more information: I emit the records with collect(), and all of them are then passed to the next operator, which handles the database inserts. That is where I use Flink's Cassandra Sink Connector.
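For context, a flatMap that emits several records per input typically looks like the sketch below. This is a hypothetical reconstruction, not the actual DataTransformation from the question: the real input/output types (ObjectNode, MyDataObject) are replaced by strings, and a tiny stand-in Collector interface is declared so the sketch compiles without Flink on the classpath.

```java
// Minimal stand-in for Flink's org.apache.flink.util.Collector, so this
// sketch runs without Flink as a dependency.
interface Collector<T> {
    void collect(T record);
}

// Hypothetical reconstruction: one input record fans out into several
// output records, each emitted through collect() -- the call that appears
// at DataTransformation.java:68 in the stack trace above.
class DataTransformation {
    public void flatMap(String value, Collector<String> out) {
        for (String part : value.split(",")) {
            out.collect(part.trim());
        }
    }
}
```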
Answer 0 (score: 0)
This might help. Possibly the IO operation is taking too long: https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/asyncio.html
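The core idea behind the linked Async I/O docs is to stop blocking the operator thread on each slow database write. Independent of Flink, the pattern can be sketched with plain CompletableFuture; slowInsert here is a hypothetical stand-in for the Cassandra write, not real connector code:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

class AsyncInsertSketch {
    // Hypothetical stand-in for a slow database write (e.g. a Cassandra insert).
    static String slowInsert(String record) {
        try {
            Thread.sleep(50); // simulate I/O latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "inserted:" + record;
    }

    // Issue all writes concurrently instead of waiting on each one in turn;
    // Flink's async I/O support does the equivalent at the operator level.
    static List<String> insertAll(List<String> records) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<CompletableFuture<String>> futures = records.stream()
                .map(r -> CompletableFuture.supplyAsync(() -> slowInsert(r), pool))
                .collect(Collectors.toList());
        List<String> results = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
        pool.shutdown();
        return results;
    }
}
```

If I read the linked docs correctly, in Flink itself this corresponds to implementing an AsyncFunction and wrapping the stream with AsyncDataStream.unorderedWait(...) instead of doing the blocking insert inside a plain operator.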