Flink: adding rebalance to the stream causes the job to fail when the StreamExecutionEnvironment is set with TimeCharacteristic.IngestionTime

Date: 2020-05-26 20:02:16

Tags: apache-flink flink-streaming

I am trying to run a streaming job that consumes messages from Kafka, transforms them, and sinks them into Cassandra.

The following snippet fails:

val env: StreamExecutionEnvironment = getExecutionEnv("dev")
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime)
.
.
.
.

val source = env.addSource(kafkaConsumer)
  .uid("kafkaSource")
  .rebalance

val transformedObjects = source.process(new EnrichEventWithIngestionTimestamp)
  .setParallelism(dataSinkParallelism)
sinker.apply(transformedObjects, dataSinkParallelism)


class EnrichEventWithIngestionTimestamp extends ProcessFunction[RawData, TransforemedObjects] {
  override def processElement(rawData: RawData,
                              context: ProcessFunction[RawData, TransforemedObjects]#Context,
                              collector: Collector[TransforemedObjects]): Unit = {
    // Stamp each record with the current processing time as its ingestion timestamp.
    val currentTimestamp = context.timerService().currentProcessingTime()
    context.timerService().registerProcessingTimeTimer(currentTimestamp)
    collector.collect(TransforemedObjects.fromRawData(rawData, currentTimestamp))
  }
}

However, if I either comment out the rebalance, or change the job to use TimeCharacteristic.EventTime with watermark assignment (as in the following snippet), it works:

val env: StreamExecutionEnvironment = getExecutionEnv("dev")
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
.
.

val source = env.addSource(kafkaConsumer)
  .uid("kafkaSource")
  .rebalance
  .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessRawDataTimestampExtractor[RawData](Time.seconds(1)))

val transformedObjects = source.map(rawData => TransforemedObjects.fromRawData(rawData))
  .setParallelism(dataSinkParallelism)
sinker.apply(transformedObjects, dataSinkParallelism)
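
For context, BoundedOutOfOrdernessRawDataTimestampExtractor is a custom class not shown here. A minimal sketch of such an extractor, assuming a hypothetical eventTimestamp field (epoch milliseconds) on RawData:

import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.windowing.time.Time

class BoundedOutOfOrdernessRawDataTimestampExtractor[T <: RawData](maxOutOfOrderness: Time)
    extends BoundedOutOfOrdernessTimestampExtractor[T](maxOutOfOrderness) {
  // Hypothetical accessor: assumes RawData exposes its event time as epoch millis.
  override def extractTimestamp(element: T): Long = element.eventTimestamp
}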

The stack trace is:

java.lang.Exception: java.lang.RuntimeException: 1
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.checkThrowSourceExecutionException(SourceStreamTask.java:217)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.processInput(SourceStreamTask.java:133)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: 1
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:110)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
    at org.apache.flink.streaming.api.collector.selector.DirectedOutput.collect(DirectedOutput.java:143)
    at org.apache.flink.streaming.api.collector.selector.DirectedOutput.collect(DirectedOutput.java:45)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$AutomaticWatermarkContext.processAndCollect(StreamSourceContexts.java:176)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$AutomaticWatermarkContext.processAndCollectWithTimestamp(StreamSourceContexts.java:194)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collectWithTimestamp(StreamSourceContexts.java:409)
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
    at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:91)
    at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:156)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:715)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:203)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.getBufferBuilder(RecordWriter.java:246)
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:169)
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:154)
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:120)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
    ... 16 more

Am I doing something wrong? Or is there a limitation on using the rebalance function when the TimeCharacteristic is set to IngestionTime?

Thanks in advance...

1 answer:

Answer 0 (score: 0)

Could you tell us which Flink version you are using?

Your problem seems to be related to this Jira ticket:

https://issues.apache.org/jira/browse/FLINK-14087

Are you using rebalance only once in your job? RecordWriters can share the same ChannelSelector, which decides which channel a record is forwarded to. Your stack trace shows that it is trying to select an out-of-bounds channel.
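
If the ticket matches your version, upgrading to a release that contains the fix would be the cleanest solution. Until then, one possible workaround (a minimal sketch, not verified against your setup) is to emulate ingestion time via event time, applying Flink's built-in IngestionTimeExtractor after the rebalance so the source itself no longer attaches automatic watermarks — consistent with your observation that the EventTime variant works:

import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.functions.IngestionTimeExtractor

env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

val source = env.addSource(kafkaConsumer)
  .uid("kafkaSource")
  .rebalance
  // IngestionTimeExtractor stamps each record with the current processing time
  // and emits matching ascending watermarks, approximating IngestionTime semantics.
  .assignTimestampsAndWatermarks(new IngestionTimeExtractor[RawData])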