Flink: How to preserve and restore ValueState

Date: 2018-09-28 07:46:43

Tags: scala apache-flink savepoints

I am using Flink to enrich an input stream

case class Input( key: String, message: String )

with pre-computed scores

case class Score( key: String, score: Int )

and to produce an output

case class Output( key: String, message: String, score: Int )

The input stream and the score stream are both read from Kafka topics, and the output stream is published back to Kafka:

val processed = inputStream.connect( scoreStream )
                           .flatMap( new ScoreEnrichmentFunction )
                           .addSink( producer )

using the following ScoreEnrichmentFunction:

import org.apache.flink.api.common.state.{ ValueState, ValueStateDescriptor }
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction
import org.apache.flink.util.Collector

class ScoreEnrichmentFunction extends RichCoFlatMapFunction[Input, Score, Output]
{
    // Keyed state holding the most recent score seen for the current key.
    val scoreStateDescriptor = new ValueStateDescriptor[Score]( "saved scores", classOf[Score] )
    lazy val scoreState: ValueState[Score] = getRuntimeContext.getState( scoreStateDescriptor )

    // Enrich each input with the stored score, or -1 if no score has been seen yet.
    override def flatMap1( input: Input, out: Collector[Output] ): Unit = 
    {
        Option( scoreState.value ) match {
            case None => out.collect( Output( input.key, input.message, -1 ) )
            case Some( score ) => out.collect( Output( input.key, input.message, score.score ) )  
        }
    }

    // Store the latest score for the current key.
    override def flatMap2( score: Score, out: Collector[Output] ): Unit = 
    {
        scoreState.update( score )
    } 
}
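
Not shown above are the Kafka sources and the keying of the connected streams; the keyed ValueState used in the function only works on a keyed stream. Below is a minimal sketch of that wiring, where inputSource and scoreSource are hypothetical stand-ins for the actual Kafka consumers:

import org.apache.flink.streaming.api.scala._

// Hypothetical sources; the real Kafka consumers are not shown in the question.
val inputStream = env.addSource( inputSource )
val scoreStream = env.addSource( scoreSource )

val processed = inputStream
    .connect( scoreStream )
    .keyBy( _.key, _.key )    // key both sides so the ValueState is scoped per key
    .flatMap( new ScoreEnrichmentFunction )

processed.addSink( producer )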

This works fine. However, if I take a savepoint and cancel the Flink job, the scores stored in the ValueState are lost when the job is restored from that savepoint.

As far as I understand, it seems that ScoreEnrichmentFunction needs to be extended with CheckpointedFunction:

class ScoreEnrichmentFunction extends RichCoFlatMapFunction[Input, Score, Output] with CheckpointedFunction

but I am having trouble understanding how to implement the snapshotState and initializeState methods so that they work with keyed state:

override def snapshotState( context: FunctionSnapshotContext ): Unit = ???


override def initializeState( context: FunctionInitializationContext ): Unit = ???
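
For reference, a minimal sketch of what these methods could look like for keyed state. It assumes scoreState is changed from a lazy val to a var so it can be assigned in initializeState; keyed state obtained from the keyed state store is snapshotted automatically, so snapshotState can stay empty:

// Assumes: var scoreState: ValueState[Score] = _  (instead of the lazy val above)

override def snapshotState( context: FunctionSnapshotContext ): Unit =
{
    // Keyed state registered with the keyed state store is included in
    // checkpoints and savepoints automatically; nothing to do here.
}

override def initializeState( context: FunctionInitializationContext ): Unit =
{
    // (Re-)register the ValueState when the function starts or is restored.
    scoreState = context.getKeyedStateStore.getState( scoreStateDescriptor )
}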

Note that I am using the following environment:

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism( 2 )
env.setBufferTimeout( 1 )
env.enableCheckpointing( 1000 )
env.getCheckpointConfig.enableExternalizedCheckpoints( ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION )
env.getCheckpointConfig.setCheckpointingMode( CheckpointingMode.EXACTLY_ONCE )
env.getCheckpointConfig.setMinPauseBetweenCheckpoints( 500 )
env.getCheckpointConfig.setCheckpointTimeout( 60000 )
env.getCheckpointConfig.setFailOnCheckpointingErrors( false )
env.getCheckpointConfig.setMaxConcurrentCheckpoints( 1 )

1 Answer:

Answer 0 (score: 0):

I think I found the problem. I was trying to use separate directories for checkpoints and savepoints, which meant the savepoint directory and the FsStateBackend directory were different.

Using the same directory

val backend = new FsStateBackend( "file:/data", true )
env.setStateBackend( backend )

and when taking the savepoint

bin/flink cancel d75f4712346cadb4df90ec06ef257636 -s file:/data

solved the problem.
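
To restore from that savepoint, the job can then be resubmitted with the -s (--fromSavepoint) option of bin/flink run; the savepoint path and jar name below are placeholders:

bin/flink run -s file:/data/savepoint-... path/to/enrichment-job.jar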