How to update broadcast state in Flink's KeyedBroadcastProcessFunction?

Time: 2020-06-08 13:15:40

Tags: apache-flink flink-streaming flink-cep flink-sql

I am new to Flink. I am using Apache Flink for pattern matching, where the list of patterns is held in broadcast state and processElement iterates over the patterns to find a match. Currently I read the patterns from a database as a one-time activity when the job starts. Below is my code.

The MapState descriptor and the side output stream are as follows:

public static final MapStateDescriptor<String, String> ruleDescriptor=
        new MapStateDescriptor<String, String>("RuleSet", BasicTypeInfo.STRING_TYPE_INFO,
                BasicTypeInfo.STRING_TYPE_INFO);

public final static OutputTag<Tuple2<String, String>> unMatchedSideOutput =
        new OutputTag<Tuple2<String, String>>(
                "unmatched-side-output") {
        };

The process function and the broadcast function are as follows:

@Override
public void processElement(Tuple2<String, String> inputValue, ReadOnlyContext ctx,
                           Collector<Tuple2<String, String>> out) throws Exception {

    boolean matched = false;
    for (Map.Entry<String, String> ruleSet :
            ctx.getBroadcastState(ruleDescriptor).immutableEntries()) {

        String ruleName = ruleSet.getKey();

        // If a rule in the rule set matches, send the output to the main stream and stop checking
        if (this.rule) {   // placeholder for the actual rule-matching check
            out.collect(new Tuple2<>(inputValue.f0, inputValue.f1));
            matched = true;
            break;
        }
    }

    // Write to the side output only if no rule matched
    if (!matched) {
        ctx.output(unMatchedSideOutput, new Tuple2<>("No Rule Detected", inputValue.f1));
    }
}

@Override
public void processBroadcastElement(Tuple2<String, String> ruleSetConditions, Context ctx,
                                    Collector<Tuple2<String, String>> out) throws Exception {
    ctx.getBroadcastState(ruleDescriptor).put(ruleSetConditions.f0, ruleSetConditions.f1);
}

The main method is as follows:

public static void main(String[] args) throws Exception {

    //Initiate a datastream environment
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    //Read the incoming data for the upstream
    DataStream<String> incomingSignal =
            env.readTextFile(....);

    //Read the patterns available in the configuration file
    DataStream<String> rawPatternStream =
            env.readTextFile(....);

    //Generate key/value pairs of patterns where the key is the pattern name and the value is the pattern condition
    DataStream<Tuple2<String, String>> ruleStream =
            rawPatternStream.flatMap(new FlatMapFunction<String, Tuple2<String, String>>() {
                @Override
                public void flatMap(String ruleCondition, Collector<Tuple2<String, String>> out) throws Exception {
                    String[] rules = ruleCondition.split(",");
                    out.collect(new Tuple2<>(rules[0], rules[1]));
                }
            });

    //Broadcast the patterns to all the Flink operators, where they are stored in broadcast (operator) state
    BroadcastStream<Tuple2<String, String>> ruleBroadcast = ruleStream.broadcast(ruleDescriptor);

    /* Create a keyed stream using sourceName as the key */
    DataStream<Tuple2<String, String>> matchSignal =
            incomingSignal.map(new MapFunction<String, Tuple2<String, String>>() {
                @Override
                public Tuple2<String, String> map(String incomingSignal) throws Exception {
                    String sourceName = incomingSignal.split(",")[0];
                    return new Tuple2<>(sourceName, incomingSignal);
                }
            }).keyBy(0).connect(ruleBroadcast).process(new KeyedBroadCastProcessFunction()); // the function shown above

    matchSignal.print("RuleDetected=>");

    env.execute();
}

I have a couple of questions:

1) Currently I read the rules from a database. How can I update the broadcast state while the Flink job is running in the cluster? For example, if I receive a new rule set from a Kafka topic, how do I update the broadcast state in the processBroadcastElement method of the KeyedBroadcastProcessFunction?

2) After the broadcast state is updated, do I need to restart the Flink job?

Please help me with the questions above.

1 Answer:

Answer 0 (score: 0)

The only way to set or update broadcast state is in the processBroadcastElement method of a BroadcastProcessFunction or KeyedBroadcastProcessFunction. All you need to do is adapt your application so that the rules arrive as a stream, rather than being read once from a file.
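For example, the rule stream in the main method could come from Kafka instead of readTextFile. Below is a minimal sketch, assuming a topic named rules carrying ruleName,ruleCondition lines and a broker at localhost:9092 (topic name, broker address and record format are placeholders, not part of the question):

// Needs flink-connector-kafka on the classpath (FlinkKafkaConsumer, SimpleStringSchema).
Properties kafkaProps = new Properties();
kafkaProps.setProperty("bootstrap.servers", "localhost:9092");
kafkaProps.setProperty("group.id", "rule-reader");

DataStream<String> rawRuleStream = env.addSource(
        new FlinkKafkaConsumer<>("rules", new SimpleStringSchema(), kafkaProps));

// Same parsing as before: ruleName,ruleCondition
DataStream<Tuple2<String, String>> ruleStream = rawRuleStream
        .map(line -> {
            String[] parts = line.split(",", 2);
            return new Tuple2<>(parts[0], parts[1]);
        })
        .returns(Types.TUPLE(Types.STRING, Types.STRING));

// Broadcasting is unchanged; every new Kafka record is delivered to
// processBroadcastElement, which puts it into the broadcast state while the job is running.
BroadcastStream<Tuple2<String, String>> ruleBroadcast = ruleStream.broadcast(ruleDescriptor);

No restart is needed: each record arriving on the rule topic updates the broadcast state on every parallel instance of the running job.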

Broadcast state is a hash map. If your broadcast stream includes a new key/value pair that uses the same key as an earlier broadcast event, the new value replaces the old one. Otherwise you get a brand new entry.
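That means the existing processBroadcastElement already handles updates: putting an existing rule name simply overwrites its condition. If rules also need to be removed, one possible convention (an assumption, not something from the question or the answer) is to treat an empty condition as a deletion marker:

@Override
public void processBroadcastElement(Tuple2<String, String> ruleSetConditions, Context ctx,
                                    Collector<Tuple2<String, String>> out) throws Exception {
    BroadcastState<String, String> rules = ctx.getBroadcastState(ruleDescriptor);

    // Hypothetical convention: an empty condition means "drop this rule".
    if (ruleSetConditions.f1 == null || ruleSetConditions.f1.isEmpty()) {
        rules.remove(ruleSetConditions.f0);
    } else {
        // Same key as an earlier broadcast event -> the old value is overwritten.
        rules.put(ruleSetConditions.f0, ruleSetConditions.f1);
    }
}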

If you use readFile with FileProcessingMode.PROCESS_CONTINUOUSLY, then every time the file is modified its entire contents are re-ingested. You could use that mechanism to update your set of rules.
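A minimal sketch of that variant, assuming a hypothetical rule file path and a 10-second poll interval (both placeholders, not taken from the question):

// Re-read the rule file whenever it changes: with PROCESS_CONTINUOUSLY the
// complete file contents are re-ingested on every modification.
// Needs: org.apache.flink.api.java.io.TextInputFormat, org.apache.flink.core.fs.Path,
//        org.apache.flink.streaming.api.functions.source.FileProcessingMode
String rulePath = "/path/to/rules.csv";   // placeholder path
DataStream<String> rawPatternStream = env.readFile(
        new TextInputFormat(new Path(rulePath)),
        rulePath,
        FileProcessingMode.PROCESS_CONTINUOUSLY,
        10_000L);                          // poll the file every 10 seconds

Since the whole file is re-broadcast on every change, processBroadcastElement will see every rule again; that is harmless because put simply overwrites the existing entries.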
