I am trying to convert my application from Flink streaming to Flink batch.

For the Flink DataStream, I read strings from a predefined file containing multiple JSON objects, and flat-map each JSON object into a Tuple3 collector (first element: one field from the JSON object; second element: another field from the JSON object; third element: the actual JSON object data).
DataStream<Tuple3<String, Integer, ObjectNode>> transformedSource = source.flatMap(new FlatMapFunction<String, Tuple3<String, Integer, ObjectNode>>() {
    // Jackson's ObjectMapper is Serializable, so it can be a field of the function
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void flatMap(String value, Collector<Tuple3<String, Integer, ObjectNode>> out) throws Exception {
        ObjectNode record = mapper.readValue(value, ObjectNode.class);
        JsonNode customer = record.get("customer");
        JsonNode deviceId = record.get("id");
        // Skip records that are missing either key field
        if (customer != null && deviceId != null) {
            out.collect(Tuple3.of(customer.asText(), deviceId.asInt(), record));
        }
    }
});
Then I key the stream by the first and second tuple fields and apply a 5-second time window:
WindowedStream<Tuple3<String, Integer, ObjectNode>, Tuple, TimeWindow> combinedData = transformedSource
        .keyBy(0, 1)
        .timeWindow(Time.seconds(5));
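Downstream I consume the window with something like the following (simplified; the per-window count here just stands in for my real aggregation logic):

DataStream<Tuple3<String, Integer, Long>> counts = combinedData
        .apply(new WindowFunction<Tuple3<String, Integer, ObjectNode>, Tuple3<String, Integer, Long>, Tuple, TimeWindow>() {
            @Override
            public void apply(Tuple key, TimeWindow window,
                              Iterable<Tuple3<String, Integer, ObjectNode>> input,
                              Collector<Tuple3<String, Integer, Long>> out) {
                // Count the records for this (customer, deviceId) key in this window
                long count = 0;
                for (Tuple3<String, Integer, ObjectNode> ignored : input) {
                    count++;
                }
                String customer = key.getField(0);
                Integer deviceId = key.getField(1);
                out.collect(Tuple3.of(customer, deviceId, count));
            }
        });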
For Flink batch, how can I do the same keying on a DataSet? Is there an equivalent of keyBy in the DataSet API?
DataSet<String> source = env.readTextFile("file:///path/to/file");
DataSet<Tuple3<String, Integer, ObjectNode>> transformedSource = source.flatMap(new FlatMapFunction<String, Tuple3<String, Integer, ObjectNode>>() {
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void flatMap(String value, Collector<Tuple3<String, Integer, ObjectNode>> out) throws Exception {
        ObjectNode record = mapper.readValue(value, ObjectNode.class);
        JsonNode customer = record.get("customer");
        JsonNode deviceId = record.get("id");
        // Skip records that are missing either key field
        if (customer != null && deviceId != null) {
            out.collect(Tuple3.of(customer.asText(), deviceId.asInt(), record));
        }
    }
});
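The closest I have found in the DataSet API is groupBy. Is the sketch below the right way to replicate keyBy(0, 1) in batch, given that there is no time window in the DataSet API? (The per-group count is just a placeholder for my real logic.)

DataSet<Tuple3<String, Integer, Long>> grouped = transformedSource
        // groupBy on tuple positions appears to be the DataSet counterpart of keyBy(0, 1)
        .groupBy(0, 1)
        .reduceGroup(new GroupReduceFunction<Tuple3<String, Integer, ObjectNode>, Tuple3<String, Integer, Long>>() {
            @Override
            public void reduce(Iterable<Tuple3<String, Integer, ObjectNode>> values,
                               Collector<Tuple3<String, Integer, Long>> out) {
                String customer = null;
                Integer deviceId = null;
                long count = 0;
                for (Tuple3<String, Integer, ObjectNode> value : values) {
                    customer = value.f0;
                    deviceId = value.f1;
                    count++;
                }
                out.collect(Tuple3.of(customer, deviceId, count));
            }
        });

Since batch processes a bounded input, I assume each (customer, deviceId) group is handled as a whole rather than per window.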