Below is a Flink program (Java) that reads tweets from a file, extracts the hashtags, counts the occurrences of each hashtag, and finally writes the results to a file.
The program uses a sliding window of 20 seconds that slides by 5 seconds. In the sink, all output data is written to a file named outfile. This means that every 5 seconds a window fires and its data is appended to outfile.
My question:
I want the data of each window firing (every 5 seconds) to be written to a new file, instead of being appended to the same one. Where and how should this be done? Do I need a custom trigger, some configuration on the sink, or something else?
Code:
<!-- language: lang-java -->
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
env.getConfig().setAutoWatermarkInterval(100);
env.enableCheckpointing(5000,CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(5000);
String path = "C:\\Users\\eventTime";
// Reading data from files of folder eventTime.
DataStream<String> streamSource = env.readFile(new TextInputFormat(new Path(path)), path, FileProcessingMode.PROCESS_CONTINUOUSLY, 1000).uid("read-1");
//Extracting the hash tags of tweets
DataStream<Tuple3<String, Integer, Long>> mapStream = streamSource.map(new ExtractHashTagFunction());
//generating watermarks and extracting the timestamps from tweets
DataStream<Tuple3<String, Integer, Long>> withTimestampsAndWatermarks = mapStream.assignTimestampsAndWatermarks(new MyTimestampsAndWatermarks());
KeyedStream<Tuple3<String, Integer, Long>,Tuple> keyedStream = withTimestampsAndWatermarks.keyBy(0);
//Using sliding window of 20 seconds which slide by 5 seconds.
SingleOutputStreamOperator<Tuple4<String, Integer, Long, String>> aggregatedStream = keyedStream.window(SlidingEventTimeWindows.of(Time.seconds(20), Time.seconds(5)))
.aggregate(new AggregateHashTagCountFunction()).uid("agg-123");
aggregatedStream.writeAsText("C:\\Users\\outfile", WriteMode.NO_OVERWRITE).setParallelism(1).uid("write-1");
env.execute("twitter-analytics");
Answer 0 (score: 3):
If you are not satisfied with the built-in sinks, you can define a custom sink:
stream.addSink(new MyCustomSink ...)
MyCustomSink should implement SinkFunction.
Your custom sink would contain a FileWriter and, for example, a counter.
Every time the sink is invoked, it writes to "/path/to/file + counter.yourFileExtension" and increments the counter.
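A minimal sketch of that counter idea, written as plain Java so it runs without a Flink dependency; in an actual job this logic would sit inside the invoke() method of a class implementing SinkFunction. The class name RollingFileWriter, the "outfile-" prefix, and the .txt extension are illustrative assumptions, not anything from the original program.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.atomic.AtomicLong;

// Core of the custom-sink idea: every call writes to a fresh,
// counter-suffixed file instead of appending to one outfile.
public class RollingFileWriter {

    private final Path baseDir;
    private final AtomicLong counter = new AtomicLong();

    public RollingFileWriter(Path baseDir) {
        this.baseDir = baseDir;
    }

    // In a Flink SinkFunction this would be called from invoke(value, context).
    public Path write(String windowOutput) throws IOException {
        Path target = baseDir.resolve("outfile-" + counter.getAndIncrement() + ".txt");
        Files.writeString(target, windowOutput);
        return target;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("windows");
        RollingFileWriter writer = new RollingFileWriter(dir);
        // Simulate two window firings 5 seconds apart.
        Path first  = writer.write("#flink 3\n");
        Path second = writer.write("#flink 5\n#java 2\n");
        System.out.println(first.getFileName());   // outfile-0.txt
        System.out.println(second.getFileName());  // outfile-1.txt
    }
}
```

Note that if the sink runs with parallelism greater than 1, each parallel instance would need something like its subtask index in the file name as well, so that instances do not overwrite each other's files.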