Why does CEP print the first event only after the second event arrives, when using ProcessingTime?

Date: 2018-06-12 17:07:42

Tags: apache-flink flink-cep

I sent one event with isStart = true to Kafka and had Flink consume it, with TimeCharacteristic set to ProcessingTime and the pattern constrained by within(Time.seconds(5)). So I expected CEP to print the event 5 seconds after I sent it, but it didn't: the first event was only printed after I sent a second event to Kafka. Why does CEP print the first event only after two events have been sent? Shouldn't the first event be printed 5 seconds after I sent it, given that I'm using ProcessingTime?

Here is the code:

public class LongRidesWithKafka {
    private static final String LOCAL_ZOOKEEPER_HOST = "localhost:2181";
    private static final String LOCAL_KAFKA_BROKER = "localhost:9092";
    private static final String RIDE_SPEED_GROUP = "rideSpeedGroup";
    private static final int MAX_EVENT_DELAY = 60; // rides are at most 60 sec out-of-order.

    public static void main(String[] args) throws Exception {
        final int popThreshold = 1; // threshold for popular places
        // set up streaming execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
        Properties kafkaProps = new Properties();
        //kafkaProps.setProperty("zookeeper.connect", LOCAL_ZOOKEEPER_HOST);
        kafkaProps.setProperty("bootstrap.servers", LOCAL_KAFKA_BROKER);
        kafkaProps.setProperty("group.id", RIDE_SPEED_GROUP);
        // always read the Kafka topic from the start
        kafkaProps.setProperty("auto.offset.reset", "earliest");

        // create a Kafka consumer
        FlinkKafkaConsumer011<TaxiRide> consumer = new FlinkKafkaConsumer011<>(
                "flinktest",
                new TaxiRideSchema(),
                kafkaProps);
        // assign a timestamp extractor to the consumer
        //consumer.assignTimestampsAndWatermarks(new CustomWatermarkExtractor());
        DataStream<TaxiRide> rides = env.addSource(consumer);

        DataStream<TaxiRide> keyedRides = rides.keyBy("rideId");
        // A complete taxi ride has a START event followed by an END event
        Pattern<TaxiRide, TaxiRide> completedRides =
                Pattern.<TaxiRide>begin("start")
                        .where(new SimpleCondition<TaxiRide>() {
                            @Override
                            public boolean filter(TaxiRide ride) throws Exception {
                                return ride.isStart;
                            }
                        })
                        .next("end")
                        .where(new SimpleCondition<TaxiRide>() {
                            @Override
                            public boolean filter(TaxiRide ride) throws Exception {
                                return !ride.isStart;
                            }
                        });

        // We want to find rides that have NOT been completed within 5 seconds
        PatternStream<TaxiRide> patternStream = CEP.pattern(keyedRides, completedRides.within(Time.seconds(5)));

        OutputTag<TaxiRide> timedout = new OutputTag<TaxiRide>("timedout") {};
        SingleOutputStreamOperator<TaxiRide> longRides = patternStream.flatSelect(
                timedout,
                new LongRides.TaxiRideTimedOut<TaxiRide>(),
                new LongRides.FlatSelectNothing<TaxiRide>()
        );
        longRides.getSideOutput(timedout).print();
        env.execute("Long Taxi Rides");
    }

    public static class TaxiRideTimedOut<TaxiRide> implements PatternFlatTimeoutFunction<TaxiRide, TaxiRide> {
        @Override
        public void timeout(Map<String, List<TaxiRide>> map, long l, Collector<TaxiRide> collector) throws Exception {
            TaxiRide rideStarted = map.get("start").get(0);
            collector.collect(rideStarted);
        }
    }

    public static class FlatSelectNothing<T> implements PatternFlatSelectFunction<T, T> {
        @Override
        public void flatSelect(Map<String, List<T>> pattern, Collector<T> collector) {
        }
    }

    private static class TaxiRideTSExtractor extends AscendingTimestampExtractor<TaxiRide> {
        private static final long serialVersionUID = 1L;

        @Override
        public long extractAscendingTimestamp(TaxiRide ride) {
            //  Watermark Watermark = getCurrentWatermark();
            if (ride.isStart) {
                return ride.startTime.getMillis();
            } else {
                return ride.endTime.getMillis();
            }
        }
    }

    private static class CustomWatermarkExtractor implements AssignerWithPeriodicWatermarks<TaxiRide> {

        private static final long serialVersionUID = -742759155861320823L;

        private long currentTimestamp = Long.MIN_VALUE;

        @Override
        public long extractTimestamp(TaxiRide ride, long previousElementTimestamp) {
            // the inputs are assumed to be of format (message,timestamp)
            if (ride.isStart) {
                this.currentTimestamp = ride.startTime.getMillis();
                return ride.startTime.getMillis();
            } else {
                this.currentTimestamp = ride.endTime.getMillis();
                return ride.endTime.getMillis();
            }
        }

        @Nullable
        @Override
        public Watermark getCurrentWatermark() {
            return new Watermark(currentTimestamp == Long.MIN_VALUE ? Long.MIN_VALUE : currentTimestamp - 1);
        }
    }
}

1 Answer:

Answer 0 (score: 2):

The reason is that Flink's CEP library currently only checks the timestamps when another element arrives and is processed. The underlying assumption is that you have a steady flow of events.
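This behavior can be illustrated with a toy model in plain Java (this is a hypothetical sketch, not Flink's actual CEP implementation): the pending pattern's deadline is only checked inside the element handler, so even long after the 5-second window has expired, nothing is emitted until the next element arrives.

```java
// Toy model of the event-driven timeout check: the deadline is evaluated
// only when a *later* element arrives, never by a standalone timer.
public class EventDrivenTimeout {
    private static final long TIMEOUT_MILLIS = 5000;
    private Long pendingStartDeadline = null; // deadline of an unmatched start, if any

    // Called per arriving element; returns true if a pending start timed out.
    public boolean onElement(boolean isStart, long nowMillis) {
        boolean timedOut = pendingStartDeadline != null && nowMillis >= pendingStartDeadline;
        if (timedOut) {
            pendingStartDeadline = null; // the timed-out pattern is discarded
        }
        if (isStart) {
            pendingStartDeadline = nowMillis + TIMEOUT_MILLIS;
        }
        return timedOut;
    }

    public static void main(String[] args) {
        EventDrivenTimeout cep = new EventDrivenTimeout();
        System.out.println(cep.onElement(true, 0));     // first start: prints false
        // Nothing happens between t=0 and t=10000, even though the
        // deadline passed at t=5000 - until a second element arrives:
        System.out.println(cep.onElement(true, 10000)); // prints true
    }
}
```

The second call reports the timeout only because a new element triggered the check, mirroring what you observed with the second Kafka event.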

I think this is a limitation of Flink's CEP library. To work properly, Flink should register a processing-time timer at arrivalTime + timeout that triggers the timeout of the pattern if no further events arrive.
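The suggested fix can be sketched in plain Java (again a hypothetical illustration, not Flink code): each start event registers a deadline, and a clock-driven timer callback, independent of further event arrivals, fires the timeouts.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of timer-driven timeout handling: deadlines are stored per key
// and fired by a periodic clock tick, not by the next arriving element.
public class TimeoutTracker {
    private final long timeoutMillis;
    private final Map<Long, Long> pendingStarts = new HashMap<>(); // rideId -> deadline

    public TimeoutTracker(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // A START event registers a deadline of arrivalTime + timeout.
    public void onStart(long rideId, long nowMillis) {
        pendingStarts.put(rideId, nowMillis + timeoutMillis);
    }

    // A matching END event completes the pattern and cancels the deadline.
    public void onEnd(long rideId) {
        pendingStarts.remove(rideId);
    }

    // Invoked by a timer: returns (and clears) all rides whose deadline passed.
    public List<Long> fireTimers(long nowMillis) {
        List<Long> timedOut = new ArrayList<>();
        pendingStarts.entrySet().removeIf(entry -> {
            if (entry.getValue() <= nowMillis) {
                timedOut.add(entry.getKey());
                return true;
            }
            return false;
        });
        return timedOut;
    }

    public static void main(String[] args) {
        TimeoutTracker tracker = new TimeoutTracker(5000);
        tracker.onStart(42L, 0);                      // start event at t=0
        System.out.println(tracker.fireTimers(3000)); // prints [] - not yet expired
        System.out.println(tracker.fireTimers(6000)); // prints [42] - no end within 5s
    }
}
```

With this scheme the timeout for ride 42 fires at the 6-second tick even though no second event ever arrived. In a real Flink job, the equivalent would be registering processing-time timers (for example via a keyed process function's timer service) rather than the manual tick shown here.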