Why does Apache Flink drop events from the data stream?

Asked: 2018-03-14 13:06:49

Tags: java junit apache-flink flink-streaming

In the unit test below, the number of events given by numberOfElements is generated and provided as a data stream. The test case fails randomly at the following line:


assertEquals(numberOfElements, CollectSink.values.size());

Can someone explain why Apache Flink is skipping events?

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.junit.Before;
import org.junit.Test;

import java.util.ArrayList;
import java.util.List;

import static java.lang.Thread.sleep;
import static org.junit.Assert.assertEquals;

public class FlinkTest {

StreamExecutionEnvironment env;

@Before
public void setup() {
    env = StreamExecutionEnvironment.createLocalEnvironment();
}

@Test
public void testStream1() throws Exception {
    testStream();
}

@Test
public void testStream2() throws Exception {
    testStream();
}

@Test
public void testStream3() throws Exception {
    testStream();
}

@Test
public void testStream4() throws Exception {
    testStream();
}


@Test
public void testStream() throws Exception {

    final int numberOfElements = 50;

    DataStream<Tuple2<String, Integer>> tupleStream = env.fromCollection(getCollectionOfBucketImps(numberOfElements));
    CollectSink.values.clear();
    tupleStream.addSink(new CollectSink());
    env.execute();
    sleep(2000);

    assertEquals(numberOfElements, getCollectionOfBucketImps(numberOfElements).size());
    assertEquals(numberOfElements, CollectSink.values.size());
}


public static List<Tuple2<String, Integer>> getCollectionOfBucketImps(int numberOfElements) throws InterruptedException {
    List<Tuple2<String, Integer>> records = new ArrayList<>();
    for (int i = 0; i < numberOfElements; i++) {
        records.add(new Tuple2<>(Integer.toString(i % 10), i));
    }
    return records;
}

// create a testing sink
private static class CollectSink implements SinkFunction<Tuple2<String, Integer>> {

    public static final List<Tuple2<String, Integer>> values = new ArrayList<>();

    @Override
    public synchronized void invoke(Tuple2<String, Integer> value, Context context) throws Exception {
        values.add(value);
    }
 }
}

For example, any one of the testStreamX cases fails randomly.

Context: the code runs with a parallelism setting of 8, because the CPU it runs on has 8 cores.
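For reference, a minimal sketch (the class name ParallelismCheck is made up for illustration) of how the local environment picks up its default parallelism from the available cores, and how it could be pinned explicitly instead:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismCheck {
    public static void main(String[] args) {
        // By default a local environment's parallelism equals the number of available cores (8 on this machine).
        StreamExecutionEnvironment defaultEnv = StreamExecutionEnvironment.createLocalEnvironment();

        // Pinning the parallelism makes the test behaviour independent of the machine it runs on.
        StreamExecutionEnvironment pinnedEnv = StreamExecutionEnvironment.createLocalEnvironment(8);

        System.out.println("available cores:     " + Runtime.getRuntime().availableProcessors());
        System.out.println("default parallelism: " + defaultEnv.getParallelism());
        System.out.println("pinned parallelism:  " + pinnedEnv.getParallelism());
    }
}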

2 answers:

Answer 0 (score: 2):

I don't know the parallelism of your job (I suppose it is the maximum Flink can assign). It looks like there is a race condition when values are added in your sink.

Solution:

I have run your example code setting the environment parallelism to 1, and everything works fine. The documentation example about testing uses this solution: link to documentation

@Before
public void setup() {
    env = StreamExecutionEnvironment.createLocalEnvironment();
    env.setParallelism(1);
}

Better:

You can set the parallelism to 1 only on the sink operator and keep the parallelism of the rest of the pipeline. In the following example I added an extra map function with a forced parallelism of 8 for the map operator (note that this snippet also needs the import org.apache.flink.api.common.functions.MapFunction).

public void testStream() throws Exception {

    final int numberOfElements = 50;

    DataStream<Tuple2<String, Integer>> tupleStream = env.fromCollection(getCollectionOfBucketImps(numberOfElements));
    CollectSink.values.clear();
    tupleStream
            .map(new MapFunction<Tuple2<String,Integer>, Tuple2<String,Integer>>() {
                @Override
                public Tuple2<String,Integer> map(Tuple2<String, Integer> stringIntegerTuple2) throws Exception {

                    stringIntegerTuple2.f0 += "- concat something";

                    return stringIntegerTuple2;
                }
            }).setParallelism(8)
            .addSink(new CollectSink()).setParallelism(1);
    env.execute();
    sleep(2000);

    assertEquals(numberOfElements, getCollectionOfBucketImps(numberOfElements).size());
    assertEquals(numberOfElements, CollectSink.values.size());
}

Answer 1 (score: 0):

When the parallelism of the environment is greater than 1, multiple instances of CollectSink exist, which can lead to a race condition.

These are solutions to avoid the race condition:

  1. Synchronize on the class object
private static class CollectSink implements SinkFunction<Tuple2<String, Integer>> {

    public static final List<Tuple2<String, Integer>> values = new ArrayList<>();

    @Override
    public void invoke(Tuple2<String, Integer> value, Context context) throws Exception {
        synchronized(CollectSink.class) {
            values.add(value);
        }
    }
 }
  2. Collections.synchronizedList()
import java.util.Collections;
private static class CollectSink implements SinkFunction<Tuple2<String, Integer>> {

    public static final List<Tuple2<String, Integer>> values = Collections.synchronizedList(new ArrayList<>());

    @Override
    public void invoke(Tuple2<String, Integer> value, Context context) throws Exception {
        values.add(value);
    }
 }
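Not part of the original answers, but a third variant in the same spirit, sketched here as an assumption: instead of synchronizing by hand, the shared collection itself can be a thread-safe type such as java.util.concurrent.ConcurrentLinkedQueue (the class name ConcurrentCollectSink is made up for illustration):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

private static class ConcurrentCollectSink implements SinkFunction<Tuple2<String, Integer>> {

    // A concurrent queue tolerates parallel invoke() calls from multiple sink instances.
    public static final Queue<Tuple2<String, Integer>> values = new ConcurrentLinkedQueue<>();

    @Override
    public void invoke(Tuple2<String, Integer> value, Context context) throws Exception {
        values.add(value);
    }
}

The test can then assert on values.size() exactly as before, regardless of the environment parallelism.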