Apache Flink: performance problems when running many jobs

Date: 2018-04-13 00:43:20

Tags: apache-flink flink-streaming flink-sql

For a large number of Flink SQL queries (100 copies of the query below), the Flink command-line client fails on a YARN cluster with "JobManager did not respond within 600000 ms", i.e. the job never starts on the cluster.

  • The JobManager log contains nothing after the last TaskManager starts, except for DEBUG entries of "job with ID 5cd95f89ed7a66ec44f2d19eca0592f7 found in the JobManager", suggesting that it is probably stuck (building the ExecutionGraph?).
  • The same happens locally with a standalone Java program (high CPU at first).
  • Note: each row in structStream contains 515 columns (many of which end up null), including a column carrying the raw message.
  • In the YARN cluster we allocate 18 GB to each TaskManager and 18 GB to the JobManager, with 5 slots each and a parallelism of 725 (the number of partitions in our Kafka source).
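The numbers above help explain the hang: with 100 copies of the query, each compiling to several operators at parallelism 725, the JobManager must build an ExecutionGraph with hundreds of thousands of tasks before anything can start. A rough back-of-the-envelope estimate (the operator count per query after chaining is an assumption, not taken from the job):

```java
public class TaskCountEstimate {
    // total tasks = number of queries * operators per query * parallelism
    static long estimate(int queries, int operatorsPerQuery, int parallelism) {
        return (long) queries * operatorsPerQuery * parallelism;
    }

    public static void main(String[] args) {
        // 100 queries (the loop in main), ~4 operators per query after
        // operator chaining (assumption), parallelism 725 (env.setParallelism(725))
        System.out.println(estimate(100, 4, 725)); // 290000 tasks
    }
}
```

Even if the per-query operator count is off by a factor of two, the JobManager is still asked to schedule an ExecutionGraph with hundreds of thousands of subtasks, which is consistent with the observed high CPU and the client timing out.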

The Flink SQL query:

select count (*), 'idnumber' as criteria, Environment, CollectedTimestamp, 
       EventTimestamp, RawMsg, Source 
from structStream
where Environment='MyEnvironment' and Rule='MyRule' and LogType='MyLogType' 
      and Outcome='Success'
group by tumble(proctime, INTERVAL '1' SECOND), Environment, 
         CollectedTimestamp, EventTimestamp, RawMsg, Source

Code:

public static void main(String[] args) throws Exception {
    FileSystems.newFileSystem(KafkaReadingStreamingJob.class
                             .getResource(WHITELIST_CSV).toURI(), new HashMap<>());

    final StreamExecutionEnvironment streamingEnvironment = getStreamExecutionEnvironment();
    final StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(streamingEnvironment);

    final DataStream<Row> structStream = getKafkaStreamOfRows(streamingEnvironment);
    tableEnv.registerDataStream("structStream", structStream);
    tableEnv.scan("structStream").printSchema();

    for (int i = 0; i < 100; i++) {
        for (String query : Queries.sample) {
            // Queries.sample contains the single query shown above.
            Table selectQuery = tableEnv.sqlQuery(query);

            DataStream<Row> selectQueryStream = tableEnv.toAppendStream(selectQuery, Row.class);
            selectQueryStream.print();
        }
    }

    // execute program
    streamingEnvironment.execute("Kafka Streaming SQL");
}

private static DataStream<Row> getKafkaStreamOfRows(StreamExecutionEnvironment environment) throws Exception {
    Properties properties = getKafkaProperties();

    // TestDeserializer deserializes the JSON to a ROW of string columns (515)
    // and also adds a column for the raw message. 
    FlinkKafkaConsumer011 consumer = new FlinkKafkaConsumer011(
            KAFKA_TOPIC_TO_CONSUME, new TestDeserializer(getRowTypeInfo()), properties);
    DataStream<Row> stream = environment.addSource(consumer);

    return stream;
}

private static RowTypeInfo getRowTypeInfo() throws Exception {
    // This has 515 fields. 
    List<String> fieldNames = DDIManager.getDDIFieldNames();
    fieldNames.add("rawkafka"); // rawMessage added by TestDeserializer
    fieldNames.add("proctime");

    // Fill typeInformationArray with Types.STRING for every field except the
    // last one, which carries the processing-time column. (Sketch of the code
    // elided in the original; assumes org.apache.flink.api.common.typeinfo.*
    // and java.util.Arrays are imported.)
    String[] fieldNamesArray = fieldNames.toArray(new String[0]);
    TypeInformation<?>[] typeInformationArray = new TypeInformation<?>[fieldNamesArray.length];
    Arrays.fill(typeInformationArray, Types.STRING);
    typeInformationArray[typeInformationArray.length - 1] = Types.SQL_TIMESTAMP;
    return new RowTypeInfo(typeInformationArray, fieldNamesArray);
}

private static StreamExecutionEnvironment getStreamExecutionEnvironment() throws IOException {
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);

    env.enableCheckpointing(60000);
    env.setStateBackend(new FsStateBackend(CHECKPOINT_DIR));
    env.setParallelism(725);
    return env;
}


1 Answer:

Answer 0 (score: 0)

This looks to me as if the JobManager is overloaded by too many concurrently running jobs. I'd suggest distributing these jobs across more JobManagers / Flink clusters.
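One way to act on this advice within a single program is to split the 100 queries into smaller batches and submit each batch as a separate job, since each `execute()` call on an execution environment is one job submission. A minimal, Flink-free sketch of just the batching logic (the batch size of 10 is an assumption to tune, and `QueryBatcher` is a hypothetical helper name):

```java
import java.util.ArrayList;
import java.util.List;

public class QueryBatcher {
    // Split a list of SQL query strings into batches of at most batchSize,
    // so that each batch can be registered and submitted as its own job.
    static List<List<String>> partition(List<String> queries, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < queries.size(); i += batchSize) {
            batches.add(queries.subList(i, Math.min(i + batchSize, queries.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> queries = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            queries.add("query-" + i);
        }
        // 100 queries in batches of 10 -> 10 jobs instead of 1 huge job
        System.out.println(partition(queries, 10).size()); // 10
    }
}
```

The loop in the question's main() would then iterate over `partition(...)`, build the tables for one batch, and call execute() once per batch, so the JobManager only ever has to construct a moderately sized ExecutionGraph at a time.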