I am trying to parallelize my Kafka source in my Flink job, but so far I have not succeeded.
I set up 4 partitions for the topic my Kafka producer writes to:
$ ./bin/kafka-topics.sh --describe --zookeeper X.X.X.X:2181 --topic mytopic
Topic:mytopic PartitionCount:4 ReplicationFactor:1 Configs:
Topic: mytopic Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: mytopic Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: mytopic Partition: 2 Leader: 0 Replicas: 0 Isr: 0
Topic: mytopic Partition: 3 Leader: 0 Replicas: 0 Isr: 0
My Scala code is as follows:
import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(4)
env.getConfig.setGlobalJobParameters(params)

// **** Kafka CONNECTION ****
val properties = new Properties()
properties.setProperty("bootstrap.servers", params.get("server"))
properties.setProperty("group.id", "test")

// **** Get KAFKA source ****
val stream: DataStream[String] = env.addSource(
  new FlinkKafkaConsumer010[String](params.get("topic"), new SimpleStringSchema(), properties))
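For reference, the parallelism could also be pinned on the source operator itself rather than relying on the environment-wide default; a minimal sketch using the same consumer:

val stream: DataStream[String] = env
  .addSource(new FlinkKafkaConsumer010[String](params.get("topic"), new SimpleStringSchema(), properties))
  .setParallelism(4) // one source subtask per Kafka partition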
I run the job on YARN:
$ ./bin/flink run -m yarn-cluster -yn 4 -yjm 8192 -ynm test -ys 1 -ytm 8192 myjar.jar --server X.X.X.X:9092 --topic mytopic
I have tried many things, but my source is not parallelized.
With several Kafka partitions, there should be at least that many slots/TaskManagers, right?
Answer 0 (score: 2)
I had the same problem. I suggest you check two things. First, look at how your producer keys its records: change
producer.send(new ProducerRecord<String, String>("topicName", "yourKey", "yourMessage"))
to
producer.send(new ProducerRecord<String, String>("topicName", null, "yourMessage"))
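With a fixed key, Kafka's default partitioner hashes that key to a single partition, so all records land in one partition and only one of the four Flink source subtasks ever receives data; with a null key, records are distributed across all partitions (round-robin in older clients, sticky batching in newer ones). A minimal Scala sketch of such a producer (the broker address and topic name are placeholders):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.setProperty("bootstrap.servers", "X.X.X.X:9092") // placeholder broker
props.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
// A null key lets the default partitioner spread records over all partitions
// instead of hashing one fixed key to the same partition every time.
producer.send(new ProducerRecord[String, String]("mytopic", null, "yourMessage"))
producer.close()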