My code:
// build input streams from Kafka topics
JavaInputDStream<ConsumerRecord<String, String>> stream1 = MyKafkaUtils.
buildInputStream(KafkaConfig.kafkaFlowGrouppId, topic1, ssc);
JavaInputDStream<ConsumerRecord<String, String>> stream2 = MyKafkaUtils.
buildInputStream(KafkaConfig.kafkaFlowGrouppId, topic2, ssc);
JavaInputDStream<ConsumerRecord<String, String>> stream3 = MyKafkaUtils.
buildInputStream(KafkaConfig.kafkaFlowGrouppId, topic3, ssc);
...
// aggregate the Kafka messages using Spark SQL
result1 = process(stream1);
result2 = process(stream2);
result3 = process(stream3);
...
// write the results back to Kafka
writeToKafka(result1);
writeToKafka(result2);
writeToKafka(result3);
// save offsets to ZooKeeper
saveOffset(stream1);
saveOffset(stream2);
saveOffset(stream3);
Spark web UI information: (screenshot omitted)