Spark Streaming not consuming Kafka messages

Date: 2018-01-16 17:32:11

Tags: java apache-spark apache-kafka spark-streaming

I want to use Spark (1.6.2) Streaming to receive messages from a topic in Kafka (brokers v0.10.2.1).

I am using the Receiver-based approach. The code is as follows:

public static void main(String[] args) throws Exception
{
    SparkConf sparkConf = new SparkConf().setAppName("SimpleStreamingApp");
    JavaStreamingContext javaStreamingContext = new JavaStreamingContext(sparkConf, new Duration(5000));
    // Map each topic name to the number of receiver threads for it
    Map<String, Integer> topicMap = new HashMap<>();
    topicMap.put("myTopic", 1);
    // ZooKeeper quorum; the same host list is reused for the broker settings below
    String zkQuorum = "host1:port1,host2:port2,host3:port3";
    // Kafka consumer configuration
    Map<String, String> kafkaParamsMap = new HashMap<>();
    kafkaParamsMap.put("bootstraps.server", zkQuorum);
    kafkaParamsMap.put("metadata.broker.list", zkQuorum);
    kafkaParamsMap.put("zookeeper.connect", zkQuorum);
    kafkaParamsMap.put("group.id", "group_name");
    kafkaParamsMap.put("security.protocol", "SASL_PLAINTEXT");
    kafkaParamsMap.put("security.mechanism", "GSSAPI");
    kafkaParamsMap.put("ssl.kerberos.service.name", "kafka");
    kafkaParamsMap.put("key.deserializer", "kafka.serializer.StringDecoder");
    kafkaParamsMap.put("value.deserializer", "kafka.serializer.DefaultDecoder");
    // Create the receiver-based input DStream
    JavaPairReceiverInputDStream<byte[], byte[]> stream = KafkaUtils.createStream(javaStreamingContext,
                            byte[].class, byte[].class,
                            DefaultDecoder.class, DefaultDecoder.class,
                            kafkaParamsMap,
                            topicMap,
                            StorageLevel.MEMORY_ONLY());

    VoidFunction<JavaPairRDD<byte[], byte[]>> voidFunc = new VoidFunction<JavaPairRDD<byte[], byte[]>> ()
    {
       public void call(JavaPairRDD<byte[], byte[]> rdd) throws Exception
       {
          List<Tuple2<byte[], byte[]>> all = rdd.collect();
          System.out.println("size of rdd: " + all.size());
       }
    };

    stream.forEach(voidFunc);

    javaStreamingContext.start();
    javaStreamingContext.awaitTermination();
}

Access to Kafka is kerberized. When I launch the application with

spark-submit --verbose --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=jaas.conf" --files jaas.conf,privKey.der --principal <accountName> --keytab <path to keytab file> --master yarn --jars <comma separated path to all jars> --class <fully qualified java main class> <path to jar file containing main class>

the following appears in the logs:

  1. Kafka's VerifiableProperties class logs warning messages for the properties included in the kafkaParams hashmap:

    VerifiableProperties: Property auto.offset.reset is overridden to largest
    VerifiableProperties: Property enable.auto.commit is not valid.
    VerifiableProperties: Property sasl.kerberos.service.name is not valid
    VerifiableProperties: Property key.deserializer is not valid
    ...
    VerifiableProperties: Property zookeeper.connect is overridden to ....

  2. INFO KafkaReceiver: connecting to zookeeper: <the correct zookeeper quorum provided in kafkaParams map>

    I suspect that because these properties are not accepted, they may be affecting the stream processing.

    **These warning messages do not appear when I launch in cluster mode with --master yarn**

    1. Later on, I see the following logs repeated every 5 seconds, as configured:

      INFO BlockRDD: Removing RDD 4 from persistence list

      INFO KafkaInputDStream: Removing blocks of RDD BlockRDD[4] at createStream at ...

      INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()

      INFO ... INFO BlockManager: Removing RDD 4

    2. However, I do not see any actual messages printed to the console.

      Question: why doesn't my code print any actual messages?

      My Gradle dependencies are:

      compile group: 'org.apache.spark', name: 'spark-core_2.10', version: '1.6.2'
      compile group: 'org.apache.spark', name: 'spark-streaming_2.10', version: '1.6.2'
      compile group: 'org.apache.spark', name: 'spark-streaming-kafka_2.10', version: '1.6.2'
      

2 Answers:

Answer 0 (score: 0)

stream is an instance of JavaPairReceiverInputDStream. Convert it to a DStream and use foreachRDD to print the messages consumed from Kafka.
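A minimal sketch of what this suggests, reusing the stream variable and byte-array payloads from the question (the printing logic is only illustrative; adapt the decoding to your actual message format):

stream.foreachRDD(new VoidFunction<JavaPairRDD<byte[], byte[]>>()
{
    @Override
    public void call(JavaPairRDD<byte[], byte[]> rdd) throws Exception
    {
        // Bring the micro-batch to the driver and print each record.
        for (Tuple2<byte[], byte[]> record : rdd.collect())
        {
            String key = record._1() == null ? "null" : new String(record._1());
            String value = record._2() == null ? "null" : new String(record._2());
            System.out.println(key + " -> " + value);
        }
    }
});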

Answer 1 (score: 0)

Spark 1.6.2 does not support Kafka 0.10; it only supports Kafka 0.8. For Kafka 0.10 you should use Spark 2.
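A minimal sketch of that migration using the spark-streaming-kafka-0-10 direct-stream integration (topic, hosts and group id are reused from the question; the Kerberos/SASL properties are left out here and would go into kafkaParams in the same way; class and variable names are placeholders):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public class SimpleStreamingApp
{
    public static void main(String[] args) throws Exception
    {
        SparkConf sparkConf = new SparkConf().setAppName("SimpleStreamingApp");
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(5000));

        // Consumer configuration: bootstrap.servers points at the Kafka brokers,
        // not at ZooKeeper, and the deserializers come from the Kafka client library.
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "host1:port1,host2:port2,host3:port3");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "group_name");
        kafkaParams.put("auto.offset.reset", "latest");

        // Direct stream: no receiver, one Spark partition per Kafka partition.
        JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(
                        Collections.singletonList("myTopic"), kafkaParams));

        // Convert records to strings on the executors, then print them on the driver.
        stream.foreachRDD(rdd ->
                rdd.map(record -> record.key() + " -> " + record.value())
                   .collect()
                   .forEach(System.out::println));

        jssc.start();
        jssc.awaitTermination();
    }
}

The matching Gradle dependencies would be something like spark-streaming_2.11 and spark-streaming-kafka-0-10_2.11 at a 2.x version, in place of the _2.10 1.6.2 artifacts listed in the question.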