Spark Session null pointer when processing JSON from Kafka

Date: 2016-11-09 12:08:42

Tags: apache-spark

I am trying to process JSON messages coming from Kafka. When I iterate over the RDDs in the stream and try to use the SparkSession to read the JSON string, I get a NullPointerException. Can anyone see what is wrong here:

    SparkSession spark = SparkSession
            .builder()
            .master("local[*]")
            .appName("ABC")
            .config("spark.some.config.option", "some-value")
            .getOrCreate();

    JavaStreamingContext jssc = new JavaStreamingContext(
            new StreamingContext(spark.sparkContext(), Durations.seconds(2)));

    // Kafka params code here.....not shown

    JavaInputDStream<ConsumerRecord<String, String>> istream1 = KafkaUtils.createDirectStream(
            jssc,
            LocationStrategies.PreferConsistent(),
            ConsumerStrategies.<String, String>Subscribe(Arrays.asList(topic1), kafkaParams)
    );

    istream1.foreachRDD(rdd -> {
        rdd.foreach(consumerRecord -> {
            Dataset<Row> rawData = spark.read().json(consumerRecord.value());
            rawData.createOrReplaceTempView("sample");
            Dataset<Row> resultsDF = spark.sql("SELECT alert_id, date FROM sample");
            resultsDF.show();
        });
    });

What I am seeing is that I cannot use the Spark session, or any context derived from it, inside the foreachRDD section; I get a null pointer:

    Caused by: java.lang.NullPointerException
        at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:112)
        at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:110)
        at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:535)
        at org.apache.spark.sql.SparkSession.read(SparkSession.scala:595)
        at com.ibm.sifs.evidence.SpoofingEvidence.lambda$1(SpoofingEvidence.java:99)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:350)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.apply(JavaRDDLike.scala:350)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.foreach(KafkaRDD.scala:193)
        at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:875)
        at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$27.apply(RDD.scala:875)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
        at org.apache.spark.scheduler.Task.run(Task.scala:85)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        ... 3 more
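Note for anyone reading the trace: foreachRDD itself runs on the driver, but the function passed to rdd.foreach runs on the executors, where the session's lazily built sessionState is null. A minimal sketch of the usual driver-side pattern follows; it reuses istream1, the imports, and the names (sample, alert_id, date) from the snippet above, relies on the Spark 2.0 getOrCreate() and DataFrameReader.json(JavaRDD<String>) APIs, and is an illustrative rework rather than a confirmed fix:

    istream1.foreachRDD(rdd -> {
        // foreachRDD executes on the driver, so obtaining the session here is
        // safe; re-acquiring it via getOrCreate() also tolerates restarts.
        SparkSession session = SparkSession.builder()
                .config(rdd.context().getConf())
                .getOrCreate();

        // Executor-side work: extract the raw JSON strings from the records.
        JavaRDD<String> json = rdd.map(record -> record.value());

        if (!json.isEmpty()) {
            // Driver-side: parse the whole micro-batch into a DataFrame at once.
            Dataset<Row> rawData = session.read().json(json);
            rawData.createOrReplaceTempView("sample");
            session.sql("SELECT alert_id, date FROM sample").show();
        }
    });

Parsing once per micro-batch also avoids creating a separate read job for every single Kafka record.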

0 Answers:

No answers yet