java.lang.NullPointerException in my Spark Streaming application

Posted: 2015-06-22 10:49:01

Tags: scala apache-spark spark-streaming apache-spark-sql

My Spark application needs to process a stream of data. For this I use two Spark modules: the streaming module and the SQL module. In particular, I need the SQL module because, for every record obtained from the stream, I have to query a Hive table in the local metastore.

The MAIN PROBLEM is the following: after stream processing has started (via the streaming context's start() method), I can no longer use the sqlContext. Whenever I try to use the sqlContext during stream processing, the following error is raised:

15/06/22 12:41:15 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)
java.lang.NullPointerException
    at org.apache.spark.sql.SQLContext.currentSession(SQLContext.scala:897)
    at org.apache.spark.sql.SQLContext.conf(SQLContext.scala:73)
    at org.apache.spark.sql.SQLContext.getConf(SQLContext.scala:106)
    at org.apache.spark.sql.hive.HiveContext.hiveMetastoreVersion(HiveContext.scala:114)
    at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:176)
    at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:175)
    at org.apache.spark.sql.hive.HiveContext$$anon$2.<init>(HiveContext.scala:370)
    at org.apache.spark.sql.hive.HiveContext.catalog$lzycompute(HiveContext.scala:370)
    at org.apache.spark.sql.hive.HiveContext.catalog(HiveContext.scala:369)
    at org.apache.spark.sql.hive.HiveContext.catalog(HiveContext.scala:71)
    at org.apache.spark.sql.SQLContext.tableNames(SQLContext.scala:787)
    at Test$.getDangerousness(test.scala:84)
    at Test$$anonfun$5.apply(test.scala:126)
    at Test$$anonfun$5.apply(test.scala:126)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1272)
    at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1272)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
    at org.apache.spark.scheduler.Task.run(Task.scala:70)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
15/06/22 12:41:15 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 2, localhost): java.lang.NullPointerException
    ... (same stack trace as the ERROR above)


where Test is the main class and getDangerousness is the method that tries to use the sqlContext.
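The question does not include the source, but from the stack trace the failing pattern can be sketched roughly as follows. Everything except the names `Test` and `getDangerousness` (which appear in the trace) is an assumption; the point is only to show a `HiveContext` created on the driver being used inside a transformation that runs on executors:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.streaming.{Seconds, StreamingContext}

object Test {
  val conf = new SparkConf().setAppName("Test")
  val sc = new SparkContext(conf)
  val sqlContext = new HiveContext(sc)          // lives on the driver
  val ssc = new StreamingContext(sc, Seconds(5))

  // Hypothetical reconstruction of the method named in the trace.
  def getDangerousness(record: String): Double = {
    // NullPointerException here: sqlContext's internal state does not
    // survive serialization to executor tasks, so any call on it
    // (e.g. tableNames) dereferences null on the executor side.
    sqlContext.tableNames()
    0.0 // placeholder for the real lookup logic
  }

  def main(args: Array[String]): Unit = {
    val stream = ssc.socketTextStream("localhost", 9999) // assumed source
    // map runs on executors, where sqlContext is not usable
    stream.map(record => getDangerousness(record)).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```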

Thanks in advance.

1 Answer:

Answer 0 (score: 0)

I found the solution on this page. Spark does not support nested RDDs, or user-defined functions that refer to other RDDs or to the SparkContext/SQLContext from inside a task.
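In practical terms, this means the Hive query must be moved to the driver side. One common workaround is to run the query once on the driver, collect the result, and broadcast it to the executors, so that the per-record function never touches the sqlContext. A minimal sketch, assuming a hypothetical Hive table `danger_levels` with `key` and `danger` columns (the table and column names are invented for illustration):

```scala
// Driver side: query Hive once, materialize the lookup table locally,
// and broadcast it to the executors.
val dangerTable: Map[String, Double] =
  sqlContext.sql("SELECT key, danger FROM danger_levels")
    .collect()
    .map(row => row.getString(0) -> row.getDouble(1))
    .toMap
val dangerBc = sc.broadcast(dangerTable)

// Executor side: the closure now captures only the broadcast handle,
// which is serializable, instead of the sqlContext.
stream.map { record =>
  dangerBc.value.getOrElse(record, 0.0)
}.print()
```

If the table is too large to collect, an alternative with the same driver-side principle is to do the join inside `transform` or `foreachRDD`, where the code runs on the driver and the sqlContext is valid.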