NullPointerException occurs only when running the jar file - Scala Spark

Date: 2019-06-14 19:50:28

Tags: java scala apache-spark jar sbt

I have a big-data analytics project written in Scala using Spark.

When I run the project with sbt run, it works fine and produces the results I expect. I then built a jar with sbt assembly and ran it with java -jar my.jar, but the process stopped with a NullPointerException.

Can anyone explain why this happens?

I have attached the stack trace for reference.

2019-06-15 18:49:56 DEBUG BlockManager:58 - Getting local block broadcast_0
2019-06-15 18:49:56 DEBUG BlockManager:58 - Level for block broadcast_0 is StorageLevel(disk, memory, deserialized, 1 replicas)
2019-06-15 18:49:57 INFO  CodecPool:179 - Got brand-new decompressor [.gz]
2019-06-15 18:49:57 DEBUG TaskMemoryManager:427 - unreleased 8.0 MB memory from org.apache.spark.sql.catalyst.expressions.VariableLengthRowBasedKeyValueBatch@43d7b5b2
2019-06-15 18:49:57 DEBUG TaskMemoryManager:427 - unreleased 256.0 KB memory from org.apache.spark.unsafe.map.BytesToBytesMap@6aee5557
2019-06-15 18:49:57 DEBUG TaskMemoryManager:434 - unreleased page: org.apache.spark.unsafe.memory.MemoryBlock@3517be49 in task 0
2019-06-15 18:49:57 DEBUG TaskMemoryManager:434 - unreleased page: org.apache.spark.unsafe.memory.MemoryBlock@295da0ba in task 0
2019-06-15 18:49:57 ERROR Executor:91 - Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NullPointerException
    at org.apache.hadoop.io.compress.GzipCodec.createInputStream(GzipCodec.java:153)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:102)
    at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.<init>(HadoopFileLinesReader.scala:46)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:127)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:124)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:148)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:128)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(generated.java:568)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:587)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2019-06-15 18:49:57 DEBUG TaskSchedulerImpl:58 - parentName: , name: TaskSet_0.0, runningTasks: 0
2019-06-15 18:49:57 WARN  TaskSetManager:66 - Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.NullPointerException
    at org.apache.hadoop.io.compress.GzipCodec.createInputStream(GzipCodec.java:153)
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:102)
    at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.<init>(HadoopFileLinesReader.scala:46)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:127)
    at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:124)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:148)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:128)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:182)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithKeys_0$(generated.java:568)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(generated.java:587)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

1 Answer:

Answer 0 (score: 1)

Spark applications should be launched with spark-submit. See here.

So what you should do is package your application as a jar and run that jar in a Spark context with spark-submit.
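A minimal launch sketch (the main class com.example.Main, the jar path, and the local master are assumptions -- substitute your application's main class and the jar produced by sbt assembly):

    # A sketch, not your exact command: com.example.Main and the jar path
    # are placeholders for your application's entry point and assembly jar.
    spark-submit \
      --class com.example.Main \
      --master "local[*]" \
      target/scala-2.11/my.jar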

sbt run either detects your Spark application (depending on your sbt setup) or simply runs your main method as a plain Scala application (again, depending on your setup). Either way, launching the jar directly with java -jar skips the runtime wiring that spark-submit provides.
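Relatedly, a common sbt convention when packaging for spark-submit (not something stated in this question, just a sketch -- the version numbers are assumptions to match to your build) is to mark the Spark dependencies as "provided", so the assembly does not bundle its own copy of Spark:

    // build.sbt -- a sketch; the Scala and Spark versions shown here are
    // assumptions, match them to what your project actually uses.
    scalaVersion := "2.11.12"

    libraryDependencies ++= Seq(
      // "provided": spark-submit supplies these classes at runtime, so they
      // are excluded from the assembly jar rather than bundled a second time.
      "org.apache.spark" %% "spark-core" % "2.3.2" % "provided",
      "org.apache.spark" %% "spark-sql"  % "2.3.2" % "provided"
    )

Note that with "provided" dependencies, java -jar cannot work at all, since the Spark classes are simply absent from the jar; that is consistent with launching via spark-submit instead.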

In any case, that is all I can say from the information given. Please provide more details to get a better answer.