SparkR: NullPointerException when trying to create a data frame

Asked: 2015-12-05 18:55:07

Tags: r apache-spark sparkr

When I try to create a data frame in SparkR, I get an error about a NullPointerException. I have pasted my code and the error message below. Do I need to install any additional packages to run this code?

Code:

SPARK_HOME <- "C:\\Users\\erer\\Downloads\\spark-1.5.2-bin-hadoop2.4\\spark-1.5.2-bin-hadoop2.4"
Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.2.0" "sparkr-shell"')
library(SparkR, lib.loc = "C:\\Users\\erer\\Downloads\\spark-1.5.2-bin-hadoop2.4\\R\\lib")
library(SparkR)
library(rJava)

sc <- sparkR.init(master = "local", sparkHome = SPARK_HOME)
sqlContext <- sparkRSQL.init(sc)

localDF <- data.frame(name=c("John", "Smith", "Sarah"), age=c(19, 23, 18))
df <- createDataFrame(sqlContext, localDF)

Error:

Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) : 
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): java.lang.NullPointerException
        at java.lang.ProcessBuilder.start(Unknown Source)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873)
        at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:381)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:405)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:397)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:7

1 Answer:

Answer 0 (score: 1)

You need to point the SparkR library, via the lib.loc parameter, to the directory containing the local SparkR code (if you downloaded a Spark binary distribution, SPARK_HOME/R/lib is already populated for you):

`library(SparkR, lib.loc = "/home/kris/spark/spark-1.5.2-bin-hadoop2.6/R/lib")`
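
As a minimal sketch (not the poster's exact setup), you can derive lib.loc from a SPARK_HOME environment variable instead of hard-coding the path; the fallback path below is the Windows path from the question, so adjust it for your machine:

# Sketch: read SPARK_HOME from the environment, falling back to the
# hard-coded path from the question if the variable is not set.
spark_home <- Sys.getenv("SPARK_HOME",
                         unset = "C:\\Users\\erer\\Downloads\\spark-1.5.2-bin-hadoop2.4\\spark-1.5.2-bin-hadoop2.4")

# Load SparkR from the R library bundled with the Spark binary distribution.
library(SparkR, lib.loc = file.path(spark_home, "R", "lib"))

# Initialize Spark as in the question, reusing the same spark_home value.
sc <- sparkR.init(master = "local", sparkHome = spark_home)
sqlContext <- sparkRSQL.init(sc)

Using file.path keeps the lib.loc path consistent with sparkHome, so the two cannot silently point at different Spark installations.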

See also this R-bloggers tutorial on how to run Spark from RStudio: http://www.r-bloggers.com/sparkr-with-rstudio-in-ubuntu-12-04/