Unable to collect data from a DataFrame created in SparkR

Time: 2016-07-25 21:47:17

Tags: r hadoop apache-spark hive sparkr

I have the following simple SparkR program, which creates a SparkR DataFrame and then retrieves/collects data from it.

Sys.setenv(HADOOP_CONF_DIR = "/etc/hadoop/conf.cloudera.yarn")
Sys.setenv(SPARK_HOME = "/home/user/Downloads/spark-1.6.1-bin-hadoop2.6")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
sc <- sparkR.init(master = "yarn-client",
                  sparkEnvir = list(spark.shuffle.service.enabled = TRUE,
                                    spark.dynamicAllocation.enabled = TRUE,
                                    spark.dynamicAllocation.initialExecutors = "40"))
hiveContext <- sparkRHive.init(sc)

n = 1000
x = data.frame(id = 1:n, val = rnorm(n))
xs <- createDataFrame(hiveContext, x)

xs

head(xs)
collect(xs)

I am able to create the DataFrame and view its metadata successfully, but any operation that actually fetches the data throws an error:

16/07/25 16:33:59 WARN TaskSetManager: Lost task 0.3 in stage 17.0 (TID 86, wlos06.nrm.minn.seagate.com): java.net.SocketTimeoutException: Accept timed out
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
        at java.net.ServerSocket.implAccept(ServerSocket.java:530)
        at java.net.ServerSocket.accept(ServerSocket.java:498)
        at org.apache.spark.api.r.RRDD$.createRWorker(RRDD.scala:432)
        at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:63)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

16/07/25 16:33:59 ERROR TaskSetManager: Task 0 in stage 17.0 failed 4 times; aborting job
16/07/25 16:33:59 ERROR RBackendHandler: dfToCols on org.apache.spark.sql.api.r.SQLUtils failed
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 17.0 failed 4 times, most recent failure: Lost task 0.3 in stage 17.0 (TID 86, wlos06.nrm.minn.seagate.com): java.net.SocketTimeoutException: Accept timed out
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)
        at java.net.ServerSocket.implAccept(ServerSocket.java:530)
        at java.net.ServerSocket.accept(ServerSocket.java:498)
        at org.apache.spark.api.r.RRDD$.createRWorker(RRDD.scala:432)
        at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:63)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPar

If I run the same code from the sparkR shell, launched as shown below, it works:

~/Downloads/spark-1.6.1-bin-hadoop2.6/bin/sparkR --master yarn-client
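
For reference, inside that sparkR shell the same steps run against the context objects the shell itself creates at startup. The sketch below assumes the pre-created sqlContext of the Spark 1.6 sparkR shell rather than a context built by hand:

# Run inside the sparkR shell started with the command above;
# sc and sqlContext are already created by the shell.
n <- 1000
x <- data.frame(id = 1:n, val = rnorm(n))
xs <- createDataFrame(sqlContext, x)
head(xs)
collect(xs)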

But when I run it from a plain R session with sparkR.init(master = "yarn-client"), it throws the error above.

Can anyone help me resolve this error?

1 Answer:

Answer 0 (score: 6):

Adding this line made the difference:

Sys.setenv("SPARKR_SUBMIT_ARGS"="--master yarn-client sparkr-shell")

Here is the complete code:

Sys.setenv(HADOOP_CONF_DIR = "/etc/hadoop/conf.cloudera.yarn")
Sys.setenv(SPARK_HOME = "/home/user/Downloads/spark-1.6.1-bin-hadoop2.6")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
Sys.setenv("SPARKR_SUBMIT_ARGS"="--master yarn-client sparkr-shell")
sc <- sparkR.init(sparkEnvir = list(spark.shuffle.service.enabled = TRUE,
                                    spark.dynamicAllocation.enabled = TRUE,
                                    spark.dynamicAllocation.initialExecutors = "40"))
hiveContext <- sparkRHive.init(sc)

n = 1000
x = data.frame(id = 1:n, val = rnorm(n))
xs <- createDataFrame(hiveContext, x)

xs

head(xs)
collect(xs)
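
If other spark-submit options are needed (executor memory, number of executors, and so on), my understanding is that they can be appended to the same variable before sparkR.init() is called, keeping sparkr-shell as the final token. The flag values below are purely illustrative:

# Hypothetical example values; adjust to your cluster before use.
Sys.setenv("SPARKR_SUBMIT_ARGS" = "--master yarn-client --num-executors 40 --executor-memory 4g sparkr-shell")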