SparkR 2.2.0: writing Avro fails

Asked: 2017-04-24 18:14:49

Tags: r apache-spark avro sparkr

I'm relatively new to Spark, which I access through SparkR, and I'm trying to write an Avro file to disk, but I keep getting the error "Task failed while writing rows".

I'm running SparkR 2.2.0-SNAPSHOT with Scala 2.11.8, and I start my SparkR session like this:

sparkR.session(master = "spark://[some ip here]:7077",
               appName = "nateSparkRAVROTest",
               sparkHome = "/home/ubuntu/spark",
               enableHiveSupport = FALSE,
               sparkConfig = list(spark.executor.memory = "28g"),
               sparkPackages = c("org.apache.hadoop:hadoop-aws:2.7.3",
                                 "com.amazonaws:aws-java-sdk-pom:1.10.34",
                                 "com.databricks:spark-avro_2.11:3.2.0"))

I'm wondering whether I need to install or configure anything special. I included the com.databricks:spark-avro_2.11:3.2.0 package in the session startup command, saw it being downloaded when the session started, and tried to write the Avro file with this command:

SparkR::write.df(myFormalClassSparkDataFrameObject,
                 path = "/home/nathan/SparkRAVROTest/",
                 source = "com.databricks.spark.avro",
                 mode = "overwrite")
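(For what it's worth, if the write does succeed, one quick sanity check is reading the output back in the same session. A minimal sketch, assuming the same session and the same output path as above:

# Read the Avro output back to verify the write worked
# (assumes the spark-avro package is loaded in this session).
avroDF <- SparkR::read.df(path = "/home/nathan/SparkRAVROTest/",
                          source = "com.databricks.spark.avro")
SparkR::printSchema(avroDF)
head(avroDF)
)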

I'm hoping someone with more SparkR experience has run into this error and can offer some insight. Thanks for your time.

Kind regards, Nate

1 answer:

Answer 0 (score: 0)

I was able to get this working by using com.databricks:spark-avro_2.11:4.0.0 in my Spark configuration.

An example SparkR configuration that helped:

SparkR::sparkR.session(master = "local[*]",
                       sparkConfig = list(spark.driver.memory = "14g",
                                          spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version = "2",
                                          spark.hadoop.mapreduce.fileoutputcommitter.marksuccessfuljobs = "FALSE",
                                          spark.kryoserializer.buffer.max = "1024m",
                                          spark.speculation = "FALSE",
                                          # note: the original answer used spark.referenceTracking;
                                          # the actual Spark config key is spark.kryo.referenceTracking
                                          spark.kryo.referenceTracking = "FALSE"),
                       sparkPackages = c("org.apache.hadoop:hadoop-aws:2.7.3",
                                         "com.amazonaws:aws-java-sdk:1.7.4",
                                         "com.amazonaws:aws-java-sdk-pom:1.11.221",
                                         "com.databricks:spark-avro_2.11:4.0.0",
                                         "org.apache.httpcomponents:httpclient:4.5.2"))