My SparkR 1.6 code does not work in Spark 2.0. I made the necessary changes, such as creating a session with sparkR.session() instead of sparkR.init(), no longer passing the sqlContext argument, and so on.
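For context, the session change amounts to this (a minimal sketch; the appName is illustrative, not from my actual job):

# Spark 1.6 initialization (old style):
# sc <- sparkR.init(appName = "salesLoad")
# sqlContext <- sparkRSQL.init(sc)

# Spark 2.0 initialization: a single session, no sqlContext to pass around
library(SparkR)
sparkR.session(appName = "salesLoad")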
In the code below I load data from several folders into a DataFrame. In Spark 1.6, read.df works:
sales <- read.df(sqlContext,
                 path = "gs://dev.appspot.com/myData/2014/20*,gs://dev.appspot.com/myData/2015/20*",
                 source = "com.databricks.spark.csv",
                 delimiter = "\t")
In Spark 2.0, the equivalent read.df does not work:
sales <- read.df("gs://dev.appspot.com/myData/2014/20*,gs://dev.appspot.c
om/myData/2015/20*", source = "com.databricks.spark.csv", delimiter="\t")
The line above throws the following error:
16/09/25 19:28:52 ERROR org.apache.spark.api.r.RBackendHandler: loadDF on org.apache.spark.sql.api.r.SQLUtils failed
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
  org.apache.spark.sql.AnalysisException: Path does not exist: gs://dev.appspot.com/myData/2014/20*,gs://dev.appspot.com/myData/2015/20*;
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$12.apply(DataSource.scala:357)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$12.apply(DataSource.scala:350)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
Calls: read.df -> dispatchFunc -> f -> callJStatic -> invokeJava
Execution halted
16/09/25 19:28:53 INFO org.spark_project.jetty.server.ServerConnector: Stopped ServerConnector@148bd6fd{HTTP/1.1}{0.0.0.0:4040}
Answer 0 (score: 1)
Spark 2.0's read.df fails when the files being read have a "," (comma) in their names. The data files I generate carry commas in their file names, like 201448-0,004, 201448-0,005, 201448-0,006. After a painful stretch of debugging the problem, read.df finally started reading the data once I removed the "," from the file names, presumably because the comma clashes with its role as the separator in multi-path strings.
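For anyone hitting the same thing, here is an untested sketch of the two ways out (paths are the ones from the question; I am assuming the renamed files no longer contain commas):

# Option 1: after renaming the files so no name contains a comma,
# the original comma-separated path string works again:
sales <- read.df("gs://dev.appspot.com/myData/2014/20*,gs://dev.appspot.com/myData/2015/20*",
                 source = "com.databricks.spark.csv", delimiter = "\t")

# Option 2: sidestep the comma-separated path string entirely by reading
# each folder on its own and unioning the results
# (union() is the SparkR 2.0 replacement for the deprecated unionAll()):
sales2014 <- read.df("gs://dev.appspot.com/myData/2014/20*",
                     source = "com.databricks.spark.csv", delimiter = "\t")
sales2015 <- read.df("gs://dev.appspot.com/myData/2015/20*",
                     source = "com.databricks.spark.csv", delimiter = "\t")
sales <- union(sales2014, sales2015)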