Error in callJMethod(sqlContext, "parquetFile", paths): Invalid jobj 1. If SparkR was restarted, Spark operations need to be re-executed

Asked: 2015-07-20 09:31:28

Tags: apache-spark yarn sparkr

I want to run SparkR in yarn-client mode through the SparkR shell, so I do the following:

./sparkR

sparkR.stop();
sc <- sparkR.init(master="yarn-client", appName="SparkR-Parquet-example2",
                  sparkHome = Sys.getenv("SPARK_HOME"),
                  sparkExecutorEnv = list(HADOOP_CONF_DIR="/etc/hadoop/conf.cloudera.yarn",
                                          YARN_CONF_DIR="/etc/hadoop/conf.cloudera.yarn"))
sqlContext <- sparkRSQL.init(sc)
path<-"hdfs://year=2015/month=1/day=9"
AppDF <- parquetFile(sqlContext, path)

Error in callJMethod(sqlContext, "parquetFile", paths) : 
Invalid jobj 1. If SparkR was restarted, Spark operations need to be re-executed.
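Two problems are visible in the snippet above. First, the `sparkExecutorEnv` list uses curly (typographic) quotes around the paths, which is a syntax error in R. Second, the error message itself says that after SparkR is restarted, earlier Spark objects become stale: `sqlContext` must be created from the `sc` of the *current* session, in order, before calling `parquetFile`. A corrected sketch (untested, assuming the Spark 1.4 SparkR API from the question) might look like:

```r
library(SparkR)

# Recreate the whole chain in one fresh session. After sparkR.stop(), any
# previously created sqlContext holds a stale Java object reference, which
# is exactly what "Invalid jobj" means.
sc <- sparkR.init(master = "yarn-client",
                  appName = "SparkR-Parquet-example2",
                  sparkHome = Sys.getenv("SPARK_HOME"),
                  # Plain ASCII double quotes here -- the curly quotes in the
                  # original snippet do not parse in R.
                  sparkExecutorEnv = list(HADOOP_CONF_DIR = "/etc/hadoop/conf.cloudera.yarn",
                                          YARN_CONF_DIR  = "/etc/hadoop/conf.cloudera.yarn"))
sqlContext <- sparkRSQL.init(sc)

# The path from the question, unchanged. Note that a fully qualified HDFS URI
# usually names the namenode, e.g. hdfs://<namenode-host>:<port>/... --
# that host/port is a placeholder to adapt to your cluster.
path  <- "hdfs://year=2015/month=1/day=9"
AppDF <- parquetFile(sqlContext, path)
```

Also make sure the variable passed to `parquetFile` matches what was defined: the question assigns `path` but the error message shows `paths`, which suggests a mismatched variable name somewhere in the actual session.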

I am new to Spark. Can anyone help me solve this?

I am using spark-1.4.0-bin-hadoop2.6.

0 Answers:

No answers yet.