I have slightly adjusted the Spark example so it runs on an EC2 cluster via HDFS, but I only got the save example working for Parquet files.
library(SparkR)
# Initialize SparkContext and SQLContext
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
# Create a simple local data.frame
localDF <- data.frame(name=c("John", "Smith", "Sarah"), age=c(19, 23, 18))
# Create a DataFrame from a JSON file
peopleDF <- jsonFile(sqlContext, file.path("/people.json"))
# Register this DataFrame as a table.
registerTempTable(peopleDF, "people")
# SQL statements can be run by using the sql methods provided by sqlContext
teenagers <- sql(sqlContext, "SELECT name FROM people WHERE age >= 13 AND age <= 19")
# Save the teenagers DataFrame as a Parquet file
saveAsParquetFile(teenagers, file.path("/teenagers"))
# Stop the SparkContext now
sparkR.stop()
When I use saveDF instead of saveAsParquetFile, all I get is an empty file in HDFS:
drwxr-xr-x - root supergroup 0 2015-07-23 15:14 /teenagers
How can I store the DataFrame as a text file (JSON / CSV / ...)?
Answer 0 (score: 1)
Spark 2.x
Spark 2.0 or later includes a built-in CSV writer, so no external dependencies are needed:
write.df(teenagers, "teenagers", "csv", "error")
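The built-in json source can be used the same way if you want JSON output instead; a minimal sketch, where "teenagers_json" is just an illustrative output path:
# Write the same DataFrame as JSON (built-in source, no extra packages)
write.df(teenagers, "teenagers_json", "json", "error")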
Spark 1.x
You can use spark-csv:
Sys.setenv('SPARKR_SUBMIT_ARGS' =
'"--packages" "com.databricks:spark-csv_2.10:1.1.0" "sparkr-shell"')
sqlContext <- sparkRSQL.init(sc)
... # The rest of your code
write.df(teenagers, "teenagers", "com.databricks.spark.csv", "error")
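If JSON output is enough, you don't need spark-csv at all, since the json source is also built into Spark 1.x; a minimal sketch, with an illustrative output path:
# JSON writer ships with Spark 1.x, so no --packages needed
write.df(teenagers, "/teenagers.json", "json", "error")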
In interactive mode, start the SparkR shell with --packages:
bin/sparkR --packages com.databricks:spark-csv_2.10:1.1.0
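To check the result you can load the saved data back with read.df; a minimal sketch, assuming the shell launched above and the "teenagers" output path from the write:
# Read the CSV back through the spark-csv source and inspect a few rows
df <- read.df(sqlContext, "teenagers", source = "com.databricks.spark.csv")
head(df)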