Spark: splitting files into multiple folders based on a column

Date: 2017-04-03 10:35:26

Tags: scala apache-spark amazon-s3 split pyspark

I am trying to split a set of S3 files into separate folders based on the value of a single column. I am not sure what is wrong with the code below.

column 1, column 2
20130401, value1
20130402, value2
20130403, value3

val newDataDF = sqlContext.read.parquet("s3://xxxxxxx-bucket/basefolder/")
newDataDF.cache()
val uniq_days = newDataDF.select(newDataDF("column1")).distinct.show()
uniq_days.cache()
uniq_days.foreach(x => {newDataDF.filter(newDataDF("column1") === x).write.save(s"s3://xxxxxx-bucket/partitionedfolder/$x/")})
Can you help? Even a PySpark version would be fine. I am looking for the following structure.

s3://xxxxxx-bucket/partitionedfolder/20130401/part-***

    column 1, column 2
    20130401, value1
s3://xxxxxx-bucket/partitionedfolder/20130402/part-***

    column 1, column 2
    20130402, value2
s3://xxxxxx-bucket/partitionedfolder/20130403/part-***

    column 1, column 2
    20130403, value3

Here is the error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 22 in stage 82.0 failed 4 times, most recent failure: Lost task 22.3 in stage 82.0 (TID 2753

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
Caused by: java.lang.NullPointerException

Update with the current solution:

val newDataDF = sqlContext.read.parquet("s3://xxxxxx-bucket/basefolder/")
newDataDF.cache()
val uniq_days = newDataDF.select(newDataDF("column1")).distinct.rdd.map(_.getString(0)).collect().toList
uniq_days.foreach(x => {newDataDF.filter(newDataDF("column1") === x).write.save(s"s3://xxxxxx-bucket/partitionedfolder/$x/")})
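
Note that `.collect()` brings the distinct values back to the driver, so the write loop runs there. One assumption in this update is that `column1` is stored as a string; if it is not, `getString(0)` will fail, and a hypothetical variant like the following could be used instead:

    // Hypothetical variant if column1 is not a String column: convert each value explicitly.
    val uniq_days = newDataDF.select(newDataDF("column1")).distinct.rdd.map(_.get(0).toString).collect().toList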

1 Answer:

Answer 0 (score: 2)

I think you are missing the "s" in save. :)

http://docs.scala-lang.org/overviews/core/string-interpolation.html#the-s-string-interpolator
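
A quick illustration of the difference, using a hypothetical value for x:

    val x = "20130401"
    "s3://xxxxxx-bucket/partitionedfolder/$x/"   // literal "$x" stays in the path
    s"s3://xxxxxx-bucket/partitionedfolder/$x/"  // s3://xxxxxx-bucket/partitionedfolder/20130401/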

Change:

write.save("s3://xxxxxx-bucket/partitionedfolder/$x/")})

To:

write.save(s"s3://xxxxxx-bucket/partitionedfolder/$x/")})

There is one more problem: show() never returns a value (it returns Unit), so the result cannot be reused afterwards.

Change:

val uniq_days = newDataDF.select(newDataDF("mevent_day")).distinct.show()
uniq_days.cache()

To:

val uniq_days = newDataDF.select(newDataDF("mevent_day")).distinct.rdd.map(_.getString(0)).collect().toList