Building an RDD with a recursive union in Scala on Spark

Asked: 2014-07-24 15:46:12

Tags: scala recursion functional-programming apache-spark rdd

So I'm fairly new to functional programming, Spark, and Scala, so forgive me if this is obvious... but basically I have a list of HDFS files that match certain criteria, e.g.:

    val fileList = List(
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000140_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=03/000258_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=05/000270_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000297_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=30/000300_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000362_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=29/000365_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000397_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=15/000436_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=16/000447_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=01/000529_0",
      "hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/partday=17/000585_0")

I now need to build an RDD to work across this list... my idea was to use a recursive union... basically a function like:

    def dostuff(line: String): org.apache.spark.rdd.RDD[String] = {
      // read one HDFS file into an RDD of its lines;
      // the union across files happens after the map below
      sc.textFile(line)
    }

and then just apply it through a map:

    val rddList = fileList.map(l => dostuff(l))
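
Since that map yields one RDD per file, the remaining step is just a fold over `union` rather than recursion; a minimal sketch of the idea, assuming `sc` is an existing SparkContext:

    import org.apache.spark.rdd.RDD

    // Fold the per-file RDDs into one; ++ is an alias for union.
    val combined: RDD[String] = rddList.reduce(_ ++ _)

    // SparkContext.union does the same in a single call and keeps the
    // lineage flat instead of building a chain of binary unions:
    // val combined2 = sc.union(rddList)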

1 Answer:

Answer 0 (score: 3):

You can read all of the files into a single RDD, like this:

    val sc = new SparkContext(...)
    // the glob expands to every partday directory and part file for the month
    sc.textFile("hdfs:///hive/some.db/BigAssHiveTable/partyear=2014/partmonth=06/*/*")
      .map(line => ...)
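
Note that the glob pulls in every file under the month, not just the filtered subset from the question. If the list really can't be expressed as a glob, `textFile` also accepts a comma-separated string of paths (a Hadoop FileInputFormat convention), so the filtered list can still be read in a single call; a sketch, assuming `fileList` from the question:

    // join the filtered paths into one comma-separated string;
    // textFile hands it through to Hadoop's input format
    val rdd = sc.textFile(fileList.mkString(","))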