Spark RDD: How to join values from a Map to rows in an RDD

Asked: 2017-10-27 02:36:37

Tags: scala csv apache-spark mapping hdfs

I have a CSV file that I load into Spark as an RDD:

val home_rdd = sc.textFile("hdfs://path/to/home_data.csv")
val home_parsed = home_rdd.map(row => row.split(",").map(_.trim))
val home_header = home_parsed.first
val home_data = home_parsed.filter(_(0) != home_header(0))

home_data is then:

scala> home_data
res17: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[3] at filter at <console>:30

scala> home_data.take(3)
res20: Array[Array[String]] = Array(Array("7129300520", "20141013T000000", 221900, "3", "1", 1180, 5650, "1", 0, 0, 3, 7, 1180, 0, 1955, 0, "98178", 47.5112, -122.257, 1340, 5650), Array("6414100192", "20141209T000000", 538000, "3", "2.25", 2570, 7242, "2", 0, 0, 3, 7, 2170, 400, 1951, 1991, "98125", 47.721, -122.319, 1690, 7639), Array("5631500400", "20150225T000000", 180000, "2", "1", 770, 10000, "1", 0, 0, 3, 6, 770, 0, 1933, 0, "98028", 47.7379, -122.233, 2720, 8062))

I also have a CSV of neighborhoods, which I load as an RDD and then use to create a Map[String,String]:

val zip_rdd = sc.textFile("hdfs://path/to/zipcodes.csv")
val zip_parsed = zip_rdd.map(row => row.split(",").map(_.trim))
val zip_header = zip_parsed.first
val zip_data = zip_parsed.filter(_(0) != zip_header(0))
val zip_map = zip_data.map(row => (row(0), row(1))).collectAsMap
val zip_ind = home_header.indexOf("zipcode") //to get the zipcode column in home_data

where:

scala> zip_map.take(3)
res21: scala.collection.Map[String,String] = Map(98151 -> Seattle, 98052 -> Redmond, 98104 -> Seattle)

What I want to do next is iterate through home_data, use the zipcode value in each row (zip_ind = 16) to look up the neighborhood value in zip_map, and append that value to the end of the row:

val zip_processed = home_data.map(row => row :+ zip_map.get(row(zip_ind)))

But something fails on every lookup from zip_map, so it just appends None to the end of each row in home_data:

scala> zip_processed.take(3)
res19: Array[Array[java.io.Serializable]] = Array(Array("7129300520", "20141013T000000", 221900, "3", "1", 1180, 5650, "1", 0, 0, 3, 7, 1180, 0, 1955, 0, "98178", 47.5112, -122.257, 1340, 5650, None), Array("6414100192", "20141209T000000", 538000, "3", "2.25", 2570, 7242, "2", 0, 0, 3, 7, 2170, 400, 1951, 1991, "98125", 47.721, -122.319, 1690, 7639, None), Array("5631500400", "20150225T000000", 180000, "2", "1", 770, 10000, "1", 0, 0, 3, 6, 770, 0, 1933, 0, "98028", 47.7379, -122.233, 2720, 8062, None))

I'm trying to debug this, but I'm not sure why the lookup zip_map.get(row(zip_ind)) fails.

I'm fairly green with Scala, so maybe I'm making some bad assumptions, but I'm trying to figure out how to better understand what is happening inside that map function.
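
For reference, here is a minimal, Spark-free sketch of what I think is happening: Map.get returns an Option[String], so the appended element is Some(...) or None rather than a plain String, which would also explain the Array[Array[java.io.Serializable]] type in the output above (the map contents below are made up):

val zip_map = Map("98178" -> "Seattle", "98125" -> "Seattle")

zip_map.get("98178")     // Some(Seattle): key found, value comes back wrapped in Some
zip_map.get("98039")     // None: key not in the map
zip_map.get("\"98178\"") // None: stray quote characters make the key miss

// Mixing String and Option[String] in one array makes Scala infer
// their common supertype, java.io.Serializable:
val row = Array("7129300520", "98178")
row :+ zip_map.get(row(1)) // Array[java.io.Serializable] = Array(7129300520, 98178, Some(Seattle))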

1 Answer:

Answer 0 (score: 1)

Map.get() returns an Option, which is None when there is no match. You can append the Map value with a fallback by using getOrElse:

val home_data = sc.parallelize(Array(
  Array("7129300520", "20141013T000000", 221900, "3", "1", 1180, 5650, "1", 0, 0, 3, 7, 1180, 0, 1955, 0, "98178", 47.5112, -122.257, 1340, 5650),
  Array("6414100192", "20141209T000000", 538000, "3", "2.25", 2570, 7242, "2", 0, 0, 3, 7, 2170, 400, 1951, 1991, "98125", 47.721, -122.319, 1690, 7639),
  Array("5631500400", "20150225T000000", 180000, "2", "1", 770, 10000, "1", 0, 0, 3, 6, 770, 0, 1933, 0, "98028", 47.7379, -122.233, 2720, 8062)
))

val zip_ind = 16
val zip_map: Map[String, String] = Map("98178" -> "A", "98028" -> "B")

val zip_processed = home_data.map(row => row :+ zip_map.getOrElse(row(zip_ind).toString, "N/A"))

zip_processed.collect
// res1: Array[Array[Any]] = Array(
//   Array(7129300520, 20141013T000000, 221900, 3, 1, 1180, 5650, 1, 0, 0, 3, 7, 1180, 0, 1955, 0, 98178, 47.5112, -122.257, 1340, 5650, A),
//   Array(6414100192, 20141209T000000, 538000, 3, 2.25, 2570, 7242, 2, 0, 0, 3, 7, 2170, 400, 1951, 1991, 98125, 47.721, -122.319, 1690, 7639, N/A),
//   Array(5631500400, 20150225T000000, 180000, 2, 1, 770, 10000, 1, 0, 0, 3, 6, 770, 0, 1933, 0, 98028, 47.7379, -122.233, 2720, 8062, B)
// )

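Note that getOrElse unwraps the Option, so the appended value is a plain String instead of Some(...)/None, and the rows keep a uniform element type. The home_data, zip_ind, and zip_map above are rebuilt inline with placeholder neighborhoods ("A", "B") and fallback ("N/A") purely for illustration. Separately, the take(3) output in the question shows literal quote characters around the zipcodes (e.g. "98178"); if those survive the split, every lookup against the unquoted keys in zip_map will miss, and stripping them first, e.g. zip_map.getOrElse(row(zip_ind).toString.replaceAll("\"", ""), "N/A"), may be what is needed.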