I'm new to Spark and Scala, and I'm trying to practice the join command in Spark.
I have two csv files:
Ads.csv
5de3ae82-d56a-4f70-8738-7e787172c018,AdProvider1
f1b6c6f4-8221-443d-812e-de857b77b2f4,AdProvider2
aca88cd0-fe50-40eb-8bda-81965b377827,AdProvider1
940c138a-88d3-4248-911a-7dbe6a074d9f,AdProvider3
983bb5e5-6d5b-4489-85b3-00e1d62f6a3a,AdProvider3
00832901-21a6-4888-b06b-1f43b9d1acac,AdProvider1
9a1786e1-ab21-43e3-b4b2-4193f572acbc,AdProvider1
50a78218-d65a-4574-90de-0c46affbe7f3,AdProvider5
d9bb837f-c85d-45d4-95f2-97164c62aa42,AdProvider4
611cf585-a8cf-43e9-9914-c9d1dc30dab5,AdProvider1
Impression.csv is:
5de3ae82-d56a-4f70-8738-7e787172c018,Publisher1
f1b6c6f4-8221-443d-812e-de857b77b2f4,Publisher2
aca88cd0-fe50-40eb-8bda-81965b377827,Publisher1
940c138a-88d3-4248-911a-7dbe6a074d9f,Publisher3
983bb5e5-6d5b-4489-85b3-00e1d62f6a3a,Publisher3
00832901-21a6-4888-b06b-1f43b9d1acac,Publisher1
9a1786e1-ab21-43e3-b4b2-4193f572acbc,Publisher1
611cf585-a8cf-43e9-9914-c9d1dc30dab5,Publisher1
I want to join them, using the ID in the first column as the key and the two values as the result.
So I read them in like this:
val ads = sc.textFile("ads.csv")
ads: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[1] at textFile at <console>:21
val impressions = sc.textFile("impressions.csv")
impressions: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[3] at textFile at <console>:21
OK, so I have to make key-value pairs:
val adPairs = ads.map(line => line.split(","))
val impressionPairs = impressions.map(line => line.split(","))
res11: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[6] at map at <console>:23
res13: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[7] at map at <console>:23
But I can't join them:
val result = impressionPairs.join(adPairs)
<console>:29: error: value join is not a member of org.apache.spark.rdd.RDD[Array[String]]
val result = impressionPairs.join(adPairs)
Do I need to convert the pairs into another format?
Answer 0 (score: 3)
You're almost there, but what you need is to convert the Array[String] into key-value pairs, like this:
val adPairs = ads.map(line => {
val substrings = line.split(",")
(substrings(0), substrings(1))
})
(and the same for impressionPairs)
That will give you RDDs of type RDD[(String, String)], which you can then join :)
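Putting the answer's pieces together, here is a minimal sketch of the fix. The line-parsing step is plain Scala, so it can be shown and run without a Spark cluster; the Spark calls are shown in comments. The helper name toPair is mine, not from the original post, and the split assumes the fields contain no embedded commas.

```scala
object JoinSketch {
  // Turn one CSV line into a (key, value) tuple -- the shape Spark
  // needs before join becomes available. join is only defined on
  // RDD[(K, V)], via the implicit conversion to PairRDDFunctions,
  // which is why it was not a member of RDD[Array[String]].
  def toPair(line: String): (String, String) = {
    val fields = line.split(",")
    (fields(0), fields(1))
  }

  def main(args: Array[String]): Unit = {
    // In Spark this function would be used as:
    //   val adPairs         = ads.map(toPair)
    //   val impressionPairs = impressions.map(toPair)
    //   val result          = impressionPairs.join(adPairs)
    //   // result: RDD[(String, (String, String))]
    println(toPair("5de3ae82-d56a-4f70-8738-7e787172c018,AdProvider1"))
    // prints (5de3ae82-d56a-4f70-8738-7e787172c018,AdProvider1)
  }
}
```

Note that join is an inner join: it keeps only IDs present in both files, so the two ads in Ads.csv whose IDs have no matching line in Impression.csv would not appear in the result.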