I have two dataframes of exactly the same shape that I'm using for a comparison test:
df1
------------------------------------------
year | state | count2 | count3 | count4 |
2014 | NJ | 12332 | 54322 | 53422 |
2014 | NJ | 12332 | 53255 | 55324 |
2015 | CO | 12332 | 53255 | 55324 |
2015 | MD | 14463 | 76543 | 66433 |
2016 | CT | 14463 | 76543 | 66433 |
2016 | CT | 55325 | 76543 | 66433 |
------------------------------------------
df2
------------------------------------------
year | state | count2 | count3 | count4 |
2014 | NJ | 12332 | 54322 | 53422 |
2014 | NJ | 65333 | 65555 | 125 |
2015 | CO | 12332 | 53255 | 55324 |
2015 | MD | 533 | 75 | 64524 |
2016 | CT | 14463 | 76543 | 66433 |
2016 | CT | 55325 | 76543 | 66433 |
------------------------------------------
I want to compare the two dfs on count2 through count4, and if a count doesn't match, print out some message saying it doesn't match. Here is my attempt:
val cols = df1.columns.filter(_ != "year").toList

def mapDiffs(name: String) =
  when($"l.$name" === $"r.$name", null)
    .otherwise(array($"l.$name", $"r.$name"))
    .as(name)

val result = df1.as("l")
  .join(df2.as("r"), "year")
  .select($"year" :: cols.map(mapDiffs): _*)
The idea was to compare rows with the same state and the same numbers, but it did not do what I wanted. The result I'm after is:
------------------------------------------
year | state | count2 | count3 | count4 |
2014 | NJ | 12332 | 54322 | 53422 |
2014 | NJ | no | no | no |
2015 | CO | 12332 | 53255 | 55324 |
2015 | MD | no | no | 64524 |
2016 | CT | 14463 | 76543 | 66433 |
2016 | CT | 55325 | 76543 | 66433 |
------------------------------------------
How can I achieve the result shown above?
Edit: in another scenario, what if I just want to compare, within a single df, one column against other columns? How would I do that? For example:
------------------------------------------
year | state | count2 | count3 | count4 |
2014 | NJ | 12332 | 54322 | 53422 |
I want to compare the count3 and count4 columns with count2. Obviously count3 and count4 don't match count2, so I want the result to be:
-----------------------------------------------
year | state | count2 | count3 | count4 |
2014 | NJ | 12332 | mismatch | mismatch |
Thanks!
Answer 0 (score: 2)
Joining the dataframes on year won't work for your mapDiffs method: year isn't unique, so the join is many-to-many (every 2014 row in df1 pairs with every 2014 row in df2) and the wrong rows get compared against each other. You need a row-identifying column in df1 and df2 for the join.
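To make that concrete, here is a quick check; it's not part of the original answer and assumes the sample dataframes defined just below:

// "year" is duplicated, so an inner join on it alone multiplies rows:
// with the 6-row samples below, each year contributes 2 x 2 = 4 pairs.
df1.as("l").join(df2.as("r"), "year").count()  // 12 rows instead of 6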
import org.apache.spark.sql.functions._
// In an application you also need `import spark.implicits._` for toDF
// and the $"..." column syntax (spark-shell imports it for you).

val df1 = Seq(
  ("2014", "NJ", "12332", "54322", "53422"),
  ("2014", "NJ", "12332", "53255", "55324"),
  ("2015", "CO", "12332", "53255", "55324"),
  ("2015", "MD", "14463", "76543", "64524"),
  ("2016", "CT", "14463", "76543", "66433"),
  ("2016", "CT", "55325", "76543", "66433")
).toDF("year", "state", "count2", "count3", "count4")
val df2 = Seq(
  ("2014", "NJ", "12332", "54322", "53422"),
  ("2014", "NJ", "12332", "53255", "125"),
  ("2015", "CO", "12332", "53255", "55324"),
  ("2015", "MD", "533", "75", "64524"),
  ("2016", "CT", "14463", "76543", "66433"),
  ("2016", "CT", "55325", "76543", "66433")
).toDF("year", "state", "count2", "count3", "count4")
Skip this step if your dataframes already have a row-identifying column (e.g. rowId) for the join:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Append a zipWithIndex-based index to each row as a `rowId` column.
val rdd1 = df1.rdd.zipWithIndex.map{
  case (row: Row, id: Long) => Row.fromSeq(row.toSeq :+ id)
}
val df1i = spark.createDataFrame( rdd1,
  StructType(df1.schema.fields :+ StructField("rowId", LongType, false))
)

val rdd2 = df2.rdd.zipWithIndex.map{
  case (row: Row, id: Long) => Row.fromSeq(row.toSeq :+ id)
}
val df2i = spark.createDataFrame( rdd2,
  StructType(df2.schema.fields :+ StructField("rowId", LongType, false))
)
Now, define mapDiffs and apply it to the selected columns after joining the dataframes on rowId:
def mapDiffs(name: String) =
  when($"l.$name" === $"r.$name", $"l.$name").otherwise("no").as(name)

val cols = df1i.columns.filter(_.startsWith("count")).toList

val result = df1i.as("l").join(df2i.as("r"), "rowId").
  select($"l.rowId" :: $"l.year" :: cols.map(mapDiffs): _*)
// +-----+----+------+------+------+
// |rowId|year|count2|count3|count4|
// +-----+----+------+------+------+
// | 0|2014| 12332| 54322| 53422|
// | 5|2016| 55325| 76543| 66433|
// | 1|2014| 12332| 53255| no|
// | 3|2015| no| no| 64524|
// | 2|2015| 12332| 53255| 55324|
// | 4|2016| 14463| 76543| 66433|
// +-----+----+------+------+------+
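The question also asks to print a message when the counts don't match. That part isn't in the original answer, but since mismatched cells are marked "no" in result, one possible follow-up sketch (the mismatches name is just for illustration) is:

val mismatches = result.filter(cols.map(c => col(c) === "no").reduce(_ || _))
mismatches.collect.foreach { row =>
  println(s"rowId ${row.getAs[Long]("rowId")}, year ${row.getAs[String]("year")}: counts do not match")
}

collect is fine at this sample size; on large data you'd write the mismatching rows out instead of printing from the driver.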
Note that your sample data seems to differ between df1 and df2 in more than the three spots marked 'no' above, so I've modified the sample data to make those three spots the only differences.
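As for the edit in the question (comparing count3 and count4 with count2 inside a single dataframe): the answer above doesn't cover it, but the same when/otherwise pattern applies without any join. A minimal sketch reusing the df1 sample (the singleDf name is just for illustration):

val singleDf = df1.select(
  $"year", $"state", $"count2",
  when($"count3" === $"count2", $"count3").otherwise("mismatch").as("count3"),
  when($"count4" === $"count2", $"count4").otherwise("mismatch").as("count4")
)
// For the first row, 54322 and 53422 differ from 12332,
// so count3 and count4 both come out as "mismatch".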