I am working on a Spark job that reads data from parquet files in two different locations. The parquet files are generated by two different sources, but they hold the same data.
I want to compare the Dataset<Row> loaded from each parquet file and see whether any column values have been dropped.
Is it possible to compare the two datasets and show the columns that do not match?
Dataset<Row> parquetFile = spark
    .read()
    .parquet("file://file1.parquet");
Answer 0 (score: 0)
At a very high level, you can try the following:

1. Read the two parquet datasets into df1 and df2.
2. Join df1 and df2 on the id field into df3.
3. Write a map function over df3 that compares the left and right sides of the join.
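A minimal Scala sketch of that recipe, assuming both frames share an "id" key column and the same schema (the column handling below is illustrative, not the answerer's code):

    import org.apache.spark.sql.functions._

    // Join the two frames on id (the "df3" of the recipe), then flag every
    // column whose left and right values differ.
    val joined = df1.as("l").join(df2.as("r"), Seq("id"))
    val diffFlags = df1.columns.filterNot(_ == "id").map { c =>
      when(!(col(s"l.$c") <=> col(s"r.$c")), lit(c))   // null when the values match
    }
    val report = joined
      .select(col("id"), concat_ws(",", diffFlags: _*).as("mismatched_columns"))
      .where(col("mismatched_columns") =!= "")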
Answer 1 (score: 0)
My solution is in Scala, but you can do the same thing in Java, since the idea is identical.
I can think of several ways to compare two datasets/dataframes in Spark. You could call df.except twice (i.e. A - B and B - A) and then union the two resulting dataframes, but that is considerably more expensive.
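For reference, a rough sketch of that except-based alternative (it flags whole mismatched rows rather than individual columns; the "present_in" column name is only illustrative):

    import org.apache.spark.sql.functions.lit

    // Rows that exist only on one side; needs two except passes plus a union.
    val onlyInSource = source.except(target).withColumn("present_in", lit("source_only"))
    val onlyInTarget = target.except(source).withColumn("present_in", lit("target_only"))
    val rowLevelDiff = onlyInSource.union(onlyInTarget)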
The approach below is the simplest one and involves only a single shuffle; it works like a charm even with thousands of columns and millions of records:
case class Person(name: String, age: Long)

import spark.implicits._

def main(args: Array[String]): Unit = {
  val source = Seq(Person("Andy", 32), Person("Farhan", 26), Person("John", 23)).toDS().toDF
  val target = Seq(Person("Andy", 32), Person("Farhan", 25), Person("John", 23)).toDS().toDF
  compareTwoDatasets(spark, source, target, "name").show(10, false)
}

def compareTwoDatasets(spark: SparkSession, sourceDS: Dataset[Row], targetDS: Dataset[Row], uniqueColumnName: String) = {
  import spark.implicits._  // encoders and $-syntax from the passed-in session

  // Collapse each row into (key, pipe-joined record) so only one join is needed.
  val source = sourceDS.map(sourceRow => (sourceRow.getAs(uniqueColumnName).toString, sourceRow.mkString("|"))).toDF(uniqueColumnName, "source_record")
  val target = targetDS.map(targetRow => (targetRow.getAs(uniqueColumnName).toString, targetRow.mkString("|"))).toDF(uniqueColumnName, "target_record")
  val columns = sourceDS.columns

  source
    .join(target, uniqueColumnName)
    .where($"source_record" =!= $"target_record")   // keep only keys whose records differ
    .flatMap { row =>
      val sourceArray = row.getAs[String]("source_record").split("\\|", -1)
      val targetArray = row.getAs[String]("target_record").split("\\|", -1)
      val commonValue = row.getAs[String](uniqueColumnName)
      // Emit (key, [columnName, sourceValue, targetValue]) for every mismatching column.
      List(columns, sourceArray, targetArray)
        .transpose
        .filter(x => x(1) != x(2))
        .map((commonValue, _))
    }.toDF(uniqueColumnName, "mismatch_column_source_target")
}
Output:
+------+-----------------------------+
|name |mismatch_column_source_target|
+------+-----------------------------+
|Farhan|[age, 26, 25] |
+------+-----------------------------+
The value in the second column is the mismatched column name followed by the source value and its corresponding target value.
Answer 2 (score: 0)
I think this is a better answer, using DataFrames and Scala and a more generic approach, so it is usable here as well.
For example, some mocked input:
case class Person(personid: Int, personname: String, cityid: Int)

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.functions._
import spark.implicits._

val df1 = Seq(
  Person(0, "AgataZ", 0),
  Person(1, "Iweta", 0),
  Person(2, "Patryk", 2),
  Person(9999, "Maria", 2),
  Person(5, "John", 2),
  Person(6, "Patsy", 2),
  Person(7, "Gloria", 222),
  Person(3333, "Maksym", 0)).toDF

val df2 = Seq(
  Person(0, "Agata", 0),
  Person(1, "Iweta", 0),
  Person(2, "Patryk", 2),
  Person(5, "John", 2),
  Person(6, "Patsy", 333),
  Person(7, "Gloria", 2),
  Person(4444, "Hans", 3)).toDF

// Full outer join so records missing on either side are kept.
val joined = df1.join(df2, df1("personid") === df2("personid"), "outer")

val newNames = Seq("personId1", "personName1", "personCity1", "personId2", "personName2", "personCity2")
val df_Renamed = joined.toDF(newNames: _*)

// Some deliberate variation shown in approach for learning
val df_temp = df_Renamed
  .filter($"personCity1" =!= $"personCity2" || $"personName1" =!= $"personName2" ||
          $"personName1".isNull || $"personName2".isNull || $"personCity1".isNull || $"personCity2".isNull)
  .select($"personId1", $"personName1".alias("Name"), $"personCity1",
          $"personId2", $"personName2".alias("Name2"), $"personCity2")
  .withColumn("PersonID", when($"personId1".isNotNull, $"personId1").otherwise($"personId2"))

val df_final = df_temp
  .withColumn("nameChange ?", when($"Name".isNull or $"Name2".isNull or $"Name" =!= $"Name2", "Yes").otherwise("No"))
  .withColumn("cityChange ?", when($"personCity1".isNull or $"personCity2".isNull or $"personCity1" =!= $"personCity2", "Yes").otherwise("No"))
  .drop("PersonId1")
  .drop("PersonId2")

df_final.show()
gives:
+------+-----------+------+-----------+--------+------------+------------+
| Name|personCity1| Name2|personCity2|PersonID|nameChange ?|cityChange ?|
+------+-----------+------+-----------+--------+------------+------------+
| Patsy| 2| Patsy| 333| 6| No| Yes|
|Maksym| 0| null| null| 3333| Yes| Yes|
| null| null| Hans| 3| 4444| Yes| Yes|
|Gloria| 222|Gloria| 2| 7| No| Yes|
| Maria| 2| null| null| 9999| Yes| Yes|
|AgataZ| 0| Agata| 0| 0| Yes| No|
+------+-----------+------+-----------+--------+------------+------------+
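If there are many columns, the hand-written when(...) per column above can be generalized over df1.columns using the same outer-join idea. A sketch, assuming personid is the key and both frames share a schema (the names below are illustrative):

    import org.apache.spark.sql.functions._

    // Build one "...Change ?" flag per non-key column programmatically.
    val j = df1.as("a").join(df2.as("b"), col("a.personid") === col("b.personid"), "outer")
    val changeFlags = df1.columns.filterNot(_ == "personid").map { c =>
      when(col(s"a.$c") <=> col(s"b.$c"), "No").otherwise("Yes").as(s"${c}Change ?")
    }
    val genericDiff = j.select(
      coalesce(col("a.personid"), col("b.personid")).as("PersonID") +: changeFlags: _*)
    // Filter on the flag columns afterwards if only mismatching rows are wanted.
    genericDiff.show()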