Spark: update a DataFrame based on a join operation

Posted: 2017-10-23 08:50:47

Tags: scala apache-spark apache-spark-sql

I have a dataframe that holds the history up to the latest date. Every day I need to add the new qte and the new ca to the old values and update the date. So I need to update the rows that already exist and add the new ones.

Here is an example of what I want to end up with. The code below loads the two days of data; the first table shown is the existing history (hist), the second is the new day's data (hist2), and the third is the merged result I am after:

import org.apache.spark.sql.types._
import spark.implicits._

val histocaisse = spark.read
  .format("csv")
  .option("header", "true") //reading the headers
  .load("C:/Users/MHT/Desktop/histocaisse_dte1.csv")

val hist = histocaisse
  .withColumn("pos_id", 'pos_id.cast(LongType))
  .withColumn("article_id", 'article_id.cast(LongType))
  .withColumn("date", 'date.cast(DateType))
  .withColumn("qte", 'qte.cast(DoubleType))
  .withColumn("ca", 'ca.cast(DoubleType))



val histocaisse2 = spark.read
  .format("csv")
  .option("header", "true") //reading the headers
  .load("C:/Users/MHT/Desktop/histocaisse_dte2.csv")

val hist2 = histocaisse2
  .withColumn("pos_id", 'pos_id.cast(LongType))
  .withColumn("article_id", 'article_id.cast(LongType))
  .withColumn("date", 'date.cast(DateType))
  .withColumn("qte", 'qte.cast(DoubleType))
  .withColumn("ca", 'ca.cast(DoubleType))

hist.show(false)
hist2.show(false)

+------+----------+----------+----+----+
|pos_id|article_id|date      |qte |ca  |
+------+----------+----------+----+----+
|1     |1         |2000-01-07|2.5 |3.5 |
|2     |2         |2000-01-07|14.7|12.0|
|3     |3         |2000-01-07|3.5 |1.2 |
+------+----------+----------+----+----+

+------+----------+----------+----+----+
|pos_id|article_id|date      |qte |ca  |
+------+----------+----------+----+----+
|1     |1         |2000-01-08|2.5 |3.5 |
|2     |2         |2000-01-08|14.7|12.0|
|3     |3         |2000-01-08|3.5 |1.2 |
|4     |4         |2000-01-08|3.5 |1.2 |
|5     |5         |2000-01-08|14.5|1.2 |
|6     |6         |2000-01-08|2.0 |1.25|
+------+----------+----------+----+----+

+------+----------+----------+----+----+
|pos_id|article_id|date      |qte |ca  |
+------+----------+----------+----+----+
|1     |1         |2000-01-08|5.0 |7.0 |
|2     |2         |2000-01-08|29.4|24.0|
|3     |3         |2000-01-08|7.0 |2.4 |
|4     |4         |2000-01-08|3.5 |1.2 |
|5     |5         |2000-01-08|14.5|1.2 |
|6     |6         |2000-01-08|2.0 |1.25|
+------+----------+----------+----+----+

To get there I tried the following:

import org.apache.spark.sql.functions.{coalesce, lit}

val df = hist2.join(hist, Seq("article_id", "pos_id"), "left")
  .select($"pos_id", $"article_id",
    coalesce(hist2("date"), hist("date")).alias("date"),
    (coalesce(hist2("qte"), lit(0)) + coalesce(hist("qte"), lit(0))).alias("qte"),
    (coalesce(hist2("ca"), lit(0)) + coalesce(hist("ca"), lit(0))).alias("ca"))
  .orderBy("pos_id", "article_id")

df.show()

+------+----------+----------+----+----+
|pos_id|article_id|      date| qte|  ca|
+------+----------+----------+----+----+
|     1|         1|2000-01-08| 5.0| 7.0|
|     2|         2|2000-01-08|29.4|24.0|
|     3|         3|2000-01-08| 7.0| 2.4|
|     4|         4|2000-01-08| 3.5| 1.2|
|     5|         5|2000-01-08|14.5| 1.2|
|     6|         6|2000-01-08| 2.0|1.25|
+------+----------+----------+----+----+

The goal is to update the information when it already exists and to add the new information. However, when I try the case where hist is empty, I run into the following problem:

Exception in thread "main" java.lang.UnsupportedOperationException: empty collection
    at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1321)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)

How can I make this work even when the first table is empty?

2 Answers:

Answer 0 (score: 0):

For this you should define a schema and apply it while reading the csv files. By doing so you won't even need the casting code. :)

In your case both dataframes have the same layout, so you can create the schema as

import org.apache.spark.sql.types._
val schema = StructType(Seq(
  StructField("pos_id", LongType, true),
  StructField("article_id", LongType, true),
  StructField("date", DateType, true),
  StructField("qte", LongType, true),
  StructField("ca", DoubleType, true)
))

Then you can apply the schema as

val hist1 = spark.read
  .format("csv")
  .option("header", "true") //reading the headers
  .schema(schema)
  .load("C:/Users/MHT/Desktop/histocaisse_dte1.csv")

val hist2 = spark.read
  .format("csv")
  .option("header", "true") //reading the headers
  .schema(schema)
  .load("C:/Users/MHT/Desktop/histocaisse_dte2.csv")

Finally, you can apply your final logic without errors. Since the schema is supplied explicitly, Spark no longer has to derive it from the file's contents, so an empty input simply yields an empty DataFrame with the expected columns instead of the "empty collection" exception.
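
For completeness, a minimal sketch of the merge step itself (column names as in the question, SparkSession assumed to be named spark). It deliberately swaps the left join for a full outer join so that rows present on only one side, including the case where hist1 is completely empty, are kept:

import org.apache.spark.sql.functions.{coalesce, lit}
import spark.implicits._

// Rows present on both sides get their qte/ca summed and keep the newer date;
// rows present on only one side are kept as-is (the missing side contributes 0).
val merged = hist2.join(hist1, Seq("pos_id", "article_id"), "full_outer")
  .select(
    $"pos_id", $"article_id",
    coalesce(hist2("date"), hist1("date")).alias("date"),
    (coalesce(hist2("qte"), lit(0.0)) + coalesce(hist1("qte"), lit(0.0))).alias("qte"),
    (coalesce(hist2("ca"), lit(0.0)) + coalesce(hist1("ca"), lit(0.0))).alias("ca"))
  .orderBy("pos_id", "article_id")

merged.show(false)

When hist1 is empty, every hist1(...) column comes back null, so coalesce falls back to hist2's values and the result is just hist2's data.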

Answer 1 (score: 0):

The Databricks Spark runtime supports the MERGE operator.

It lets you update a target table based on a join condition.

https://docs.databricks.com/spark/latest/spark-sql/language-manual/merge-into.html

MERGE INTO [db_name.]target_table [AS target_alias]
USING [db_name.]source_table [<time_travel_version>] [AS source_alias]
ON <merge_condition>
[ WHEN MATCHED [ AND <condition> ] THEN <matched_action> ]
[ WHEN MATCHED [ AND <condition> ] THEN <matched_action> ]
[ WHEN NOT MATCHED [ AND <condition> ]  THEN <not_matched_action> ]

Basically, your case comes down to a MERGE with just an update clause.
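
As an illustration only, assuming the history and the daily data have been written out as Delta tables under the hypothetical names histocaisse and histocaisse_new, the statement could look roughly like this when run from Scala (a not-matched clause is added so the new rows are inserted as well):

// Illustration only: histocaisse and histocaisse_new are hypothetical Delta
// table names; adjust them to however the data is actually registered.
spark.sql("""
  MERGE INTO histocaisse AS t
  USING histocaisse_new AS s
  ON t.pos_id = s.pos_id AND t.article_id = s.article_id
  WHEN MATCHED THEN UPDATE SET
    date = s.date,
    qte  = t.qte + s.qte,
    ca   = t.ca + s.ca
  WHEN NOT MATCHED THEN INSERT (pos_id, article_id, date, qte, ca)
    VALUES (s.pos_id, s.article_id, s.date, s.qte, s.ca)
""")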