Spark SQL pivot is not working as expected, at least for the ingested batch data

Date: 2019-02-11 08:31:06

Tags: scala apache-spark apache-spark-sql datastax databricks

The pivot is not producing the expected output. The source table records are shown below.


source_df
+---------------+-------------------+--------------------+-------------------+-------------------+--------------+-----------------------+----------------------+-----------+--------------+-------------------+----------------+---------------+---------------+
|model_family_id|classification_type|classification_value|benchmark_type_code|          data_date|data_item_code|data_item_value_numeric|data_item_value_string|fiscal_year|fiscal_quarter|        create_date|last_update_date|create_user_txt|update_user_txt|
+---------------+-------------------+--------------------+-------------------+-------------------+--------------+-----------------------+----------------------+-----------+--------------+-------------------+----------------+---------------+---------------+
|              1|            COUNTRY|                 HKG|               MEAN|2017-12-31 00:00:00|   CREDITSCORE|                     13|                   bb-|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
|              1|            COUNTRY|                 HKG|            OBS_CNT|2017-12-31 00:00:00|   CREDITSCORE|                    649|                    aa|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
|              1|            COUNTRY|                 HKG|         OBS_CNT_CA|2017-12-31 00:00:00|   CREDITSCORE|                    649|                  null|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
|              1|            COUNTRY|                 HKG|       PERCENTILE_0|2017-12-31 00:00:00|   CREDITSCORE|                      3|                    aa|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
|              1|            COUNTRY|                 HKG|      PERCENTILE_10|2017-12-31 00:00:00|   CREDITSCORE|                      8|                  bbb+|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
|              1|            COUNTRY|                 HKG|     PERCENTILE_100|2017-12-31 00:00:00|   CREDITSCORE|                     23|                     d|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
|              1|            COUNTRY|                 HKG|      PERCENTILE_25|2017-12-31 00:00:00|   CREDITSCORE|                     11|                   bb+|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
|              1|            COUNTRY|                 HKG|      PERCENTILE_50|2017-12-31 00:00:00|   CREDITSCORE|                     14|                    b+|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
|              1|            COUNTRY|                 HKG|      PERCENTILE_75|2017-12-31 00:00:00|   CREDITSCORE|                     15|                     b|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
|              1|            COUNTRY|                 HKG|      PERCENTILE_90|2017-12-31 00:00:00|   CREDITSCORE|                     17|                  ccc+|       2017|             4|2018-03-31 14:04:18|            null|           LOAD|           null|
+---------------+-------------------+--------------------+-------------------+-------------------+--------------+-----------------------+----------------------+-----------+--------------+-------------------+----------------+---------------+---------------+

I tried the following code:

val pivot_df = source_df
  .groupBy("model_family_id", "classification_type", "classification_value", "data_item_code",
           "data_date", "fiscal_year", "fiscal_quarter", "create_user_txt", "create_date")
  .pivot("benchmark_type_code",
         Seq("mean", "obs_cnt", "obs_cnt_ca", "percentile_0", "percentile_10", "percentile_25",
             "percentile_50", "percentile_75", "percentile_90", "percentile_100"))
  .agg(
    first(
      when(col("data_item_code") === "CREDITSCORE", col("data_item_value_string"))
        .otherwise(col("data_item_value_numeric"))
    )
  )

The result I get is incomplete (all the pivoted columns come back null), and I am not sure what is wrong with my code.


+---------------+-------------------+--------------------+--------------+-------------------+-----------+--------------+---------------+-------------------+----+-------+----------+------------+-------------+-------------+-------------+-------------+-------------+--------------+
|model_family_id|classification_type|classification_value|data_item_code|          data_date|fiscal_year|fiscal_quarter|create_user_txt|        create_date|mean|obs_cnt|obs_cnt_ca|percentile_0|percentile_10|percentile_25|percentile_50|percentile_75|percentile_90|percentile_100|
+---------------+-------------------+--------------------+--------------+-------------------+-----------+--------------+---------------+-------------------+----+-------+----------+------------+-------------+-------------+-------------+-------------+-------------+--------------+
|              1|            COUNTRY|                 HKG|   CREDITSCORE|2017-12-31 00:00:00|       2017|             4|           LOAD|2018-03-31 14:04:18|null|   null|      null|        null|         null|         null|         null|         null|         null|          null|
+---------------+-------------------+--------------------+--------------+-------------------+-----------+--------------+---------------+-------------------+----+-------+----------+------------+-------------+-------------+-------------+-------------+-------------+--------------+

I also tried calling pivot without the Seq of values, but it still did not work as expected. Any help is appreciated.

2) In the when clause, if the pivot column $"benchmark_type_code" is 'OBS_CNT' or 'OBS_CNT_CA', it should take $"data_item_value_numeric" instead. How can I achieve this?

2 Answers:

Answer 0 (score: 0)

I am not sure whether your Spark version is 2.x. My software versions are: Spark 2.2.1 and Scala 2.11. With the data above, I get the correct result:

+---------------+-------------------+--------------------+--------------+-------------------+-----------+--------------+---------------+-------------------+----+-------+----------+------------+-------------+--------------+-------------+-------------+-------------+-------------+
|model_family_id|classification_type|classification_value|data_item_code|          data_date|fiscal_year|fiscal_quarter|create_user_txt|        create_date|MEAN|OBS_CNT|OBS_CNT_CA|PERCENTILE_0|PERCENTILE_10|PERCENTILE_100|PERCENTILE_25|PERCENTILE_50|PERCENTILE_75|PERCENTILE_90|
+---------------+-------------------+--------------------+--------------+-------------------+-----------+--------------+---------------+-------------------+----+-------+----------+------------+-------------+--------------+-------------+-------------+-------------+-------------+
|              1|            COUNTRY|                 HKG|   CREDITSCORE|2017-12-31 00:00:00|       2017|             4|           LOAD|2018-03-31 14:04:18| bb-|     aa|          |          aa|         bbb+|             d|          bb+|           b+|            b|         ccc+|
+---------------+-------------------+--------------------+--------------+-------------------+-----------+--------------+---------------+-------------------+----+-------+----------+------------+-------------+--------------+-------------+-------------+-------------+-------------+

Here is my code, which you can try:

import spark.implicits._
source_df
    .groupBy($"model_family_id",$"classification_type",$"classification_value",$"data_item_code",$"data_date",$"fiscal_year",$"fiscal_quarter",$"create_user_txt",$"create_date")
    .pivot("benchmark_type_code")
    .agg(
      first(
        when($"data_item_code"==="CREDITSCORE", $"data_item_value_string")
          .otherwise($"data_item_value_numeric")
      )
    ).show()
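
If you still want to pass the values explicitly (for example, to fix the column order or to avoid the extra job Spark runs to compute the distinct pivot values), keep in mind that pivot value matching is case-sensitive. A minimal sketch, assuming the all-null result in the question comes from the lowercase Seq not matching the uppercase codes stored in benchmark_type_code (it reuses the import spark.implicits._ from the snippet above):

// Sketch: pass the uppercase codes that actually appear in benchmark_type_code,
// since pivot value matching is case-sensitive.
val codes = Seq("MEAN", "OBS_CNT", "OBS_CNT_CA", "PERCENTILE_0", "PERCENTILE_10",
                "PERCENTILE_25", "PERCENTILE_50", "PERCENTILE_75", "PERCENTILE_90",
                "PERCENTILE_100")

val pivot_df = source_df
  .groupBy($"model_family_id", $"classification_type", $"classification_value",
           $"data_item_code", $"data_date", $"fiscal_year", $"fiscal_quarter",
           $"create_user_txt", $"create_date")
  .pivot("benchmark_type_code", codes)   // explicit values also fix the output column order
  .agg(
    first(
      when($"data_item_code" === "CREDITSCORE", $"data_item_value_string")
        .otherwise($"data_item_value_numeric")
    )
  )

// Optionally rename the pivoted columns to lowercase afterwards.
val renamed = codes.foldLeft(pivot_df)((df, c) => df.withColumnRenamed(c, c.toLowerCase))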

Answer 1 (score: 0)

We can nest a when condition inside another when condition, as shown below:

.agg(
  first(
    when(col("data").isin("x", "a", "y", "z"),
      when(col("code").isin("aa", "bb"), col("numeric")).otherwise(col("string"))
    ).otherwise(col("numeric"))
  )
)
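
Mapped onto the columns from the question, a sketch of that pattern (assuming the count-type benchmark codes OBS_CNT and OBS_CNT_CA should always come from the numeric column) would look like this:

.agg(
  first(
    // Take the numeric value for the count-type benchmark codes,
    // otherwise fall back to the string/numeric choice driven by data_item_code.
    when(col("benchmark_type_code").isin("OBS_CNT", "OBS_CNT_CA"), col("data_item_value_numeric"))
      .otherwise(
        when(col("data_item_code") === "CREDITSCORE", col("data_item_value_string"))
          .otherwise(col("data_item_value_numeric"))
      )
  )
)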