Spark UDF to retrieve the last non-null value

Asked: 2019-04-22 12:44:27

Tags: apache-spark apache-spark-sql

Input Dataset

 Dataset<Row> inputDS = spark.read().format("avro").load("hdfs://namenode:8020/..");
 +---------------+---------------+----------------+-------+--------------+--------+
 |  time         | thingId       |     controller | module| variableName |  value |
 +---------------+---------------+----------------+-------+--------------+--------+
 |1554188264901  |  0002019000000|        0       | 0     |Value         |    5   |
 |1554188264901  |  0002019000000|        0       | 0     |SetPoint      |    7   |
 |1554188276412  |  0002019000000|        0       | 0     |Voltage       |    9   |
 |1554188276412  |  0002019000000|        0       | 0     |SetPoint      |    10  |  
 |1554188639406  |  0002019000000|        0       | 0     |SetPoint      |    6   |
 |1554188639407  |  0002019000000|        0       | 0     |Voltage       |    3   |
 +---------------+---------------+----------------+-------+--------------+--------+

Intermediate Dataset

 inputDS.createOrReplaceTempView("abc");
 Dataset<Row> intermediateDS =
     spark.sql("select time, thingId, controller, module, variableName, value, "
             + "count(time) over (partition by time) as time_count from abc")
          .filter("time_count = 1").drop("time_count");

 +---------------+---------------+----------------+-------+--------------+--------+
 |  time         | thingId       |     controller | module| variableName |  value |
 +---------------+---------------+----------------+-------+--------------+--------+
 |1554188639406  |  0002019000000|        0       | 0     |SetPoint      |    6   |
 |1554188639407  |  0002019000000|        0       | 0     |Voltage       |    3   |
 +---------------+---------------+----------------+-------+--------------+--------+

The intermediate dataset is nothing but the rows whose time value occurs only once, as shown above.
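
For reference, a minimal DataFrame-API sketch of the same one-occurrence filter (not from the original post; it assumes the inputDS above, and the variable names are illustrative):

 import org.apache.spark.sql.expressions.Window;
 import org.apache.spark.sql.expressions.WindowSpec;
 import static org.apache.spark.sql.functions.*;

 // Count how many rows share each time value, keep only the singletons.
 WindowSpec timeWindow = Window.partitionBy("time");
 Dataset<Row> intermediateDS = inputDS
         .withColumn("time_count", count("time").over(timeWindow))
         .filter(col("time_count").equalTo(1))
         .drop("time_count");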

Required Output Dataset

 +---------------+---------------+----------------+-------+--------------+--------+
 |  time         | thingId       |     controller | module| variableName |  value |
 +---------------+---------------+----------------+-------+--------------+--------+
 |1554188639406  |  0002019000000|        0       | 0     |SetPoint      |    6   |
 |1554188639406  |  0002019000000|        0       | 0     |Voltage       |    9   |  // last non null value for the set (thingId, controller, module) and variableName='Voltage'
 |1554188639407  |  0002019000000|        0       | 0     |Voltage       |    3   |  
 |1554188639407  |  0002019000000|        0       | 0     |SetPoint      |    10  |  // last non null value for the set (thingId, controller, module) and variableName='SetPoint'
 +---------------+---------------+----------------+-------+--------------+--------+

To get the required output, I tried a UDF as shown below

 // Define the UDF first, then register and call it.
 UDF1<String, String> getValue = new UDF1<String, String>() {

     @Override
     public String call(String t1) {

         // Look up the "other" variable for the current row.
         String variableName = "";
         if ("SetPoint".equals(t1)) {
             variableName = "Voltage";
         } else {
             variableName = "SetPoint";
         }

         // Attempt to read the last value for that variable from the temp view.
         String value = String.valueOf(spark.sql(
                 "select last(value) over (order by time desc) as value from abc"
                 + " where variableName = '" + variableName + "' limit 1"));

         return value;
     }
 };

 spark.udf().register("getLastvalue_udf", getValue, DataTypes.StringType);

 intermediateDS = intermediateDS.withColumn("Last_Value",
         callUDF("getLastvalue_udf", col("variableName")));

But the UDF just returned [value: String]; spark.sql() does not work inside a UDF.

1.) How can I get the required output with the above UDF, or please suggest any other workaround?

2.) Is it possible to call spark sql inside a map function? Thanks.

1 answer:

Answer 0: (score: 0)

The lag function handles the case of returning a value from the previous row of the table.

Code below:

 import org.apache.spark.sql.expressions.Window;
 import org.apache.spark.sql.expressions.WindowSpec;
 import static org.apache.spark.sql.functions.*;

 WindowSpec lagWindow = Window.partitionBy("thingId", "controller", "module", "variableName").orderBy("time");
 // Replace empty values with the value from the previous row in the window.
 DS = DS.withColumn("value",
         when(col("value").equalTo(""), lag("value", 1).over(lagWindow))
         .otherwise(col("value")));
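
As a side note, lag("value", 1) only looks back a single row, so a value stays empty when the previous row is also empty. A hedged sketch (not part of the original answer, and assuming the missing entries are real nulls rather than empty strings) that carries the last non-null value forward uses last with ignoreNulls over an unbounded-preceding window:

 import org.apache.spark.sql.expressions.Window;
 import org.apache.spark.sql.expressions.WindowSpec;
 import static org.apache.spark.sql.functions.*;

 WindowSpec fillWindow = Window
         .partitionBy("thingId", "controller", "module", "variableName")
         .orderBy("time")
         .rowsBetween(Window.unboundedPreceding(), Window.currentRow());

 // last(col, true) ignores nulls, so each row gets the most recent non-null value
 // at or before its own time within its (thingId, controller, module, variableName) group.
 Dataset<Row> filledDS = DS.withColumn("value",
         last(col("value"), true).over(fillWindow));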