How to filter the values going into a percentile in Spark SQL?

Time: 2019-07-11 19:57:53

Tags: apache-spark apache-spark-sql

I have a situation like this:

scala> val values = Seq[(Integer, Integer)]((7,-1),(null,null),(1,0),(null,3),(2,5),(-1,null)).toDF("price","size")

scala> values.createOrReplaceTempView("mydata")

scala> sqlContext.sql("select percentile(price,0.5), percentile(size,0.5) from mydata").show()
+-----------------------------------------+----------------------------------------+
|percentile(price, CAST(0.5 AS DOUBLE), 1)|percentile(size, CAST(0.5 AS DOUBLE), 1)|
+-----------------------------------------+----------------------------------------+
|                                      1.5|                                     1.5|
+-----------------------------------------+----------------------------------------+

Is it possible to filter the price and size values going into the percentile based on some condition? For example, suppose I only want to include values > 0. In Postgres I can do the following:

select
   percentile_cont (0.5) within group (order by price) filter (where price > 0),
   percentile_cont (0.5) within group (order by size) filter (where size > 0)
from (values (7,-1),(null,null),(1,0),(null,3),(2,5),(-1,null)) T(price,size);

 percentile_cont | percentile_cont
-----------------+-----------------
               2 |               4

Is there a similar way to do this in Spark SQL?

1 Answer:

Answer 0 (score: 0)

I found a workaround myself: `percentile` ignores NULLs, so a CASE WHEN that maps filtered-out values to NULL has the same effect as Postgres's FILTER clause:

sqlContext.sql("""
  select
    percentile(case when price > 0 then price else null end, 0.5) as median_price,
    percentile(case when size  > 0 then size  else null end, 0.5) as median_size
  from mydata
""").show()
+------------+-----------+
|median_price|median_size|
+------------+-----------+
|         2.0|        4.0|
+------------+-----------+
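As a sanity check on the semantics, here is a minimal plain-Scala sketch (no Spark required) of what the workaround computes: values failing the predicate become NULL, the aggregate skips them, and the median is taken over the surviving values only. `MedianSketch` and its `median` helper are hypothetical names for illustration; like Spark's `percentile`, the sketch interpolates between the two middle values when the count is even.

```scala
// Sketch of the CASE WHEN + percentile(…, 0.5) semantics used in the answer:
// filtered-out values become null, and the aggregate ignores nulls.
object MedianSketch {
  // Median with linear interpolation for an even number of values.
  def median(xs: Seq[Double]): Double = {
    val s = xs.sorted
    val n = s.length
    if (n % 2 == 1) s(n / 2) else (s(n / 2 - 1) + s(n / 2)) / 2.0
  }

  def main(args: Array[String]): Unit = {
    // Same data as the question; None stands in for SQL NULL.
    val price = Seq[Option[Int]](Some(7), None, Some(1), None, Some(2), Some(-1))
    val size  = Seq[Option[Int]](Some(-1), None, Some(0), Some(3), Some(5), None)
    // Emulate "case when x > 0 then x else null end": drop nulls, keep only x > 0.
    val p = price.flatten.filter(_ > 0).map(_.toDouble) // 7, 1, 2
    val s = size.flatten.filter(_ > 0).map(_.toDouble)  // 3, 5
    println(median(p)) // 2.0
    println(median(s)) // 4.0
  }
}
```

This reproduces the Postgres result (2 and 4) from the question, confirming that the NULL-mapping trick and `FILTER (WHERE …)` select the same input rows.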