Calculating alert flooding in Snowflake

Time: 2020-03-24 08:05:52

Tags: sql snowflake-cloud-data-platform snowflake-schema

I'm trying to do an alert-flood calculation in Snowflake. I created the dataset below using Snowflake window functions. An alert flood starts when the value is greater than or equal to 3, and it ends at the next 0 value. So in the example below, an alert flood starts at 9:51 and ends at 9:54, lasting 3 minutes; the next flood starts at 9:57 and ends at 10:02, i.e. it lasts 5 minutes. FYI, the value at 9:59 is 3, but since the flood has already started we don't have to consider it. The next flood starts at 10:03, but there is no 0 value after it, so we have to consider the edge value, 10:06. The total flood time is therefore 3 + 5 + 4 = 12 minutes.

   DateTime    Value
3/10/2020 9:50  1
3/10/2020 9:51  3
3/10/2020 9:52  1
3/10/2020 9:53  2
3/10/2020 9:54  0
3/10/2020 9:55  0
3/10/2020 9:56  1
3/10/2020 9:57  3
3/10/2020 9:58  2
3/10/2020 9:59  3
3/10/2020 10:00 2
3/10/2020 10:01 2
3/10/2020 10:02 0
3/10/2020 10:03 3
3/10/2020 10:04 1
3/10/2020 10:05 1
3/10/2020 10:06 1

So, in short, I am expecting the output below:

[expected output attached as an image in the original post]

I tried the SQL below, but it doesn't produce the correct output; it fails on the second flood window (because another 3 appears before the next 0):

select t.*,
       (case when value >= 3
             then datediff(minute,
                           datetime,
                           min(case when value = 0 then datetime end) over (order by datetime desc)
                          )
        end) as diff_minutes
from t;
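The intended flood semantics can be sketched as a small state machine in plain Python (an illustration, not Snowflake SQL; it assumes one sample per minute and, per the 3 + 5 + 4 = 12 expectation, counts an unterminated flood through the minute after the last sample):

```python
from datetime import datetime, timedelta

def flood_durations(rows):
    """rows: time-ordered list of (datetime, value), one sample per minute.
    A flood opens at the first value >= 3 and closes at the next 0."""
    durations, start = [], None
    for t, v in rows:
        if start is None and v >= 3:
            start = t                     # flood opens
        elif start is not None and v == 0:
            durations.append(int((t - start).total_seconds() // 60))
            start = None                  # flood closes
    if start is not None:
        # Unterminated flood: count through the minute after the last sample
        end = rows[-1][0] + timedelta(minutes=1)
        durations.append(int((end - start).total_seconds() // 60))
    return durations

base = datetime(2020, 3, 10, 9, 50)
values = [1, 3, 1, 2, 0, 0, 1, 3, 2, 3, 2, 2, 0, 3, 1, 1, 1]
sample = [(base + timedelta(minutes=i), v) for i, v in enumerate(values)]
```

Running `flood_durations(sample)` on the table above yields the three flood durations, summing to 12 minutes.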

3 Answers:

Answer 0 (score: 2)

This isn't the code I'm most proud of, but it works and can serve as a starting point. I'm sure it can be cleaned up or simplified, and I haven't evaluated its performance on larger tables.

The main insight I used is that if you add the date_diff back to the date, you can find cases where several rows add up to the same value, which means they are all counting toward the same "0" record. Hopefully that concept is helpful.

Also, the first CTE is a somewhat hacky way to get the 4 at the end of your results.
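Both ideas together can be sketched in plain Python (an illustration of the approach, not the SQL itself; the duplicate check uses a set where the query uses LAG partitioned by value):

```python
from datetime import datetime, timedelta

def flood_durations_fakezero(rows):
    # Append a fake 0 one minute past the last row (the "fakezero" CTE),
    # so an unresolved high value has something to diff against.
    rows = rows + [(rows[-1][0] + timedelta(minutes=1), 0)]
    durations, resolved = [], set()
    for i, (t, v) in enumerate(rows):
        if v >= 3:
            # the next 0-valued row is what "resolves" this high value
            zero_t = next(t2 for t2, v2 in rows[i:] if v2 == 0)
            # two highs landing on the same 0 are counting to the same
            # record; keep only the first one
            if zero_t not in resolved:
                resolved.add(zero_t)
                durations.append(int((zero_t - t).total_seconds() // 60))
    return durations

base = datetime(2020, 3, 10, 9, 50)
values = [1, 3, 1, 2, 0, 0, 1, 3, 2, 3, 2, 2, 0, 3, 1, 1, 1]
sample = [(base + timedelta(minutes=i), v) for i, v in enumerate(values)]
```

On the sample data the high at 9:59 resolves to the same 0 (10:02) as the high at 9:57 and is dropped, leaving durations of 3, 5, and 4 minutes.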

--Add a fake zero at the end of the table to provide a value for
-- comparing high values that have not been resolved
-- added a flag so this fake value can be removed later
with fakezero as
(
SELECT datetime, value, 1 flag
FROM test

UNION ALL

SELECT dateadd(minute, 1, max(datetime)) datetime, 0 value, 0 flag
FROM test  
)

-- Find date diffs between high values and subsequent low values
,diffs as (
select t.*,
       (case when value >= 3
             then datediff(minute,
                           datetime,
                           min(case when value = 0 then datetime end) over (order by datetime desc)
                          )
        end) as diff_minutes
from fakezero t
)

--Fix cases where two High values are "resolved" by the same low value
--i.e. when adding the date_diff to the datetime results in the same timestamp
-- this means that the prior high value record that still hasn't been "resolved"
select
  datetime
  ,value
  ,case when 
      lag(dateadd(minute, diff_minutes, datetime)) over(partition by value order by datetime)
      = dateadd(minute, diff_minutes, datetime)
    then null 
    else diff_minutes 
  end as diff_minutes
from diffs
where flag = 1
order by datetime;

Answer 1 (score: 2)

WITH data as (
  select time::timestamp as time, value from values
    ('2020-03-10 9:50', 1 ),
    ('2020-03-10 9:51', 3 ),
    ('2020-03-10 9:52', 1 ),
    ('2020-03-10 9:53', 2 ),
    ('2020-03-10 9:54', 0 ),
    ('2020-03-10 9:55', 0 ),
    ('2020-03-10 9:56', 1 ),
    ('2020-03-10 9:57', 3 ),
    ('2020-03-10 9:58', 2 ),
    ('2020-03-10 9:59', 3 ),
    ('2020-03-10 10:00', 2 ),
    ('2020-03-10 10:01', 2 ),
    ('2020-03-10 10:02', 0 ),
    ('2020-03-10 10:03', 3 ),
    ('2020-03-10 10:04', 1 ),
    ('2020-03-10 10:05', 1 ),
    ('2020-03-10 10:06', 1 )
     s( time, value)
) 
select 
    a.time
    ,a.value
    ,min(trig_time)over(partition by reset_time_group order by time) as first_trigger_time
    ,iff(a.time=first_trigger_time, datediff('minute', first_trigger_time, reset_time_group), null) as trig_duration
from (
select d.time
   ,d.value 
   ,iff(d.value>=3,d.time,null) as trig_time
   ,iff(d.value=0,d.time,null) as reset_time
   ,max(time)over(order by time ROWS BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) as max_time
   ,coalesce(lead(reset_time)ignore nulls over(order by d.time), max_time) as lead_reset_time
   ,coalesce(reset_time,lead_reset_time) as reset_time_group
from data as d
) as a
order by time;

This gives the results you seem to expect/describe:

TIME                     VALUE  FIRST_TRIGGER_TIME         TRIG_DURATION
2020-03-10 09:50:00.000    1        
2020-03-10 09:51:00.000    3    2020-03-10 09:51:00.000    3
2020-03-10 09:52:00.000    1    2020-03-10 09:51:00.000    
2020-03-10 09:53:00.000    2    2020-03-10 09:51:00.000    
2020-03-10 09:54:00.000    0    2020-03-10 09:51:00.000    
2020-03-10 09:55:00.000    0        
2020-03-10 09:56:00.000    1        
2020-03-10 09:57:00.000    3    2020-03-10 09:57:00.000    5
2020-03-10 09:58:00.000    2    2020-03-10 09:57:00.000    
2020-03-10 09:59:00.000    3    2020-03-10 09:57:00.000    
2020-03-10 10:00:00.000    2    2020-03-10 09:57:00.000    
2020-03-10 10:01:00.000    2    2020-03-10 09:57:00.000    
2020-03-10 10:02:00.000    0    2020-03-10 09:57:00.000    
2020-03-10 10:03:00.000    3    2020-03-10 10:03:00.000    3
2020-03-10 10:04:00.000    1    2020-03-10 10:03:00.000    
2020-03-10 10:05:00.000    1    2020-03-10 10:03:00.000    
2020-03-10 10:06:00.000    1    2020-03-10 10:03:00.000    

So, how this works: we find the trigger times and the reset times, and compute max_time for the last-row edge case. Then we look forward for the next reset_time, using max_time if there is none, and take either the current row's reset time or that lead_reset_time. For what you are doing here, that last step could be skipped, since your data can't trigger and reset on the same row; and given that we do the math on the trigger rows, it doesn't matter which group a reset row knows it belongs to.

Then, having hit Snowflake's limits on nesting/correlating window expressions, we move to an outer select layer, take a MIN within the reset_time_group to find the first trigger time, compare it to the row's time, and take the date difference.
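The grouping mechanics can be mirrored in plain Python (a sketch, not Snowflake SQL; the backward pass plays the role of LEAD(reset_time) IGNORE NULLS with the max_time fallback):

```python
from datetime import datetime, timedelta

def flood_durations_grouped(rows):
    """Assign each row to the next reset (value 0) at or after it, falling
    back to the last timestamp; the first trigger in each group diffs
    against the group's reset time."""
    next_reset = rows[-1][0]          # max_time fallback for the trailing edge
    groups = []
    for t, v in reversed(rows):
        groups.append(t if v == 0 else next_reset)
        if v == 0:
            next_reset = t
    groups.reverse()

    durations, seen = [], set()
    for (t, v), g in zip(rows, groups):
        if v >= 3 and g not in seen:  # first trigger time in this reset group
            seen.add(g)
            durations.append(int((g - t).total_seconds() // 60))
    return durations

base = datetime(2020, 3, 10, 9, 50)
values = [1, 3, 1, 2, 0, 0, 1, 3, 2, 3, 2, 2, 0, 3, 1, 1, 1]
sample = [(base + timedelta(minutes=i), v) for i, v in enumerate(values)]
```

Like the query's output above, this reports 3 minutes for the last flood (10:06 − 10:03); the dateadd tweak described below is what turns it into 4.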

Side note: datediff is a little naive mathematically: '2020-01-01 23:59:59' and '2020-01-02 00:00:01' are 2 seconds apart, yet also 1 minute, 1 hour, and 1 day apart, because the function truncates each timestamp to the selected unit and then differences those results.
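A quick Python sketch of that truncate-then-subtract behavior (a model of the semantics described above, not Snowflake's implementation):

```python
from datetime import datetime

def snowflake_style_datediff(unit, t1, t2):
    """Truncate both timestamps to the unit, then difference the results."""
    trunc = {
        'second': lambda t: t.replace(microsecond=0),
        'minute': lambda t: t.replace(second=0, microsecond=0),
        'hour':   lambda t: t.replace(minute=0, second=0, microsecond=0),
        'day':    lambda t: t.replace(hour=0, minute=0, second=0, microsecond=0),
    }
    unit_seconds = {'second': 1, 'minute': 60, 'hour': 3600, 'day': 86400}
    delta = trunc[unit](t2) - trunc[unit](t1)
    return int(delta.total_seconds() // unit_seconds[unit])

a = datetime(2020, 1, 1, 23, 59, 59)
b = datetime(2020, 1, 2, 0, 0, 1)
```

Here `a` and `b` are 2 seconds apart, yet the minute, hour, and day diffs all come out as 1.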

To get the final batch to have the value 4 as requested in the question, change the lead_reset_time line to:

,coalesce(lead(reset_time)ignore nulls over(order by d.time), dateadd('minute', 1, max_time)) as lead_reset_time

Given you have no data at later times, moving max_time forward by one minute says the state of the existing 10:06 row was valid for 1 minute. It's not how I would do it... but there's the code you asked for.

Answer 2 (score: 1)

A JavaScript UDTF version, assuming the same input data loaded into a table t:

select d, v, iff(3<=v and 1=row_number() over (partition by N order by d),
    count(*) over (partition by N), null) trig_duration
from t, lateral flood_count(t.v::float) 
order by d;

where flood_count() is defined as:

create or replace function flood_count(V float) 
returns table (N float)
language javascript AS
$${

  initialize: function() { 
    this.n = 0 
    this.flood = false
  },

  processRow: function(row, rowWriter) { 
    if (3<=row.V && !this.flood) {
        this.flood = true
        this.n++
    }
    else if (0==row.V) this.flood=false
    rowWriter.writeRow({ N: this.flood ? this.n : null })  
  },

}$$;
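The UDTF's per-row state can be ported to plain Python for illustration (the flood group number N stands in for the UDTF's output column; `count(*) over (partition by N)` then turns each group's row count into its duration):

```python
def flood_group_numbers(values):
    """Port of flood_count()'s processRow logic: emit a flood group
    number while a flood is active, None otherwise."""
    n, flood, out = 0, False, []
    for v in values:
        if v >= 3 and not flood:   # a new flood starts
            flood, n = True, n + 1
        elif v == 0:               # a 0 ends the current flood
            flood = False
        out.append(n if flood else None)
    return out

values = [1, 3, 1, 2, 0, 0, 1, 3, 2, 3, 2, 2, 0, 3, 1, 1, 1]
```

On the sample series this yields three groups of 3, 5, and 4 rows, which at one row per minute are exactly the flood durations.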