Postgres - group by every n days

Time: 2019-01-31 20:59:50

Tags: sql postgresql

Can I group by every n days in Postgres?

If we have:

date       |  price
2018-01-01 |  10
2018-01-02 |  11
2018-01-03 |  10.5
.....

Something like grouping every 10 days and getting the average of the price column.
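For illustration only (assuming a table named prices holding the rows above), on PostgreSQL 14 or later the kind of query being asked for might look roughly like this, with date_bin anchoring 10-day buckets to the first date:

-- Rough sketch, not from the original post: assumes prices(date date, price numeric).
-- date_bin (PostgreSQL 14+) snaps each date into a 10-day bucket anchored at 2018-01-01.
SELECT date_bin('10 days', date::timestamp, timestamp '2018-01-01')::date AS bucket_start,
       avg(price) AS avg_price
FROM prices
GROUP BY bucket_start
ORDER BY bucket_start;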

3 answers:

Answer 0 (score: 1)

How about this? It buckets rows into consecutive 10-day periods with no gaps.

CREATE TABLE date (
  date  DATE             NOT NULL,
  price DOUBLE PRECISION NOT NULL
);

INSERT INTO date (date, price)
SELECT (now()::DATE) + s.i,
  s.i :: DOUBLE PRECISION
FROM generate_series(0, 1000) AS s(i);

SELECT ((extract(EPOCH FROM date) / (60 * 60 * 24)) :: BIGINT) / 10
    , avg(price) AS average_price
FROM date
GROUP BY 1
ORDER BY 1;
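The bucket key produced by that query is just an integer: days since the Unix epoch, integer-divided by 10, so consecutive values are consecutive 10-day windows. If a readable label is preferred, one possible follow-up (a sketch layered on the answer above, not part of it) is to turn the index back into the window's first day:

-- Sketch only: convert the integer bucket back to its starting calendar day.
-- bucket * 10 * 86400 is the epoch (in seconds) of the window's first day;
-- to_timestamp() returns timestamptz, so run with timezone = 'UTC' to avoid
-- the resulting date shifting by one.
SELECT bucket
    , to_timestamp(bucket * 10 * 86400)::date AS bucket_start
    , average_price
FROM (
  SELECT ((extract(EPOCH FROM date) / (60 * 60 * 24)) :: BIGINT) / 10 AS bucket
      , avg(price) AS average_price
  FROM date
  GROUP BY 1
) AS b
ORDER BY bucket;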

Answer 1 (score: 0)

Example data:
ugautil=> select id,date, price from sales order by 1 limit 30;
id  |    date    | price  
-----+------------+--------
569 | 2018-01-01 | 296.01
570 | 2018-01-02 | 409.50
571 | 2018-01-03 |  46.73
572 | 2018-01-04 | 302.13
573 | 2018-01-05 | 313.83
574 | 2018-01-06 | 302.68
575 | 2018-01-07 | 359.53
576 | 2018-01-08 | 348.60
577 | 2018-01-09 | 376.09
578 | 2018-01-10 |  23.71
579 | 2018-01-11 | 470.93
580 | 2018-01-12 | 409.37
581 | 2018-01-13 | 160.95
582 | 2018-01-14 |  22.04
583 | 2018-01-15 | 295.15
584 | 2018-01-16 | 475.42
585 | 2018-01-17 | 399.37
586 | 2018-01-18 | 394.43
587 | 2018-01-19 |  91.97
588 | 2018-01-20 |  27.38
589 | 2018-01-21 | 286.23
590 | 2018-01-22 |  57.81
591 | 2018-01-23 | 486.14
592 | 2018-01-24 |  10.30
593 | 2018-01-25 | 423.67
594 | 2018-01-26 | 169.94
595 | 2018-01-27 | 152.08
596 | 2018-01-28 | 344.42
597 | 2018-01-29 | 448.63
598 | 2018-01-30 | 360.33
(30 rows)

Picking Jan 1, 2018 as the start date, every 10 days gives us an index number.
Only looking at the first 3 groups in January:
ugautil=> select floor((extract(epoch from date) - extract(epoch from date('2018-01-01')))/86400/10) as "ten_day_index", round(avg(price),2) from sales group by 1 order by 1 limit 3;
ten_day_index | round  
---------------+--------
           0 | 277.88
           1 | 274.70
           2 | 273.96
(3 rows)

ugautil=> delete from sales where id >= 569 and id <= 576;
DELETE 8
ugautil=> select id,date, price from sales order by 1 limit 30;
id  |    date    | price  
-----+------------+--------
577 | 2018-01-09 | 376.09
578 | 2018-01-10 |  23.71
579 | 2018-01-11 | 470.93
580 | 2018-01-12 | 409.37
581 | 2018-01-13 | 160.95
582 | 2018-01-14 |  22.04
583 | 2018-01-15 | 295.15
584 | 2018-01-16 | 475.42
585 | 2018-01-17 | 399.37
586 | 2018-01-18 | 394.43
587 | 2018-01-19 |  91.97
588 | 2018-01-20 |  27.38
589 | 2018-01-21 | 286.23
590 | 2018-01-22 |  57.81
591 | 2018-01-23 | 486.14
592 | 2018-01-24 |  10.30
593 | 2018-01-25 | 423.67
594 | 2018-01-26 | 169.94
595 | 2018-01-27 | 152.08
596 | 2018-01-28 | 344.42
597 | 2018-01-29 | 448.63
598 | 2018-01-30 | 360.33
599 | 2018-01-31 | 120.00
600 | 2018-02-01 | 328.08
601 | 2018-02-02 | 393.58
602 | 2018-02-03 |  52.04
603 | 2018-02-04 | 206.91
604 | 2018-02-05 | 194.20
605 | 2018-02-06 | 102.89
606 | 2018-02-07 | 146.78
(30 rows)

ugautil=> select floor((extract(epoch from date) - extract(epoch from date('2018-01-01')))/86400/10) as "ten_day_index", round(avg(price),2) from sales group by 1 order by 1 limit 3;
ten_day_index | round  
---------------+--------
           0 | 199.90
           1 | 274.70
           2 | 273.96
(3 rows)

Only the Jan 9 and Jan 10 entries fall into group 0's average now: (376.09 + 23.71) / 2 = 199.90, matching the first row of the output above.

Answer 2 (score: 0)

This is brute force, and I'll bet it's inefficient because of the > and < in the join, but conceptually it sounds like what you want to do:

with intervals as (
  select start_date, start_date + interval '10 days' as end_date
  from generate_series (
    (select min (date) from price_data),
    (select max (date) from price_data),
    interval '10 days') gs (start_date)
)
select
  i.start_date, sum (p.price) / 10 as average
from
  price_data p
  join intervals i on
    p.date >= i.start_date and
    p.date <  i.end_date
group by
  i.start_date

This looks uglier, but I suspect it would run faster over large datasets:

with intervals as (
  select
    start_date::date as start_date,
   (start_date + interval '10 days')::date as end_date
  from generate_series (
    (select min (date) from price_data),
    (select max (date) from price_data),
    interval '10 days') gs (start_date)
),
exploded_intervals as (
  select 
    start_date + i as snapshot_date, start_date, end_date
  from
    intervals i
    cross join generate_series (0, 9) gs (i)
)
select
  i.start_date, sum (p.price) / 10 as average
from
  price_data p
  join exploded_intervals i on
    p.date = i.snapshot_date
group by
  i.start_date

I make no claim that these are the best ways to do this, but they are ways.

In a nutshell, I split the minimum and maximum dates in your dataset into 10-day intervals, on the premise that the "every 10 days" clock starts on your first date.

From there, I group the actual data into each of those date buckets, sum the prices, and divide by 10. If any dates are missing, this accounts for them. If you have duplicate rows on the same day... well, that will artificially inflate your "average." If you define rules around how duplicates should be handled, that's easy enough to manage.
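For the duplicate case, one possible rule (my sketch, not part of the original answer) is to collapse same-day rows to a per-day average before bucketing, reusing the intervals CTE from the first query above:

-- Hedged sketch: pre-aggregate duplicate rows per day, so several rows on the
-- same date count as one day's price before the 10-day bucketing.
-- Assumes the same price_data(date, price) table as the answer above.
with intervals as (
  select start_date, start_date + interval '10 days' as end_date
  from generate_series (
    (select min (date) from price_data),
    (select max (date) from price_data),
    interval '10 days') gs (start_date)
),
daily as (
  select date, avg (price) as price   -- one row per day; duplicates averaged
  from price_data
  group by date
)
select
  i.start_date, sum (d.price) / 10 as average
from
  daily d
  join intervals i on
    d.date >= i.start_date and
    d.date <  i.end_date
group by
  i.start_date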