Cumulative sum using SUM OVER (PARTITION BY) not working

Time: 2017-08-10 09:21:39

Tags: sql sql-server

I am trying to get a cumulative count of records until the values in two columns change. Below is a sample of the data I currently have.



DT                       ABBR_VOYAGE_OUT_N   ABBR_VESSEL_M   ETB_DT                    No_of_records
2017-05-08 16:00:00.000  0001W               rmhp tmvpn      2017-05-10 12:00:00.000   17
2017-05-08 16:00:00.000  0001W               rmhp tmvpn      2017-05-10 12:00:00.000   17
2017-05-09 20:00:00.000  0001W               rmhp tmvpn      2017-05-10 12:00:00.000   10
2017-05-08 16:00:00.000  0002W               rmhp hueml      2017-05-26 14:30:00.000   10
2017-05-08 16:00:00.000  0002W               rmhp hueml      2017-05-26 14:30:00.000   10
2017-05-09 20:00:00.000  0002W               rmhp hueml      2017-05-26 14:30:00.000   10




I am trying to sum No_of_records cumulatively until ABBR_VOYAGE_OUT_N and ABBR_VESSEL_M change.

I have tried the following code, but it does not work.



select DT, ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M, ETB_DT, No_of_records,
       sum(No_of_records) over (partition by ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M
                                order by ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M ASC) as cumulative
from no_of_cntr
order by ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M




It gives me the following output.



DT                       ABBR_VOYAGE_OUT_N   ABBR_VESSEL_M   ETB_DT                    No_of_records   cumulative
2017-05-08 16:00:00.000  0001W               rmhp tmvpn      2017-05-10 12:00:00.000   17              44
2017-05-08 16:00:00.000  0001W               rmhp tmvpn      2017-05-10 12:00:00.000   17              44
2017-05-09 20:00:00.000  0001W               rmhp tmvpn      2017-05-10 12:00:00.000   10              44
2017-05-08 16:00:00.000  0002W               rmhp hueml      2017-05-26 14:30:00.000   10              30
2017-05-08 16:00:00.000  0002W               rmhp hueml      2017-05-26 14:30:00.000   10              30
2017-05-09 20:00:00.000  0002W               rmhp hueml      2017-05-26 14:30:00.000   10              30




Below is the desired output I am trying to get.



DT                       ABBR_VOYAGE_OUT_N   ABBR_VESSEL_M   ETB_DT                    No_of_records   cumulative
2017-05-08 16:00:00.000  0001W               rmhp tmvpn      2017-05-10 12:00:00.000   17              17
2017-05-08 16:00:00.000  0001W               rmhp tmvpn      2017-05-10 12:00:00.000   17              34
2017-05-09 20:00:00.000  0001W               rmhp tmvpn      2017-05-10 12:00:00.000   10              44
2017-05-08 16:00:00.000  0002W               rmhp hueml      2017-05-26 14:30:00.000   10              10
2017-05-08 16:00:00.000  0002W               rmhp hueml      2017-05-26 14:30:00.000   10              20
2017-05-09 20:00:00.000  0002W               rmhp hueml      2017-05-26 14:30:00.000   10              30




Any idea why I am not getting the correct output?

2 Answers:

Answer 0 (score: 1)

You can add a CTE at the beginning to take into account the order in which the running sum must be computed:

DECLARE @t TABLE(
  Dt DATETIME
 ,ABBR_VOYAGE_OUT_N NVARCHAR(20)
 ,ABBR_VESSEL_M NVARCHAR(20)
 ,ETB_DT DATETIME
 ,No_of_records INT
)

INSERT INTO @t VALUES('2017-05-08 16:00:00.000', '0001W', 'rmhp tmvpn', '2017-05-10 12:00:00.000', 17);
INSERT INTO @t VALUES('2017-05-08 16:00:00.000', '0001W', 'rmhp tmvpn', '2017-05-10 12:00:00.000', 17);
INSERT INTO @t VALUES('2017-05-09 20:00:00.000', '0001W', 'rmhp tmvpn', '2017-05-10 12:00:00.000', 10);
INSERT INTO @t VALUES('2017-05-08 16:00:00.000', '0002W', 'rmhp hueml', '2017-05-26 14:30:00.000', 10);       
INSERT INTO @t VALUES('2017-05-08 16:00:00.000', '0002W', 'rmhp hueml', '2017-05-26 14:30:00.000', 10);       
INSERT INTO @t VALUES('2017-05-09 20:00:00.000', '0002W', 'rmhp hueml', '2017-05-26 14:30:00.000', 10);

WITH cte AS(
  -- ROW_NUMBER gives every row a unique ordering key (by DT, then ETB_DT)
  -- for the running sum below
  SELECT DT, ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M, ETB_DT, No_of_records,
         ROW_NUMBER() OVER (ORDER BY DT, ETB_DT) AS rn
    FROM @t
)
SELECT DT, ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M, ETB_DT, No_of_records,
       SUM(No_of_records) OVER (PARTITION BY ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M ORDER BY rn) AS cumulative
FROM cte
ORDER BY ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M
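
Your original query summed the whole group because it ordered the window by the same columns it partitioned by: every row in a partition is then a peer of every other row, and the default RANGE frame includes all peers in the sum. Ordering by the unique rn turns it into a true running total; with the sample data above this returns 17, 34, 44 for voyage 0001W and 10, 20, 30 for 0002W, matching the desired output.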

Answer 1 (score: 0)

I've done something similar on my own system...

I think you need to look at ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW; hopefully that helps you sort it out...

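A minimal sketch of that frame clause applied to your table (assuming the no_of_cntr table and column names from the question):

SELECT DT, ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M, ETB_DT, No_of_records,
       SUM(No_of_records) OVER (PARTITION BY ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M
                                ORDER BY DT, ETB_DT
                                -- ROWS (rather than the default RANGE) adds tied rows
                                -- one at a time, so the sum grows row by row
                                ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cumulative
FROM no_of_cntr
ORDER BY ABBR_VOYAGE_OUT_N, ABBR_VESSEL_M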