Redshift: how can we automatically transpose a table from columns to rows?

Time: 2018-09-04 21:59:31

Tags: dynamic pivot-table multiple-columns amazon-redshift transpose

How can we transpose a Redshift table from columns to rows?

For example, if we have a generic (unknown) table like the one below:

source table:

date        id      alfa                beta                gamma   ...                 omega
2018-08-03  1       1                   2                   3                           4
2018-08-03  2       4                   3                   2                           1
...
2018-09-04  1       3                   1                   2                           4
...

How can we obtain the following result?

transposed table:

date        id      column_name     column_value
2018-08-03  1       alfa            1
2018-08-03  1       beta            2
...
2018-08-03  2       omega           1
...
2018-09-04  1       gamma           2
...

In the target table, the column names (alfa, beta, gamma, ..., omega) are all dynamic (so we are looking for a solution that does not require a CASE WHEN mapping for each column), because we want to apply this to several different tables.

However, we will have the date and id fields in all of the target tables (and, in the end, a primary or candidate key in all tables).

Our Redshift version is:

PostgreSQL 8.0.2, Redshift 1.0.3380

How can we do this?

2 answers:

Answer 0 (score: 1):

You need to hard-code the column names into the query.


Result:

CREATE TABLE stack(date TEXT, id BIGINT, alpha INT, beta INT, gamma INT, omega INT);

INSERT INTO STACK VALUES('2018-08-03', 1, 1, 2, 3, 4);
INSERT INTO STACK VALUES('2018-08-03', 2, 4, 3, 2, 1);
INSERT INTO STACK VALUES('2018-08-04', 1, 3, 1, 2, 4);

SELECT
  date,
  id,
  col,
  col_value
FROM
(
-- one SELECT per source column, stacked into (date, id, col_value, col) rows
SELECT date, id, alpha AS col_value, 'alpha' AS col FROM stack
UNION ALL
SELECT date, id, beta  AS col_value, 'beta'  AS col FROM stack
UNION ALL
SELECT date, id, gamma AS col_value, 'gamma' AS col FROM stack
UNION ALL
SELECT date, id, omega AS col_value, 'omega' AS col FROM stack
) AS data
ORDER BY date, id, col;
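
Since the question asks for a dynamic solution across several tables, one option (not part of the original answer) is to generate the UNION ALL statement above from the table's column list. Below is a minimal Python sketch, assuming a psycopg2 connection and that the table is visible in Redshift's pg_table_def system view; the function name and the key-column set are illustrative:

import psycopg2

KEY_COLUMNS = {"date", "id"}   # columns kept as-is on every output row

def build_unpivot_sql(conn, table_name):
    """Build the UNION ALL unpivot query for table_name (hypothetical helper)."""
    with conn.cursor() as cur:
        # pg_table_def lists one row per column for tables on the search_path
        cur.execute(
            'SELECT "column" FROM pg_table_def WHERE tablename = %s',
            (table_name,),
        )
        columns = [row[0] for row in cur.fetchall()]

    value_columns = [c for c in columns if c not in KEY_COLUMNS]
    selects = [
        f"SELECT date, id, '{c}' AS col, {c} AS col_value FROM {table_name}"
        for c in value_columns
    ]
    return "\nUNION ALL\n".join(selects) + "\nORDER BY date, id, col"

# usage (connection parameters are placeholders):
# conn = psycopg2.connect(host="...", dbname="...", user="...", password="...")
# print(build_unpivot_sql(conn, "stack"))

The generated statement mirrors the hard-coded query above, so the same template can be reused for each table without writing the column names by hand.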

Answer 1 (score: 0):

Rather than leaving this as a comment instead of an answer, here is semi-pseudocode to explain how I did it; let me know if you need more information or clarification.

# dictionary to define your target structure
target_d = {'date':'','id':'','column_name':'','column_value':''}

# dictionary for source structure
source_d = {'date':'date','id':'id','column_name1':'','column_name2':''....}

With the dictionary above you declare whether a field is mapped as-is or treated as dynamic; all other fields/columns will be pivoted. You can make this fully dynamic by reading the source table's DDL.

# assuming you already read your source data
# your while loop to go thru the coming data
while <your code here>
    # create a dict to process an incoming row
    curr_d = target_d.copy()

    curr_d['date'] = date from incoming record
    curr_d['id'] = id from incoming record

    # since we are going to create a row for each column name/value combos 
    # we need a new dict to hold the values

    out_d = curr_d.copy()

The line above serves two purposes: it creates a new dictionary for the output row, and it preserves the persistent part of that row (i.e. the date and id).

    # the rest of the fields are pivoted now
    for afield in source_d:
        if afield not in source_d.values():
            out_d['column_name'] = afield
            out_d['column_value'] = column value from incoming record

            create a 'row' from your out_d dict
            write to output / append to output data frame (if you use a data frame)

The while loop iterates over your source rows, and the for loop creates a new row for each column name/value combination of the target.

Let me know if this works for you.
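
As a concrete illustration of the loop described above, here is a minimal runnable sketch; the in-memory row format (a list of dictionaries) and the names unpivot_rows and KEY_FIELDS are assumptions made for the example, not part of the original answer:

KEY_FIELDS = ("date", "id")    # persistent part of every output row

def unpivot_rows(source_rows):
    """Yield one {date, id, column_name, column_value} dict per non-key column."""
    for row in source_rows:
        base = {k: row[k] for k in KEY_FIELDS}     # curr_d in the pseudocode
        for column_name, column_value in row.items():
            if column_name in KEY_FIELDS:
                continue                           # skip the mapped fields
            out = dict(base)                       # out_d in the pseudocode
            out["column_name"] = column_name
            out["column_value"] = column_value
            yield out

# usage with the sample rows from the question
source = [
    {"date": "2018-08-03", "id": 1, "alfa": 1, "beta": 2, "gamma": 3, "omega": 4},
    {"date": "2018-08-03", "id": 2, "alfa": 4, "beta": 3, "gamma": 2, "omega": 1},
]
for out_row in unpivot_rows(source):
    print(out_row)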