I am using Google BigQuery and I am trying to get a pivoted result out of the public sample dataset.
A simple query against the existing table is:
SELECT *
FROM publicdata:samples.shakespeare
LIMIT 10;
This query returns the first ten rows, with the columns word, word_count, corpus, and corpus_date.
What I now want to do is take the results from that table, select 'BRAVE' as column_1 when the word is 'brave', select 'ATTENDED' as column_2 when the word is 'attended', and aggregate the word_count of those two words.
This is the query I am using:
SELECT
(CASE WHEN word = 'brave' THEN 'BRAVE' ELSE '' END) AS column_1,
(CASE WHEN word = 'attended' THEN 'ATTENDED' ELSE '' END) AS column_2,
SUM (word_count)
FROM publicdata:samples.shakespeare
WHERE (word = 'brave' OR word = 'attended')
GROUP BY column_1, column_2
LIMIT 10;
However, this query returns one row per word, with the matching label in one column and an empty string in the other.
What I am looking for is a single row with both labels filled in together with the aggregated count, as illustrated below.
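For illustration (counts shown as placeholders, since the actual totals depend on the dataset), the query above currently returns:

column_1 | column_2 | SUM(word_count)
---|---|---
BRAVE |  | (total for 'brave')
 | ATTENDED | (total for 'attended')

whereas the result I am after is a single row:

column_1 | column_2 | SUM(word_count)
---|---|---
BRAVE | ATTENDED | (combined total)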
I know this query does not really make sense for this dataset; I am only using it as an example to explain the problem. It would be great if you could give me some pointers.
Edit: I also looked at How to simulate a pivot table with BigQuery?, and it seems to have the same problem that I mention here.
Answer 0 (score: 10):
Update 2019:

Since this is a popular question, let me update it for #standardSQL and a more general pivoting case. Here we have multiple rows, and each sensor looks at a different type of property. To pivot the data, we would do something like this:
#standardSQL
SELECT MoteName
, TIMESTAMP_TRUNC(Timestamp, hour) hour
, AVG(IF(SensorName LIKE '%altitude', Data, null)) altitude
, AVG(IF(SensorName LIKE '%light', Data, null)) light
, AVG(IF(SensorName LIKE '%mic', Data, null)) mic
, AVG(IF(SensorName LIKE '%temperature', Data, null)) temperature
FROM `data-sensing-lab.io_sensor_data.moscone_io13`
WHERE MoteName = 'XBee_40670F5F'
GROUP BY 1, 2
As an alternative to AVG(), you can try MAX(), ANY_VALUE(), and so on.
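Applied back to the question's Shakespeare table, the same conditional-aggregation idea in standard SQL could look like the following sketch (assuming the sample table is referenced by its standard SQL name, bigquery-public-data.samples.shakespeare):

#standardSQL
-- Sketch: collapse 'brave' and 'attended' into one row of per-word totals
-- plus a combined total.
SELECT
  SUM(IF(word = 'brave', word_count, 0)) AS brave,
  SUM(IF(word = 'attended', word_count, 0)) AS attended,
  SUM(word_count) AS total_word_count
FROM `bigquery-public-data.samples.shakespeare`
WHERE word IN ('brave', 'attended')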
Previously:
I'm not sure what you are trying to do, but:
SELECT NTH(1, words) WITHIN RECORD column_1, NTH(2, words) WITHIN RECORD column_2, f0_
FROM (
SELECT NEST(word) words, SUM(c)
FROM (
SELECT word, SUM(word_count) c
FROM publicdata:samples.shakespeare
WHERE word in ('brave', 'attended')
GROUP BY 1
)
)
Update: same results, simpler query:
SELECT NTH(1, word) column_1, NTH(2, word) column_2, SUM(c)
FROM (
SELECT word, SUM(word_count) c
FROM publicdata:samples.shakespeare
WHERE word in ('brave', 'attended')
GROUP BY 1
)
Answer 1 (score: 3):
Update 2021:

A new PIVOT operator has been introduced in BigQuery.
Before using PIVOT to rotate sales and quarter into the Q1, Q2, Q3, Q4 columns:

product | sales | quarter
---|---|---
Kale | 51 | Q1
Kale | 23 | Q2
Kale | 45 | Q3
Kale | 3 | Q4
Apple | 77 | Q1
Apple | 0 | Q2
Apple | 25 | Q3
Apple | 2 | Q4
After using PIVOT to rotate sales and quarter into the Q1, Q2, Q3, Q4 columns:

product | Q1 | Q2 | Q3 | Q4
---|---|---|---|---
Apple | 77 | 0 | 25 | 2
Kale | 51 | 23 | 45 | 3
Query:
with Produce AS (
SELECT 'Kale' as product, 51 as sales, 'Q1' as quarter UNION ALL
SELECT 'Kale', 23, 'Q2' UNION ALL
SELECT 'Kale', 45, 'Q3' UNION ALL
SELECT 'Kale', 3, 'Q4' UNION ALL
SELECT 'Apple', 77, 'Q1' UNION ALL
SELECT 'Apple', 0, 'Q2' UNION ALL
SELECT 'Apple', 25, 'Q3' UNION ALL
SELECT 'Apple', 2, 'Q4')
SELECT * FROM
(SELECT product, sales, quarter FROM Produce)
PIVOT(SUM(sales) FOR quarter IN ('Q1', 'Q2', 'Q3', 'Q4'))
To build the column list dynamically, use execute immediate:
execute immediate (
select '''
select *
from (select product, sales, quarter from Produce)
pivot(sum(sales) for quarter in ("''' || string_agg(distinct quarter, '", "' order by quarter) || '''"))
'''
from Produce
);
Answer 2 (score: 2):
Inspired by How to simulate a pivot table with BigQuery?, the following query using a subselect produces the result you are looking for:
SELECT
MAX(column_1),
MAX(column_2),
SUM(wc),
FROM (
SELECT
(CASE WHEN word = 'brave' THEN 'BRAVE' ELSE '' END) AS column_1,
(CASE WHEN word = 'attended' THEN 'ATTENDED' ELSE '' END) AS column_2,
SUM (word_count) AS wc
FROM publicdata:samples.shakespeare
WHERE (word = 'brave' OR word = 'attended')
GROUP BY column_1, column_2
LIMIT 10
)
The trick is that MAX over the values (NULL, 'ATTENDED', NULL, ...) equals 'ATTENDED'.
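A small standard SQL illustration of that behavior (the answer's own query is legacy SQL; this sketch only shows the intuition): MAX() ignores NULLs, and any non-empty label also sorts above the empty string produced by the ELSE '' branch.

#standardSQL
SELECT
  MAX(a) AS over_nulls,         -- 'ATTENDED': NULLs are ignored
  MAX(b) AS over_empty_strings  -- 'ATTENDED': '' sorts below any non-empty string
FROM (
  SELECT CAST(NULL AS STRING) AS a, '' AS b UNION ALL
  SELECT 'ATTENDED', 'ATTENDED' UNION ALL
  SELECT CAST(NULL AS STRING), ''
)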
Answer 3 (score: 2):
Creating the pivot columns with case/if statements is one way to do it, but it gets very tedious as the number of pivot columns grows. To solve this, I wrote a Python module that uses pandas to automatically generate the SQL query, which can then be run in BigQuery. Here is a short introduction to it:
https://yashuseth.blog/2018/06/06/how-to-pivot-large-tables-in-bigquery
The relevant code from GitHub, in case the repository goes away:
import re
import pandas as pd


class BqPivot():
    """
    Class to generate a SQL query which creates pivoted tables in BigQuery.

    Example
    -------
    The following example uses the kaggle's titanic data. It can be found here -
    `https://www.kaggle.com/c/titanic/data`

    This data is only 60 KB and it has been used for a demonstration purpose.
    This module comes particularly handy with huge datasets for which we would need
    BigQuery(https://en.wikipedia.org/wiki/BigQuery).

    >>> from bq_pivot import BqPivot
    >>> import pandas as pd
    >>> data = pd.read_csv("titanic.csv").head()
    >>> gen = BqPivot(data=data, index_col=["Pclass", "Survived", "PassengenId"],
                      pivot_col="Name", values_col="Age",
                      add_col_nm_suffix=False)
    >>> print(gen.generate_query())
    select Pclass, Survived, PassengenId,
    sum(case when Name = "Braund, Mr. Owen Harris" then Age else 0 end) as braund_mr_owen_harris,
    sum(case when Name = "Cumings, Mrs. John Bradley (Florence Briggs Thayer)" then Age else 0 end) as cumings_mrs_john_bradley_florence_briggs_thayer,
    sum(case when Name = "Heikkinen, Miss. Laina" then Age else 0 end) as heikkinen_miss_laina,
    sum(case when Name = "Futrelle, Mrs. Jacques Heath (Lily May Peel)" then Age else 0 end) as futrelle_mrs_jacques_heath_lily_may_peel,
    sum(case when Name = "Allen, Mr. William Henry" then Age else 0 end) as allen_mr_william_henry
    from <--insert-table-name-here-->
    group by 1,2,3
    """
    def __init__(self, data, index_col, pivot_col, values_col, agg_fun="sum",
                 table_name=None, not_eq_default="0", add_col_nm_suffix=True, custom_agg_fun=None,
                 prefix=None, suffix=None):
        """
        Parameters
        ----------
        data: pandas.core.frame.DataFrame or string
            The input data can either be a pandas dataframe or a string path to the pandas
            data frame. The only requirement of this data is that it must have the column
            on which the pivot it to be done.

        index_col: list
            The names of the index columns in the query (the columns on which the group by needs to be performed)

        pivot_col: string
            The name of the column on which the pivot needs to be done.

        values_col: string
            The name of the column on which aggregation needs to be performed.

        agg_fun: string
            The name of the sql aggregation function.

        table_name: string
            The name of the table in the query.

        not_eq_default: numeric, optional
            The value to take when the case when statement is not satisfied. For example,
            if one is doing a sum aggregation on the value column then the not_eq_default should
            be equal to 0. Because the case statement part of the sql query would look like -
            ... ...
            sum(case when <pivot_col> = <some_pivot_col_value> then values_col else 0)
            ... ...
            Similarly if the aggregation function is min then the not_eq_default should be
            positive infinity.

        add_col_nm_suffix: boolean, optional
            If True, then the original values column name will be added as suffix in the new
            pivoted columns.

        custom_agg_fun: string, optional
            Can be used if one wants to give customized aggregation function. The values col name
            should be replaced with {}. For example, if we want an aggregation function like -
            sum(coalesce(values_col, 0)) then the custom_agg_fun argument would be -
            sum(coalesce({}, 0)).
            If provided this would override the agg_fun argument.

        prefix: string, optional
            A fixed string to add as a prefix in the pivoted column names separated by an
            underscore.

        suffix: string, optional
            A fixed string to add as a suffix in the pivoted column names separated by an
            underscore.
        """
        self.query = ""

        self.index_col = list(index_col)
        self.values_col = values_col
        self.pivot_col = pivot_col

        self.not_eq_default = not_eq_default

        self.table_name = self._get_table_name(table_name)

        self.piv_col_vals = self._get_piv_col_vals(data)
        self.piv_col_names = self._create_piv_col_names(add_col_nm_suffix, prefix, suffix)

        self.function = custom_agg_fun if custom_agg_fun else agg_fun + "({})"

    def _get_table_name(self, table_name):
        """
        Returns the table name or a placeholder if the table name is not provided.
        """
        return table_name if table_name else "<--insert-table-name-here-->"

    def _get_piv_col_vals(self, data):
        """
        Gets all the unique values of the pivot column.
        """
        if isinstance(data, pd.DataFrame):
            self.data = data
        elif isinstance(data, str):
            self.data = pd.read_csv(data)
        else:
            raise ValueError("Provided data must be a pandas dataframe or a csv file path.")

        if self.pivot_col not in self.data.columns:
            raise ValueError("The provided data must have the column on which pivot is to be done. "\
                             "Also make sure that the column name in the data is same as the name "\
                             "provided to the pivot_col parameter.")

        return self.data[self.pivot_col].astype(str).unique().tolist()

    def _clean_col_name(self, col_name):
        """
        The pivot column values can have arbitrary strings but in order to
        convert them to column names some cleaning is required. This method
        takes a string as input and returns a clean column name.
        """
        # replace spaces with underscores
        # remove non alpha numeric characters other than underscores
        # replace multiple consecutive underscores with one underscore
        # make all characters lower case
        # remove trailing underscores
        return re.sub("_+", "_", re.sub('[^0-9a-zA-Z_]+', '', re.sub(" ", "_", col_name))).lower().rstrip("_")

    def _create_piv_col_names(self, add_col_nm_suffix, prefix, suffix):
        """
        The method created a list of pivot column names of the new pivoted table.
        """
        prefix = prefix + "_" if prefix else ""
        suffix = "_" + suffix if suffix else ""

        if add_col_nm_suffix:
            piv_col_names = ["{0}{1}_{2}{3}".format(prefix, self._clean_col_name(piv_col_val), self.values_col.lower(), suffix)
                             for piv_col_val in self.piv_col_vals]
        else:
            piv_col_names = ["{0}{1}{2}".format(prefix, self._clean_col_name(piv_col_val), suffix)
                             for piv_col_val in self.piv_col_vals]

        return piv_col_names

    def _add_select_statement(self):
        """
        Adds the select statement part of the query.
        """
        query = "select " + "".join([index_col + ", " for index_col in self.index_col]) + "\n"
        return query

    def _add_case_statement(self):
        """
        Adds the case statement part of the query.
        """
        case_query = self.function.format("case when {0} = \"{1}\" then {2} else {3} end") + " as {4},\n"

        query = "".join([case_query.format(self.pivot_col, piv_col_val, self.values_col,
                                           self.not_eq_default, piv_col_name)
                         for piv_col_val, piv_col_name in zip(self.piv_col_vals, self.piv_col_names)])

        query = query[:-2] + "\n"
        return query

    def _add_from_statement(self):
        """
        Adds the from statement part of the query.
        """
        query = "from {0}\n".format(self.table_name)
        return query

    def _add_group_by_statement(self):
        """
        Adds the group by part of the query.
        """
        query = "group by " + "".join(["{0},".format(x) for x in range(1, len(self.index_col) + 1)])
        return query[:-1]

    def generate_query(self):
        """
        Returns the query to create the pivoted table.
        """
        self.query = self._add_select_statement() +\
                     self._add_case_statement() +\
                     self._add_from_statement() +\
                     self._add_group_by_statement()

        return self.query

    def write_query(self, output_file):
        """
        Writes the query to a text file.
        """
        text_file = open(output_file, "w")
        text_file.write(self.generate_query())
        text_file.close()
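For example, for the question's scenario (index_col=["corpus"], pivot_col="word", values_col="word_count", with the input sample filtered to the two words of interest), the module would generate a query of the same shape as this hand-written sketch (standard SQL table name assumed):

select corpus,
sum(case when word = "brave" then word_count else 0 end) as brave,
sum(case when word = "attended" then word_count else 0 end) as attended
from `bigquery-public-data.samples.shakespeare`
group by 1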
Answer 4 (score: 1):
Try this:
SELECT
  sum(CASE WHEN word = 'brave' THEN word_count ELSE 0 END) AS brave,
  sum(CASE WHEN word = 'attended' THEN word_count ELSE 0 END) AS attended,
  SUM(word_count) as total_word_count
FROM publicdata:samples.shakespeare
WHERE (word = 'brave' OR word = 'attended')
Answer 5 (score: 1):
Not everyone can use Python or pandas (think of data analysts and BI folks :)). Here is a dynamic pivot procedure in standard SQL for BigQuery. It does not aggregate yet: if you need per-KPI aggregation, first provide a table with the aggregated values. The procedure then automatically creates a new table and generates all the pivot columns.
The starting assumption is that your input table myDataset.myTable looks like this:

LONG,LAT,KPI,US,EUR
A,1,temp,78,45
A,1,pressure,120,114
B,1,temp,12,8
B,1,pressure,85,52
If you call the following procedure like this:
CALL warehouse.pivot ('myDataset','myTable',['LONG','LAT'], 'KPI');
you will get a new table named myDataset.myTable_pivot that looks like this:
LONG,LAT,temp_US,temp_EUR,pressure_US,pressure_EUR
A,1,78,45,120,114
B,1,12,8,85,52
Here is the code:
create or replace procedure warehouse.pivot (dataset STRING, table_to_pivot STRING, ls_pks ARRAY<STRING>, pivot_column STRING)
BEGIN
  DECLARE sql_pivot STRING;
  DECLARE sql_pk_string STRING;
  DECLARE sql_val_string STRING;
  DECLARE sql_pivot_cols STRING DEFAULT "";
  DECLARE pivot_cols_stmt STRING;
  DECLARE pivot_ls_values ARRAY<STRING>;
  DECLARE ls_pivot_value_columns ARRAY<STRING>;
  DECLARE nb_pivot_col_values INT64;
  DECLARE nb_pivot_val_values INT64;
  DECLARE loop_index INT64 DEFAULT 0;
  DECLARE loop2_index INT64 DEFAULT 0;

  SET sql_pk_string = ( array_to_string(ls_pks,',') );

  /* get the values of the pivot column to prepare the new columns in the output */
  SET pivot_cols_stmt = concat(
    'SELECT array_agg(DISTINCT cast(', pivot_column ,' as string) ORDER BY ', pivot_column,' ) as pivot_ls_values, ',
    'count(distinct ',pivot_column,') as nb_pivot_col_values ',
    ' FROM ', dataset,'.', table_to_pivot
  );
  EXECUTE IMMEDIATE pivot_cols_stmt into pivot_ls_values, nb_pivot_col_values;

  /* get the names of the value columns to prepare the new columns in the output */
  SET sql_val_string = concat(
    "select array_agg(COLUMN_NAME) as ls_pivot_value_columns, count(distinct COLUMN_NAME) as nb_pivot_val_values ",
    "FROM ",dataset,".INFORMATION_SCHEMA.COLUMNS where TABLE_NAME='",table_to_pivot,"' ",
    "and COLUMN_NAME not in ('",array_to_string(ls_pks,"','"),"', '",pivot_column,"')"
  );
  EXECUTE IMMEDIATE sql_val_string into ls_pivot_value_columns, nb_pivot_val_values;

  /* create the statement that populates the new columns */
  WHILE loop_index < nb_pivot_col_values DO
    SET loop2_index = 0;
    LOOP
      SET sql_pivot_cols = concat(
        sql_pivot_cols,
        "max( ",
        "if( ", pivot_column , "= '",pivot_ls_values[OFFSET (loop_index)],"' , ", ls_pivot_value_columns[OFFSET (loop2_index)], ", null) ",
        ") as ", pivot_ls_values[OFFSET (loop_index)], "_", ls_pivot_value_columns[OFFSET (loop2_index)],", "
      );
      SET loop2_index = loop2_index + 1;
      IF loop2_index >= nb_pivot_val_values THEN
        BREAK;
      END IF;
    END LOOP;
    SET loop_index = loop_index + 1;
  END WHILE;

  SET sql_pivot = concat(
    "create or replace TABLE ", dataset,".",table_to_pivot,"_pivot as SELECT ",
    sql_pk_string, ",", sql_pivot_cols, " FROM ",dataset,".", table_to_pivot ,
    " GROUP BY ", sql_pk_string
  );
  EXECUTE IMMEDIATE sql_pivot;
END;
One odd thing: nested WHILE loops did not work for me in BQ; only the last WHILE loop was executed. That is why the procedure mixes WHILE and LOOP.
Answer 6 (score: 0):
There is also COUNTIF:
https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#countif
SELECT COUNTIF(x<0) AS num_negative, COUNTIF(x>0) AS num_positive
FROM UNNEST([5, -2, 3, 6, -10, NULL, -7, 4, 0]) AS x;
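As a sketch of how that relates to the question (standard SQL table name assumed; note that COUNTIF counts matching rows rather than summing word_count):

#standardSQL
SELECT
  COUNTIF(word = 'brave') AS brave_rows,
  COUNTIF(word = 'attended') AS attended_rows
FROM `bigquery-public-data.samples.shakespeare`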