I want to calculate the average spending per merchant over the last 90 days. I have been doing this with PySpark SQL:
from pyspark.sql import functions as F, Window

# Convert the timestamp column to unix seconds so rangeBetween can take a numeric range.
df_spark = df_spark.withColumn("t_unix", F.unix_timestamp(df_spark['date']))
# 90 days of seconds back, ending at -1 so the current row is excluded from the average.
windowSpec = Window.partitionBy("merchant").orderBy("t_unix").rangeBetween(-90 * 24 * 3600, -1)
average_spending = F.avg(df_spark['amount']).over(windowSpec)
df = df_spark.withColumn("average_spending", average_spending)
df.select('merchant', 'date', "amount", "average_spending").show(5)
+--------+-------------------+------+----------------+
|merchant|               date|amount|average_spending|
+--------+-------------------+------+----------------+
|      26|2017-01-01 01:11:06|     3|            null|
|      26|2017-01-01 02:02:15|    54|             3.0|
|      26|2017-01-01 02:26:45|     6|            28.5|
|      26|2017-01-01 02:40:37|     4|            21.0|
|      26|2017-01-01 02:41:51|    85|           16.75|
+--------+-------------------+------+----------------+
only showing top 5 rows
Now I want to do the same in AWS Athena (Presto).
I tried the following query:
SELECT
"date",
"merchant",
"amount",
AVG("amount")
FROM "table"
WHERE ("date" BETWEEN date_add('day', -90, "date") and "date")
GROUP BY "merchant"
ORDER BY "date"
LIMIT 5
But I got this error message:
Your query has the following error(s):
SYNTAX_ERROR: line 7:24: Unexpected parameters (varchar(3), integer, varchar) for function date_add. Expected: date_add(varchar(x), bigint, date) , date_add(varchar(x), bigint, time) , date_add(varchar(x), bigint, time with time zone) , date_add(varchar(x), bigint, timestamp) , date_add(varchar(x), bigint, timestamp with time zone)
But in date_add('day', -90, "date"), I want "date" to be the current row's timestamp, not a static timestamp.
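The error itself narrows the problem down: Athena received "date" as a varchar, meaning the column is stored as a string, and date_add has no overload for strings; the per-row value of the third argument is not the issue. A minimal sketch of a fix, assuming the strings look like the ISO-style timestamps in the PySpark output above (table and column names as in the question):

SELECT
    "merchant",
    "amount",
    -- cast the varchar column so date_add receives a real timestamp
    date_add('day', -90, CAST("date" AS timestamp)) AS window_start
FROM "table"

If the strings are in some other format, date_parse("date", '%Y-%m-%d %H:%i:%s') with the appropriate pattern does the same job.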
So I tried again:
SELECT
unix_date,
merchant,
amount,
AVG(amount)
OVER
( PARTITION BY merchant
ORDER BY unix_date
RANGE BETWEEN INTERVAL '90' DAY PRECEDING AND CURRENT ROW
) AVG_S
FROM ...;
But I got this error message:
SYNTAX_ERROR: line 5:4: Window frame start value type must be INTEGER or BIGINT (actual interval day to second)
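The message also points at a workaround: this engine wants an INTEGER or BIGINT frame bound, so expressing the 90 days in epoch seconds over unix_date, exactly as the PySpark rangeBetween above does, should at least type-check. A sketch under that assumption; note the frame ends at CURRENT ROW, so unlike the PySpark version it includes the current row (whether an end bound such as 1 PRECEDING is accepted depends on the engine version):

SELECT
    unix_date,
    merchant,
    amount,
    AVG(amount) OVER (
        PARTITION BY merchant
        ORDER BY unix_date
        -- 90 days in seconds: 90 * 24 * 3600 = 7776000
        RANGE BETWEEN 7776000 PRECEDING AND CURRENT ROW
    ) AS avg_s
FROM ...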
There is a similar, unresolved question here: Presto SQL window aggregate looking back x hours/minutes/seconds
Answer 0 (score: 0)
If there is exactly one data point per merchant × date, this can be done easily with Presto's window functions:
SELECT
date, merchant, amount,
avg(amount) OVER (
PARTITION BY merchant
ORDER BY date ASC
ROWS 89 PRECEDING) -- 89 preceding rows + the current row
FROM ...
ORDER BY date ASC -- not necessary, but you likely want the data to be sorted as well
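If the raw data is per-transaction, as in the PySpark example, one way to get it into that one-row-per-merchant-per-date shape is to aggregate first. A sketch, assuming daily totals are what you want to average and that every merchant has a row for every calendar day (otherwise ROWS 89 PRECEDING reaches back further than 90 days); your_table and the column names match the placeholders used in the next query:

WITH daily AS (
    SELECT
        merchant,
        date_trunc('day', CAST("date" AS timestamp)) AS day,
        sum(amount) AS amount
    FROM your_table
    GROUP BY 1, 2
)
SELECT
    merchant, day, amount,
    avg(amount) OVER (
        PARTITION BY merchant
        ORDER BY day ASC
        ROWS 89 PRECEDING) AS avg_daily_spending
FROM daily
ORDER BY day ASC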
If the number of data points per merchant and date varies, you can do something like this:
SELECT
c.merchant, c.date, c.amount, avg(preceding.amount)
FROM your_table c
JOIN your_table preceding ON c.merchant = preceding.merchant
AND preceding.date BETWEEN (c.date - INTERVAL '89' DAY) AND c.date
GROUP BY c.merchant, c.date, c.amount
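Two details are worth flagging against the PySpark output above. Because the BETWEEN upper bound is c.date itself, every row joins to itself, so each transaction is included in its own average, whereas the PySpark window (rangeBetween(..., -1)) excluded it. A variant sketch that reproduces the PySpark semantics, returning a NULL average for a merchant's first transaction, assuming the date column is a timestamp:

SELECT
    c.merchant, c."date", c.amount,
    avg(preceding.amount) AS average_spending  -- NULL when no earlier row exists
FROM your_table c
LEFT JOIN your_table preceding
    ON c.merchant = preceding.merchant
    AND preceding."date" >= c."date" - INTERVAL '90' DAY
    AND preceding."date" < c."date"  -- strictly earlier: exclude the current row
GROUP BY c.merchant, c."date", c.amount

As in the original, rows sharing the same (merchant, date, amount) collapse into one group, so if exact duplicates are possible, grouping by a unique transaction id instead would be safer.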
Answer 1 (score: 0)
This has been working for me:
CREATE TABLE IF NOT EXISTS full_year_query_parquet
WITH (format = 'PARQUET',
parquet_compression = 'SNAPPY',
external_location='s3://your_s3_bucket/data') AS
SELECT
a.merchant,
a.amount,
a.date,
avg(preceding.amount) AS average_spending -- a CTAS column must have a name
FROM "your_table" as a
JOIN "your_table" as preceding ON a.merchant = preceding.merchant
AND preceding.date > DATE_ADD('day', -90, a.date)
AND preceding.date < a.date
GROUP BY a.merchant, a.amount, a.date
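Note that the strict inequalities here exclude the current row, matching the PySpark window, but the inner join also drops each merchant's very first transaction (there are no preceding rows to join); the LEFT JOIN variant sketched above keeps those rows with a NULL average. For a quick sanity check of the created table against the PySpark output, assuming the average_spending alias added in the CTAS:

SELECT merchant, "date", amount, average_spending
FROM full_year_query_parquet
ORDER BY merchant, "date"
LIMIT 5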