I am facing a performance problem with the following query, in which the same table is self-joined multiple times. How can I avoid multiple joins on the same table?
INSERT INTO "TEMP"."TABLE2"
SELECT
T1."PRODUCT_SNO"
,T2."PRODUCT_SNO"
,T3."PRODUCT_SNO"
,T4."PRODUCT_SNO"
,((COUNT(DISTINCT T1."ACCESS_METHOD_ID")(FLOAT)) /
(MAX(T5.GROUP_NUM(FLOAT))))
FROM
"TEMP"."TABLE1" T1
,"TEMP"."TABLE1" T2
,"TEMP"."TABLE1" T3
,"TEMP"."TABLE1" T4
,"TEMP"."_TWM_GROUP_COUNT" T5
WHERE
T1."ACCESS_METHOD_ID" = T2."ACCESS_METHOD_ID"
AND T2."ACCESS_METHOD_ID" = T3."ACCESS_METHOD_ID"
AND T3."ACCESS_METHOD_ID" = T4."ACCESS_METHOD_ID"
AND T1."SUBSCRIPTION_DATE" < T2."SUBSCRIPTION_DATE"
AND T2."SUBSCRIPTION_DATE" < T3."SUBSCRIPTION_DATE"
AND T3."SUBSCRIPTION_DATE" < T4."SUBSCRIPTION_DATE"
GROUP BY 1, 2, 3, 4;
This takes 3 hours to complete. Here is the EXPLAIN output for it:
1) First, we lock a distinct TEMP."pseudo table" for write on a
RowHash to prevent global deadlock for
TEMP.TABLE2.
2) Next, we lock a distinct TEMP."pseudo table" for read on a
RowHash to prevent global deadlock for TEMP.T5.
3) We lock TEMP.TABLE2 for write, we lock
TEMP.TABLE1 for access, and we lock TEMP.T5 for read.
4) We do an all-AMPs RETRIEVE step from TEMP.T5 by way of an
all-rows scan with no residual conditions into Spool 4 (all_amps),
which is duplicated on all AMPs. The size of Spool 4 is estimated
with high confidence to be 48 rows (816 bytes). The estimated
time for this step is 0.01 seconds.
5) We execute the following steps in parallel.
1) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of
an all-rows scan, which is joined to TEMP.T4 by way of an
all-rows scan with no residual conditions. Spool 4 and
TEMP.T4 are joined using a product join, with a join
condition of ("(1=1)"). The result goes into Spool 5
(all_amps), which is built locally on the AMPs. Then we do a
SORT to order Spool 5 by the hash code of (
TEMP.T4.ACCESS_METHOD_ID). The size of Spool 5 is
estimated with high confidence to be 8,051,801 rows (
233,502,229 bytes). The estimated time for this step is 1.77
seconds.
2) We do an all-AMPs JOIN step from TEMP.T2 by way of a
RowHash match scan with no residual conditions, which is
joined to TEMP.T1 by way of a RowHash match scan with no
residual conditions. TEMP.T2 and TEMP.T1 are joined
using a merge join, with a join condition of (
"(TEMP.T1.ACCESS_METHOD_ID = TEMP.T2.ACCESS_METHOD_ID)
AND (TEMP.T1.SUBSCRIPTION_DATE <
TEMP.T2.SUBSCRIPTION_DATE)"). The result goes into Spool
6 (all_amps), which is built locally on the AMPs. The size
of Spool 6 is estimated with low confidence to be 36,764,681
rows (1,213,234,473 bytes). The estimated time for this step
is 4.12 seconds.
6) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of a
RowHash match scan, which is joined to TEMP.T3 by way of a
RowHash match scan with no residual conditions. Spool 5 and
TEMP.T3 are joined using a merge join, with a join condition
of ("(TEMP.T3.SUBSCRIPTION_DATE < SUBSCRIPTION_DATE) AND
(TEMP.T3.ACCESS_METHOD_ID = ACCESS_METHOD_ID)"). The result
goes into Spool 7 (all_amps), which is built locally on the AMPs.
The size of Spool 7 is estimated with low confidence to be
36,764,681 rows (1,360,293,197 bytes). The estimated time for
this step is 4.14 seconds.
7) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of a
RowHash match scan, which is joined to Spool 7 (Last Use) by way
of a RowHash match scan. Spool 6 and Spool 7 are joined using a
merge join, with a join condition of ("(SUBSCRIPTION_DATE <
SUBSCRIPTION_DATE) AND ((ACCESS_METHOD_ID = ACCESS_METHOD_ID) AND
((ACCESS_METHOD_ID = ACCESS_METHOD_ID) AND ((ACCESS_METHOD_ID =
ACCESS_METHOD_ID) AND (ACCESS_METHOD_ID = ACCESS_METHOD_ID ))))").
The result goes into Spool 3 (all_amps), which is built locally on
the AMPs. The result spool file will not be cached in memory.
The size of Spool 3 is estimated with low confidence to be
766,489,720 rows (29,893,099,080 bytes). The estimated time for
this step is 1 minute and 21 seconds.
8) We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by
way of an all-rows scan , grouping by field1 (
TEMP.T1.PRODUCT_SNO ,TEMP.T2.PRODUCT_SNO
,TEMP.T3.PRODUCT_SNO ,TEMP.T4.PRODUCT_SNO
,TEMP.T1.ACCESS_METHOD_ID). Aggregate Intermediate Results
are computed globally, then placed in Spool 9. The aggregate
spool file will not be cached in memory. The size of Spool 9 is
estimated with low confidence to be 574,867,290 rows (
46,564,250,490 bytes). The estimated time for this step is 6
minutes and 38 seconds.
9) We do an all-AMPs SUM step to aggregate from Spool 9 (Last Use) by
way of an all-rows scan , grouping by field1 (
TEMP.T1.PRODUCT_SNO ,TEMP.T2.PRODUCT_SNO
,TEMP.T3.PRODUCT_SNO ,TEMP.T4.PRODUCT_SNO). Aggregate
Intermediate Results are computed globally, then placed in Spool
11. The size of Spool 11 is estimated with low confidence to be
50,625 rows (3,695,625 bytes). The estimated time for this step
is 41.87 seconds.
10) We do an all-AMPs RETRIEVE step from Spool 11 (Last Use) by way of
an all-rows scan into Spool 1 (all_amps), which is redistributed
by the hash code of (TEMP.T1.PRODUCT_SNO,
TEMP.T2.PRODUCT_SNO, TEMP.T3.PRODUCT_SNO,
TEMP.T4.PRODUCT_SNO) to all AMPs. Then we do a SORT to order
Spool 1 by row hash. The size of Spool 1 is estimated with low
confidence to be 50,625 rows (1,873,125 bytes). The estimated
time for this step is 0.04 seconds.
11) We do an all-AMPs MERGE into TEMP.TABLE2 from
Spool 1 (Last Use). The size is estimated with low confidence to
be 50,625 rows. The estimated time for this step is 1 second.
12) We spoil the parser's dictionary cache for the table.
13) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> No rows are returned to the user as the result of statement 1.
All the required statistics have been collected.
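For reference, "collecting the required statistics" in Teradata typically means COLLECT STATISTICS statements like the sketch below. The column choices are inferred from the query's join, ordering, and grouping columns; they are not confirmed against the actual tables.

```sql
-- Hedged sketch: columns inferred from the query above; adjust to the
-- tables' real demographics. Uses the Teradata 14+ COLUMN(...) ON syntax.
COLLECT STATISTICS COLUMN ("ACCESS_METHOD_ID")                      ON "TEMP"."TABLE1";
COLLECT STATISTICS COLUMN ("SUBSCRIPTION_DATE")                     ON "TEMP"."TABLE1";
COLLECT STATISTICS COLUMN ("ACCESS_METHOD_ID", "SUBSCRIPTION_DATE") ON "TEMP"."TABLE1";
COLLECT STATISTICS COLUMN ("PRODUCT_SNO")                           ON "TEMP"."TABLE1";
COLLECT STATISTICS COLUMN (GROUP_NUM)                               ON "TEMP"."_TWM_GROUP_COUNT";
```

The multi-column statistic on ("ACCESS_METHOD_ID", "SUBSCRIPTION_DATE") matters most here, since every self-join filters on both columns together.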
Answer (score: 0)
I must admit I am no Teradata expert, but a quick check showed that you can use ANSI JOIN syntax.
So first I rewrote your query so that I could understand it:
INSERT INTO
"TEMP"."TABLE2"
SELECT
T1."PRODUCT_SNO",
T2."PRODUCT_SNO",
T3."PRODUCT_SNO",
T4."PRODUCT_SNO",
((COUNT(DISTINCT T1."ACCESS_METHOD_ID")(FLOAT)) /
(MAX(T5.GROUP_NUM(FLOAT))))
FROM
"TEMP"."TABLE1" T1
INNER JOIN "TEMP"."TABLE1" T2 ON T2."ACCESS_METHOD_ID" = T1."ACCESS_METHOD_ID"
AND T2."SUBSCRIPTION_DATE" > T1."SUBSCRIPTION_DATE"
INNER JOIN "TEMP"."TABLE1" T3 ON T3."ACCESS_METHOD_ID" = T2."ACCESS_METHOD_ID"
AND T3."SUBSCRIPTION_DATE" > T2."SUBSCRIPTION_DATE"
INNER JOIN "TEMP"."TABLE1" T4 ON T4."ACCESS_METHOD_ID" = T3."ACCESS_METHOD_ID"
AND T4."SUBSCRIPTION_DATE" > T3."SUBSCRIPTION_DATE"
CROSS JOIN "TEMP"."_TWM_GROUP_COUNT" T5
GROUP BY
T1."PRODUCT_SNO",
T2."PRODUCT_SNO",
T3."PRODUCT_SNO",
T4."PRODUCT_SNO";
Note that many of these changes are just personal preference, but the others will "bring your query into the 21st century" ;P
Now that I can read your SQL, I can make some assumptions about what you are actually trying to achieve. It really isn't much, but I can see a few places you could look at for optimization.
That may not be much help, but without some sample data and a bit more detail there is not much more to go on...
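One concrete observation that stands on its own: the only value the query takes from "TEMP"."_TWM_GROUP_COUNT" is MAX(GROUP_NUM), yet the whole table (48 rows, per the EXPLAIN) is cross-joined into the four-way join before aggregation, multiplying every intermediate row by 48. Pre-aggregating it to a single row first keeps the result the same (as long as the table is non-empty; an empty table makes the original cross join return nothing either) while shrinking the spool:

```sql
-- Hedged sketch: pre-aggregate _TWM_GROUP_COUNT to one row before the
-- cross join, so the large intermediate spool carries one extra value
-- instead of being multiplied against every GROUP_NUM row.
INSERT INTO "TEMP"."TABLE2"
SELECT
    T1."PRODUCT_SNO",
    T2."PRODUCT_SNO",
    T3."PRODUCT_SNO",
    T4."PRODUCT_SNO",
    (COUNT(DISTINCT T1."ACCESS_METHOD_ID")(FLOAT)) /
    (MAX(T5.max_group)(FLOAT))
FROM "TEMP"."TABLE1" T1
INNER JOIN "TEMP"."TABLE1" T2
    ON  T2."ACCESS_METHOD_ID"  = T1."ACCESS_METHOD_ID"
    AND T2."SUBSCRIPTION_DATE" > T1."SUBSCRIPTION_DATE"
INNER JOIN "TEMP"."TABLE1" T3
    ON  T3."ACCESS_METHOD_ID"  = T2."ACCESS_METHOD_ID"
    AND T3."SUBSCRIPTION_DATE" > T2."SUBSCRIPTION_DATE"
INNER JOIN "TEMP"."TABLE1" T4
    ON  T4."ACCESS_METHOD_ID"  = T3."ACCESS_METHOD_ID"
    AND T4."SUBSCRIPTION_DATE" > T3."SUBSCRIPTION_DATE"
CROSS JOIN (
    SELECT MAX(GROUP_NUM) AS max_group
    FROM "TEMP"."_TWM_GROUP_COUNT"
) T5
GROUP BY 1, 2, 3, 4;
```

This does not change the combinatorial cost of the four-way self-join, but it removes the row multiplication the optimizer showed in step 5.1 of the plan (the product join against the duplicated 48-row spool).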