Performance problem with multiple joins on the same table

Date: 2017-07-03 14:19:17

Tags: sql performance teradata

I am facing a performance problem with the query below, where the same table is self-joined several times. How can I avoid the multiple joins on the same table?

INSERT INTO "TEMP"."TABLE2"
SELECT
T1."PRODUCT_SNO"
,T2."PRODUCT_SNO"
,T3."PRODUCT_SNO"
,T4."PRODUCT_SNO"
,((COUNT(DISTINCT T1."ACCESS_METHOD_ID")(FLOAT)) / 
   (MAX(T5.GROUP_NUM(FLOAT))))
FROM
"TEMP"."TABLE1" T1
,"TEMP"."TABLE1" T2
,"TEMP"."TABLE1" T3
,"TEMP"."TABLE1" T4
,"TEMP"."_TWM_GROUP_COUNT" T5
WHERE
      T1."ACCESS_METHOD_ID" = T2."ACCESS_METHOD_ID"
  AND T2."ACCESS_METHOD_ID" = T3."ACCESS_METHOD_ID"
  AND T3."ACCESS_METHOD_ID" = T4."ACCESS_METHOD_ID"
  AND T1."SUBSCRIPTION_DATE" < T2."SUBSCRIPTION_DATE"
  AND T2."SUBSCRIPTION_DATE" < T3."SUBSCRIPTION_DATE"
  AND T3."SUBSCRIPTION_DATE" < T4."SUBSCRIPTION_DATE"
GROUP BY 1, 2, 3, 4;

It takes 3 hours to complete. Here is the EXPLAIN for it:

  1) First, we lock a distinct TEMP."pseudo table" for write on a
     RowHash to prevent global deadlock for
     TEMP.TABLE2. 
  2) Next, we lock a distinct TEMP."pseudo table" for read on a
     RowHash to prevent global deadlock for TEMP.T5. 
  3) We lock TEMP.TABLE2 for write, we lock
     TEMP.TABLE1 for access, and we lock TEMP.T5 for read. 
  4) We do an all-AMPs RETRIEVE step from TEMP.T5 by way of an
     all-rows scan with no residual conditions into Spool 4 (all_amps),
     which is duplicated on all AMPs.  The size of Spool 4 is estimated
     with high confidence to be 48 rows (816 bytes).  The estimated
     time for this step is 0.01 seconds. 
  5) We execute the following steps in parallel. 
       1) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of
          an all-rows scan, which is joined to TEMP.T4 by way of an
          all-rows scan with no residual conditions.  Spool 4 and
          TEMP.T4 are joined using a product join, with a join
          condition of ("(1=1)").  The result goes into Spool 5
          (all_amps), which is built locally on the AMPs.  Then we do a
          SORT to order Spool 5 by the hash code of (
          TEMP.T4.ACCESS_METHOD_ID).  The size of Spool 5 is
          estimated with high confidence to be 8,051,801 rows (
          233,502,229 bytes).  The estimated time for this step is 1.77
          seconds. 
       2) We do an all-AMPs JOIN step from TEMP.T2 by way of a
          RowHash match scan with no residual conditions, which is
          joined to TEMP.T1 by way of a RowHash match scan with no
          residual conditions.  TEMP.T2 and TEMP.T1 are joined
          using a merge join, with a join condition of (
          "(TEMP.T1.ACCESS_METHOD_ID = TEMP.T2.ACCESS_METHOD_ID)
          AND (TEMP.T1.SUBSCRIPTION_DATE <
          TEMP.T2.SUBSCRIPTION_DATE)").  The result goes into Spool
          6 (all_amps), which is built locally on the AMPs.  The size
          of Spool 6 is estimated with low confidence to be 36,764,681
          rows (1,213,234,473 bytes).  The estimated time for this step
          is 4.12 seconds. 
  6) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of a
     RowHash match scan, which is joined to TEMP.T3 by way of a
     RowHash match scan with no residual conditions.  Spool 5 and
     TEMP.T3 are joined using a merge join, with a join condition
     of ("(TEMP.T3.SUBSCRIPTION_DATE < SUBSCRIPTION_DATE) AND
     (TEMP.T3.ACCESS_METHOD_ID = ACCESS_METHOD_ID)").  The result
     goes into Spool 7 (all_amps), which is built locally on the AMPs. 
     The size of Spool 7 is estimated with low confidence to be
     36,764,681 rows (1,360,293,197 bytes).  The estimated time for
     this step is 4.14 seconds. 
  7) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of a
     RowHash match scan, which is joined to Spool 7 (Last Use) by way
     of a RowHash match scan.  Spool 6 and Spool 7 are joined using a
     merge join, with a join condition of ("(SUBSCRIPTION_DATE <
     SUBSCRIPTION_DATE) AND ((ACCESS_METHOD_ID = ACCESS_METHOD_ID) AND
     ((ACCESS_METHOD_ID = ACCESS_METHOD_ID) AND ((ACCESS_METHOD_ID =
     ACCESS_METHOD_ID) AND (ACCESS_METHOD_ID = ACCESS_METHOD_ID ))))"). 
     The result goes into Spool 3 (all_amps), which is built locally on
     the AMPs.  The result spool file will not be cached in memory. 
     The size of Spool 3 is estimated with low confidence to be
     766,489,720 rows (29,893,099,080 bytes).  The estimated time for
     this step is 1 minute and 21 seconds. 
  8) We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by
     way of an all-rows scan , grouping by field1 (
     TEMP.T1.PRODUCT_SNO ,TEMP.T2.PRODUCT_SNO
     ,TEMP.T3.PRODUCT_SNO ,TEMP.T4.PRODUCT_SNO
     ,TEMP.T1.ACCESS_METHOD_ID).  Aggregate Intermediate Results
     are computed globally, then placed in Spool 9.  The aggregate
     spool file will not be cached in memory.  The size of Spool 9 is
     estimated with low confidence to be 574,867,290 rows (
     46,564,250,490 bytes).  The estimated time for this step is 6
     minutes and 38 seconds. 
  9) We do an all-AMPs SUM step to aggregate from Spool 9 (Last Use) by
     way of an all-rows scan , grouping by field1 (
     TEMP.T1.PRODUCT_SNO ,TEMP.T2.PRODUCT_SNO
     ,TEMP.T3.PRODUCT_SNO ,TEMP.T4.PRODUCT_SNO).  Aggregate
     Intermediate Results are computed globally, then placed in Spool
     11.  The size of Spool 11 is estimated with low confidence to be
     50,625 rows (3,695,625 bytes).  The estimated time for this step
     is 41.87 seconds. 
 10) We do an all-AMPs RETRIEVE step from Spool 11 (Last Use) by way of
     an all-rows scan into Spool 1 (all_amps), which is redistributed
     by the hash code of (TEMP.T1.PRODUCT_SNO,
     TEMP.T2.PRODUCT_SNO, TEMP.T3.PRODUCT_SNO,
     TEMP.T4.PRODUCT_SNO) to all AMPs.  Then we do a SORT to order
     Spool 1 by row hash.  The size of Spool 1 is estimated with low
     confidence to be 50,625 rows (1,873,125 bytes).  The estimated
     time for this step is 0.04 seconds. 
 11) We do an all-AMPs MERGE into TEMP.TABLE2 from
     Spool 1 (Last Use).  The size is estimated with low confidence to
     be 50,625 rows.  The estimated time for this step is 1 second. 
 12) We spoil the parser's dictionary cache for the table. 
 13) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> No rows are returned to the user as the result of statement 1. 

All the required statistics have been collected.

1 Answer:

Answer 0 (score: 0)

I must admit I am no Teradata expert, but I did a quick check and you can use ANSI JOIN syntax.

So first I rewrote your query so that I could understand it:

INSERT INTO 
    "TEMP"."TABLE2"
SELECT
    T1."PRODUCT_SNO",
    T2."PRODUCT_SNO",
    T3."PRODUCT_SNO",
    T4."PRODUCT_SNO",
    ((COUNT(DISTINCT T1."ACCESS_METHOD_ID")(FLOAT)) / 
        (MAX(T5.GROUP_NUM(FLOAT))))
FROM
    "TEMP"."TABLE1" T1
    INNER JOIN "TEMP"."TABLE1" T2 ON T2."ACCESS_METHOD_ID" = T1."ACCESS_METHOD_ID" 
        AND T2."SUBSCRIPTION_DATE" > T1."SUBSCRIPTION_DATE"
    INNER JOIN "TEMP"."TABLE1" T3 ON T3."ACCESS_METHOD_ID" = T2."ACCESS_METHOD_ID" 
        AND T3."SUBSCRIPTION_DATE" > T2."SUBSCRIPTION_DATE"
    INNER JOIN "TEMP"."TABLE1" T4 ON T4."ACCESS_METHOD_ID" = T3."ACCESS_METHOD_ID" 
        AND T4."SUBSCRIPTION_DATE" > T3."SUBSCRIPTION_DATE"
    CROSS JOIN "TEMP"."_TWM_GROUP_COUNT" T5
GROUP BY 
    T1."PRODUCT_SNO",
    T2."PRODUCT_SNO",
    T3."PRODUCT_SNO",
    T4."PRODUCT_SNO";

Note that many of these changes are just personal preference, but others will "bring your query into the 21st century" ;P

Now that I can read your SQL, I can make some assumptions about what you are actually trying to achieve:

  • You have a table of products, where each product has a serial number, an "access method" (no idea what that is?) and a subscription date;
  • You are looking for products that share the same "access method", chaining them together in subscription-date order, and then showing the serial number of each product in the chain;
  • Each chain must be exactly 4 products long. I am not sure what happens when a chain has fewer or more than 4 products (as far as I can see, a chain with fewer than 4 products would simply be discarded);
  • You also have a measure computed across this logic: you count the distinct access methods in each chain and divide by a number taken from another table we know nothing about.

That is really not much to go on, but I can see a few places you could look at for optimization:

  • You only use the _TWM_GROUP_COUNT table for one thing: MAX(GROUP_NUM). So you could resolve that before the main query and avoid this potentially expensive JOIN. I am not sure exactly how you would do this in Teradata, but in other SQL variants you could stick it in a variable, use a common table expression, use a subquery, etc. If that table holds many rows, the optimizer may effectively run your query x times and then throw away x-1 of the result sets! (See the first sketch after this list.)
  • Any non-equi join is going to be inefficient, but it looks as if you could avoid them. If your table is not indexed by SUBSCRIPTION_DATE, it may help to pre-sort the data and add a numeric sequence number (in other SQL variants this would be ROW_NUMBER() OVER (ORDER BY SUBSCRIPTION_DATE)-type syntax); your date comparisons could then become numeric comparisons (see the second sketch);
  • Obviously, indexing matters a lot here;
  • Finally, you could split the query into several stages, starting with the T1-to-T2 join and then using that as the basis for the (T1-to-T2)-to-T3 join, and so on. It may not help, but it could be worth a try (see the last sketch below).
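
To make the first point concrete, here is a minimal, untested sketch that folds MAX(GROUP_NUM) into a one-row derived table, so the cross join no longer multiplies the intermediate result. Apart from the derived table (and the MAX_GROUP_NUM alias, which I made up), it is the rewrite from above:

INSERT INTO 
    "TEMP"."TABLE2"
SELECT
    T1."PRODUCT_SNO",
    T2."PRODUCT_SNO",
    T3."PRODUCT_SNO",
    T4."PRODUCT_SNO",
    (COUNT(DISTINCT T1."ACCESS_METHOD_ID") (FLOAT)) / MAX(T5.MAX_GROUP_NUM)
FROM
    "TEMP"."TABLE1" T1
    INNER JOIN "TEMP"."TABLE1" T2 ON T2."ACCESS_METHOD_ID" = T1."ACCESS_METHOD_ID" 
        AND T2."SUBSCRIPTION_DATE" > T1."SUBSCRIPTION_DATE"
    INNER JOIN "TEMP"."TABLE1" T3 ON T3."ACCESS_METHOD_ID" = T2."ACCESS_METHOD_ID" 
        AND T3."SUBSCRIPTION_DATE" > T2."SUBSCRIPTION_DATE"
    INNER JOIN "TEMP"."TABLE1" T4 ON T4."ACCESS_METHOD_ID" = T3."ACCESS_METHOD_ID" 
        AND T4."SUBSCRIPTION_DATE" > T3."SUBSCRIPTION_DATE"
    -- the derived table has exactly one row, so this cross join
    -- no longer blows up the row count
    CROSS JOIN (
        SELECT MAX(GROUP_NUM) (FLOAT) AS MAX_GROUP_NUM
        FROM "TEMP"."_TWM_GROUP_COUNT"
    ) T5
GROUP BY 
    T1."PRODUCT_SNO",
    T2."PRODUCT_SNO",
    T3."PRODUCT_SNO",
    T4."PRODUCT_SNO";

For the second point, the pre-numbering idea would look something like this (Teradata does support ROW_NUMBER; SEQ_NO is an invented alias):

SELECT
    "PRODUCT_SNO",
    "ACCESS_METHOD_ID",
    "SUBSCRIPTION_DATE",
    -- rank the rows per access method once, so later steps can
    -- compare small integers instead of dates
    ROW_NUMBER() OVER (
        PARTITION BY "ACCESS_METHOD_ID"
        ORDER BY "SUBSCRIPTION_DATE"
    ) AS SEQ_NO
FROM "TEMP"."TABLE1";

And for the last point, a sketch of the staged approach using volatile tables (CHAIN2, CHAIN3 and the column aliases are invented for illustration; a fourth step would follow the same pattern before the final aggregation):

CREATE VOLATILE TABLE "CHAIN2" AS (
    SELECT
        T1."PRODUCT_SNO" AS P1,
        T2."PRODUCT_SNO" AS P2,
        T1."ACCESS_METHOD_ID" AS ACCESS_METHOD_ID,
        -- only the latest date matters, because the original
        -- predicates only compare consecutive rows in the chain
        T2."SUBSCRIPTION_DATE" AS LAST_DATE
    FROM "TEMP"."TABLE1" T1
    INNER JOIN "TEMP"."TABLE1" T2 ON T2."ACCESS_METHOD_ID" = T1."ACCESS_METHOD_ID" 
        AND T2."SUBSCRIPTION_DATE" > T1."SUBSCRIPTION_DATE"
) WITH DATA
PRIMARY INDEX (ACCESS_METHOD_ID)
ON COMMIT PRESERVE ROWS;

CREATE VOLATILE TABLE "CHAIN3" AS (
    SELECT
        C.P1,
        C.P2,
        T3."PRODUCT_SNO" AS P3,
        C.ACCESS_METHOD_ID,
        T3."SUBSCRIPTION_DATE" AS LAST_DATE
    FROM "CHAIN2" C
    INNER JOIN "TEMP"."TABLE1" T3 ON T3."ACCESS_METHOD_ID" = C.ACCESS_METHOD_ID
        AND T3."SUBSCRIPTION_DATE" > C.LAST_DATE
) WITH DATA
PRIMARY INDEX (ACCESS_METHOD_ID)
ON COMMIT PRESERVE ROWS;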

That may not be much help, but without some sample data and so on there really is not much more to go on...