Oracle partitioning strategy for a DELETE performance problem

Date: 2011-04-27 13:56:20

Tags: oracle oracle11g

This is a follow-up to Strategy to improve Oracle DELETE performance. To recap, we have a large DB containing a hierarchy of tables representing 1D through 4D output data from an optimization system. Reading and writing this data is fast and provides a convenient means for our various systems to make use of the information.

However, deleting unused data has become a bear. The current table hierarchy looks like this:

/* Metadata tables */
Case(CaseId, DeleteFlag, ...) On Delete Cascade CaseId
OptimizationRun(OptId, CaseId, ...) On Delete Cascade OptId
OptimizationStep(StepId, OptId, ...) On Delete Cascade StepId

/* Data tables */
Files(FileId, CaseId, Blob) /* deletes are near instantaneous here */

/* Data per run */
OnedDataX(OptId, ...)
TwoDDataY1(OptId, ...) /* packed representation of a 1D slice */

/* Data not only per run, but per step */
TwoDDataY2(StepId, ...)  /* packed representation of a 1D slice */
ThreeDDataZ(StepId, ...) /* packed representation of a 2D slice */
FourDDataZ(StepId, ...)  /* packed representation of a 3D slice */
/* ... About 10 or so of these tables exist */

What I'm looking for is a way to partition the data by Case so that I can drop a case's partitions to delete its data. Ideally, OptimizationRun would have an interval partition based on CaseId, and that would filter down to its children. However, 11g doesn't support the combination of INTERVAL and REF partitioning.

I'm fairly certain ENABLE ROW MOVEMENT is out of the question given the size of the DB and the requirement that the tablespaces use ASSM. Perhaps RANGE partitioning on OptimizationRun and REF partitioning on the rest?
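For what that split would look like, here is a minimal sketch of RANGE-on-the-parent plus REF-on-the-children in 11g syntax. The column lists and constraint names are made up for illustration and this is untested against the real schema:

```sql
-- Hypothetical sketch: RANGE-partition the parent on CaseId,
-- then REF-partition each data table off its foreign key.
CREATE TABLE OptimizationRun (
  OptId  NUMBER PRIMARY KEY,
  CaseId NUMBER NOT NULL
)
PARTITION BY RANGE (CaseId) (
  PARTITION p_initial VALUES LESS THAN (1)
);

CREATE TABLE OnedDataX (
  OptId NUMBER NOT NULL,  -- REF partitioning requires the FK column NOT NULL
  -- ... data columns ...
  CONSTRAINT fk_onedx_opt FOREIGN KEY (OptId)
    REFERENCES OptimizationRun (OptId)
)
PARTITION BY REFERENCE (fk_onedx_opt);
```

With this arrangement, dropping a partition of OptimizationRun would take the corresponding partitions of the REF-partitioned children with it; only the partition maintenance on the parent has to be done by hand.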

My guess is that with this strategy I'd need a trigger to accomplish something like the following:

CREATE OR REPLACE TRIGGER Case_BeforeInsert_MakePartitions
BEFORE INSERT
    ON Case
    FOR EACH ROW
DECLARE
    v_PartName varchar(64)       := 'CASE_OPTPART_' || :new.CaseId;
    v_PartRange Case.CaseId%type := :new.CaseId;
BEGIN
    -- Take :new.CaseId and create the partition
    ALTER TABLE OptimizationRun
        ADD PARTITION v_PartName
        VALUES LESS THAN ( v_PartRange );
END;

Followed by the requisite trigger for the deletes:

CREATE OR REPLACE TRIGGER Case_BeforeDelete_RemovePartitions
BEFORE DELETE
    ON Case
    FOR EACH ROW
DECLARE
    v_PartName varchar(64) := 'CASE_OPTPART_' || :old.CaseId;
BEGIN
    -- Drop the partitions associated with the case
    ALTER TABLE OptimizationRun
        DROP PARTITION v_PartName;
END;

Good idea? Or is this an idea out of the SNL Bad Idea Jeans commercial?

Update, for size reference

  • 1D data tables: ~1.7 GB
  • 2D data tables: ~12.5 GB
  • 3D data tables: ~117.3 GB
  • 4D data tables: ~315.2 GB

2 answers:

Answer 0 (score: 4)

I'm pretty sure you're on the right track with partitioning to deal with your delete performance problem. However, I don't think you can mix that with triggers. Complex logic in triggers has always bothered me, but aside from that, here are the problems you'd likely run into:

  • DDL statements break transaction logic, since Oracle performs a commit of the current transaction before any DDL statement.
  • Fortunately, you can't commit inside a trigger anyway (because Oracle is in the middle of an operation and the database is not in a consistent state).
  • Using an autonomous transaction to perform the DDL is a (poor?) workaround for the insert, but it's unlikely to work for the DELETE, since it would probably interfere with the ON DELETE CASCADE logic.
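For illustration, the autonomous-transaction workaround mentioned in the last bullet would look roughly like the sketch below. The procedure and partition names are hypothetical:

```sql
CREATE OR REPLACE PROCEDURE add_case_partition (p_case_id IN NUMBER) AS
  -- Runs in its own transaction, so the implicit commit issued by
  -- the DDL does not commit the caller's (trigger's) transaction.
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  EXECUTE IMMEDIATE
    'ALTER TABLE OptimizationRun ADD PARTITION case_optpart_' || p_case_id ||
    ' VALUES LESS THAN (' || TO_CHAR(p_case_id + 1) || ')';
END;
/
```

Even then, the new partition exists outside the inserting transaction's control, which is part of why a plain procedure (below) is the cleaner design.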

It would be easier to code and maintain a procedure that handles the dropping and creating of partitions, for example:

CREATE PROCEDURE add_case (case_id, ...) AS
BEGIN
   EXECUTE IMMEDIATE 'ALTER TABLE OptimizationRun ADD partition...';
   /* repeat for each child table */
   INSERT INTO Case VALUES (...);
END;

Regarding the dropping of partitions, you'll have to check whether it plays well with referential integrity. You may need to disable the foreign-key constraints before dropping a parent-table partition in a parent-child table relationship.

Also note that global indexes will be left in an UNUSABLE state after a partition drop. You'll have to rebuild them unless you specify UPDATE GLOBAL INDEXES in the drop statement (in which case they're maintained automatically, at the cost of a slower drop).
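Putting those two caveats together, the purge of one case might look like the following sketch. The partition and constraint names are assumed, and whether the FK dance is needed depends on how the child tables are partitioned:

```sql
-- Hypothetical purge of case 42. If the child tables are not
-- REF-partitioned, their FK constraints may need to be disabled
-- around the drop.
ALTER TABLE OnedDataX DISABLE CONSTRAINT fk_onedx_opt;

-- UPDATE GLOBAL INDEXES keeps global indexes usable (slower drop).
ALTER TABLE OptimizationRun DROP PARTITION case_optpart_42
  UPDATE GLOBAL INDEXES;

ALTER TABLE OnedDataX ENABLE NOVALIDATE CONSTRAINT fk_onedx_opt;
```

NOVALIDATE re-enables the constraint without rescanning existing rows; use a validating enable if you want Oracle to verify nothing was orphaned.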

Answer 1 (score: 1)

Not possible: you can't issue DDL like that in a row-level trigger.

[Comments about a possible design issue edited out; resolved]

Have you considered parallelizing the script? Instead of a sweeper that relies on delete cascades, leverage DBMS_SCHEDULER to parallelize the job. You can safely run parallel deletes against tables at the same level of the dependency tree.

begin
  dbms_scheduler.create_program
    (program_name => 'snapshot_purge_cases',
     program_type => 'PLSQL_BLOCK',
     program_action => 
      'BEGIN
         delete from purge$Case;
         insert into purge$Case
         select CaseId 
           from Case
          where deleteFlag = 1;

         delete from purge$Opt;
         insert into purge$Opt
         select OptId 
           from OptimizationRun
          where CaseId in (select CaseId from purge$Case);

         delete from purge$Step;
         insert into purge$Step
         select StepId 
           from OptimizationStep
          where OptId in (select OptId from purge$Opt);

         commit;
       END;',
     enabled => true,
     comments => 'Program to snapshot keys for purging'
    );

  dbms_scheduler.create_program 
    (program_name => 'purge_case',
     program_type => 'PLSQL_BLOCK',
     program_action => 'BEGIN 
                          loop
                            delete from Case 
                              where CaseId in (select CaseId from purge$Case)
                                and rownum <= 50000;
                            exit when sql%rowcount = 0;
                            commit;
                          end loop;
                          commit;
                        END;',
     enabled => true,
     comments => 'Program to purge the Case Table'
    );

  -- repeat for each table being purged

end;
/

That only sets up the programs. Next we need to create a job chain so we can tie them together:

BEGIN
  dbms_scheduler.create_chain 
   (chain_name => 'purge_case_chain');
END;
/

Now we create the steps in the job chain, using the programs from before:

BEGIN
  dbms_scheduler.define_chain_step
   (chain_name => 'purge_case_chain',
    step_name  => 'step_snapshot_purge_cases',
    program_name => 'snapshot_purge_cases'
   );

  dbms_scheduler.define_chain_step
   (chain_name => 'purge_case_chain',
    step_name  => 'step_purge_cases',
    program_name => 'purge_case'
   );

  -- repeat for every table
END;
/

Now we have to link the chain steps together. The chain will fan out like this:

  1. Snapshot the CaseIds, OptIds and StepIds to purge.
  2. Purge all the tables dependent on OptimizationStep.
  3. Purge all the tables dependent on OptimizationRun.
  4. Purge all the tables dependent on Case.
  5. Purge Case.

The code is then:

begin
  dbms_scheduler.define_chain_rule
   (chain_name => 'purge_case_chain',
    condition  => 'TRUE',
    action     => 'START step_snapshot_purge_cases',
    rule_name  => 'rule_snapshot_purge_cases'
   );

  -- repeat for every table dependent on OptimizationStep
  dbms_scheduler.define_chain_rule
   (chain_name => 'purge_case_chain',
    condition  => 'step_snapshot_purge_cases COMPLETED',
    action     => 'START step_purge_TwoDDataY2',
    rule_name  => 'rule_purge_TwoDDataY2'
   );

  -- repeat for every table dependent on OptimizationRun
  dbms_scheduler.define_chain_rule
   (chain_name => 'purge_case_chain',
    condition  => 'step_purge_TwoDDataY2  COMPLETED and
                   step_purge_ThreeDDataZ COMPLETED and
                   ... ',
    action     => 'START step_purge_OnedDataX',
    rule_name  => 'rule_purge_OnedDataX'
   );

  -- repeat for every table dependent on Case
  dbms_scheduler.define_chain_rule
   (chain_name => 'purge_case_chain',
    condition  => 'step_purge_OnedDataX  COMPLETED and
                   step_purge_TwoDDataY1 COMPLETED and
                   ... ',
    action     => 'START step_purge_Files',
    rule_name  => 'rule_purge_Files'
   );

  dbms_scheduler.define_chain_rule
   (chain_name => 'purge_case_chain',
    condition  => 'step_purge_Files           COMPLETED and
                   step_purge_OptimizationRun COMPLETED and
                   ... ',
    action     => 'START step_purge_Case',
    rule_name  => 'rule_purge_Case'
   );

  -- add a rule to end the chain
  dbms_scheduler.define_chain_rule
   (chain_name => 'purge_case_chain',
    condition  => 'step_purge_Case COMPLETED',
    action     => 'END',
    rule_name  => 'rule_end_chain'
   );

end;
/
    

Enable the chain:

BEGIN
  DBMS_SCHEDULER.enable ('purge_case_chain');
END;
/
    

You can run the chain manually:

BEGIN
  DBMS_SCHEDULER.RUN_CHAIN
   (chain_name => 'purge_case_chain',
    job_name   => 'purge_case_chain_run'
   );
END;
/
    

Or create a job to schedule it:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'job_purge_case',
    job_type        => 'CHAIN',
    job_action      => 'purge_case_chain',
    repeat_interval => 'freq=daily',
    start_date      => ...,
    end_date        => ...,
    enabled         => TRUE);
END;
/
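Once the job has run, you can check progress and failures from the standard scheduler dictionary views. A couple of illustrative queries (names match the objects created above):

```sql
-- Steps of the currently running chain and their states
SELECT step_name, state, error_code
  FROM user_scheduler_running_chains
 WHERE chain_name = 'PURGE_CASE_CHAIN';

-- History of past runs of the purge job
SELECT log_date, status
  FROM user_scheduler_job_run_details
 WHERE job_name = 'JOB_PURGE_CASE';
```

Note that the scheduler stores object names in upper case unless they were created as quoted identifiers, hence the upper-case literals in the predicates.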
    