Oracle: copying blob data from one table to another

Posted: 2011-08-23 12:42:54

Tags: copy oracle10g rows

I have a table with a blob column (more than 1,000,000 rows and 60 GB of data). I want to move most of the table's rows (but not all of them) to another table. I tried an insert into X select from y statement, but it is far too slow.

What is the fastest way to do this?

I am on Oracle 10 or 11.

3 Answers:

Answer 0 (score: 3)

Use the /*+ append */ hint to get past the archive-log overhead. With the hint, Oracle performs a direct-path insert and generates far less redo (essentially none on a NOLOGGING table), so very little ends up in the archive logs:

insert  /*+ append */ into TABLE1
select  *
from    TABLE2
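For the original question (moving most rows rather than all of them), a minimal sketch of the full direct-path pattern might look like the following; TABLE1, TABLE2, and the keep_flag filter column are placeholders, not names from the question:

-- Optional: mark the target NOLOGGING so the direct-path insert
-- generates minimal redo (the table is then unrecoverable from
-- archive logs until the next backup)
alter table TABLE1 nologging;

-- Direct-path insert: loads blocks above the high-water mark,
-- bypassing the buffer cache
insert /*+ append */ into TABLE1
select  *
from    TABLE2
where   keep_flag = 'Y';   -- hypothetical predicate: most rows, not all

-- The same session cannot read TABLE1 again until the direct-path
-- insert is committed (ORA-12838 otherwise)
commit;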

Answer 1 (score: 1)

It may be too late to suggest this, but if your new (destination) table has constraints, indexes, or triggers, the solution above can be helped further: drop or disable them first, load the bulk data, and only then recreate/re-enable the constraints, indexes, and triggers and gather statistics on the table and its indexes, as sketched below. This time-saving approach is only advisable when you need to copy the bulk data once, because for every conventionally inserted row the DBMS has to enforce constraints and checks and maintain indexes, which slows the load down.
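A minimal sketch of that load pattern (the constraint, trigger, index, and table names below are made up for illustration):

-- 1. Disable/drop whatever slows down conventional inserts
alter table target_table disable constraint fk_target_parent;
alter trigger trg_target_audit disable;
drop index ix_target_col1;

-- 2. Bulk load (direct path, as in the answer above)
insert /*+ append */ into target_table select * from source_table;
commit;

-- 3. Recreate/re-enable everything and refresh optimizer statistics
create index ix_target_col1 on target_table (col1);
alter trigger trg_target_audit enable;
alter table target_table enable constraint fk_target_parent;
exec dbms_stats.gather_table_stats(user, 'TARGET_TABLE')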

Answer 2 (score: 0)

OK, we know nothing about your system, so it is hard to give you a definitive answer; it really depends on your environment. In any case, here are some tests showing the time and resources needed by your method versus an alternative:

Call your method Method 1 and the alternative Method 2.

20:43:24 SQL> set autotrace on;
20:43:30 SQL> alter session set SQL_TRACE=TRUE;

Session altered.

20:43:39 SQL> --let's make sure we are reading from disk (for arguments sake)
20:43:45 SQL> alter system flush shared_pool;

System altered.

20:43:45 SQL> alter system flush buffer_cache;

System altered.

20:43:45 SQL> 
20:43:45 SQL> --clear my examples
20:43:45 SQL> drop table t;
drop table t
           *
ERROR at line 1:
ORA-00942: table or view does not exist


20:43:49 SQL> drop table u;
drop table u
           *
ERROR at line 1:
ORA-00942: table or view does not exist


20:43:49 SQL> 
20:43:49 SQL> --create table u - we will populate this with random numbers
20:43:49 SQL> create table u (y varchar2(4000));

Table created.

20:43:50 SQL> 
20:43:50 SQL> --insert 1 million rows of random numbers
20:43:50 SQL> insert into u
20:43:50   2  (select dbms_random.normal
20:43:50   3  from dual
20:43:50   4  CONNECT BY level <= 1000000);

1000000 rows created.


Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | U    |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=1000000)


Statistics
----------------------------------------------------------
       4175  recursive calls
      58051  db block gets
      13118  consistent gets
         47  physical reads
   54277624  redo size
        675  bytes sent via SQL*Net to client
        647  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         56  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

20:44:21 SQL> 
20:44:21 SQL> --create table t - we will populate this from table u
20:44:21 SQL> create table t (x varchar2(4000));

Table created.

20:44:21 SQL> 
20:44:21 SQL> --let's make sure we are reading from disk (for arguments sake)
20:44:21 SQL> alter system flush shared_pool;

System altered.

20:44:21 SQL> alter system flush buffer_cache;

System altered.

20:44:26 SQL> 
20:44:26 SQL> --insert data from u to t (this is how you said you did this)
20:44:26 SQL> insert into t (select * from u);

1000000 rows created.


Execution Plan
----------------------------------------------------------
Plan hash value: 537870620

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |   997K|  1905M|  1750   (1)| 00:00:21 |
|   1 |  LOAD TABLE CONVENTIONAL | T    |       |       |            |          |
|   2 |   TABLE ACCESS FULL      | U    |   997K|  1905M|  1750   (1)| 00:00:21 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
       5853  recursive calls
      58201  db block gets
      24213  consistent gets
       6551  physical reads
   54591764  redo size
        681  bytes sent via SQL*Net to client
        599  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         57  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

20:44:41 SQL> 
20:44:41 SQL> 
20:44:41 SQL> --now let's start over with a different method
20:44:41 SQL> drop table t;

Table dropped.

20:44:48 SQL> drop table u;

Table dropped.

20:44:50 SQL> 
20:44:50 SQL> --create table u - we will populate this with random numbers
20:44:50 SQL> create table u (y varchar2(4000));

Table created.

20:44:51 SQL> 
20:44:51 SQL> --insert 1 million rows of random numbers
20:44:51 SQL> insert into u
20:44:51   2  (select dbms_random.normal
20:44:51   3  from dual
20:44:51   4  CONNECT BY level <= 1000000);

1000000 rows created.


Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | U    |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=1000000)


Statistics
----------------------------------------------------------
       2908  recursive calls
      58153  db block gets
      12831  consistent gets
         10  physical reads
   54284104  redo size
        683  bytes sent via SQL*Net to client
        647  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         31  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

20:45:20 SQL> 
20:45:20 SQL> --let's make sure we are reading from disk (for arguments sake)
20:45:20 SQL> alter system flush shared_pool;

System altered.

20:45:20 SQL> alter system flush buffer_cache;

System altered.

20:45:25 SQL> 
20:45:25 SQL> --create table t using table u
20:45:25 SQL> create table t as (select * from u);

Table created.

20:45:36 SQL> 
20:45:36 SQL> drop table t;

Table dropped.

20:45:41 SQL> drop table u;

Table dropped.

20:45:41 SQL> 
20:45:41 SQL> commit;

Commit complete.

20:45:41 SQL> spool off

OK, so the two methods we care about in this test are:

insert into t (select * from u);

for which we got an autotrace report, and

create table t as (select * from u);

for which we got no autotrace output (autotrace does not report statistics for DDL).
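If you do want a plan for the CTAS, one option (a sketch, not part of the original test) is to run it through EXPLAIN PLAN instead:

explain plan for create table t as (select * from u);
select * from table(dbms_xplan.display);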

Luckily I also ran sql_trace, and I have TKPROF output with the statistics.
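For reference, TKPROF is just a formatting pass over the raw session trace file; the invocation looks something like this (the trace file name below is made up — use whatever your session wrote to user_dump_dest):

tkprof orcl_ora_12345.trc tkprof_report.txt sys=no sort=exeela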

Here is what I got. For "insert into t (select * from u);":

********************************************************************************

SQL ID: bjdnhkhq8r6h4
Plan Hash: 537870620
insert into t (select * from u)



call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.03          2          2          0           0
Execute      1      1.74       7.67       6201      22538      58121     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.74       7.71       6203      22540      58121     1000000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=0 pr=0 pw=0 time=0 us)
      1   TABLE ACCESS FULL U (cr=4 pr=5 pw=0 time=0 us cost=1750 size=1997891896 card=997948)

********************************************************************************

and for "create table t as (select * from u)" we get:

********************************************************************************

SQL ID: asawpwvdj1nbv
Plan Hash: 2321469388
create table t as (select * from u)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.03          2          2          1           0
Execute      1      0.90       2.68       6372      12823       8573     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.90       2.71       6374      12825       8574     1000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD AS SELECT  (cr=13400 pr=6382 pw=6370 time=0 us)
1000000   TABLE ACCESS FULL U (cr=12640 pr=6370 pw=0 time=349545 us cost=1750 size=2159012856 card=1078428)

********************************************************************************

So what does this tell us? Well:

- Method 2 took about 65% less elapsed time overall than Method 1 (about 5 seconds less in total for 1 million rows)
- Method 2 used about 48% less CPU time overall than Method 1
- Method 2 read slightly more from disk than Method 1
- Method 2 retrieved far fewer buffers than Method 1
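Applied to your original problem, a sketch of Method 2 might look like this (keep_flag and the table names are made up, since you want to move most rows rather than all; NOLOGGING and PARALLEL are optional extras if your environment allows them):

-- Note: the blob column's LOB segment keeps its own LOGGING/CACHE
-- settings unless you also give it a LOB storage clause
create table new_table nologging parallel 4
as
select  *
from    old_table
where   keep_flag = 'Y';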

Hope this helps :)