Copying BLOB data from one table to another in Oracle
1

I have a table with a BLOB column (over 1,000,000 rows and 60 GB of data). I want to move most of the table's rows (not all of them) into another table. I tried an insert into X select from Y statement, but it is far too slow.

What is the fastest way to do this?

I am on Oracle 10 or 11.

+0

You may want to ask your question on http://dba.stackexchange.com/. –

+1

@ismail - can you define "too slow"? How long does it take to copy the 60 GB of data? How long do you need it to take? What are the wait events? And does the 60 GB include the size of the table segment? Or the LOB segment? Or the total size of both segments? –

+0

It takes all night (18:00-08:00) to copy it. Is that normal? –

Answers

3

Use the /*+ append */ hint to bypass the archive log. When you use the hint, Oracle does a direct-path insert instead of a conventional one and doesn't create archive logs.

insert /*+ append */ into TABLE1 
select * 
from TABLE2; 
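
For the original case (moving only some rows of a 60 GB BLOB table), a fuller version of this approach might look like the sketch below. The table names and the filter are placeholders, and NOLOGGING is an optional extra beyond what this answer states: it cuts redo further, at the cost of the loaded rows not being recoverable from the archive logs until the next backup.

alter table table1 nologging; 

insert /*+ append */ into table1 
select * 
from table2 
where <your filter>;  -- copy only the rows you want to move 

commit;  -- a direct-path insert must be committed before the same 
         -- session can read the table again 

alter table table1 logging;  -- restore normal logging afterwards 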
0

OK, so we don't know your system, so it's hard to tell you much more; your question really depends on your environment. Anyway, here are some tests showing the time and resources it takes to use your method versus another method:

Let's say your method is Method 1 and the other method is Method 2.

20:43:24 SQL> set autotrace on; 
20:43:30 SQL> alter session set SQL_TRACE=TRUE; 

Session altered. 

20:43:39 SQL> --let's make sure we are reading from disk (for arguments sake) 
20:43:45 SQL> alter system flush shared_pool; 

System altered. 

20:43:45 SQL> alter system flush buffer_cache; 

System altered. 

20:43:45 SQL> 
20:43:45 SQL> --clear my examples 
20:43:45 SQL> drop table t; 
drop table t 
      * 
ERROR at line 1: 
ORA-00942: table or view does not exist 


20:43:49 SQL> drop table u; 
drop table u 
      * 
ERROR at line 1: 
ORA-00942: table or view does not exist 


20:43:49 SQL> 
20:43:49 SQL> --create table u - we will populate this with random numbers 
20:43:49 SQL> create table u (y varchar2(4000)); 

Table created. 

20:43:50 SQL> 
20:43:50 SQL> --insert 1 million rows of random numbers 
20:43:50 SQL> insert into u 
20:43:50 2 (select dbms_random.normal 
20:43:50 3 from dual 
20:43:50 4 CONNECT BY level <= 1000000); 

1000000 rows created. 


Execution Plan 
---------------------------------------------------------- 
Plan hash value: 1236776825 

------------------------------------------------------------------------------ 
| Id | Operation      | Name | Rows | Cost (%CPU)| Time  | 
------------------------------------------------------------------------------ 
| 0 | INSERT STATEMENT    |  |  1 |  2 (0)| 00:00:01 | 
| 1 | LOAD TABLE CONVENTIONAL  | U |  |   |   | 
|* 2 | CONNECT BY WITHOUT FILTERING|  |  |   |   | 
| 3 | FAST DUAL     |  |  1 |  2 (0)| 00:00:01 | 
------------------------------------------------------------------------------ 

Predicate Information (identified by operation id): 
--------------------------------------------------- 

    2 - filter(LEVEL<=1000000) 


Statistics 
---------------------------------------------------------- 
     4175 recursive calls 
     58051 db block gets 
     13118 consistent gets 
     47 physical reads 
    54277624 redo size 
     675 bytes sent via SQL*Net to client 
     647 bytes received via SQL*Net from client 
      3 SQL*Net roundtrips to/from client 
     56 sorts (memory) 
      0 sorts (disk) 
    1000000 rows processed 

20:44:21 SQL> 
20:44:21 SQL> --create table t - we will populate this from table u 
20:44:21 SQL> create table t (x varchar2(4000)); 

Table created. 

20:44:21 SQL> 
20:44:21 SQL> --let's make sure we are reading from disk (for arguments sake) 
20:44:21 SQL> alter system flush shared_pool; 

System altered. 

20:44:21 SQL> alter system flush buffer_cache; 

System altered. 

20:44:26 SQL> 
20:44:26 SQL> --insert data from u to t (this is how you said you did this) 
20:44:26 SQL> insert into t (select * from u); 

1000000 rows created. 


Execution Plan 
---------------------------------------------------------- 
Plan hash value: 537870620 

--------------------------------------------------------------------------------- 
| Id | Operation    | Name | Rows | Bytes | Cost (%CPU)| Time  | 
--------------------------------------------------------------------------------- 
| 0 | INSERT STATEMENT   |  | 997K| 1905M| 1750 (1)| 00:00:21 | 
| 1 | LOAD TABLE CONVENTIONAL | T |  |  |   |   | 
| 2 | TABLE ACCESS FULL  | U | 997K| 1905M| 1750 (1)| 00:00:21 | 
--------------------------------------------------------------------------------- 

Note 
----- 
    - dynamic sampling used for this statement (level=2) 


Statistics 
---------------------------------------------------------- 
     5853 recursive calls 
     58201 db block gets 
     24213 consistent gets 
     6551 physical reads 
    54591764 redo size 
     681 bytes sent via SQL*Net to client 
     599 bytes received via SQL*Net from client 
      3 SQL*Net roundtrips to/from client 
     57 sorts (memory) 
      0 sorts (disk) 
    1000000 rows processed 

20:44:41 SQL> 
20:44:41 SQL> 
20:44:41 SQL> --now let's start over with a different method 
20:44:41 SQL> drop table t; 

Table dropped. 

20:44:48 SQL> drop table u; 

Table dropped. 

20:44:50 SQL> 
20:44:50 SQL> --create table u - we will populate this with random numbers 
20:44:50 SQL> create table u (y varchar2(4000)); 

Table created. 

20:44:51 SQL> 
20:44:51 SQL> --insert 1 million rows of random numbers 
20:44:51 SQL> insert into u 
20:44:51 2 (select dbms_random.normal 
20:44:51 3 from dual 
20:44:51 4 CONNECT BY level <= 1000000); 

1000000 rows created. 


Execution Plan 
---------------------------------------------------------- 
Plan hash value: 1236776825 

------------------------------------------------------------------------------ 
| Id | Operation      | Name | Rows | Cost (%CPU)| Time  | 
------------------------------------------------------------------------------ 
| 0 | INSERT STATEMENT    |  |  1 |  2 (0)| 00:00:01 | 
| 1 | LOAD TABLE CONVENTIONAL  | U |  |   |   | 
|* 2 | CONNECT BY WITHOUT FILTERING|  |  |   |   | 
| 3 | FAST DUAL     |  |  1 |  2 (0)| 00:00:01 | 
------------------------------------------------------------------------------ 

Predicate Information (identified by operation id): 
--------------------------------------------------- 

    2 - filter(LEVEL<=1000000) 


Statistics 
---------------------------------------------------------- 
     2908 recursive calls 
     58153 db block gets 
     12831 consistent gets 
     10 physical reads 
    54284104 redo size 
     683 bytes sent via SQL*Net to client 
     647 bytes received via SQL*Net from client 
      3 SQL*Net roundtrips to/from client 
     31 sorts (memory) 
      0 sorts (disk) 
    1000000 rows processed 

20:45:20 SQL> 
20:45:20 SQL> --let's make sure we are reading from disk (for arguments sake) 
20:45:20 SQL> alter system flush shared_pool; 

System altered. 

20:45:20 SQL> alter system flush buffer_cache; 

System altered. 

20:45:25 SQL> 
20:45:25 SQL> --create table t using table u 
20:45:25 SQL> create table t as (select * from u); 

Table created. 

20:45:36 SQL> 
20:45:36 SQL> drop table t; 

Table dropped. 

20:45:41 SQL> drop table u; 

Table dropped. 

20:45:41 SQL> 
20:45:41 SQL> commit; 

Commit complete. 

20:45:41 SQL> spool off 

OK, so the two methods we care about, and which we tested, are:

insert into t (select * from u); 

for which we got an autotrace report, and

create table t as (select * from u); 

for which we did not get autotrace output.

Luckily, I was also running sql_trace, so I picked up the statistics with TKPROF.
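
(To reproduce this: the raw trace file is written to the directory pointed to by user_dump_dest, and a report like the ones below comes from running TKPROF on it. The trace and report file names here are hypothetical.)

tkprof orcl_ora_12345.trc tkprof_report.txt sys=no 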

Here is what I got. For "insert into t (select * from u);":

******************************************************************************** 

SQL ID: bjdnhkhq8r6h4 
Plan Hash: 537870620 
insert into t (select * from u) 



call  count  cpu elapsed  disk  query current  rows 
------- ------ -------- ---------- ---------- ---------- ---------- ---------- 
Parse  1  0.00  0.03   2   2   0   0 
Execute  1  1.74  7.67  6201  22538  58121  1000000 
Fetch  0  0.00  0.00   0   0   0   0 
------- ------ -------- ---------- ---------- ---------- ---------- ---------- 
total  2  1.74  7.71  6203  22540  58121  1000000 

Misses in library cache during parse: 1 
Misses in library cache during execute: 1 
Optimizer mode: ALL_ROWS 
Parsing user id: 91 

Rows  Row Source Operation 
------- --------------------------------------------------- 
     0 LOAD TABLE CONVENTIONAL (cr=0 pr=0 pw=0 time=0 us) 
     1 TABLE ACCESS FULL U (cr=4 pr=5 pw=0 time=0 us cost=1750 size=1997891896 card=997948) 

******************************************************************************** 

And for "create table t as (select * from u)" we got:

******************************************************************************** 

SQL ID: asawpwvdj1nbv 
Plan Hash: 2321469388 
create table t as (select * from u) 


call  count  cpu elapsed  disk  query current  rows 
------- ------ -------- ---------- ---------- ---------- ---------- ---------- 
Parse  1  0.00  0.03   2   2   1   0 
Execute  1  0.90  2.68  6372  12823  8573  1000000 
Fetch  0  0.00  0.00   0   0   0   0 
------- ------ -------- ---------- ---------- ---------- ---------- ---------- 
total  2  0.90  2.71  6374  12825  8574  1000000 

Misses in library cache during parse: 1 
Optimizer mode: ALL_ROWS 
Parsing user id: 91 

Rows  Row Source Operation 
------- --------------------------------------------------- 
     0 LOAD AS SELECT (cr=13400 pr=6382 pw=6370 time=0 us) 
1000000 TABLE ACCESS FULL U (cr=12640 pr=6370 pw=0 time=349545 us cost=1750 size=2159012856 card=1078428) 

******************************************************************************** 

So what does this tell us? Well:

- Method 2 took roughly 65% less elapsed time overall than Method 1 (for 1 million rows) 
- Method 2 used roughly 48% less CPU time overall than Method 1 
- Method 2 read slightly more from disk than Method 1 
- Method 2 retrieved a lot fewer buffers than Method 1 
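
Applied back to the original question, where only some of the rows should move, Method 2 would look roughly like the sketch below. The table names and the WHERE clause are placeholders, and NOLOGGING and PARALLEL are optional additions that were not part of the test above:

create table new_table nologging parallel 4 
as 
select * from old_table 
where <rows you want to move>; 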

Hope this helps you :)

1

It's quite late to be offering advice, but if the new (target) table has constraints, indexes, or triggers, it may help to drop or disable them first, then load the bulk data, and finally re-create/re-enable the constraints, indexes, and triggers and analyze your table and indexes. This time-saving approach is only advisable when you are copying the bulk data once: while you insert new records, the DBMS's enforcement of constraints and checks and its index maintenance slow the load down.
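
A minimal sketch of that sequence, using hypothetical names big_target (with one constraint big_target_pk and one index big_target_idx) and big_source:

alter table big_target disable constraint big_target_pk; 
alter index big_target_idx unusable; 
alter session set skip_unusable_indexes = true;  -- let DML ignore the unusable index 

insert /*+ append */ into big_target 
select * from big_source; 
commit; 

alter index big_target_idx rebuild; 
alter table big_target enable constraint big_target_pk; 

Note that disabling a primary key constraint can drop its underlying index, and re-enabling it rebuilds that index, so check what your own constraints do before relying on this.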
