A problem that had bugged us for a long time: several large projects run scheduled expdp logical backups every night, and two large tables always fail to export with a snapshot-too-old error.
■ expdp error messages
ORA-31693: Table data object "xxx"."T_BPM_INTERFACE_INFO" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 10 with name "_SYSSMU10_3590435561$" too small
ORA-31693: Table data object "xxx"."ORDER_SIGN" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 25 with name "_SYSSMU25_2496846052$" too small
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Job "SYS"."SYS_EXPORT_SCHEMA_01" completed with 2 error(s) at Tue Oct 15 16:14:01 2019 elapsed 0 17:08:56
■ Both failed tables contain LOB columns. In theory, raising the retention setting can postpone the snapshot-too-old error, but in practice the export was simply far too slow: ORDER_SIGN, a table of roughly 7 million rows, finally failed after 17 hours of backup. (A sketch for checking the relevant retention settings follows the row counts below.)
select count(1) from "xxx"."ORDER_SIGN";
7085468
select count(1) from "xxx"."T_BPM_INTERFACE_INFO";
88504505
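For context on why tuning retention alone did not help: with the default BasicFiles storage, consistent-read versions of LOB data are kept in the LOB segment itself, governed by the LOB's own PCTVERSION/RETENTION setting rather than only by the undo tablespace. A minimal sketch for checking those settings (schema name masked as elsewhere in this post):

-- Instance-level undo retention, in seconds
SHOW PARAMETER undo_retention;

-- LOB-level settings: for BasicFiles, old LOB versions live in the LOB
-- segment and are governed by PCTVERSION or RETENTION
SELECT table_name, column_name, pctversion, retention
FROM   dba_lobs
WHERE  owner = 'XXX'
AND    table_name IN ('ORDER_SIGN', 'T_BPM_INTERFACE_INFO');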
■ The ORDER_SIGN table had been exported into dmp file 02
[/usr/bin/ksh]$ls -l xxx.201910142305*
-rw-r--r-- 1 xxx oinstall       139007 Oct 15 16:14 xxx.201910142305.log
-rw-r----- 1 xxx oinstall  17443205120 Oct 15 16:14 xxx.201910142305_01.dmp
-rw-r----- 1 xxx oinstall 102942642176 Oct 15 16:14 xxx.201910142305_02.dmp
-rw-r----- 1 xxx oinstall  13191090176 Oct 15 16:14 xxx.201910142305_03.dmp
-rw-r----- 1 xxx oinstall  30517211136 Oct 15 16:14 xxx.201910142305_04.dmp
-rw-r----- 1 xxx oinstall  24942342144 Oct 15 16:14 xxx.201910142305_05.dmp
-rw-r----- 1 xxx oinstall  15852589056 Oct 15 16:14 xxx.201910142305_06.dmp
-rw-r----- 1 xxx oinstall  25592045568 Oct 15 16:14 xxx.201910142305_07.dmp
-rw-r----- 1 xxx oinstall  12913737728 Oct 15 16:14 xxx.201910142305_08.dmp
Testing showed that although dmp file 02 already contained a large amount of the failed table's data, that data still could not be imported back into a database.
■ The root cause is confirmed by the following MOS note
DataPump Export (EXPDP) Is Taking Long time To Export Tables With BASICFILES (Doc ID 1595380.1)
APPLIES TO
Oracle Database - Enterprise Edition - Version 11.2.0.3 to 12.1.0.2 [Release 11.2 to 12.1]

SYMPTOMS
Datapump export is taking more than double the time after a database upgrade from 10.2.0.4 to 11.2.0.3. In our example, one table with a LOB column takes more than 18 hours. The rest of the objects are exported within 30 mins.

CAUSE
Testing internally made clear that a huge decrease in time could be achieved by transforming BasicFiles into SecureFiles, even though the parallel option is still not usable.

SOLUTION
In our internal testing, the time for the export was reduced from 30 minutes to 6 minutes for 20 GB of data.
BasicFiles:  20.53 GB, 8752 rows => 10:38:17 - 11:09:32, around 30 minutes
SecureFiles: 20.53 GB, 8752 rows => 14:30:51 - 14:36:41, around 6 minutes
The solution is to transform the table to SecureFiles and then redo the export process.
In one sentence: for 11g tables with LOB columns, converting the LOB storage from the default BasicFiles to SecureFiles greatly reduces export time ("a huge decrease in time").
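To identify conversion candidates, the SECUREFILE status of each LOB can be checked in the dictionary; a minimal sketch (schema name masked):

-- List LOBs still stored as BasicFiles (SECUREFILE = 'NO');
-- these are the candidates for conversion
SELECT table_name, column_name, segment_name
FROM   dba_lobs
WHERE  owner = 'XXX'
AND    securefile = 'NO';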
■ The migration follows the MOS note below
HOW TO ONLINE CONVERT TO/FROM SECUREFILES FROM/TO BASICFILES (Doc ID 728758.1)
[The following steps have been tested]
-- CREATE OUR TABLE TO BE CONVERTED TO SECUREFILE
CREATE TABLE cust
( c_id   NUMBER PRIMARY KEY,
  c_zip  NUMBER,
  c_name VARCHAR(30) DEFAULT NULL,
  c_lob  CLOB
);

-- INSERT A ROW INTO THIS TABLE
INSERT INTO cust VALUES (1, 94065, 'hhh', 'ttt');
COMMIT;

-- CREATE OUR 'INTERIM' TABLE
CREATE TABLE cust_int
( c_id   NUMBER NOT NULL,
  c_zip  NUMBER,
  c_name VARCHAR(30) DEFAULT NULL,
  c_lob  CLOB
) LOB(c_lob) STORE AS SECUREFILE (NOCACHE FILESYSTEM_LIKE_LOGGING);

-- START THE REDEFINITION
DECLARE
  col_mapping VARCHAR2(1000);
BEGIN
  -- map all the columns in the interim table to the original table
  col_mapping := 'c_id   c_id , '||
                 'c_zip  c_zip , '||
                 'c_name c_name, '||
                 'c_lob  c_lob';
  DBMS_REDEFINITION.START_REDEF_TABLE('XXX', 'cust', 'cust_int', col_mapping);
END;
/

-- COPY THE CONSTRAINTS FROM OUR ORIGINAL TABLE TO THE INTERIM TABLE
DECLARE
  error_count PLS_INTEGER := 0;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('XXX', 'cust', 'cust_int',
    DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    TRUE, TRUE, TRUE, FALSE, error_count);
  DBMS_OUTPUT.PUT_LINE('errors := ' || TO_CHAR(error_count));
END;
/

-- KEEP THE INTERIM TABLE SYNCHRONIZED WITH THE ORIGINAL TABLE
BEGIN
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('XXX', 'cust', 'cust_int');
END;
/

-- FINISH THE REDEFINITION, WHICH SWAPS THE ORIGINAL AND INTERIM TABLES
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('XXX', 'cust', 'cust_int');

-- DROP THE INTERIM TABLE (NOW THE FORMER ORIGINAL TABLE)
DROP TABLE cust_int;

-- INSERT A ROW INTO THE NEW SECUREFILE TABLE TO PROVE THE PRIMARY KEY WAS PROPERLY MOVED
INSERT INTO cust VALUES (2, 94066, 'hhh', 'ttt');

-- VERIFY THAT THE NEW TABLE IS NOW USING SecureFiles
SELECT SECUREFILE FROM USER_LOBS WHERE TABLE_NAME = 'CUST';
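One practical note: if START_REDEF_TABLE fails or is interrupted partway, the redefinition must be aborted before it can be retried. A minimal cleanup sketch, using the same example names as above:

-- Abort an interrupted redefinition so it can be restarted cleanly
BEGIN
  DBMS_REDEFINITION.ABORT_REDEF_TABLE('XXX', 'cust', 'cust_int');
END;
/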
■ To minimize business impact, the online conversion was first tested on a test database
■ Export a recent slice of the large table's data
schemas='XXX';tablename='ORDER_SIGN';parallel=2;dumpdir=TEMP02
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
TIMESTAMP=`date +"%Y%m%d%H%M"`
DUMPNAME=${schemas}_${tablename}_${TIMESTAMP}
expdp '"/ as sysdba"' DIRECTORY=${dumpdir} JOB_NAME=${DUMPNAME} DUMPFILE=${DUMPNAME}.dmp LOGFILE=${DUMPNAME}.log COMPRESSION=ALL TABLES=${schemas}.${tablename} QUERY=${schemas}.${tablename}:\"WHERE id \> 11000000\" PARALLEL=${parallel} CLUSTER=N >>${DUMPNAME}.log 2>&1 &

. . exported "XXX"."ORDER_SIGN"  6.495 GB  512464 rows
Job "SYS"."HBWFM_ORDER_SIGN_201910151907" successfully completed at Tue Oct 15 20:23:19 2019 elapsed 0 01:16:07
■ Import
schemas='XXX';tablename='ORDER_SIGN'
DUMPNAME=${schemas}_${tablename}_201910151907
PARALLEL=1;DUMPDIR=DUMP_DIR_NFS;NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
impdp '"/ as sysdba"' directory=${DUMPDIR} job_name=IMPDP_${DUMPNAME} dumpfile=${DUMPNAME}.dmp logfile=${DUMPNAME}.log_impdp table_exists_action=replace TABLES=${schemas}.${tablename} parallel=${PARALLEL} cluster=N >/dev/null 2>&1 &

. . imported "XXX"."ORDER_SIGN"  6.495 GB  512464 rows
Job "SYS"."IMPDP_HBWFM_ORDER_SIGN_201910151907" successfully completed at Tue Oct 15 21:04:56 2019 elapsed 0 00:34:42
■ Online conversion
[Note: redefinition is time-consuming; on the 12c test database, 500,000 rows took 34 minutes]
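Progress of a long-running redefinition can be watched from another session; a sketch using V$SESSION_LONGOPS (the OPNAME/TARGET reported for the copy phase varies by version and operation):

-- Show active long operations with an estimated completion percentage
SELECT sid, opname, target, sofar, totalwork,
       ROUND(sofar / NULLIF(totalwork, 0) * 100, 1) AS pct_done,
       time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;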
■ Tablespace usage of the two large tables
T_BPM_INTERFACE_INFO 710G
ORDER_SIGN 300G
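For tables like these, the LOB segments dominate the space; a sketch of how such figures can be measured (schema name masked):

-- Sum the LOB segment sizes for the two tables, in GB
SELECT l.table_name,
       ROUND(SUM(s.bytes) / 1024 / 1024 / 1024) AS lob_gb
FROM   dba_lobs     l
JOIN   dba_segments s
  ON   s.owner = l.owner
 AND   s.segment_name = l.segment_name
WHERE  l.owner = 'XXX'
AND    l.table_name IN ('T_BPM_INTERFACE_INFO', 'ORDER_SIGN')
GROUP  BY l.table_name;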
■ Because the two tables are so large, the online conversion would take too long, possibly 6 to 7 hours or more, so the plan is to convert them gradually by creating a new tablespace and new interim tables. Until a suitable window opens to request the tablespace allocation, the only option for now is to disable the logical backup jobs for the two large LOB tables. The planned approach is sketched below.
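A minimal sketch of that piecewise plan, assuming a hypothetical new tablespace TBS_SF and a hypothetical LOB column C_LOB (the real column names are not shown in this post): create an empty SecureFiles copy, backfill it in batches during quiet windows, then swap the table names.

-- Hypothetical: new tablespace for the SecureFiles copies (path and size illustrative)
CREATE TABLESPACE tbs_sf DATAFILE '/u01/oradata/xxx/tbs_sf01.dbf'
  SIZE 30G AUTOEXTEND ON;

-- Hypothetical: empty copy of ORDER_SIGN with the LOB stored as SecureFiles;
-- c_lob stands in for the real LOB column name
CREATE TABLE "xxx"."ORDER_SIGN_SF"
  TABLESPACE tbs_sf
  LOB (c_lob) STORE AS SECUREFILE (NOCACHE FILESYSTEM_LIKE_LOGGING)
AS SELECT * FROM "xxx"."ORDER_SIGN" WHERE 1 = 0;

-- Backfill in id ranges during quiet windows (direct-path insert), then
-- rename the tables once the copy is complete
INSERT /*+ APPEND */ INTO "xxx"."ORDER_SIGN_SF"
SELECT * FROM "xxx"."ORDER_SIGN" WHERE id <= 1000000;
COMMIT;

Unlike the DBMS_REDEFINITION path above, this copy approach does not carry over indexes, constraints, or grants automatically; they would have to be recreated before the swap, which is why online redefinition remains preferable whenever the time can be afforded.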
Comments
Help arrived just in time, master. Really great!!!
After the LOB column of table T_BPM_INTERFACE_INFO in a Shanxi project was changed to STORE AS SECUREFILE, the expdp time returned to normal.