Hands-on: migrating a single 8 TB tablespace with TTS - bossdb

November 23, 2023


  • Environment
  • Basic checks
  • Source-side configuration
    • Create a file system
    • Set the tablespace to read only
    • Copy the ASM data files to the file system
    • Compress the data files
    • Transfer via sftp
  • Target-side configuration
    • Create the PDB
    • Upload the data files and metadata via sftp
    • Create the necessary database link and directory
    • Create users on the target and grant privileges
    • Import public objects on the target
    • Import the user metadata on the target
    • Set the tablespace to read write
    • Object comparison
    • Import system privileges
    • Recompile invalid objects
    • Gather database and dictionary statistics
    • Revoke the DBA role
    • Change user default tablespaces
    • Troubleshooting the slow/hanging STATISTICS/MARKER import
    • Create a restore point so you can roll back at any time
    • Summary

See the previous article: https://www.modb.pro/db/1719560983037964288. There is another, similar environment that also needs its historical data archived.

Environment

Source environment:
Much like the previous one: a 2-node 19c RAC at patch level 19.13.0.0, in archivelog mode. Storage has become the constraint; threshold alerts have already started firing and there is no spare capacity for this much data. After discussion, the decision was to migrate the historical-data tablespace, roughly 8.8 TB, to another database and then drop it on the source to release the space; in other words, archive the historical data.

qhbossdb2:/home/oracle(qhbossdb2)$cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
qhbossdb2:/home/oracle(qhbossdb2)$sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Mon Nov 6 22:05:19 2023
Version 19.13.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.13.0.0.0

This tablespace is about 8 TB, but compared with the previous database it holds far fewer objects, not even in the same order of magnitude, so the metadata is fairly small.

TBS_NAME TYPE TOTAL(GB) USAGE(GB) FREE(GB) FREE PCT % EXTENSIBLE(GB) MAX_SIZE(GB) USED PCT OF MAX % NO_AXF_NUM AXF_NUM
-------------------- -------------------- --------------- --------------- --------------- ---------- --------------- --------------- ----------------- ---------- -------
TBS_OLD_DATA PERMANENT 8,820.000 6,862.125 1,957.875 77.80 .000 8,820.000 77.80 292 2

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination +ARCHIVEDG
Oldest online log sequence 78617
Next log sequence to archive 78626
Current log sequence 78626
SQL> select count(*) from dba_tables where tablespace_name='TBS_OLD_DATA';

COUNT(*)
----------
10171

SQL> select count(*) from dba_indexes where tablespace_name='TBS_OLD_DATA';

COUNT(*)
----------
2629

Target environment:
Same as last time; just create a dedicated PDB for the data.
The approach is also the same as in the previous article. Because bandwidth is limited, the data files are first copied out of ASM onto a file system and compressed with pigz, then transferred over the network; on the target they are decompressed with unpigz and copied from the file system back into ASM. That is the whole flow.
The migration is described step by step below:

Basic checks

This is essentially the tablespace self-containment check.

SQL> exec dbms_tts.transport_set_check('TBS_OLD_DATA',true);

PL/SQL procedure successfully completed.

-- Check the result; an empty result set means the tablespace set is self-contained
SQL> col violations for a100
SQL> select * from transport_set_violations;
VIOLATIONS
--------------------------------------------------------------------------------
ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DACT1 for TF_F_USER_SERVSTATE not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for TS_S_ADFEE_DAY_STAFF not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for SNAPSHOT_ACCOUNTDEPOSIT not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for SNAPSHOT_ACCOUNTDEPOSIT_SHENZW not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for SNAPSHOT_BILL not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for SNAPSHOT_BILL_SHENZW not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for TS_S_BADFEE_DAY_STAFF not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for TS_S_PFEE_DAY_STAFF not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for SNAPSHOT_LEAVEREALFEE_BAK not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for SNAPSHOT_LEAVEREALFEE not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for TS_S_WFEE_DAY_STAFF not contained in transportable set.

ORA-39921: Default Partition (Table) Tablespace TBS_ACT_DSTA for TS_S_DRTLFEE_DAY_STAFF not contained in transportable set.

12 rows selected.

Unlike the previous article, every violation for this tablespace is ORA-39921. The tables involved are all partitioned, and some of their partitions do not live in the tablespace we want to transport. So we either move those non-qualifying partitions into the tablespace being migrated, or move the qualifying partitions out to other tablespaces.
There are shortcuts here. Empty tables are easy to handle: just change the table's default attributes, or recreate it from its DDL, i.e.
ALTER TABLE "UIF_ACT1_STA"."TS_S_ADFEE_DAY_STAFF" MODIFY DEFAULT ATTRIBUTES TABLESPACE TBS_ACT_DSTA;
If a non-qualifying partition actually holds data, the data has to be relocated with a partition move, as sketched below.
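A minimal sketch of such a move (the partition name here is hypothetical; UPDATE INDEXES keeps local and global indexes usable):

-- a sketch, hypothetical partition name; move the offending partition into the
-- transported tablespace (or, going the other way, move it out of the set)
ALTER TABLE UIF_ACT1_STA.TS_S_ADFEE_DAY_STAFF
  MOVE PARTITION P_HYPOTHETICAL TABLESPACE TBS_OLD_DATA UPDATE INDEXES;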
After this cleanup, the check passes:

SQL> exec dbms_tts.transport_set_check('TBS_OLD_DATA',true);

PL/SQL procedure successfully completed.

SQL> col violations for a70
SQL> select * from transport_set_violations;

no rows selected

Source-side configuration

Create a file system

23: 2097152 MB CANDIDATE /dev/asmdisk3 grid asmadmin
24: 2097152 MB CANDIDATE /dev/asmdisk4 grid asmadmin

There are two unused large disks available for a local staging file system.
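A minimal sketch of how such a staging file system might be built with LVM (the device names, the exact disk set and the VG/LV names are assumptions inferred from the df output; the real volume ended up at 8 TB):

# a sketch, assumed devices and names; hand the unused disks to LVM and mount an xfs volume
pvcreate /dev/asmdisk3 /dev/asmdisk4
vgcreate vg_app /dev/asmdisk3 /dev/asmdisk4
lvcreate -l 100%FREE -n lv_app vg_app
mkfs.xfs /dev/vg_app/lv_app
mkdir -p /bosstest
mount /dev/vg_app/lv_app /bosstest

The resulting mount: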

qhbossdb2:/home/grid(+ASM2)$df -h
Filesystem Size Used Avail Use% Mounted on
......
/dev/mapper/vg_app-lv_app 8.0T 451G 7.6T 6% /bosstest
tmpfs 76G 0 76G 0% /run/user/8024

Set the tablespace to read only

SQL> alter tablespace TBS_OLD_DATA read only;

Tablespace altered.

Copy the ASM data files to the file system

There are 287 data files in total.

SQL> select file_name from dba_data_files where tablespace_name='TBS_OLD_DATA';
FILE_NAME
--------------------------------------------------------------------------------
+DATADG1/QHBOSSDB/DATAFILE/tbs_old_data.851.1058627863
+DATADG1/QHBOSSDB/DATAFILE/tbs_old_data.852.1058628227
+DATADG1/QHBOSSDB/DATAFILE/tbs_old_data.853.1058628285
+DATADG1/QHBOSSDB/DATAFILE/tbs_old_data.854.1058628345
+DATADG1/QHBOSSDB/DATAFILE/tbs_old_data.913.1076455835
.....
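Rather than typing 287 ASMCMD cp commands by hand, they can be generated from the dictionary; a small SQL*Plus sketch (the numbering scheme is an assumption and has to match what the parfile will later reference):

set pages 0 lines 200 feedback off
spool /tmp/asm_cp.txt
select 'cp '||file_name||' /bosstest/tbs_old_data_'||rownum||'.dbf'
  from dba_data_files
 where tablespace_name='TBS_OLD_DATA';
spool off

The spooled lines can then be pasted into, or piped through, asmcmd: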

ASMCMD> cp +DATADG1/QHBOSSDB/DATAFILE/tbs_old_data.383.1048698187 /bosstest/tbs_old_data_1.dbf
cp +DATADG1/QHBOSSDB/DATAFILE/tbs_old_data.384.1049306279 /bosstest/tbs_old_data_2.dbf
cp +DATADG2/QHBOSSDB/DATAFILE/tbs_old_data.298.1053711487 /bosstest/tbs_old_data_3.dbf
cp +DATADG2/QHBOSSDB/DATAFILE/tbs_old_data.299.1053711521 /bosstest/tbs_old_data_4.dbf
.......

Compress the data files

The files are spread across several directories and compressed in parallel with pigz:

qhbossdb2:/bosstest/1022(qhbossdb2)$pigz *

Since this is a core production database, watch the CPU utilization; if it climbs too high, use -p to limit the number of cores pigz may use.
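For example, something like this caps pigz at 16 threads (the thread count and path are assumptions):

pigz -p 16 /bosstest/1022/*.dbf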

Transfer via sftp

This stage took most of the elapsed time: copying out of ASM, compressing, transferring, decompressing and copying back into ASM took about 48 hours in total.
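A minimal sketch of the transfer itself, with the host name and directories assumed (feeding batch commands to sftp over stdin keeps it scriptable):

echo "put /bosstest/1022/*.gz /dsgarchive/bosstest/1022/" | sftp oracle@<target_host>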

Target-side configuration

Create the PDB

SQL> create pluggable database OLDBOSSDB admin user pdbadmin identified by xxxx;

Pluggable database created.

SQL> show pdbs;

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
...........
13 OLDBOSSDB MOUNTED
SQL> alter pluggable database OLDBOSSDB open;

Pluggable database altered.

Upload the data files and metadata via sftp

First decompress the transferred files:

[root@zdb034 1021]# unpigz -d *

Set ownership and permissions:

[root@zdb034 dsgarchive]# chown oracle:oinstall -R bosstest/
[root@zdb034 dsgarchive]# chmod 775 -R bosstest/

Copy from the file system into ASM:

SQL> col file_name for a100
SQL> select file_name from dba_data_files;

FILE_NAME
----------------------------------------------------------------------------------------------------
+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/system.1986.1152225833
+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/sysaux.1977.1152225833
+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/undotbs1.1979.1152225831

ASMCMD> cp /dsgarchive/bosstest/1024/tbs_old_data_1.dbf +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/
cp /dsgarchive/bosstest/1024/tbs_old_data_2.dbf +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/
cp /dsgarchive/bosstest/1024/tbs_old_data_3.dbf +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/
cp /dsgarchive/bosstest/1024/tbs_old_data_4.dbf +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/
cp /dsgarchive/bosstest/1024/tbs_old_data_7.dbf +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/
cp /dsgarchive/bosstest/1024/tbs_old_data_9.dbf +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/
.......

Create the necessary database link and directory

create public database link to_boss connect to system identified by "xxxx" using '(DESCRIPTION =(ADDRESS_LIST =(ADDRESS =(PROTOCOL = TCP)(HOST = xx.xx.xx.xx)(PORT = 1688)))(CONNECT_DATA =(SERVICE_NAME = xxossdb)))';
create directory impdp as '/home/oracle/enmo';
grant read,write on directory impdp to public;
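Before the network-mode imports it is worth a quick sanity check that the link and directory resolve; a small sketch:

select * from dual@to_boss;
select directory_name, directory_path from dba_directories where directory_name = 'IMPDP';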

Create users on the target and grant privileges

Run the following script on the source, capture its output, and run the generated CREATE USER statements on the target.

set serveroutput ON echo on
DECLARE
v_sql VARCHAR2 (2000);
BEGIN
FOR c_username IN (SELECT name, password
FROM sys.user$
WHERE name NOT IN ('ANONYMOUS',
'APEX_030200',
'APEX_PUBLIC_USER',
'APPQOSSYS',
'CTXSYS',
'DBSNMP',
'DIP',
'EXFSYS',
'FLOWS_FILES',
'MDDATA',
'MDSYS',
'MGMT_VIEW',
'OLAPSYS',
'ORACLE_OCM',
'ORDDATA',
'ORDPLUGINS',
'ORDSYS',
'OUTLN',
'OWBSYS',
'OWBSYS_AUDIT',
'SI_INFORMTN_SCHEMA',
'SPATIAL_CSW_ADMIN_USR',
'SPATIAL_WFS_ADMIN_USR',
'SYS',
'SYSMAN',
'SYSTEM',
'WMSYS',
'XDB',
'XS$NULL','DMSYS','TSMSYS')
AND TYPE# = 1)
LOOP
v_sql :=
'create user '
|| c_username.name
|| ' identified by values '||chr(39)
|| c_username.password||chr(39)
|| ';';
DBMS_OUTPUT.put_line (v_sql);
END LOOP;
END;
/

create user YL_HEWY identified by values '05D228286215613E';
create user UQRY_LC_ZHANGJIN identified by values 'B5739F5CCA86DE9F';
............
create user ENMO identified by values '0687925BF6A65291';
create user LC_DINGYP identified by values '2654914C82A1FA33';

PL/SQL procedure successfully completed.

To avoid privilege errors during the later metadata import, grant DBA to all of these users for now; it will be revoked afterwards.

grant dba to YL_HEWY ;
grant dba to UQRY_LC_ZHANGJIN ;
grant dba to UCR_PARAM ;
...........
grant dba to ENMO ;
grant dba to LC_DINGYP ;

Import public objects on the target

nohup impdp system/xxxx@oldboss include=db_link:"" in (select db_link from dba_db_links where owner='PUBLIC')"" include=profile include=role FULL=Y directory=impdp network_link=to_boss logfile=dblinkboss.log &

The log looks like this:

-bash-4.2$ cat dblinkboss.log
;;;
Import: Release 19.0.0.0.0 - Production on Mon Nov 6 23:08:26 2023
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/********@oldboss include=db_link:" in (select db_link from dba_db_links where owner='PUBLIC')" include=profile include=role FULL=Y directory=impdp network_link=to_boss logfile=dblinkboss.log
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 0 KB
Processing object type DATABASE_EXPORT/PROFILE
.........
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/SCHEMA/DB_LINK
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 3 error(s) at Mon Nov 6 23:15:58 2023 elapsed 0 00:07:15

Import the user metadata on the target

Using impdp with network_link allows the metadata to be imported without writing a dump file, but the TRANSPORT_DATAFILES and TRANSPORT_TABLESPACES parameters have to be used together.

-bash-4.2$ cat bosspar.par
logfile=bosspar.log
metrics=yes
cluster=N
directory=impdp
EXCLUDE=STATISTICS -- importing statistics turned out to take most of the time, especially /STATISTICS/MARKER, so statistics are excluded here and gathered afterwards in one pass.
network_link=to_boss
TRANSPORT_TABLESPACES=TBS_OLD_DATA
TRANSPORT_datafiles='+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_1.dbf','+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_2.dbf','+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_3.dbf','+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_4.dbf',........,'+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_293.dbf','+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_294.dbf'

nohup impdp system/xxxx@oldboss parfile=bosspar.par &
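The long TRANSPORT_DATAFILES value does not have to be typed by hand. One way to build it from what was actually copied into ASM is a sketch like the following (run as the grid user; the alias names and path are taken from above, and the trailing comma of the last line must be trimmed):

asmcmd ls +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/ | grep '^tbs_old_data_' | \
  awk '{printf "'\''+HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/%s'\'',\n", $1}'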

The progress looked like this:

-bash-4.2$ impdp system/xxxx@oldboss attach="SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01"

Import: Release 19.0.0.0.0 - Production on Wed Nov 8 08:44:50 2023
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Job: SYS_IMPORT_TRANSPORTABLE_01
Owner: SYSTEM
Operation: IMPORT
Creator Privs: TRUE
GUID: 099A59DE0B692C97E0632204BC87A154
Start Time: Wednesday, 08 November, 2023 8:41:46
Mode: TRANSPORTABLE
Remote link: to_boss
Instance: zhjqdb1
Max Parallelism: 1
Timezone: +00:00
Export timezone: +08:00
Timezone version: 32
Endianness: LITTLE
NLS character set: ZHS16GBK
NLS NCHAR character set: AL16UTF16
EXPORT Job Parameters:
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND system/********@oldboss parfile=bosspar.par
METRICS 1
TABLESPACE_DATAFILE +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_1.dbf
TRACE 0
TABLESPACE_DATAFILE +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_2.dbf
TABLESPACE_DATAFILE +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_3.dbf
TABLESPACE_DATAFILE +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_4.dbf
.............................................................
TABLESPACE_DATAFILE +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_293.dbf
TABLESPACE_DATAFILE +HDD_DATA01/ZHJQDB/097DC21E74783E8AE0632204BC879338/DATAFILE/tbs_old_data_294.dbf
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Job heartbeat: 2

Worker 1 Status:
Instance ID: 1
Instance name: zhjqdb1
Host name: zdb034
Object start time: Wednesday, 08 November, 2023 8:43:42
Object status at: Wednesday, 08 November, 2023 8:44:50
Process Name: DW00
State: EXECUTING
Object Schema: OLD_UCR_STA1
Object Name: TD_TS_S_USRS_CDRBAK20220510
Object Type: TRANSPORTABLE_EXPORT/TABLE
Completed Objects: 68
Worker Parallelism: 1

-- a little later
Import> status

Job: SYS_IMPORT_TRANSPORTABLE_01
Operation: IMPORT
Mode: TRANSPORTABLE
Remote link: to_boss
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Job heartbeat: 168

Worker 1 Status:
Instance ID: 1
Instance name: zhjqdb1
Host name: zdb034
Object start time: Wednesday, 08 November, 2023 8:43:42
Object status at: Wednesday, 08 November, 2023 11:32:2
Process Name: DW00
State: EXECUTING
Object Schema: OLD_UIF_ACT1_BI
Object Name: TF_B_WRITEOFFLOG_INBAK20220510
Object Type: TRANSPORTABLE_EXPORT/TABLE
Completed Objects: 9,414
Worker Parallelism: 1

-- a little later, importing object grants:

Import> status

Job: SYS_IMPORT_TRANSPORTABLE_01
Operation: IMPORT
Mode: TRANSPORTABLE
Remote link: to_boss
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Job heartbeat: 6

Worker 1 Status:
Instance ID: 1
Instance name: zhjqdb1
Host name: zdb034
Object start time: Wednesday, 08 November, 2023 12:12:05
Object status at: Wednesday, 08 November, 2023 12:16:39
Process Name: DW00
State: EXECUTING
Object Schema: OLD_UCR_ACT1
Object Name: DEBUG
Object Type: TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Completed Objects: 6,320
Worker Parallelism: 1

-- a little later, importing indexes:
Import> status

Job: SYS_IMPORT_TRANSPORTABLE_01
Operation: IMPORT
Mode: TRANSPORTABLE
Remote link: to_boss
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Job heartbeat: 17

Worker 1 Status:
Instance ID: 1
Instance name: zhjqdb1
Host name: zdb034
Object start time: Wednesday, 08 November, 2023 12:32:32
Object status at: Wednesday, 08 November, 2023 12:48:44
Process Name: DW00
State: EXECUTING
Object Schema: OLD_UCR_STA1
Object Name: IDX_TS_B_BILL_001
Object Type: TRANSPORTABLE_EXPORT/INDEX/INDEX
Completed Objects: 2,047
Worker Parallelism: 1

Because the later MARKER step had already been running for two days without finishing, it was far too slow. Based on the troubleshooting described below, the tablespace was dropped with INCLUDING CONTENTS, statistics were excluded, and the metadata import was rerun.
The complete metadata import log is as follows:

-bash-4.2$ cat bosspar.log
;;;
Import: Release 19.0.0.0.0 - Production on Sun Nov 12 09:42:02 2023
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
;;; **************************************************************************
;;; Parfile values:
;;; parfile: transport_datafiles=+HDD_DATA01/ZHJQDB/09DF248D78BF1C24E0632204B
;;; _parfile: C876C27/DATAFILE/tbs_old_data_1.dbf,
;;; _parfile: +HDD_DATA01/ZHJQDB/09DF248D78BF1C24E0632204BC876C27/DATAFILE/tbs
;;; _parfile: _old_data_2.dbf,
;;; _parfile: +HDD_DATA01/ZHJQDB/09DF248D78BF1C24E0632204BC876C27/DATAFILE/tbs
;;; _parfile: _old_data_3.dbf,
;;; _parfile: +HDD_DATA01/ZHJQDB/09DF248D78BF1C24E0632204BC876C27/DATAFILE/tbs
................................
;;; _parfile: +HDD_DATA01/ZHJQDB/09DF248D78BF1C24E0632204BC876C27/DATAFILE/tbs
;;; _parfile: _old_data_293.dbf,
;;; _parfile: +HDD_DATA01/ZHJQDB/09DF248D78BF1C24E0632204BC876C27/DATAFILE/tbs
;;; _parfile: _old_data_294.dbf
;;; parfile: transport_tablespaces=TBS_OLD_DATA
;;; parfile: network_link=to_boss
;;; parfile: exclude=STATISTICS
;;; parfile: directory=impdp
;;; parfile: cluster=N
;;; parfile: metrics=Y
;;; parfile: logfile=bosspar.log
;;; **************************************************************************
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/********@oldboss parfile=bosspar.par
W-1 Source time zone is +08:00 and target time zone is +00:00.
W-1 Startup took 3 seconds
W-1 Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
W-1 Completed 1 PLUGTS_BLK objects in 78 seconds
W-1 Processing object type TRANSPORTABLE_EXPORT/TABLE
W-1 Completed 11811 TABLE objects in 12430 seconds
W-1 Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
W-1 Completed 31760 OBJECT_GRANT objects in 1168 seconds
W-1 Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
W-1 Completed 2519 INDEX objects in 1116 seconds
W-1 Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_INDEX/INDEX
W-1 Completed 9 INDEX objects in 10 seconds
W-1 Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
W-1 Completed 1871 CONSTRAINT objects in 568 seconds
W-1 Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
W-1 Completed 1 PLUGTS_BLK objects in 2120 seconds
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Sun Nov 12 14:33:33 2023 elapsed 0 04:51:28

Compared with the earlier attempt, the whole import now took less than 5 hours, a big improvement.
Importing statistics, especially /STATISTICS/MARKER, accounted for most of the time, so statistics are not imported here; they are gathered for the whole database afterwards. Just add EXCLUDE=STATISTICS to the import.

Set the tablespace to read write

SQL> alter tablespace TBS_OLD_DATA read write;

Tablespace altered.

SQL> select TABLESPACE_NAME,status,CONTENTS from dba_tablespaces;

TABLESPACE_NAME STATUS CONTENTS
------------------------------ --------- ---------------------
SYSTEM ONLINE PERMANENT
SYSAUX ONLINE PERMANENT
UNDOTBS1 ONLINE UNDO
TEMP ONLINE TEMPORARY
TBS_OLD_DATA ONLINE PERMANENT

Object comparison

Tablespace usage on the target:

Mon Nov 06 page 1
Tablespace Usage Status

TBS_NAME TYPE TOTAL(GB) USAGE(GB) FREE(GB) FREE PCT % EXTENSIBLE(GB) MAX_SIZE(GB) USED PCT OF MAX % NO_AXF_NUM AXF_NUM
-------------------- -------------------- --------------- --------------- --------------- ---------- --------------- --------------- ----------------- ---------- -------
TBS_OLD_DATA PERMANENT 8,610.000 4,782.176 3,827.824 55.54 .000 8,610.000 55.54 287 0
SYSTEM PERMANENT 14.678 2.788 11.889 19.00 17.322 32.000 8.71 0 1
SYSAUX PERMANENT 2.188 2.074 .113 94.82 29.812 32.000 6.48 0 1
UNDOTBS1 UNDO 1.011 .889 .122 87.93 30.989 32.000 2.78 0 1
TEMP TEMPORARY .339 .000 .339 .00 31.661 32.000 .00 0 1

Tablespace usage on the source, followed by the object comparison:

TBS_NAME TYPE TOTAL(GB) USAGE(GB) FREE(GB) FREE PCT % EXTENSIBLE(GB) MAX_SIZE(GB) USED PCT OF MAX % NO_AXF_NUM AXF_NUM
-------------------- -------------------- --------------- --------------- --------------- ---------- --------------- --------------- ----------------- ---------- -------
TBS_OLD_DATA PERMANENT 8,820.000 6,862.115 1,957.885 77.80 .000 8,820.000 77.80 292 2
SYSTEM PERMANENT 1.074 .419 .655 38.98 30.926 32.000 1.311
SYSAUX PERMANENT .508 .478 .029 94.21 31.492 32.000 1.491
TEMP TEMPORARY .464 .000 .464 .00 31.536 32.000 .001
UNDOTBS1 UNDO .444 .444 .000 100.00 31.556 32.000 1.391

SQL> select owner,table_name from dba_tables where tablespace_name='TBS_OLD_DATA'
2 minus
3 select owner,table_name from dba_tables@to_boss where tablespace_name='TBS_OLD_DATA';

no rows selected
SQL> select owner,index_name from dba_indexes where tablespace_name='TBS_OLD_DATA'
2 minus
3 select owner,index_name from dba_indexes@to_boss where tablespace_name='TBS_OLD_DATA';
OWNER INDEX_NAME
-------------------- ------------------------------
OLD_UCR_ACT1 SYS_C0074670
OLD_UCR_ACT1 SYS_IL0002790879C00039$$
OLD_UCR_ACT1 SYS_IL0002790882C00039$$
.........
OLD_UCR_ACT1 SYS_IL0002791255C00006$$
OLD_UCR_ACT1 SYS_IL0002791255C00007$$
OLD_UCR_ACT1 SYS_IL0002791255C00008$$
OLD_UCR_ACT1 SYS_IL0002791255C00009$$
OLD_UCR_ACT1 SYS_IOT_TOP_2780721
OLD_UCR_PARAM SYS_C0074432
OLD_UCR_PARAM SYS_C0074433
OLD_UCR_PARAM SYS_C0074434
OLD_UCR_PARAM SYS_C0074435
OLD_UCR_PARAM SYS_C0074436
OLD_UCR_PARAM SYS_C0074437
OLD_UCR_PARAM SYS_C0074438
OLD_UCR_PARAM SYS_C0074439
OLD_UCR_PARAM SYS_C0074891
OLD_UCR_PARAM SYS_C0075328
OLD_UCR_PARAM SYS_IL0002790900C00002$$
OLD_UCR_STA1 SYS_IL0002790865C00003$$
OLD_UCR_STA1 SYS_IL0002790865C00012$$
OLD_UCR_STA_CEN SYS_C0073898
OLD_UCR_STA_CEN SYS_C0073899
OLD_UCR_STA_CEN SYS_IL0002790876C00002$$
OLD_UOP_ACT1 SYS_IL0002790885C00012$$
OLD_UOP_ACT1 SYS_IL0002790888C00012$$

198 rows selected.

SQL>

The extra indexes are the segments and index segments belonging to LOB columns, which are regenerated with new system-generated names during the plug-in.
So all the objects have been migrated.

Import system privileges

nohup impdp system/xxxx@oldboss metrics=yes directory=impdp network_link=TO_boss cluster=N logfile=boss11.log full=y include=system_grant,ROLE_GRANT &

Recompile invalid objects

exec utl_recomp.recomp_parallel(60);

Gather database and dictionary statistics

set serveroutput on
set timing on
exec dbms_stats.set_param('DEGREE','64');
select dbms_stats.get_param('DEGREE') from dual;
exec dbms_stats.gather_database_stats;
exec dbms_stats.gather_dictionary_stats;
exec dbms_stats.gather_fixed_objects_stats;
exec dbms_stats.set_param('DEGREE','0');
commit;

Revoke the DBA role

revoke dba from UOP_VB1 ;
revoke dba from UQRY_LC_ZHANGJIN ;
revoke dba from MD ;
revoke dba from PD ;
revoke dba from SD ;
revoke dba from BD ;
revoke dba from AD ;
revoke dba from ZD ;
.........
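The revoke statements can be generated from dba_role_privs instead of being maintained by hand; a sketch:

set pages 0
select 'revoke dba from '||grantee||' ;'
  from dba_role_privs
 where granted_role='DBA'
   and grantee not in ('SYS','SYSTEM');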

Change user default tablespaces

alter user YL_HEWY default tablespace TBS_OLD_DATA;
alter user UQRY_LC_ZHANGJIN default tablespace TBS_OLD_DATA;
alter user UCR_PARAM default tablespace TBS_OLD_DATA;
alter user UCR_ACT1 default tablespace TBS_OLD_DATA;
.............

alter user UOP_VB1 temporary tablespace temp;
alter user UQRY_LC_ZHANGJIN temporary tablespace temp;
.........
alter user LC_RUW temporary tablespace temp;
alter user WS_LIQY temporary tablespace temp;
alter user LC_WANGJ temporary tablespace temp;
alter user YD_MAWX temporary tablespace temp;
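These ALTER USER statements can also be generated over the db link, assuming the users should keep the default tablespace they had on the source; a sketch:

select 'alter user '||username||' default tablespace '||default_tablespace||';'
  from dba_users@to_boss
 where default_tablespace='TBS_OLD_DATA';
select 'alter user '||username||' temporary tablespace temp;'
  from dba_users@to_boss
 where default_tablespace='TBS_OLD_DATA';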

After three days, the migration was complete.

Troubleshooting the slow/hanging STATISTICS/MARKER import

W-1 Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS
W-1 Completed 11811 TABLE_STATISTICS objects in 85 seconds
W-1 Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at Fri Nov 10 11:07:41 2023 elapsed 2 02:25:57

Judging from the log after I finally killed the job, the MARKER statistics step had been running for two days without completing.
Before forcing it to stop, I collected the following diagnostics.
Session information:

SQL> select s.inst_id,s.sid,s.module,s.state,
2 substr(s.event, 1, 21) as event,
3 s.seconds_in_wait as secs,
4 sql.sql_text as sql_text
5 from gv$session s
6 join gv$sql sql on sql.sql_id = s.sql_id
7 where s.module like 'Data Pump%'
8 order by s.inst_id, s.module, s.sid;

INST_ID SID MODULE STATE EVENT SECS
---------- ---------- ---------------------------------------------------------------- ------------------- ------------------------------------------ ----------
SQL_TEXT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 2440 Data Pump Master WAITING wait for unread messa 4
BEGIN :1 := sys.kupc$que_int.receive(:2); END;

1 6785 Data Pump Worker WAITED SHORT TIME DLM cross inst call c 2
insert /*+ OPT_PARAM('_parallel_syspls_obey_force' 'false') */ into sys.dbms_stats_id_map_tab (c5, c1, c2, cn) select /*+ leading(s) index(s) */ distinct s.c5, s.c1, s.c2, i.partition_name cn from "SYS"."IMPDP_STATS" s, dba_ind_partitions i where s.c5 = :1 and s.c1 = :2 and s.type in ('I','i') and s.n13 is not null and s.c2 like 'SYS_%' and s.c3 is null and s.c5 = i.index_owner and s.c1 = i.index_name and s.n13 = i.partition_position and s.c2 != i.partition_name and s.statid is null

1 6785 Data Pump Worker WAITED SHORT TIME DLM cross inst call c 2
insert /*+ OPT_PARAM('_parallel_syspls_obey_force' 'false') */ into sys.dbms_stats_id_map_tab (c5, c1, c2, cn) select /*+ leading(s) index(s) */ distinct s.c5, s.c1, s.c2, i.partition_name cn from "SYS"."IMPDP_STATS" s, dba_ind_partitions i where s.c5 = :1 and s.c1 = :2 and s.type in ('I','i') and s.n13 is not null and s.c2 like 'SYS_%' and s.c3 is null and s.c5 = i.index_owner and s.c1 = i.index_name and s.n13 = i.partition_position and s.c2 != i.partition_name and s.statid is null

10046 trace


1. Find the OS spid of the impdp worker process
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
s.status, s.username, d.job_name, p.spid, s.serial#
from v$session s, v$process p, dba_datapump_sessions d
where p.addr=s.paddr and s.saddr=d.saddr;

SQL> select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
2 s.status, s.username, d.job_name, p.spid, s.serial#
3 from v$session s, v$process p, dba_datapump_sessions d
4 where p.addr=s.paddr and s.saddr=d.saddr;

DATE PROGRAM SID STATUS USERNAME JOB_NAME SPID SERIAL#
------------------- -------------------- ---------- -------- ---------- ------------------------------ ------------------------ ----------
2023-11-10 10:31:17 oracle@zdb034 (DW00) 6785 ACTIVE SYSTEM SYS_IMPORT_TRANSPORTABLE_01 12833 34804
2023-11-10 10:31:17 udi@zdb034 (TNS V1-V 7751 ACTIVE SYSTEM SYS_IMPORT_TRANSPORTABLE_01 11415 54982
3)

2023-11-10 10:31:17 oracle@zdb034 (DM00) 2440 ACTIVE SYSTEM SYS_IMPORT_TRANSPORTABLE_01 11530 2869

SQL>
SQL> oradebug setospid 12833
Oracle pid: 296, Unix process pid: 12833, image: oracle@zdb034 (DW00)
SQL> oradebug event 10046 trace name context forever,level 12;
Statement processed.
SQL> -- wait about 10 minutes
SQL> oradebug event 10046 trace name context off;
Statement processed.
SQL> oradebug tracefile_name;
/u01/app/oracle/diag/rdbms/zhjqdb/zhjqdb1/trace/zhjqdb1_dw00_12833.trc

The result:

********************************************************************************

SQL ID: ccknb8hy76mdd Plan Hash: 1226767464

insert /*+ OPT_PARAM('_parallel_syspls_obey_force' 'false') */
  into sys.dbms_stats_id_map_tab (c5, c1, c2, c3, cn)
  select /*+ leading(s) index(s) */ distinct s.c5, s.c1, s.c2, s.c3, i.subpartition_name cn
    from "SYS"."IMPDP_STATS" s, dba_ind_subpartitions i
   where s.c5 = :1 and s.c1 = :2
     and s.type in ('I','i') and s.n13 is not null
     and s.c3 like 'SYS_%'
     and s.c5 = i.index_owner and s.c1 = i.index_name
     and s.c2 = i.partition_name
     and s.n13 = i.subpartition_position
     and s.c3 != i.subpartition_name and s.statid is null

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 115 0.01 0.01 0 0 0 0
Execute 116 302.74 303.21 0 24097700 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 231 302.75 303.22 0 24097700 0 0

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 2)

This SQL spent a total of 303 seconds, 302 of them on CPU, and returned 0 rows; essentially all of the time is burned on CPU, so the execution plan is clearly far from what we would expect.
Look at the plan for this SQL:

SQL_ID ccknb8hy76mdd, child number 0
-------------------------------------
insert /*+ OPT_PARAM('_parallel_syspls_obey_force' 'false') */
into sys.dbms_stats_id_map_tab (c5, c1, c2, c3, cn) select
/*+ leading(s) index(s) */ distinct s.c5, s.c1, s.c2, s.c3,
i.subpartition_name cn from "SYS"."IMPDP_STATS" s,
dba_ind_subpartitions i where s.c5 = :1 and s.c1 = :2 and
s.type in ('I','i') and s.n13 is not null and s.c3 like
'SYS_%' and s.c5 = i.index_owner and s.c1 = i.index_name
and s.c2 = i.partition_name and s.n13 =
i.subpartition_position and s.c3 != i.subpartition_name and
s.statid is null
Plan hash value: 1226767464
-----------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 18 (100)| |BCPU(2)(.03%) |
| 1 | LOAD TABLE CONVENTIONAL | DBMS_STATS_ID_MAP_TAB | | | | |
| 2 | HASH UNIQUE | | 1 | 410 | 18 (23)| 00:00:01 |BCPU(2)(.03%) |
|* 3 | HASH JOIN | | 1 | 410 | 17 (18)| 00:00:01 |BCPU(1)(.02%) |
|* 4 | TABLE ACCESS BY INDEX ROWID BATCHED | IMPDP_STATS | 1 | 346 | 4 (25)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | IMPDP_STATS | 1 | | 4 (25)| 00:00:01 |BCPU(6150)(95.27%) |
| 6 | VIEW | DBA_IND_SUBPARTITIONS | 2 | 128 | 13 (16)| 00:00:01 |
| 7 | UNION-ALL | | | | | |
| 8 | WINDOW SORT | | 1 | 252 | 7 (15)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 252 | 6 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 1 | 244 | 5 (0)| 00:00:01 |
| 11 | NESTED LOOPS OUTER | | 1 | 241 | 5 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 230 | 5 (0)| 00:00:01 |
| 13 | NESTED LOOPS OUTER | | 1 | 216 | 4 (0)| 00:00:01 |
| 14 | NESTED LOOPS | | 1 | 211 | 4 (0)| 00:00:01 |
| 15 | NESTED LOOPS | | 1 | 120 | 4 (0)| 00:00:01 |
| 16 | NESTED LOOPS | | 1 | 107 | 4 (0)| 00:00:01 |
| 17 | NESTED LOOPS | | 1 | 68 | 4 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | USER$ | 1 | 16 | 2 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | I_USER1 | 1 | | 1 (0)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | I_OBJ2 | 1 | 52 | 2 (0)| 00:00:01 |
|* 21 | TABLE ACCESS BY INDEX ROWID | INDCOMPART$ | 1 | 39 | 0 (0)| |
|* 22 | INDEX UNIQUE SCAN | I_INDCOMPART$ | 1 | | 0 (0)| |
|* 23 | TABLE ACCESS BY INDEX ROWID | IND$ | 1 | 13 | 0 (0)| |
|* 24 | INDEX UNIQUE SCAN | I_IND1 | 1 | | 0 (0)| |
|* 25 | TABLE ACCESS BY INDEX ROWID BATCHED| INDSUBPART$ | 1 | 91 | 0 (0)| |
|* 26 | INDEX RANGE SCAN | I_INDSUBPART_POBJSUBPART$ | 1 | | 0 (0)| |
|* 27 | INDEX UNIQUE SCAN | I_DEFERRED_STG1 | 1 | 5 | 0 (0)| |
|* 28 | TABLE ACCESS BY INDEX ROWID BATCHED | OBJ$ | 1 | 14 | 1 (0)| 00:00:01 |
|* 29 | INDEX RANGE SCAN | I_OBJ1 | 1 | | 1 (0)| 00:00:01 |
| 30 | TABLE ACCESS CLUSTER | SEG$ | 1 | 11 | 0 (0)| |
|* 31 | INDEX UNIQUE SCAN | I_FILE#_BLOCK# | 1 | | 0 (0)| |
| 32 | TABLE ACCESS CLUSTER | TS$ | 1 | 3 | 0 (0)| |
|* 33 | INDEX UNIQUE SCAN | I_TS# | 1 | | 0 (0)| |
|* 34 | TABLE ACCESS CLUSTER | TAB$ | 1 | 8 | 1 (0)| 00:00:01 |
| 35 | WINDOW SORT | | 1 | 204 | 6 (17)| 00:00:01 |
| 36 | NESTED LOOPS | | 1 | 204 | 5 (0)| 00:00:01 |
| 37 | NESTED LOOPS | | 1 | 204 | 5 (0)| 00:00:01 |
| 38 | NESTED LOOPS | | 1 | 190 | 4 (0)| 00:00:01 |
| 39 | NESTED LOOPS | | 1 | 138 | 3 (0)| 00:00:01 |
| 40 | NESTED LOOPS | | 1 | 130 | 2 (0)| 00:00:01 |
| 41 | NESTED LOOPS | | 1 | 120 | 2 (0)| 00:00:01 |
| 42 | NESTED LOOPS | | 1 | 68 | 2 (0)| 00:00:01 |
| 43 | NESTED LOOPS | | 1 | 29 | 2 (0)| 00:00:01 |
| 44 | TABLE ACCESS BY INDEX ROWID | USER$ | 1 | 16 | 2 (0)| 00:00:01 |
|* 45 | INDEX UNIQUE SCAN | I_USER1 | 1 | | 1 (0)| 00:00:01 |
| 46 | INDEX FULL SCAN | I_INDPART_PARAM | 1 | 13 | 0 (0)| |
|* 47 | TABLE ACCESS BY INDEX ROWID | INDCOMPART$ | 1 | 39 | 0 (0)| |
|* 48 | INDEX UNIQUE SCAN | I_INDCOMPART$ | 1 | | 0 (0)| |
|* 49 | TABLE ACCESS BY INDEX ROWID BATCHED| INDSUBPART$ | 1 | 52 | 0 (0)| |
|* 50 | INDEX RANGE SCAN | I_INDSUBPART_POBJSUBPART$ | 1 | | 0 (0)| |
| 51 | TABLE ACCESS BY INDEX ROWID | IND$ | 1 | 10 | 0 (0)| |
|* 52 | INDEX UNIQUE SCAN | I_IND1 | 1 | | 0 (0)| |
|* 53 | TABLE ACCESS CLUSTER | TAB$ | 1 | 8 | 1 (0)| 00:00:01 |
|* 54 | TABLE ACCESS BY INDEX ROWID BATCHED | OBJ$ | 1 | 52 | 1 (0)| 00:00:01 |
|* 55 | INDEX RANGE SCAN | I_OBJ1 | 1 | | 1 (0)| 00:00:01 |
|* 56 | INDEX RANGE SCAN | I_OBJ1 | 1 | | 1 (0)| 00:00:01 |
|* 57 | TABLE ACCESS BY INDEX ROWID | OBJ$ | 1 | 14 | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("S"."C5"="I"."INDEX_OWNER" AND "S"."C1"="I"."INDEX_NAME" AND "S"."C2"="I"."PARTITION_NAME" AND
"S"."N13"="I"."SUBPARTITION_POSITION")
filter("S"."C3""I"."SUBPARTITION_NAME")
4 - filter("S"."N13" IS NOT NULL)
5 - access("S"."STATID" IS NULL AND "S"."C5"=:1 AND "S"."C1"=:2 AND "S"."C3" LIKE 'SYS_%')
filter(("S"."C1"=:2 AND "S"."C5"=:1 AND "S"."C3" LIKE 'SYS_%' AND INTERNAL_FUNCTION("S"."TYPE") AND "S"."C2"
IS NOT NULL))
19 - access("U"."NAME"=:1)
20 - access("U"."USER#"="PO"."OWNER#" AND "PO"."NAME"=:2 AND "PO"."NAMESPACE"=4 AND "PO"."REMOTEOWNER" IS NULL AND
"PO"."LINKNAME" IS NULL)
filter("PO"."LINKNAME" IS NULL)
21 - filter(BITAND("ICP"."FLAGS",8388608)=0)
22 - access("PO"."OBJ#"="ICP"."OBJ#")
23 - filter("I"."TYPE#"9)
24 - access("ICP"."BO#"="I"."OBJ#")
25 - filter(BITAND("ISP"."FLAGS",8388608)=0)
26 - access("ICP"."OBJ#"="ISP"."POBJ#")
27 - access("ISP"."OBJ#"="DS"."OBJ#")
28 - filter(("SO"."NAMESPACE"=4 AND "SO"."REMOTEOWNER" IS NULL AND "SO"."LINKNAME" IS NULL))
29 - access("SO"."OBJ#"="ISP"."OBJ#")
31 - access("ISP"."TS#"="S"."TS#" AND "ISP"."FILE#"="S"."FILE#" AND "ISP"."BLOCK#"="S"."BLOCK#")
33 - access("ISP"."TS#"="TS"."TS#")
34 - filter(("I"."BO#"="T"."OBJ#" AND BITAND("T"."TRIGFLAG",1073741824)<>1073741824))
45 - access("U"."NAME"=:1)
47 - filter(BITAND("ICP"."FLAGS",8388608)=0)
48 - access("ICP"."OBJ#"="IPP"."OBJ#")
49 - filter(BITAND("ISP"."FLAGS",8388608)=0)
50 - access("ICP"."OBJ#"="ISP"."POBJ#")
52 - access("ICP"."BO#"="I"."OBJ#")
53 - filter(("I"."BO#"="T"."OBJ#" AND BITAND("T"."TRIGFLAG",1073741824)<>1073741824))
54 - filter(("PO"."NAME"=:2 AND "PO"."NAMESPACE"=4 AND "PO"."REMOTEOWNER" IS NULL AND "PO"."LINKNAME" IS NULL))
55 - access("PO"."OBJ#"="ICP"."OBJ#" AND "U"."USER#"="PO"."OWNER#")
56 - access("SO"."OBJ#"="ISP"."OBJ#")
57 - filter(("SO"."NAMESPACE"=4 AND "SO"."REMOTEOWNER" IS NULL AND "SO"."LINKNAME" IS NULL))
Nearly all of the CPU time (95.27% in the plan above) goes to the INDEX RANGE SCAN of IMPDP_STATS.


Historical execution statistics:

SQL> @sqlhist_new
Enter value for sql_id: ccknb8hy76mdd

SNAP_ID BEGIN_TIME PLAN_HASH_VALUE EXEC BUFFER_GETS PER_GET ROWS_PROCESSED PER_ROWS TOTAL_TIME_MS TIME_MS PER_READ
------- ------------------------- --------------- ------------ --------------- --------------- -------------- ---------- ------------- --------------- ---------------
31847 2023-11-08 14:00~15:00 1226767464 230 47,864,691 208,107.35 0 0 593,423 2,580.10 0.05
31848 2023-11-08 15:00~16:00 1226767464 181 37,608,577 207,782.19 0 0 468,150 2,586.46 0.00
31852 2023-11-08 19:00~20:00 1226767464 404 84,015,032 207,958.00 0 0 1,065,163 2,636.54 0.00
31856 2023-11-08 23:00~00:00 1226767464 375 78,073,397 208,195.73 0 0 1,001,163 2,669.77 0.00
31857 2023-11-09 00:00~01:00 1226767464 58 11,972,417 206,420.98 0 0 154,912 2,670.89 0.00
31860 2023-11-09 03:00~04:00 1226767464 263 54,692,954 207,958.00 0 0 685,271 2,605.59 0.00
31861 2023-11-09 04:00~05:00 1226767464 189 39,304,062 207,958.00 0 0 497,662 2,633.13 0.00
31864 2023-11-09 07:00~08:00 1226767464 138 28,698,204 207,958.00 0 0 364,557 2,641.72 0.00
31865 2023-11-09 08:00~09:00 1226767464 320 66,546,560 207,958.00 0 0 847,912 2,649.72 0.00
31868 2023-11-09 11:00~12:00 1226767464 23 4,783,034 207,958.00 0 0 59,310 2,578.69 0.00
31869 2023-11-09 12:00~13:00 1226767464 414 86,094,612 207,958.00 0 0 1,060,037 2,560.48 0.00
31873 2023-11-09 16:00~17:00 1226767464 414 86,094,612 207,958.00 0 0 1,070,189 2,585.00 0.00
31877 2023-11-09 20:00~21:00 1226767464 420 87,342,360 207,958.00 0 0 1,095,037 2,607.23 0.00
31882 2023-11-10 01:00~02:00 1226767464 191 39,673,947 207,717.00 0 0 489,638 2,563.55 0.00
31883 2023-11-10 02:00~03:00 1226767464 220 45,697,740 207,717.00 0 0 576,832 2,621.96 0.00
31887 2023-11-10 06:00~07:00 1226767464 404 83,884,184 207,634.12 0 0 1,073,224 2,656.50 0.00
31891 2023-11-10 10:00~11:00 1226767464 433 89,943,989 207,722.84 0 0 1,135,863 2,623.24 0.00

17 rows selected.

Each execution returns 0 rows yet does roughly 200,000 logical reads and takes about 2.5 seconds; multiplied by hundreds of executions, the total elapsed time becomes enormous.
See the MOS note: Slow DataPump Export/Import of Object Type /STATISTICS/MARKER (Doc ID 2387329.1)
Create a suitable index:

SQL> alter session set container=OLDBOSSDB;

Session altered.
SQL> create index SYS.IMPDP_STATS_1 ON SYS.IMPDP_STATS (c5,type,c1,c2,c3,n13,statid);

Drop the tablespace definition and re-import the metadata:

--Drop the TBS_OLD_DATA tablespace and its contents, keeping the data files
SQL> drop tablespace TBS_OLD_DATA including contents;

SQL> select s.inst_id,s.sid,s.module,s.state,
2 substr(s.event, 1, 21) as event,
3 s.seconds_in_wait as secs,
4 sql.sql_id,
5 sql.sql_text as sql_text
6 from gv$session s
7 join gv$sql sql on sql.sql_id = s.sql_id
8 where s.module like 'Data Pump%'
9 order by s.inst_id, s.module, s.sid;

INST_ID SID MODULE STATE EVENT SECS SQL_ID SQL_TEXT
---------- ---------- -------------------- ------------------- -------------------- ---------- ------------- ----------------------------------------------------------------------------------------------------
1 7504 Data Pump Master WAITING wait for unread mess 2 bjf05cwcj5s6p BEGIN :1 := sys.kupc$que_int.receive(:2); END;
a

1 4966 Data Pump Worker WAITED SHORT TIME enq: CR - block rang 0 79npqs02jfh5t insert /*+ OPT_PARAM('_parallel_syspls_obey_force' 'false') */ into sys.dbms_stats_id_m
e ap_tab (c5, c1, c2, cn) select /*+ leading(s) index(s) */ distinct s.c5, s.c1, s.c2,
d.partition_name cn from "SYS"."IMPDP_STATS" s, (select u.name table_owner, op.name
table_name, op.subname partition_name, tp.part# partition_position
from user$ u, obj$ op, (select obj#, part# from tabpartv$ where
bo# = :1 union all select obj#, part# from tabcompartv$
where bo# = :2 ) tp where u.user# = op.owner# and op.type# = 19 and op.
obj# = tp.obj# ) d where s.c5 = :3 and s.c1 = :4 and s.type in ('T','C','E','
P','H','B','t','c','M','U','G','L') and s.n13 is not null and s.c2 like 'SYS_%' and s.c3 is
null and s.c5 = d.table_owner and s.c1 = d.table_name and s.n13 = d.partition_posit

During the metadata re-import, the execution plan of sql_id 79npqs02jfh5t has changed, and its runtime profile has changed with it:


+------------------------------------------------------------------------+
| information from v$sqlstats |
+------------------------------------------------------------------------+

CPU(MS) ELA(MS) DISK GET ROWS ROWS APPLI(MS) CONCUR(MS) CLUSTER(MS) USER_IO(MS) PLSQL JAVA
EXEC PRE EXEC PRE EXEC PRE EXEC PRE EXEC PRE EXEC PRE FETCH PER EXEC PER EXEC PER EXEC PER EXEC PER EXEC PER EXEC SQL_PROFILE
---------- -------- -------- -------- ------------ ----------- --------- --------- ---------- ----------- ----------- -------- -------- ---------------
11811 1 2 0 42 0 0 0 0 0 2 0 0
+------------------------------------------------------------------------+
| information from awr sysdate-7 |
+------------------------------------------------------------------------+

PLAN GET DISK WRITE ROWS ROWS USER_IO(MS) ELA(MS) CPU(MS) CLUSTER(MS) PLSQL
END_TI I NAME HASH VALUE EXEC PRE EXEC PRE EXEC PER EXEC ROW_P PRE EXEC PRE FETCH PER EXEC PRE EXEC PRE EXEC PER EXEC PER EXEC
------ - --------------- ------------- ---------- ------------ -------- -------- ----- ----------- --------- ----------- -------- -------- ----------- --------
08 16 1 SYS 845067460 240 207,969 0 0 0 0 0 57 ######## ######## 0 0
08 17 1 SYS 845067460 363 207,959 0 0 0 0 0 0 ######## ######## 0 0
08 18 1 SYS 845067460 377 208,011 0 0 0 0 0 0 ######## ######## 0 0
08 19 1 SYS 845067460 428 207,912 0 0 0 0 0 0 ######## ######## 0 0
08 20 1 SYS 845067460 144 207,959 0 0 0 0 0 0 ######## ######## 0 0
08 21 1 SYS 845067460 423 207,959 0 0 0 0 0 0 ######## ######## 0 0
08 22 1 SYS 845067460 422 208,018 0 0 0 0 0 0 ######## ######## 0 0
08 23 1 SYS 845067460 419 207,899 0 0 0 0 0 0 ######## ######## 0 0
09 00 1 SYS 845067460 176 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 01 1 SYS 845067460 376 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 02 1 SYS 845067460 427 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 03 1 SYS 845067460 420 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 04 1 SYS 845067460 261 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 05 1 SYS 845067460 297 208,355 0 0 0 0 0 0 ######## ######## 0 0
09 06 1 SYS 845067460 432 208,094 0 0 0 0 0 0 ######## ######## 0 0
09 07 1 SYS 845067460 425 207,821 0 0 0 0 0 0 ######## ######## 0 0
09 08 1 SYS 845067460 332 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 09 1 SYS 845067460 213 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 10 1 SYS 845067460 426 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 11 1 SYS 845067460 428 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 12 1 SYS 845067460 409 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 13 1 SYS 845067460 158 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 14 1 SYS 845067460 438 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 15 1 SYS 845067460 427 207,959 0 0 0 0 0 23 ######## ######## 0 0
09 16 1 SYS 845067460 438 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 17 1 SYS 845067460 145 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 18 1 SYS 845067460 427 207,986 0 0 0 0 0 0 ######## ######## 0 0
09 19 1 SYS 845067460 428 208,187 0 0 0 0 0 0 ######## ######## 0 0
09 20 1 SYS 845067460 435 207,707 0 0 0 0 0 0 ######## ######## 0 0
09 21 1 SYS 845067460 149 207,959 0 0 0 0 0 0 ######## ######## 0 0
09 22 1 SYS 845067460 422 208,130 0 0 0 0 0 0 ######## ######## 0 0
09 23 1 SYS 845067460 427 207,953 0 0 0 0 0 0 ######## ######## 0 0
10 00 1 SYS 845067460 423 207,793 0 0 0 0 0 0 ######## ######## 0 0
10 01 1 SYS 845067460 55 207,959 0 0 0 0 0 0 ######## ######## 0 0
10 03 1 SYS 845067460 254 208,016 0 0 0 0 0 0 ######## ######## 0 0
10 04 1 SYS 845067460 425 207,820 0 0 0 0 0 0 ######## ######## 0 0
10 05 1 SYS 845067460 424 207,824 0 0 0 0 0 0 ######## ######## 0 0
10 06 1 SYS 845067460 402 207,263 0 0 0 0 0 0 ######## ######## 0 0
10 07 1 SYS 845067460 155 207,718 0 0 0 0 0 0 ######## ######## 0 0
10 08 1 SYS 845067460 410 207,718 0 0 0 0 0 0 ######## ######## 0 0
10 09 1 SYS 845067460 386 208,016 0 0 0 0 0 0 ######## ######## 0 0
10 10 1 SYS 845067460 431 207,718 0 0 0 0 0 0 ######## ######## 0 0
10 11 1 SYS 845067460 143 207,718 0 0 0 0 0 0 ######## ######## 0 0
10 12 1 SYS 845067460 44 207,718 0 0 0 0 0 0 ######## ######## 0 0

44 rows selected.

Per-execution elapsed time and logical reads both dropped by orders of magnitude.
References:
Primary Note for Transportable Tablespaces (TTS) – Common Questions and Issues (Doc ID 1166564.1)
Slow DataPump Export/Import of Object Type /STATISTICS/MARKER (Doc ID 2387329.1)
Data pump Import Job Is Hanging On STATISTICS/MARKER (Doc ID 2135702.1)

It was still slow later on; since each run took so long, I stopped experimenting and simply excluded the statistics from the import with EXCLUDE=STATISTICS.

Create a restore point so you can roll back at any time

As we saw above, importing the MARKER statistics took a very long time. After killing the job, in order to exclude statistics and import again, we dropped the tablespace with DROP TABLESPACE ... INCLUDING CONTENTS, wiping everything about it, and then reran the metadata import. That is itself time-consuming, and with OMF-managed files it would also delete the data files. Is there a better way to get back to the point just before the metadata import and run it again?
There certainly is: a restore point. Create one before importing the metadata; whenever the import has to be redone, flash back to that point and start over, with no other impact on the database.

In Oracle Database 12.1, flashback database was limited to the CDB; starting with 12.2, both CDB and PDB flashback are supported. PDB-level flashback, new in 12.2, is fast and convenient, can be done with either RMAN or SQL, and the PDB's existing backups remain valid after the flashback.
A flashback can be run against a single PDB in a multitenant database; it only modifies data in the files belonging to that PDB, while the other PDBs in the CDB stay open read/write and are unaffected. When restore points are used, the flashback can target a CDB restore point, a PDB restore point, a PDB clean restore point, or a PDB guaranteed restore point.

1. Prerequisites for flashback database
1) The database must be in archivelog mode; otherwise ORA-38784 and ORA-38785 are raised.
2) During the flashback the database must be in MOUNT state, not open; otherwise ORA-38757 is raised.

SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /u01/app/oracle/product/19.3.0/db_1/dbs/arch
Oldest online log sequence 21747
Current log sequence 21762
SQL> create restore point orapdb_20231110 guarantee flashback database;
create restore point orapdb_20231110 guarantee flashback database
*
ERROR at line 1:
ORA-38784: Cannot create restore point 'ORAPDB_20231110'.
ORA-38785: Media recovery must be enabled for guaranteed restore point.

SQL> flashback database to restore point orapdb_20231110;
flashback database to restore point orapdb_20231110
*
ERROR at line 1:
ORA-38757: Database must be mounted and not open to FLASHBACK.

2. Create the restore point and flash back

SQL> show parameter recover;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +DATADG
db_recovery_file_dest_size big integer 2T
db_unrecoverable_scn_tracking boolean TRUE
recovery_parallelism integer 0
remote_recovery_file_dest string
13:36:23.920
SQL> create restore point orapdb_20231110 guarantee flashback database;
select name,guarantee_flashback_database,to_char(scn) from v$restore_point;

Restore point created.

SQL>
NAME
--------------------------------------------------------------------------------
GUA TO_CHAR(SCN)
--- ----------------------------------------
ORAPDB_20231110
SQL> flashback pluggable database orapdb to restore point orapdb_20231110;
flashback pluggable database orapdb to restore point orapdb_20231110
*
ERROR at line 1:
ORA-65025: Pluggable database is not closed on all instances.
SQL> shutdown immediate;
Pluggable Database closed.
SQL> flashback pluggable database orapdb to restore point orapdb_20231110;

Flashback complete.

SQL> show pdbs;

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
3 ORAPDB MOUNTED
SQL> alter pluggable database orapdb open;
alter pluggable database orapdb open
*
ERROR at line 1:
ORA-01113: file 13 needs media recovery
ORA-01110: data file 13:
'+DATADG/ORA19C/AEC97BEC0A343F08E053170B010A9002/DATAFILE/users.278.1050594329'

SQL> alter pluggable database orapdb open resetlogs;

Pluggable database altered.

The syntax, in summary:

--create
archive log list;
create restore point orapdb_20231110 guarantee flashback database;
select name,guarantee_flashback_database,to_char(scn) from v$restore_point;
--flash back
shutdown immediate;
startup mount
flashback pluggable database orapdb to restore point orapdb_20231110;
alter pluggable database orapdb open resetlogs;
drop restore point orapdb_20231110;

3. Verify the effect of the flashback

SQL> create table 20231101(id int);
create table 20231101(id int)
*
ERROR at line 1:
ORA-00903: invalid table name

SQL> create table t_20231101(id,int);
create table t_20231101(id,int)
*
ERROR at line 1:
ORA-02263: need to specify the datatype for this column

SQL> create table t_20231101(id int);

Table created.

SQL> select count(*) from t_20231101;

COUNT(*)
----------
0

SQL> show pdbs;

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
3 ORAPDB READ WRITE NO
SQL> shutdown immediate;
Pluggable Database closed.
SQL> flashback pluggable database orapdb to restore point orapdb_20231110;

Flashback complete.

SQL> alter pluggable database orapdb open resetlogs;

Pluggable database altered.

SQL> select count(*) from t_20231101;
select count(*) from t_20231101
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> drop restore point orapdb_20231110;

Restore point dropped.

4. Since 12.2, flashback is supported at the PDB level, without affecting other PDBs or the CDB.
When working with restore points, note that flashing back to an earlier restore point requires the flashback logs after that point to be intact. Restore points can be created at either the CDB or the PDB level; the difference is simply their scope.

SQL> show pdbs;

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
3 ORAPDB READ WRITE NO
SQL> create restore point orapdb_20231110 guarantee flashback database;

Restore point created.

SQL> select name,guarantee_flashback_database,to_char(scn) from v$restore_point;

NAME GUA TO_CHAR(SCN)
-------------------- --- ----------------------------------------
ORAPDB_20231110 YES 1575039
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 746583112 bytes
Fixed Size 9139272 bytes
Variable Size 427819008 bytes
Database Buffers 301989888 bytes
Redo Buffers 7634944 bytes
Database mounted.
SQL> flashback database to restore point orapdb_20231110;
flashback database to restore point orapdb_20231110
*
ERROR at line 1:
ORA-38780: Restore point 'ORAPDB_20231110' does not exist.

SQL> alter database open;
select name,guarantee_flashback_database,to_char(scn) from v$restore_point;
Database altered.

SQL>

NAME GUA TO_CHAR(SCN)
-------------------- --- ----------------------------------------
ORAPDB_20231110 YES 1575039

SQL> drop restore point orapdb_20231110;
drop restore point orapdb_20231110
*
ERROR at line 1:
--The restore point orapdb_20231110 above was created at the PDB level, so referencing it at the CDB level fails; it is only valid inside that PDB.
SQL> alter session set container=orapdb;

Session altered.

SQL> select name,guarantee_flashback_database,to_char(scn) from v$restore_point;

NAME GUA TO_CHAR(SCN)
-------------------- --- ----------------------------------------
ORAPDB_20231110 YES 1575039

SQL> drop restore point orapdb_20231110;

Restore point dropped.

--Once inside the PDB, the restore point can be dropped; it belongs to that PDB.

SQL> conn / as sysdba
show pdbs;
Connected.
SQL>
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 ORAPDB MOUNTED
SQL> create restore point orapdb_20231110 guarantee flashback database;

Restore point created.

SQL> select name,guarantee_flashback_database,to_char(scn) from v$restore_point;

NAME GUA TO_CHAR(SCN)
-------------------- --- ----------------------------------------
ORAPDB_20231110 YES 1578910

SQL> flashback pluggable database orapdb to restore point orapdb_20231110;

Flashback complete.

SQL> alter session set container=orapdb;

Session altered.

SQL> flashback pluggable database orapdb to restore point orapdb_20231110;

Flashback complete.

SQL> conn / as sysdba
Connected.
SQL> drop restore point orapdb_20231110;

Restore point dropped.
--The restore point above was created at the CDB level; as shown, it is effective in both the CDB and the PDB.

In short, a restore point created at the PDB level applies only to that PDB, while one created at the CDB level is effective globally.
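A quick way to check the scope of a restore point is to query v$restore_point from the root; its CON_ID column shows which container each point belongs to (a sketch):

select con_id, name, guarantee_flashback_database, to_char(scn) from v$restore_point;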

Summary

  1. Although the tablespace being migrated here is large, it contains relatively few objects; even so, the metadata import took a long time. Importing statistics, especially /STATISTICS/MARKER, accounted for most of it. Add EXCLUDE=STATISTICS to the import, skip the statistics, and gather them for the whole database afterwards.
  2. On the 2-node RAC, to reduce contention, close the PDB on node 2 and work with it open only on node 1; once the migration is finished, open it on node 2 again.
  3. Before importing metadata on the target, create a restore point first (12.2 supports PDB-level flashback). Then, if the statistics import turns out to be slow, you can flash back, exclude the statistics, and import again; and if the metadata import aborts abnormally, rerunning it would otherwise fail, so a flashback lets you redo it cleanly and stay on the safe side. For both XTTS and TTS, create a restore point before every metadata import to reduce the chance of problems, and drop it once the import completes.
     Since 12.2, individual PDBs can be flashed back.

