Preface
To shorten Data Pump export/import time, large tables are often exported separately from the rest of the schema, especially when a large table contains LOB columns, which makes the export even slower. Exporting such a table on its own can still take a long time, so rowid-based slicing can be used to split the table into several pieces that are exported concurrently, improving export efficiency.
1. Large Table Information
The table has only 1,525,536 rows and its table segment is only about 1.8 GB, but it has two CLOB columns, and the LOB segment is 13.71 GB.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0
SQL> select count(*) from PROD.T_CA_TX_HIS;
COUNT(*)
----------
1525536
SQL> set line 120
SQL> col OWNER for a30
SQL> col SEGMENT_NAME for a30
SQL> select OWNER,SEGMENT_NAME,BYTES/1024/1024/1024 from dba_segments where SEGMENT_NAME='T_CA_TX_HIS' and OWNER='PROD';
OWNER SEGMENT_NAME BYTES/1024/1024/1024
------------------------------ ------------------------------ --------------------
PROD T_CA_TX_HIS 1.83007813
SQL> desc PROD.T_CA_TX_HIS
Name Null? Type
----------------------------------------------------------------- -------- --------------------------------------------
SEQUENCE_NO NOT NULL NUMBER(18)
PK_TX_HIS NOT NULL VARCHAR2(36)
FK_USER_CCBSCF VARCHAR2(36)
FK_PERSON_CCBSCF VARCHAR2(36)
SOFT_CA_BRAND NOT NULL VARCHAR2(20)
SOFT_CA_USER_ID VARCHAR2(50)
TARGET_URL VARCHAR2(300)
TX_CODE VARCHAR2(20)
TX_REQUEST CLOB
TX_RESPONSE CLOB
TX_ERROR_CODE VARCHAR2(50)
TX_ERROR_MESSAGE VARCHAR2(500)
CREATE_TIME NOT NULL DATE
FK_USER_CREATE NOT NULL VARCHAR2(36)
USER_NAME_CREATE NOT NULL VARCHAR2(60)
SQL> col SEGMENT_NAME for a30
SQL> select segment_name,BYTES/1024/1024/1024 from dba_segments where OWNER='PROD' group by segment_name,BYTES/1024/1024/1024 order by 2 asc;
......... output omitted .........
SEGMENT_NAME BYTES/1024/1024/1024
------------------------------ --------------------
T_OPEN_MESSAGE_SMS_RECORD 2.8125
SYS_LOB0000167075C00010$$ 13.7119141
--Check the size of LOB columns under a given schema
SET LINE 345 PAGES 456
COL OWNER FOR a20
COL TABLE_NAME FOR A40
COL SEGMENT_TYPE FOR A20
col COLUMN_NAME FOR A35
SELECT
A.OWNER,
B.TABLE_NAME,
B.COLUMN_NAME,
a.SEGMENT_TYPE,
ROUND(SUM(A.BYTES/1024/1024/1024),2) G
FROM DBA_SEGMENTS A
LEFT JOIN DBA_LOBS B
ON A.OWNER = B.OWNER
AND A.SEGMENT_NAME = B.SEGMENT_NAME
WHERE A.SEGMENT_TYPE='LOBSEGMENT'
AND A.OWNER in('&OWNER')
GROUP BY A.OWNER,B.TABLE_NAME,B.COLUMN_NAME,a.SEGMENT_TYPE
ORDER BY 5 DESC;
OWNER TABLE_NAME COLUMN_NAME SEGMENT_TYPE G
-------------------- ---------------------------------------- ----------------------------------- -------------------- ----------
PROD T_CA_TX_HIS TX_RESPONSE LOBSEGMENT 13.71
2. Exporting the Large Table Normally
SQL> set linesize 9999
SQL> col OWNER for a10
SQL> col DIRECTORY_NAME for a30
SQL> col DIRECTORY_PATH for a60
SQL> select * from dba_directories;
SQL> create or replace directory PUBLIC_DUMP as '/data/ora-share';
SQL> grant read,write on directory PUBLIC_DUMP to public;
$ expdp PROD/proD_#31 directory=PUBLIC_DUMP LOGFILE=PROD.T_CA_TX_HIS.log dumpfile=PROD.T_CA_TX_HIS.dmp tables=T_CA_TX_HIS COMPRESSION=ALL CLUSTER=N
Export: Release 19.0.0.0.0 - Production on Wed Nov 29 15:00:36 2023
Version 19.15.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
FLASHBACK automatically enabled to preserve database integrity.
Starting "PROD"."SYS_EXPORT_TABLE_01": PROD/******** directory=PUBLIC_DUMP LOGFILE=PROD.T_CA_TX_HIS.log dumpfile=PROD.T_CA_TX_HIS.dmp tables=T_CA_TX_HIS COMPRESSION=ALL CLUSTER=N
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "PROD"."T_CA_TX_HIS" 3.710 GB 1524771 rows
Master table "PROD"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for PROD.SYS_EXPORT_TABLE_01 is:
/data/ora-share/PROD.T_CA_TX_HIS.dmp
Job "PROD"."SYS_EXPORT_TABLE_01" successfully completed at Wed Nov 29 16:16:40 2023 elapsed 0 01:16:03
--Export the schema's other tables and objects while excluding the large table T_CA_TX_HIS
nohup expdp \'/ as sysdba\' directory=PUBLIC_DUMP dumpfile=expdp_prod_cc-2023-11-25_%U.dmp COMPRESSION=ALL exclude=TABLE:\"IN \'T_CA_TX_HIS\'\" exclude=statistics parallel=4 cluster=no schemas=PROD_CC logfile=expdp_PROD_CC112521.log &
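While a long export like this is running, the job state and rough progress can be watched from another session; a minimal sketch using the standard dictionary views (the filters are assumptions to adapt):
--Check Data Pump job status
select owner_name,job_name,operation,job_mode,state,degree from dba_datapump_jobs;
--Rough progress of the running job (UNITS is usually MB for Data Pump)
select sid,serial#,opname,round(sofar/totalwork*100,2) pct_done,time_remaining
from v$session_longops
where opname like 'SYS_EXPORT%' and totalwork > 0 and sofar <> totalwork;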
3. Exporting the Large LOB Table in Slices
--A large LOB table can be exported in slices based on rowid
vi tableid.par
userid='/ as sysdba'
directory=PUBLIC_DUMP
content=ALL
compression=ALL
cluster=no
tables=PROD.T_CA_TX_HIS
nohup expdp parfile=tableid.par dumpfile=T_CA_TX_HIS_01.dmp logfile=tableid_01.log query=\"where mod\(dbms_rowid.rowid_block_number\(rowid\),8\)=0\" &
nohup expdp parfile=tableid.par dumpfile=T_CA_TX_HIS_02.dmp logfile=tableid_02.log query=\"where mod\(dbms_rowid.rowid_block_number\(rowid\),8\)=1\" &
nohup expdp parfile=tableid.par dumpfile=T_CA_TX_HIS_03.dmp logfile=tableid_03.log query=\"where mod\(dbms_rowid.rowid_block_number\(rowid\),8\)=2\" &
nohup expdp parfile=tableid.par dumpfile=T_CA_TX_HIS_04.dmp logfile=tableid_04.log query=\"where mod\(dbms_rowid.rowid_block_number\(rowid\),8\)=3\" &
nohup expdp parfile=tableid.par dumpfile=T_CA_TX_HIS_05.dmp logfile=tableid_05.log query=\"where mod\(dbms_rowid.rowid_block_number\(rowid\),8\)=4\" &
nohup expdp parfile=tableid.par dumpfile=T_CA_TX_HIS_06.dmp logfile=tableid_06.log query=\"where mod\(dbms_rowid.rowid_block_number\(rowid\),8\)=5\" &
nohup expdp parfile=tableid.par dumpfile=T_CA_TX_HIS_07.dmp logfile=tableid_07.log query=\"where mod\(dbms_rowid.rowid_block_number\(rowid\),8\)=6\" &
nohup expdp parfile=tableid.par dumpfile=T_CA_TX_HIS_08.dmp logfile=tableid_08.log query=\"where mod\(dbms_rowid.rowid_block_number\(rowid\),8\)=7\" &
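The eight commands above differ only in the slice number, so they can also be generated with a small shell loop (a sketch that mirrors the commands above; the parfile, file names and the divisor 8 are unchanged):
#Launch one expdp per rowid slice
for i in 0 1 2 3 4 5 6 7
do
  n=$(printf "%02d" $((i+1)))
  nohup expdp parfile=tableid.par dumpfile=T_CA_TX_HIS_${n}.dmp logfile=tableid_${n}.log query=\"where mod\(dbms_rowid.rowid_block_number\(rowid\),8\)=${i}\" &
done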
Checking the logs, each slice took roughly 22 minutes, whereas exporting without rowid slicing took over an hour (01:16:03).
more tableid_08.log
;;;
Export: Release 19.0.0.0.0 - Production on Wed Nov 29 16:53:42 2023
Version 19.15.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
;;; **************************************************************************
;;; Parfile values:
;;; parfile: tables=PROD.T_CA_TX_HIS
;;; parfile: compression=ALL
;;; parfile: content=ALL
;;; parfile: directory=PUBLIC_DUMP
;;; parfile: userid=/******** AS SYSDBA
;;; **************************************************************************
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYS"."SYS_EXPORT_TABLE_08": /******** AS SYSDBA parfile=tableid.par dumpfile=T_CA_TX_HIS_08.dmp logfile=tableid_08.log query="where mod(dbms_rowid.rowid_block_number(rowid),8)=7"
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "PROD"."T_CA_TX_HIS" 471.9 MB 190827 rows
Master table "SYS"."SYS_EXPORT_TABLE_08" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_08 is:
/data/ora-share/T_CA_TX_HIS_08.dmp
Job "SYS"."SYS_EXPORT_TABLE_08" successfully completed at Wed Nov 29 17:15:40 2023 elapsed 0 00:21:50
Job "SYS"."SYS_EXPORT_TABLE_05" successfully completed at Wed Nov 29 17:15:40 2023 elapsed 0 00:21:54
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Wed Nov 29 17:16:26 2023 elapsed 0 00:22:42
Job "SYS"."SYS_EXPORT_TABLE_07" successfully completed at Wed Nov 29 17:17:50 2023 elapsed 0 00:24:01
Job "SYS"."SYS_EXPORT_TABLE_02" successfully completed at Wed Nov 29 17:17:27 2023 elapsed 0 00:23:42
Job "SYS"."SYS_EXPORT_TABLE_04" successfully completed at Wed Nov 29 17:16:45 2023 elapsed 0 00:22:59
Job "SYS"."SYS_EXPORT_TABLE_03" successfully completed at Wed Nov 29 17:16:24 2023 elapsed 0 00:22:38
Job "SYS"."SYS_EXPORT_TABLE_06" successfully completed at Wed Nov 29 17:16:03 2023 elapsed 0 00:22:15
4. Importing the Dump Files into the Target Database
The dump files end up being imported serially: each impdp locks the table, the lock is released only when the first dump file finishes, then the second one starts, and so on, so the import is actually not particularly fast.
--Create the tablespace and user first
CREATE TABLESPACE DT_CC_DATA DATAFILE '+DATA' SIZE 20g AUTOEXTEND ON NEXT 8192 MAXSIZE 32767M;
--Check the original user's creation DDL and privileges
set long 9999
select dbms_metadata.get_ddl('USER',username) from dba_users where username='PROD';
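The grants can be generated as well (a sketch; a call for a grant type the user does not have raises an "object not found" error and can be skipped):
SQL> select dbms_metadata.get_granted_ddl('SYSTEM_GRANT','PROD') from dual;
SQL> select dbms_metadata.get_granted_ddl('ROLE_GRANT','PROD') from dual;
SQL> select dbms_metadata.get_granted_ddl('OBJECT_GRANT','PROD') from dual;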
--Run the import commands:
nohup impdp \'/ as sysdba\' directory=public_dump dumpfile=T_CA_TX_HIS_01.dmp logfile=T_CA_TX_HIS_01.log cluster=no REMAP_TABLESPACE=PROD_TBS:CC_OP_DATA REMAP_SCHEMA=PROD:CC_OP TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TABLE_EXISTS_ACTION=append &
nohup impdp \'/ as sysdba\' directory=public_dump dumpfile=T_CA_TX_HIS_02.dmp logfile=T_CA_TX_HIS_02.log cluster=no REMAP_TABLESPACE=PROD_TBS:CC_OP_DATA REMAP_SCHEMA=PROD:CC_OP TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TABLE_EXISTS_ACTION=append &
nohup impdp \'/ as sysdba\' directory=public_dump dumpfile=T_CA_TX_HIS_03.dmp logfile=T_CA_TX_HIS_03.log cluster=no REMAP_TABLESPACE=PROD_TBS:CC_OP_DATA REMAP_SCHEMA=PROD:CC_OP TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TABLE_EXISTS_ACTION=append &
nohup impdp \'/ as sysdba\' directory=public_dump dumpfile=T_CA_TX_HIS_04.dmp logfile=T_CA_TX_HIS_04.log cluster=no REMAP_TABLESPACE=PROD_TBS:CC_OP_DATA REMAP_SCHEMA=PROD:CC_OP TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TABLE_EXISTS_ACTION=append &
nohup impdp \'/ as sysdba\' directory=public_dump dumpfile=T_CA_TX_HIS_05.dmp logfile=T_CA_TX_HIS_05.log cluster=no REMAP_TABLESPACE=PROD_TBS:CC_OP_DATA REMAP_SCHEMA=PROD:CC_OP TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TABLE_EXISTS_ACTION=append &
nohup impdp \'/ as sysdba\' directory=public_dump dumpfile=T_CA_TX_HIS_06.dmp logfile=T_CA_TX_HIS_06.log cluster=no REMAP_TABLESPACE=PROD_TBS:CC_OP_DATA REMAP_SCHEMA=PROD:CC_OP TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TABLE_EXISTS_ACTION=append &
nohup impdp \'/ as sysdba\' directory=public_dump dumpfile=T_CA_TX_HIS_07.dmp logfile=T_CA_TX_HIS_07.log cluster=no REMAP_TABLESPACE=PROD_TBS:CC_OP_DATA REMAP_SCHEMA=PROD:CC_OP TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TABLE_EXISTS_ACTION=append &
nohup impdp \'/ as sysdba\' directory=public_dump dumpfile=T_CA_TX_HIS_08.dmp logfile=T_CA_TX_HIS_08.log cluster=no REMAP_TABLESPACE=PROD_TBS:CC_OP_DATA REMAP_SCHEMA=PROD:CC_OP TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TABLE_EXISTS_ACTION=append &
Note: since Oracle 12c, the impdp TRANSFORM parameter has been extended with the DISABLE_ARCHIVE_LOGGING option. Its default value is N, which leaves logging behavior unchanged. Setting it to Y switches the logging attribute of tables and indexes to NOLOGGING before the data is imported, reducing the redo/archive generated during the import, and resets the attribute to LOGGING afterwards. If the target database has force logging enabled at the database level (for example because of ADG, OGG or other replication software), TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y has no effect and a large amount of archive log will still be generated.
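Whether the target database is in force logging mode can be checked before relying on this transform:
SQL> select log_mode,force_logging from v$database;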
As the logs below show, the shortest import took about 7 minutes, the next one about 14 minutes, and the longest 54 minutes; this confirms the imports were serialized, so slicing does not shorten the import phase.
# more T_CA_TX_HIS_02.log
;;;
Import: Release 19.0.0.0.0 - Production on Wed Nov 29 17:50:49 2023
Version 19.21.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "SYS"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_04": "/******** AS SYSDBA" directory=public_dump dumpfile=T_CA_TX_HIS_02.dmp logfile=T_CA_TX_HIS_02.log cluster=no REMAP_TABLESPACE=PROD
_SCFOP_TBS:CC_OP_DATA REMAP_SCHEMA=PROD:CC_OP TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TABLE_EXISTS_ACTION=append
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "CC_OP"."T_CA_TX_HIS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "CC_OP"."T_CA_TX_HIS" 475.4 MB 189956 rows
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_FULL_04" successfully completed at Wed Nov 29 18:45:39 2023 elapsed 0 00:54:44
Job "SYS"."SYS_IMPORT_FULL_06" completed with 1 error(s) at Wed Nov 29 18:05:29 2023 elapsed 0 00:14:31
Job "SYS"."SYS_IMPORT_FULL_04" successfully completed at Wed Nov 29 18:45:39 2023 elapsed 0 00:54:44
Job "SYS"."SYS_IMPORT_FULL_07" completed with 1 error(s) at Wed Nov 29 18:32:05 2023 elapsed 0 00:41:05
Job "SYS"."SYS_IMPORT_FULL_10" completed with 1 error(s) at Wed Nov 29 18:39:14 2023 elapsed 0 00:48:06
Job "SYS"."SYS_IMPORT_FULL_05" completed with 1 error(s) at Wed Nov 29 18:12:09 2023 elapsed 0 00:21:12
Job "SYS"."SYS_IMPORT_FULL_11" completed with 1 error(s) at Wed Nov 29 18:18:57 2023 elapsed 0 00:27:48
Job "SYS"."SYS_IMPORT_FULL_08" completed with 1 error(s) at Wed Nov 29 18:25:49 2023 elapsed 0 00:34:48
Job "SYS"."SYS_IMPORT_FULL_09" completed with 1 error(s) at Wed Nov 29 17:58:31 2023 elapsed 0 00:07:28
5. Other Data Pump Usage
expdp help=y
expdp parameters
DIRECTORY: directory object to be used for dump files and log files.
DUMPFILE: specifies the name(s) of the export dump file(s).
LOGFILE: specifies the name of the export log file, which records the details of the export.
FULL: export the entire database (default N; by default only the logged-in user's data is exported).
SCHEMAS: list of schemas to export (specify which users' data to export).
EXCLUDE: exclude specific object types (table names must be uppercase).
SAMPLE: percentage of data to export.
TABLESPACES: identifies a list of tablespaces to export.
VERSION: specifies the version for the exported objects, typically used when data from a higher-version database must be imported into a lower-version database.
PARALLEL: change the number of active workers for the current job.
REUSE_DUMPFILES: overwrite the destination dump file if it exists [N].
TABLES: identifies a list of tables to export. For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.
QUERY: predicate clause used to export a subset of a table. For example, QUERY=employees:"WHERE department_id > 10".
JOB_NAME: name of the export job to create.
impdp parameters
impdp help=y
DIRECTORY: directory object to be used for dump, log, and SQL files.
DUMPFILE: list of dump files to import from (expdat.dmp).
LOGFILE: log file name (import.log).
FULL: import all objects from the source (Y).
SCHEMAS: list of schemas to import.
EXCLUDE: exclude specific object types, for example EXCLUDE=TABLE:EMP.
JOB_NAME: name of the import job to create.
TABLESPACES: identifies a list of tablespaces to import.
REUSE_DATAFILES: initialize the tablespace if it already exists (N).
PARALLEL: change the number of active workers for the current job.
QUERY: predicate clause used to import a subset of a table.
VERSION: version of objects to import, where the valid keywords are:
TABLES: identifies a list of tables to import.
TABLE_EXISTS_ACTION: action to take when the imported object already exists. Valid keywords: (SKIP) skip, APPEND append, REPLACE replace, and TRUNCATE truncate the table and then append.
REMAP_TABLESPACE: remap a tablespace object to another tablespace.
REMAP_SCHEMA: load objects from one schema into another schema.
REMAP_TABLE: remap a table name to another table. For example, REMAP_TABLE=EMP.EMPNO:REMAPPKG.EMPNO.
Common Data Pump export and import statements
--Data Mining and Real Application Testing options
--UDE-00010: multiple job modes requested, schema and tables.
schemas and tables cannot be specified at the same time.
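For example, an invocation like the following (hypothetical, for illustration only) fails with UDE-00010 because it requests a schema-mode and a table-mode job at the same time; keep only one of the two parameters:
expdp \'/ as sysdba\' directory=PUBLIC_DUMP dumpfile=bad.dmp schemas=PROD tables=PROD.T_CA_TX_HIS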
--Export only the metadata (table structures)
nohup expdp \'/ as sysdba\' directory=PUBLIC_DUMP SCHEMAS=DT_CC CONTENT=METADATA_ONLY exclude=STATISTICS LOGFILE=DT_CC0916.log dumpfile=expdp_METADATA_ONLY_DT_CC-2023-09-16_%U.dmp COMPRESSION=ALL PARALLEL=4 &
--Get the current SCN (used for FLASHBACK_SCN below)
SQL> select to_char(current_scn) from v$database;
TO_CHAR(CURRENT_SCN)
----------------------------------------
14898415798
--Export a schema's data
nohup expdp \'/ as sysdba\' directory=PUBLIC_DUMP SCHEMAS=PROD FLASHBACK_SCN=14898415798 exclude=STATISTICS LOGFILE=expdp_PROD0420.log dumpfile=expdp_PROD_2022-04-20_%U.dmp COMPRESSION=ALL CLUSTER=N PARALLEL=8 &
--Export the tables under a user whose names start with T_PRICE.
nohup expdp PROD/'rop_P98#' directory=PUBLIC_DUMP LOGFILE=prod_t_price63Tables.log dumpfile=expdp_prod_t_price63Tables.dmp tables=t_price% flashback_scn=9759642727 COMPRESSION=all exclude=statistics parallel=4 cluster=no &
--Export a subset of a large table by condition
expdp PROD/'rop_P98#' directory=PUBLIC_DUMP LOGFILE=prod.T_AUTH_ORIGINAL.log dumpfile=expdp_prod.T_AUTH_ORIGINAL.dmp tables=T_AUTH_ORIGINAL COMPRESSION=ALL query=\"where DIGEST_TIME \>= to_date\(\'2023-01-01\',\'yyyy-mm-dd\'\)\"
--Export a subset of a table's data using a parfile
expdp scott/tiger directory=exp_dir parfile=emp_main.par
vim emp_main.par
tables=emp_main
dumpfile=emp_main.dmp
logfile=emp_main.log
query="where sendtime between to_date('20220101','yyyymmdd') and to_date('20220401','yyyymmdd')"
--Use the SQLFILE parameter to generate the SQL statements for creating indexes, triggers, and constraints. The parameter is used with impdp: without actually performing the import on the target, it writes all DDL statements contained in the dump file to a SQL file. Usage:
impdp \'/ as sysdba\' directory=PUBLIC_DIR dumpfile=expdp_FULL_T2_CC_2022-10_17_%U.dmp logfile=T2_index.log sqlfile=t2_cre_index.sql include=INDEX include=TRIGGER include=CONSTRAINT
nohup impdp \'/ as sysdba\' directory=EXPDP_DIR dumpfile=expdp_prod-2021-08-17_%U.dmp parallel=4 logfile=impdp_PROD_081720.log logtime=ALL TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y &
Note: whether the database is in archivelog mode or not, DISABLE_ARCHIVE_LOGGING reduces the import time and the amount of archive log generated; however, if the database is in force logging mode, the DISABLE_ARCHIVE_LOGGING setting has no effect.
--Import into a different schema
nohup impdp \'/ as sysdba\' directory=PUBLIC_DUMP LOGFILE=impdp_D2_CC0826.log dumpfile=expdp_prod_cc-2021-08-25_%U.dmp REMAP_SCHEMA=prod_cc:D2_CC REMAP_TABLESPACE=CC_DATA:D2_CC_DATA,CC_INDEX:D2_CC_INDEX PARALLEL=4 &
--Check table statistics
SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
SQL> select TABLE_NAME,OWNER,NUM_ROWS,LAST_ANALYZED from dba_tables where table_name like 'ACT_GE_%' and owner='OP_DEMO';
--Gather schema statistics
SQL> exec dbms_stats.gather_schema_stats('OP_DEMO')
SQL> exec dbms_stats.gather_schema_stats(ownname => 'D5_CC',options => 'GATHER AUTO',estimate_percent => dbms_stats.auto_sample_size,method_opt => 'for all columns size repeat',degree => 5)
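Since several of the exports above exclude statistics, statistics for a single imported table can also be regathered afterwards (a sketch; the owner and table name follow the import example above):
SQL> exec dbms_stats.gather_table_stats(ownname => 'CC_OP',tabname => 'T_CA_TX_HIS',cascade => true,degree => 4)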
--Export only table structures and data, excluding indexes and statistics
nohup expdp \'/ as sysdba\' directory=PUBLIC_DUMP SCHEMAS=prod exclude=STATISTICS,INDEX LOGFILE=prod0712.log dumpfile=onlydata_expdp_prod-2021-07-12_%U.dmp COMPRESSION=ALL PARALLEL=4 CLUSTER=N &
--Import the user's metadata
impdp \'/ as sysdba\' directory=DUMP_DIR LOGFILE=New_imp_T2_APP.log dumpfile=T2_APP-2021-05-31_NEW.dmp REMAP_SCHEMA=T2_APP:T1_APP REMAP_TABLESPACE=T2_APP_TBS:T1_APP_TBS
--Direct import
nohup impdp \'/ as sysdba\' directory=EXPDP_DIR dumpfile=expdp_prod-2021-08-16_%U.dmp parallel=4 logfile=impdp_PROD_0817.log logtime=ALL TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y &
--Import over a database link without an intermediate dump file
nohup impdp system/Oracle_19C@TEST directory=EXPDP_DIR NETWORK_LINK=PROD_LINK flashback_scn=9010004930 exclude=statistics parallel=4 cluster=no schemas=PROD,PROD_CC logfile=impdp_PROD.log logtime=ALL TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y &
Note: logtime=ALL is a parameter introduced in Oracle 12c; it records timestamps for the export/import steps and writes them to both the console and the log file.
--Export sequences: generate CREATE SEQUENCE statements from user_sequences
SQL> SELECT ' CREATE SEQUENCE '||SEQUENCE_NAME|| ' INCREMENT BY '|| INCREMENT_BY ||' START WITH '||LAST_NUMBER||' MAXVALUE '|| MAX_VALUE ||' CACHE '||CACHE_SIZE||' ORDER NOCYCLE ;'
FROM user_SEQUENCES;
--Export sequences: generate the DDL with dbms_metadata
SQL> select dbms_metadata.get_ddl('SEQUENCE',u.object_name) from user_objects u where object_type='SEQUENCE';
--Export sequences with Data Pump: export the schema, then import only the sequences
nohup expdp \'/ as sysdba\' directory=PUBLIC_DUMP LOGFILE=prod.log dumpfile=expdp_prod-2021-05-21_%U.dmp SCHEMAS=prod COMPRESSION=ALL PARALLEL=4 CLUSTER=N &
nohup impdp \'/ as sysdba\' directory=PUBLIC_DUMP include=sequence LOGFILE=imp_D4_CC_SEQ.log dumpfile=expdp_prod_scfop-2021-07-09.dmp REMAP_SCHEMA=prod:D4_CC REMAP_TABLESPACE=CC_DATA:D4_CC_DATA PARALLEL=2 &
SQL> select sequence_name from user_sequences;
--SQL reference: https://www.modb.pro/db/45337
Exporting Partial Data
A full migration of a TB-scale production database is time-consuming and laborious, but when building a test environment we usually do not need all of the data; a subset is enough for functional testing. Data Pump provides two ways to export partial data. One is QUERY, i.e. export by condition, similar to a WHERE clause in a query. For example, to export the first 5,000 rows of every table under the business schemas:
expdp \'/ as sysdba\' parfile=expdp.par
vim expdp.par
directory=EXPDIR
parallel=8
CLUSTER=N
dumpfile=jieke_%U.dmp
logfile=jieke_1130.log
schemas=
(
'PROD',
'CC',
'APP',
'PROD_CC'
)
query="where rownum