DataX 3.0 Installation, Deployment, and Sync Testing

March 5, 2024

Overview: DataX is an offline synchronization tool for heterogeneous data sources. It aims to provide stable, efficient data synchronization among a wide range of sources, including relational databases (MySQL, Oracle, SQL Server, PostgreSQL, etc.), HDFS, Hive, ADS, HBase, TableStore (OTS), MaxCompute (ODPS), Hologres, and DRDS. DataX is the open-source edition of Alibaba Cloud DataWorks Data Integration and is the offline data synchronization tool/platform widely used inside Alibaba Group.

As a synchronization framework, DataX abstracts each source as a Reader plugin that reads data from the source end and a Writer plugin that writes data to the target end, so in principle the framework can synchronize between arbitrary data source types. The plugin system also works as an ecosystem: once a new data source is integrated, it can immediately exchange data with every existing source.
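The Reader/Writer split shows up directly in job files: every DataX job is a JSON document naming one reader plugin and one writer plugin, and switching data sources means switching plugin names and parameters. Below is a minimal runnable sketch using the bundled streamreader/streamwriter plugins; the paths assume the default installation used later in this document.

# Write a minimal DataX job: streamreader generates rows in memory,
# streamwriter prints them to stdout.
cat > /tmp/stream2stream.json <<'EOF'
{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [{
      "reader": {
        "name": "streamreader",
        "parameter": {
          "column": [ { "type": "string", "value": "hello datax" } ],
          "sliceRecordCount": 5
        }
      },
      "writer": {
        "name": "streamwriter",
        "parameter": { "print": true, "encoding": "UTF-8" }
      }
    }]
  }
}
EOF

# Run the job with the launcher script
python /home/datax/datax/bin/datax.py /tmp/stream2stream.json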

Part 1: Installing MySQL 5.7 with Docker
1. Install Docker
2. Install MySQL 5.7

1.1 Install the latest Docker. If an older Docker version is present, remove it first:

sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

1.2 Install the Docker dependency packages:

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

1.3 Add the Docker YUM repository:

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

1.4 Install Docker:

sudo yum install docker-ce docker-ce-cli containerd.io

Dependency Installed:
docker-buildx-plugin.x86_64 0:0.12.1-1.el7 docker-ce-rootless-extras.x86_64 0:25.0.2-1.el7 docker-compose-plugin.x86_64 0:2.24.5-1.el7

Complete!

If the install fails with errors like:
Error: Package: docker-ce-rootless-extras-25.0.2-1.el7.x86_64 (docker-ce-stable)
Requires: slirp4netns >= 0.4
Error: Package: 3:docker-ce-25.0.2-1.el7.x86_64 (docker-ce-stable)
Requires: container-selinux >= 2:2.74
Error: Package: containerd.io-1.6.28-3.1.el7.x86_64 (docker-ce-stable)
Requires: container-selinux >= 2:2.74
Error: Package: docker-ce-rootless-extras-25.0.2-1.el7.x86_64 (docker-ce-stable)
Requires: fuse-overlayfs >= 0.7

Fix: add a CentOS extras repository:
cd /etc/yum.repos.d
vim centos-extras.repo
[centos-extras]
name=Centos extras - $basearch
baseurl=http://mirror.centos.org/centos/7/extras/x86_64
enabled=1
gpgcheck=0

yum -y install slirp4netns fuse-overlayfs container-selinux
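After adding the extras repo, confirm the three dependencies are actually in place before retrying the Docker install:

# Verify the packages resolved from the extras repo
rpm -q slirp4netns fuse-overlayfs container-selinux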

1.5 Configure a registry mirror (accelerator) and restart Docker:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-mirror-address>"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

2.1 Pull the MySQL 5.7 image

docker pull mysql:5.7

2.2 Start the container (the root password 123456 matches the logins used below)

docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7

2.3 Check the container status

[root@node2 ]# docker ps
CONTAINER ID   IMAGE       ...   PORTS                                                  NAMES
84c6627f5440   mysql:5.7   ...   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysql

[root@node2 ]# docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
mysql 5.7 c20987f18b13 2 years ago 448MB

2.4 Enter the container
docker exec -it mysql bash

mysql -u root -p123456

Switch to the mysql database and inspect the user table.
use mysql;

select user,host from user;

[root@node2 ]# docker exec -it mysql bash
root@84c6627f5440:/# mysql -u root -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select user,host from user;
+---------------+-----------+
| user | host |
+---------------+-----------+
| root | % |
| mysql.session | localhost |
| mysql.sys | localhost |
| root | localhost |
+---------------+-----------+
4 rows in set (0.00 sec)

At this point, the Docker-based MySQL 5.7 installation is complete.
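Since DataX will connect over the network rather than from inside the container, a quick remote check is worthwhile (a sketch; it assumes a mysql client on the DataX host, and you should substitute your MySQL host's IP):

# Connect to the containerized MySQL from another host
mysql -h 172.17.0.29 -P 3306 -u root -p123456 -e "select version();"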

Part 2: Installing Oracle 12c under Docker
Image repository: https://github.com/MaksymBilenko/docker-oracle-12c

1. Pull the image
Pull the Oracle 12c Docker image:

docker pull quay.io/maksymbilenko/oracle-12c

[root@node1 container]# docker pull quay.io/maksymbilenko/oracle-12c
Using default tag: latest
latest: Pulling from maksymbilenko/oracle-12c
8ba884070f61: Pull complete
ef9513b81046: Pull complete
6f1de349e202: Pull complete
5376ebfa0fa3: Pull complete
5f632c3633d2: Pull complete
3e74293031d2: Pull complete
Digest: sha256:de3f7d4a608107d6271d7b2c40368caae7b4fc14bb893e164346fb951e15a70b
Status: Downloaded newer image for quay.io/maksymbilenko/oracle-12c:latest
quay.io/maksymbilenko/oracle-12c:latest

2. Create a local directory for the Oracle database
Create the /oracle/data directory under the system root:

mkdir -p /oracle/data

3. Change ownership of the new directory
Grant the directory to uid/gid 1000 so the container's oracle user can write to it:

chown -R 1000:1000 /oracle/data

4. Run the container
Start the 12c container with Docker:

docker run --name o12c -d -p 8080:8080 -p 1521:1521 -v /oracle/data:/u01/app/oracle quay.io/maksymbilenko/oracle-12c

[root@node1 container]# mkdir -p /oracle/data
[root@node1 container]# chown -R 1000:1000 /oracle/data
[root@node1 container]# docker run --name o12c -d -p 8080:8080 -p 1521:1521 -v /oracle/data:/u01/app/oracle quay.io/maksymbilenko/oracle-12c
4044f830d3637b1f119a1a01598c5e9d45ada1637227fc4a062855bcf1c590f8

[root@node1 container]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4044f830d363 quay.io/maksymbilenko/oracle-12c "/entrypoint.sh " 16 seconds ago Up 14 seconds 0.0.0.0:1521->1521/tcp, :::1521->1521/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp o12c

[root@node1 container]# docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/maksymbilenko/oracle-12c latest a063ba7a3cab 2 years ago 8GB
[root@node1 container]#

4.1 Database connection details

hostname: localhost
port: 1521
sid: xe
service name: xe
username: system
password: oracle

Password for SYS & SYSTEM:

oracle
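With those defaults the database is reachable without entering the container (a sketch; it assumes sqlplus, or at least curl, is available on the host):

# Connect as system using the defaults above (requires an Oracle client)
sqlplus system/oracle@//localhost:1521/xe

# Or just probe the listener port without any Oracle client
curl -v telnet://localhost:1521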

4.2 Enter the container

docker exec -it 4044f830d363 bash

4.3 Set ORACLE_SID and ORACLE_HOME in the environment

[oracle@4044f830d363 ~]$ vim .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
export ORACLE_HOME=/u01/app/oracle/product/12.2.0/SE

export ORACLE_SID=xe

export PATH=$ORACLE_HOME/bin:$PATH

[oracle@4044f830d363 ~]$ source .bash_profile

4.4 Check the instance status
[oracle@4044f830d363 ~]$ sqlplus /nolog
SQL> conn / as sysdba;
Connected.
SQL> select instance_name,status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
xe OPEN

4.5 Check the listener status with lsnrctl status

[oracle@4044f830d363 ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 01-FEB-2024 08:27:44

Copyright (c) 1991, 2016, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 01-FEB-2024 08:01:25
Uptime 0 days 0 hr. 26 min. 23 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/4044f830d363/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=4044f830d363)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=4044f830d363)(PORT=8080))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "xe" has 1 instance(s).
Instance "xe", status READY, has 1 handler(s) for this service...
Service "xeXDB" has 1 instance(s).
Instance "xe", status READY, has 1 handler(s) for this service...
The command completed successfully

5. Create a tablespace and a user

# Create the test tablespace

create tablespace test datafile '/u01/app/oracle/oradata/xe/test.dbf' size 1M autoextend on next 50M maxsize unlimited;

# Create the test user

create user test identified by "admin@123" default tablespace test profile DEFAULT ACCOUNT UNLOCK;

# Grant privileges to the user

CREATE USER test identified by test;
alter user test identified by test;
GRANT DBA to test ;
grant SELECT ANY DICTIONARY to test;
GRANT EXECUTE ON SYS.DBMS_LOCK TO test;

[oracle@4044f830d363 ~]$ sqlplus /nolog

SQL*Plus: Release 12.2.0.1.0 Production on Thu Feb 1 08:10:59 2024

Copyright (c) 1982, 2016, Oracle. All rights reserved.

SQL> conn /as sysdba
Connected.

SQL> CREATE USER test identified by test;

User created.

SQL> alter user test identified by test;
GRANT DBA to test ;
grant SELECT ANY DICTIONARY to test;
GRANT EXECUTE ON SYS.DBMS_LOCK TO test;
User altered.

SQL>
Grant succeeded.

SQL>
Grant succeeded.

SQL>

Grant succeeded.

SQL> create tablespace test datafile '/u01/app/oracle/oradata/xe/test.dbf' size 1M autoextend on next 50M maxsize unlimited;

Tablespace created.

SQL> create user test identified by "admin@123" default tablespace test profile DEFAULT ACCOUNT UNLOCK;

User created.

SQL> grant dba to test;

Grant succeeded.

SQL> grant unlimited tablespace to test;

Grant succeeded.

At this point, the Oracle 12c installation is complete.

Part 3: Deploying DataX and datax-web with Docker
I. Prepare the base environment

II. Install the MySQL 5.7 database (see "Installing MySQL 5.7 with Docker" above)

III. DataX environment configuration and running the datax_web container

I. Prepare the base environment

1.0 Server configuration

172.17.0.29 mysql CENTOS7.6 8C16G

172.17.0.34 datax CENTOS7.6 8C16G

172.17.0.32 oracle CENTOS7.6 8C16G

1.1 Open the required ports, 3306 and 9527:

firewall-cmd --zone=public --add-port=3306/tcp --permanent

firewall-cmd --zone=public --add-port=9527/tcp --permanent

firewall-cmd --reload
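Verify the rules took effect after the reload:

# Expect the list to include 3306/tcp and 9527/tcp
firewall-cmd --zone=public --list-ports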

1.2 Restart the Docker service:

systemctl stop docker

systemctl restart docker

1.3 Pull the datax_web image:

docker pull linshellfeng/datax_web:3.0.1

Note: the datax_web:3.0.1 image already bundles DataX 3.0 and datax-web; the combined image is about 990 MB.

[root@datax ~]# docker pull linshellfeng/datax_web:3.0.1
3.0.1: Pulling from linshellfeng/datax_web
ac9208207ada: Pull complete
96c0123b30b0: Pull complete
a8db204c040d: Downloading [=================================> ] 47.28MB/70.51MB
b21c9e28dc6d: Downloading [===================> ] 84.28MB/217.6MB
8099a5760bef: Downloading [==============> ] 56.92MB/190.8MB
55b3abe48486: Waiting
2882620f39a9: Waiting
872999062d9e: Waiting
b21c9e28dc6d: Downloading
55b3abe48486: Downloading
unauthorized: authentication required

Error: unauthorized: authentication required

Fix: register a Docker Hub account:

https://hub.docker.com/signup

[root@datax ~]# docker login
Log in with your Docker ID or email address to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com/ to create one.
You can log in with your password or a Personal Access Token (PAT). Using a limited-scope PAT grants better security and is required for organizations using SSO. Learn more at https://docs.docker.com/go/access-tokens/

Username: *********   ## enter your registered Docker Hub account
Password: *********   ## enter your Docker Hub password
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@datax ~]# cat /root/.docker/config.json
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "Mjc3NTY4ODIxQHFxLmNvbToxYSMyQiQzYyU0RA=="
        }
    }
}

After logging in, pull the image again:

[root@datax ~]# docker pull linshellfeng/datax_web:3.0.1
3.0.1: Pulling from linshellfeng/datax_web
Digest: sha256:c64395010bf9552ed56bd0f5ff23e728db6c28ab4a7f5a1e2eee7d0796078fc0
Status: Image is up to date for linshellfeng/datax_web:3.0.1
docker.io/linshellfeng/datax_web:3.0.1

[root@datax ~]# docker image ls
REPOSITORY               TAG     IMAGE ID       CREATED         SIZE
linshellfeng/datax_web   3.0.1   7d1bcef9d5ee   21 months ago   974MB

II. Install the MySQL database

2.1 For the installation steps, see "Installing MySQL 5.7 with Docker" above.

2.2 Create the datax_web_db database and the datax_web account: run this on the MySQL node.

After logging in to MySQL as root, use the following script to create the datax_web_db database and configure the datax_web account. Note: replace xxxxxx with a password of your own.

create database datax_web_db default character set utf8mb4 collate utf8mb4_general_ci;
#create user 'datax_web'@'%' identified with mysql_native_password by 'xxxxxx';
CREATE USER 'datax_web'@'%' IDENTIFIED BY '123456';
grant all privileges on datax_web_db.* to 'datax_web'@'%';
flush privileges;

[root@mysql ~]# docker exec -it mysql bash
root@84c6627f5440:/# mysql -u root -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql> create database datax_web_db default character set utf8mb4 collate utf8mb4_general_ci;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE USER 'datax_web'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all privileges on datax_web_db.* to 'datax_web'@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit

2.3 Initialize the datax_web_db schema

# Run the following script to create the tables the datax-web service needs in datax_web_db.

Log in as datax_web, switch to datax_web_db, and execute the contents of datax_web.sql; routine database operations are not covered in detail here.
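Alternatively, instead of pasting statements interactively, the script can be piped into the container in one step (a sketch; it assumes datax_web.sql has been copied onto the MySQL host):

# Load the schema non-interactively
docker exec -i mysql mysql -u datax_web -p123456 datax_web_db < datax_web.sql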

root@84c6627f5440:/# mysql -u datax_web -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| datax_web_db       |
+--------------------+
2 rows in set (0.00 sec)

mysql> use datax_web_db

# Execute datax_web.sql: paste the SQL statements and run them directly.
# When it finishes, the following tables exist:

mysql> show tables;
+------------------------+
| Tables_in_datax_web_db |
+------------------------+
| job_group              |
| job_info               |
| job_jdbc_datasource    |
| job_lock               |
| job_log                |
| job_log_report         |
| job_logglue            |
| job_permission         |
| job_project            |
| job_registry           |
| job_template           |
| job_user               |
+------------------------+
12 rows in set (0.00 sec)

The contents of datax_web.sql are as follows:

/*
 Navicat Premium Data Transfer

 Source Server         : localhost
 Source Server Type    : MySQL
 Source Server Version : 50725
 Source Host           : localhost:3306
 Source Schema         : datax_web

 Target Server Type    : MySQL
 Target Server Version : 50725
 File Encoding         : 65001

 Date: 15/12/2019 22:27:10
*/

SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for job_group
-- ----------------------------
DROP TABLE IF EXISTS `job_group`;
CREATE TABLE `job_group` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `app_name` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '执行器AppName',
  `title` varchar(12) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '执行器名称',
  `order` int(11) NOT NULL DEFAULT 0 COMMENT '排序',
  `address_type` tinyint(4) NOT NULL DEFAULT 0 COMMENT '执行器地址类型:0=自动注册、1=手动录入',
  `address_list` varchar(512) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器地址列表,多地址逗号分隔',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 2 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

-- ----------------------------
-- Records of job_group
-- ----------------------------
INSERT INTO `job_group` VALUES (1, 'datax-executor', 'datax执行器', 1, 0, NULL);

-- ----------------------------
-- Table structure for job_info
-- ----------------------------
DROP TABLE IF EXISTS `job_info`;
CREATE TABLE `job_info` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `job_group` int(11) NOT NULL COMMENT '执行器主键ID',
  `job_cron` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '任务执行CRON',
  `job_desc` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL,
  `add_time` datetime(0) NULL DEFAULT NULL,
  `update_time` datetime(0) NULL DEFAULT NULL,
  `author` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '作者',
  `alarm_email` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '报警邮件',
  `executor_route_strategy` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器路由策略',
  `executor_handler` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器任务handler',
  `executor_param` varchar(512) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器任务参数',
  `executor_block_strategy` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '阻塞处理策略',
  `executor_timeout` int(11) NOT NULL DEFAULT 0 COMMENT '任务执行超时时间,单位秒',
  `executor_fail_retry_count` int(11) NOT NULL DEFAULT 0 COMMENT '失败重试次数',
  `glue_type` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'GLUE类型',
  `glue_source` mediumtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT 'GLUE源代码',
  `glue_remark` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'GLUE备注',
  `glue_updatetime` datetime(0) NULL DEFAULT NULL COMMENT 'GLUE更新时间',
  `child_jobid` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '子任务ID,多个逗号分隔',
  `trigger_status` tinyint(4) NOT NULL DEFAULT 0 COMMENT '调度状态:0-停止,1-运行',
  `trigger_last_time` bigint(13) NOT NULL DEFAULT 0 COMMENT '上次调度时间',
  `trigger_next_time` bigint(13) NOT NULL DEFAULT 0 COMMENT '下次调度时间',
  `job_json` text CHARACTER SET utf8 COLLATE utf8_general_ci NULL COMMENT 'datax运行脚本',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 7 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

-- ----------------------------
-- Table structure for job_jdbc_datasource
-- ----------------------------
DROP TABLE IF EXISTS `job_jdbc_datasource`;
CREATE TABLE `job_jdbc_datasource` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '自增主键',
  `datasource_name` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '数据源名称',
  `datasource_group` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT 'Default' COMMENT '数据源分组',
  `jdbc_username` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '用户名',
  `jdbc_password` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '密码',
  `jdbc_url` varchar(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'jdbc url',
  `jdbc_driver_class` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'jdbc驱动类',
  `status` tinyint(1) NOT NULL DEFAULT 1 COMMENT '状态:0删除 1启用 2禁用',
  `create_by` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '创建人',
  `create_date` datetime(0) NULL DEFAULT CURRENT_TIMESTAMP(0) COMMENT '创建时间',
  `update_by` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '更新人',
  `update_date` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `comments` varchar(1000) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '备注',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 6 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'jdbc数据源配置' ROW_FORMAT = Dynamic;

-- ----------------------------
-- Table structure for job_lock
-- ----------------------------
DROP TABLE IF EXISTS `job_lock`;
CREATE TABLE `job_lock` (
  `lock_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '锁名称',
  PRIMARY KEY (`lock_name`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

-- ----------------------------
-- Records of job_lock
-- ----------------------------
INSERT INTO `job_lock` VALUES ('schedule_lock');

-- ----------------------------
-- Table structure for job_log
-- ----------------------------
DROP TABLE IF EXISTS `job_log`;
CREATE TABLE `job_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `job_group` int(11) NOT NULL COMMENT '执行器主键ID',
  `job_id` int(11) NOT NULL COMMENT '任务,主键ID',
  `job_desc` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `executor_address` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器地址,本次执行的地址',
  `executor_handler` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器任务handler',
  `executor_param` varchar(512) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器任务参数',
  `executor_sharding_param` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器任务分片参数,格式如 1/2',
  `executor_fail_retry_count` int(11) NULL DEFAULT 0 COMMENT '失败重试次数',
  `trigger_time` datetime(0) NULL DEFAULT NULL COMMENT '调度-时间',
  `trigger_code` int(11) NOT NULL COMMENT '调度-结果',
  `trigger_msg` text CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '调度-日志',
  `handle_time` datetime(0) NULL DEFAULT NULL COMMENT '执行-时间',
  `handle_code` int(11) NOT NULL COMMENT '执行-状态',
  `handle_msg` text CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '执行-日志',
  `alarm_status` tinyint(4) NOT NULL DEFAULT 0 COMMENT '告警状态:0-默认、1-无需告警、2-告警成功、3-告警失败',
  `process_id` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'datax进程Id',
  `max_id` bigint(20) NULL DEFAULT NULL COMMENT '增量表max id',
  PRIMARY KEY (`id`) USING BTREE,
  INDEX `I_trigger_time`(`trigger_time`) USING BTREE,
  INDEX `I_handle_code`(`handle_code`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 0 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

-- ----------------------------
-- Table structure for job_log_report
-- ----------------------------
DROP TABLE IF EXISTS `job_log_report`;
CREATE TABLE `job_log_report` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `trigger_day` datetime(0) NULL DEFAULT NULL COMMENT '调度-时间',
  `running_count` int(11) NOT NULL DEFAULT 0 COMMENT '运行中-日志数量',
  `suc_count` int(11) NOT NULL DEFAULT 0 COMMENT '执行成功-日志数量',
  `fail_count` int(11) NOT NULL DEFAULT 0 COMMENT '执行失败-日志数量',
  PRIMARY KEY (`id`) USING BTREE,
  UNIQUE INDEX `i_trigger_day`(`trigger_day`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 28 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

-- ----------------------------
-- Records of job_log_report
-- ----------------------------
INSERT INTO `job_log_report` VALUES (20, '2019-12-07 00:00:00', 0, 0, 0);
INSERT INTO `job_log_report` VALUES (21, '2019-12-10 00:00:00', 77, 52, 23);
INSERT INTO `job_log_report` VALUES (22, '2019-12-11 00:00:00', 9, 2, 11);
INSERT INTO `job_log_report` VALUES (23, '2019-12-13 00:00:00', 9, 48, 74);
INSERT INTO `job_log_report` VALUES (24, '2019-12-12 00:00:00', 10, 8, 30);
INSERT INTO `job_log_report` VALUES (25, '2019-12-14 00:00:00', 78, 45, 66);
INSERT INTO `job_log_report` VALUES (26, '2019-12-15 00:00:00', 24, 76, 9);
INSERT INTO `job_log_report` VALUES (27, '2019-12-16 00:00:00', 23, 85, 10);

-- ----------------------------
-- Table structure for job_logglue
-- ----------------------------
DROP TABLE IF EXISTS `job_logglue`;
CREATE TABLE `job_logglue` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `job_id` int(11) NOT NULL COMMENT '任务,主键ID',
  `glue_type` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'GLUE类型',
  `glue_source` mediumtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT 'GLUE源代码',
  `glue_remark` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'GLUE备注',
  `add_time` datetime(0) NULL DEFAULT NULL,
  `update_time` datetime(0) NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

-- ----------------------------
-- Table structure for job_registry
-- ----------------------------
DROP TABLE IF EXISTS `job_registry`;
CREATE TABLE `job_registry` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `registry_group` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL,
  `registry_key` varchar(191) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL,
  `registry_value` varchar(191) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL,
  `update_time` datetime(0) NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE,
  INDEX `i_g_k_v`(`registry_group`, `registry_key`, `registry_value`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 26 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

-- ----------------------------
-- Table structure for job_user
-- ----------------------------
DROP TABLE IF EXISTS `job_user`;
CREATE TABLE `job_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '账号',
  `password` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '密码',
  `role` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '角色:0-普通用户、1-管理员',
  `permission` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '权限:执行器ID列表,多个逗号分割',
  PRIMARY KEY (`id`) USING BTREE,
  UNIQUE INDEX `i_username`(`username`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 10 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

-- ----------------------------
-- Records of job_user
-- ----------------------------
INSERT INTO `job_user` VALUES (1, 'admin', '$2a$10$2KCqRbra0Yn2TwvkZxtfLuWuUP5KyCWsljO/ci5pLD27pqR3TV1vy', 'ROLE_ADMIN', NULL);

/**
  v2.1.1 script update
*/
ALTER TABLE `job_info`
  ADD COLUMN `replace_param` VARCHAR(100) NULL DEFAULT NULL COMMENT '动态参数' AFTER `job_json`,
  ADD COLUMN `jvm_param` VARCHAR(200) NULL DEFAULT NULL COMMENT 'jvm参数' AFTER `replace_param`,
  ADD COLUMN `time_offset` INT(11) NULL DEFAULT '0' COMMENT '时间偏移量' AFTER `jvm_param`;

/**
  Incremental-sync rework script update
*/
ALTER TABLE `job_info` DROP COLUMN `time_offset`;
ALTER TABLE `job_info`
  ADD COLUMN `inc_start_time` DATETIME NULL DEFAULT NULL COMMENT '增量初始时间' AFTER `jvm_param`;

-- ----------------------------
-- Table structure for job_template
-- ----------------------------
DROP TABLE IF EXISTS `job_template`;
CREATE TABLE `job_template` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `job_group` int(11) NOT NULL COMMENT '执行器主键ID',
  `job_cron` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '任务执行CRON',
  `job_desc` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL,
  `add_time` datetime(0) NULL DEFAULT NULL,
  `update_time` datetime(0) NULL DEFAULT NULL,
  `user_id` int(11) NOT NULL COMMENT '修改用户',
  `alarm_email` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '报警邮件',
  `executor_route_strategy` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器路由策略',
  `executor_handler` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器任务handler',
  `executor_param` varchar(512) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '执行器参数',
  `executor_block_strategy` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '阻塞处理策略',
  `executor_timeout` int(11) NOT NULL DEFAULT 0 COMMENT '任务执行超时时间,单位秒',
  `executor_fail_retry_count` int(11) NOT NULL DEFAULT 0 COMMENT '失败重试次数',
  `glue_type` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'GLUE类型',
  `glue_source` mediumtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT 'GLUE源代码',
  `glue_remark` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'GLUE备注',
  `glue_updatetime` datetime(0) NULL DEFAULT NULL COMMENT 'GLUE更新时间',
  `child_jobid` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '子任务ID,多个逗号分隔',
  `trigger_last_time` bigint(13) NOT NULL DEFAULT 0 COMMENT '上次调度时间',
  `trigger_next_time` bigint(13) NOT NULL DEFAULT 0 COMMENT '下次调度时间',
  `job_json` text CHARACTER SET utf8 COLLATE utf8_general_ci NULL COMMENT 'datax运行脚本',
  `jvm_param` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'jvm参数',
  `project_id` int(11) NULL DEFAULT NULL COMMENT '所属项目Id',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 22 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

/**
  Add the datasource field
*/
ALTER TABLE `job_jdbc_datasource`
  ADD COLUMN `datasource` VARCHAR(45) NOT NULL COMMENT '数据源' AFTER `datasource_name`;

/**
  Add the partition field
*/
ALTER TABLE `job_info`
  ADD COLUMN `partition_info` VARCHAR(100) NULL DEFAULT NULL COMMENT '分区信息' AFTER `inc_start_time`;

/**
  New in version 2.1.1 ----------------------------------------------------------------------------------------------
*/

/**
  Most recent execution status
*/
ALTER TABLE `job_info`
  ADD COLUMN `last_handle_code` INT(11) NULL DEFAULT '0' COMMENT '最近一次执行状态' AFTER `partition_info`;

/**
  ZooKeeper address
*/
ALTER TABLE `job_jdbc_datasource`
  ADD COLUMN `zk_adress` VARCHAR(200) NULL DEFAULT NULL AFTER `jdbc_driver_class`;

ALTER TABLE `job_info`
  CHANGE COLUMN `executor_timeout` `executor_timeout` INT(11) NOT NULL DEFAULT '0' COMMENT '任务执行超时时间,单位分钟';

/**
  Username and password become optional
*/
ALTER TABLE `job_jdbc_datasource`
  CHANGE COLUMN `jdbc_username` `jdbc_username` VARCHAR(100) CHARACTER SET 'utf8mb4' NULL DEFAULT NULL COMMENT '用户名',
  CHANGE COLUMN `jdbc_password` `jdbc_password` VARCHAR(200) CHARACTER SET 'utf8mb4' NULL DEFAULT NULL COMMENT '密码';

/**
  Add the MongoDB database-name field
*/
ALTER TABLE `job_jdbc_datasource`
  ADD COLUMN `database_name` VARCHAR(45) NULL DEFAULT NULL COMMENT '数据库名' AFTER `datasource_group`;

/**
  Add executor resource fields
*/
ALTER TABLE `job_registry`
  ADD COLUMN `cpu_usage` DOUBLE NULL AFTER `registry_value`,
  ADD COLUMN `memory_usage` DOUBLE NULL AFTER `cpu_usage`,
  ADD COLUMN `load_average` DOUBLE NULL AFTER `memory_usage`;

-- ----------------------------
-- Table structure for job_permission
-- ----------------------------
DROP TABLE IF EXISTS `job_permission`;
CREATE TABLE `job_permission` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT '权限名',
  `description` varchar(11) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '权限描述',
  `url` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `pid` int(11) NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 3 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

ALTER TABLE `job_info`
  ADD COLUMN `replace_param_type` varchar(255) NULL COMMENT '增量时间格式' AFTER `last_handle_code`;

ALTER TABLE `job_info`
  ADD COLUMN `project_id` int(11) NULL COMMENT '所属项目id' AFTER `job_desc`;

ALTER TABLE `job_info`
  ADD COLUMN `reader_table` VARCHAR(255) NULL COMMENT 'reader表名称' AFTER `replace_param_type`,
  ADD COLUMN `primary_key` VARCHAR(50) NULL COMMENT '增量表主键' AFTER `reader_table`,
  ADD COLUMN `inc_start_id` VARCHAR(20) NULL COMMENT '增量初始id' AFTER `primary_key`,
  ADD COLUMN `increment_type` TINYINT(4) NULL COMMENT '增量类型' AFTER `inc_start_id`,
  ADD COLUMN `datasource_id` BIGINT(11) NULL COMMENT '数据源id' AFTER `increment_type`;

CREATE TABLE `job_project` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'project name',
  `description` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `user_id` int(11) NULL DEFAULT NULL COMMENT 'creator id',
  `flag` tinyint(4) NULL DEFAULT 1 COMMENT '0 not available, 1 available',
  `create_time` datetime(0) NULL DEFAULT CURRENT_TIMESTAMP(0) COMMENT 'create time',
  `update_time` datetime(0) NULL DEFAULT CURRENT_TIMESTAMP(0) COMMENT 'update time',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

ALTER TABLE `job_info`
  CHANGE COLUMN `author` `user_id` INT(11) NOT NULL COMMENT '修改用户';

ALTER TABLE `job_info`
  CHANGE COLUMN `increment_type` `increment_type` TINYINT(4) NULL DEFAULT 0 COMMENT '增量类型';

III. DataX environment configuration and running the datax_web container

3.1 On the host, create the /home/datax/datax-admin/conf directory and copy bootstrap.properties into it. This file configures datax-web's database connection; fill in the IP address and xxxxxx for your environment. xxxxxx is the MySQL password, the same one created in 2.2.

On the datax server:

# Create the conf directory
mkdir -p /home/datax/datax-admin/conf

[root@datax ~]# find / -name bootstrap.properties
/var/lib/docker/overlay2/b15829b7d70f8506421f7431c07bde03a9d163b721fbb57345fde9baee90e213/diff/home/datax/datax-web-2.1.2/modules/datax-admin/conf/bootstrap.properties

[root@datax ~]# cd /home/datax/datax-admin/conf
[root@datax conf]# cp /var/lib/docker/overlay2/b15829b7d70f8506421f7431c07bde03a9d163b721fbb57345fde9baee90e213/diff/home/datax/datax-web-2.1.2/modules/datax-admin/conf/bootstrap.properties ./
[root@datax conf]# ls
bootstrap.properties

# Edit the bootstrap.properties file
vim bootstrap.properties

A reference bootstrap.properties looks like this:

#Database
DB_HOST=172.17.0.29
DB_PORT=3306
DB_USERNAME=datax_web
DB_PASSWORD=123456
DB_DATABASE=datax_web_db

[root@datax conf]# cat bootstrap.properties
#Database
DB_HOST=172.17.0.29
DB_PORT=3306
DB_USERNAME=datax_web
DB_PASSWORD=123456
DB_DATABASE=datax_web_db

3.2 Start the container

docker run -d --name datax_web -p 9527:9527 -v /home/datax/datax-admin/conf/bootstrap.properties:/home/datax/datax-web-2.1.2/modules/datax-admin/conf/bootstrap.properties linshellfeng/datax_web:3.0.1

[root@datax conf]# docker image ls
REPOSITORY               TAG     IMAGE ID       CREATED         SIZE
linshellfeng/datax_web   3.0.1   7d1bcef9d5ee   21 months ago   974MB

[root@datax conf]# docker ps -ls
CONTAINER ID   IMAGE                          COMMAND                  CREATED              STATUS              PORTS                                       NAMES       SIZE
80b8913daf4b   linshellfeng/datax_web:3.0.1   "docker-entrypoint.sh"   About a minute ago   Up About a minute   0.0.0.0:9527->9527/tcp, :::9527->9527/tcp   datax_web   102kB (virtual 974MB)

3.3 Enter the container (optional; used to check that the datax_web container started correctly)

docker exec -it datax_web /bin/bash

3.4 Test whether the datax-web service is up

Inside the container: curl http://127.0.0.1:9527/index.html

The command returns the HTML of the login page:

[root@80b8913daf4b /]# curl http://127.0.0.1:9527/index.html
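If the raw HTML is noisy, checking just the HTTP status code is enough to confirm the service is serving:

# Expect 200 when datax-web is up
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9527/index.html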

3.5 From a browser, open http://<host-ip>:9527/index.html, e.g.:

http://10.87.0.34:9527/index.html

3.6 Log in with admin/123456 and change the password promptly.

Part 4: Fixing DataX sync errors
I. Verify that DataX is installed correctly
1.1 Remove duplicate jars
1.2 Edit core.json: change core.transport.channel.speed.byte from -1 to 2000000 (about 2 MB/s)
1.3 Unlock the locked sync user test

II. Reconfigure the data source and sync task in the datax-web UI
2.1 Re-run the sync task from the datax-web UI
2.2 Fix the Oracle data source
2.3 Rebuild the task to generate a new JSON file

1.1 Remove duplicate jars

# Log in to the datax_web container
docker exec -it datax_web /bin/bash

# Run the self-check job to verify the DataX installation
Error 1:
[root@80b8913daf4b job]# python /home/datax/datax/bin/datax.py /home/datax/datax/job/job.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/datax/datax/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/datax/datax/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/datax/datax/lib/logback-classic-1.0.13.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (com.alibaba.datax.common.statistics.VMInfo).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

## Back up the duplicate jars by moving them out of lib (moving them also removes them from the classpath)
[root@80b8913daf4b ~]# cd /home/datax/datax/lib
[root@80b8913daf4b lib]# mkdir bak
[root@80b8913daf4b lib]# mv slf4j-log4j12-1.7.10.jar bak/
[root@80b8913daf4b lib]# mv slf4j-log4j12-1.6.1.jar bak/

# Verify the jars are gone from lib
[root@80b8913daf4b lib]# ll slf4j-log4j12-1.6.1.jar
ls: cannot access slf4j-log4j12-1.6.1.jar: No such file or directory
[root@80b8913daf4b lib]# ll slf4j-log4j12-1.7.10.jar
ls: cannot access slf4j-log4j12-1.7.10.jar: No such file or directory
[root@80b8913daf4b lib]# ll logback-classic-1.0.13.jar
-rw-r--r-- 1 root root 264600 Apr 29 2022 logback-classic-1.0.13.jar

# Exit the datax_web container
[root@80b8913daf4b lib]# exit

# On the host, stop datax_web
[root@datax ~]# docker stop datax_web

# On the host, start datax_web
[root@datax ~]# docker start datax_web

# On the host, check the datax_web container status
[root@datax ~]# docker ps -ls

[root@datax ~]# docker ps -ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
80b8913daf4b linshellfeng/datax_web:3.0.1 "docker-entrypoint.sh" 42 hours ago Up 17 minutes 0.0.0.0:9527->9527/tcp, :::9527->9527/tcp datax_web 531kB (virtual 975MB)

1.2 Edit core.json: change core.transport.channel.speed.byte from -1 to 2000000 (about 2 MB/s)

# Run the self-check job again
[root@80b8913daf4b job]# python /home/datax/datax/bin/datax.py /home/datax/datax/job/job.json
Error 2:
2024-02-04 13:52:09.900 [job-0] ERROR JobContainer - Exception when job run
com.alibaba.datax.common.exception.DataXException: Code:[Framework-03], Description:[DataX引擎配置错误,该问题通常是由于DataX安装错误引起,请联系您的运维解决 .]. - 在有总bps限速条件下,单个channel的bps值不能为空,也不能为非正数
at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:26) ~[datax-common-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.core.job.JobContainer.adjustChannelNumber(JobContainer.java:430) ~[datax-core-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.core.job.JobContainer.split(JobContainer.java:387) ~[datax-core-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:117) ~[datax-core-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.core.Engine.start(Engine.java:92) [datax-core-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.core.Engine.entry(Engine.java:171) [datax-core-0.0.1-SNAPSHOT.jar:na]
at com.alibaba.datax.core.Engine.main(Engine.java:204) [datax-core-0.0.1-SNAPSHOT.jar:na]
2024-02-04 13:52:09.904 [job-0] INFO StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00%
2024-02-04 13:52:09.905 [job-0] ERROR Engine -

经DataX智能分析,该任务最可能的错误原因是:
com.alibaba.datax.common.exception.DataXException: Code:[Framework-03], Description:[DataX引擎配置错误,该问题通常是由于DataX安装错误引起,请联系您的运维解决 .]. - 在有总bps限速条件下,单个channel的bps值不能为空,也不能为非正数
at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:26)
at com.alibaba.datax.core.job.JobContainer.adjustChannelNumber(JobContainer.java:430)
at com.alibaba.datax.core.job.JobContainer.split(JobContainer.java:387)
at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:117)
at com.alibaba.datax.core.Engine.start(Engine.java:92)
at com.alibaba.datax.core.Engine.entry(Engine.java:171)
at com.alibaba.datax.core.Engine.main(Engine.java:204)

Fix:
Edit core.json and change core.transport.channel.speed.byte from -1 to 2000000 (about 2 MB/s):
cd /home/datax/datax/conf
[root@80b8913daf4b conf]# vi core.json

"core": {
"dataXServer": {
"address": "http://localhost:7001/api",
"timeout": 10000,
"reportDataxLog": false,
"reportPerfLog": false
},
"transport": {
"channel": {
"class": "com.alibaba.datax.core.transport.channel.memory.MemoryChannel",
"speed": {
"byte": 2000000,
"record": -1
},
"flowControlInterval": 20,
"capacity": 512,
"byteCapacity": 67108864
},
"exchanger": {
"class": "com.alibaba.datax.core.plugin.BufferedRecordExchanger",
"bufferSize": 32
}
},
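The same change can also be scripted rather than edited in vi (a sketch; it assumes the file still has the stock layout shown above and that python is available in the container, as it is for datax.py):

# Patch core.transport.channel.speed.byte in place
cd /home/datax/datax/conf
python -c "
import json
cfg = json.load(open('core.json'))
cfg['core']['transport']['channel']['speed']['byte'] = 2000000
json.dump(cfg, open('core.json', 'w'), indent=4)
"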

# Run the self-check job to verify the DataX installation

[root@80b8913daf4b conf]# python /home/datax/datax/bin/datax.py /home/datax/datax/job/job.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

2024-02-04 13:58:11.312 [main] INFO VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2024-02-04 13:58:11.320 [main] INFO Engine - the machine info =>

osInfo: Oracle Corporation 1.8 25.162-b12
jvmInfo: Linux amd64 3.10.0-957.el7.x86_64
cpu num: 8

totalPhysicalMemory: -0.00G
freePhysicalMemory: -0.00G
maxFileDescriptorCount: -1
currentOpenFileDescriptorCount: -1

GC Names [PS MarkSweep, PS Scavenge]

MEMORY_NAME | allocation_size | init_size
PS Eden Space | 256.00MB | 256.00MB
Code Cache | 240.00MB | 2.44MB
Compressed Class Space | 1,024.00MB | 0.00MB
PS Survivor Space | 42.50MB | 42.50MB
PS Old Gen | 683.00MB | 683.00MB
Metaspace | -0.00MB | 0.00MB

2024-02-04 13:58:11.336 [main] INFO Engine -
{
"content":[
{
"reader":{
"parameter":{
"column":[
{
"type":"string",
"value":"DataX"
},
{
"type":"long",
"value":19890604
},
{
"type":"date",
"value":"1989-06-04 00:00:00"
},
{
"type":"bool",
"value":true
},
{
"type":"bytes",
"value":"test"
}
],
"sliceRecordCount":100000
},
"name":"streamreader"
},
"writer":{
"parameter":{
"print":false,
"encoding":"UTF-8"
},
"name":"streamwriter"
}
}
],
"setting":{
"errorLimit":{
"record":0,
"percentage":0.02
},
"speed":{
"byte":10485760
}
}
}

2024-02-04 13:58:11.353 [main] WARN Engine - prioriy set to 0, because NumberFormatException, the value is: null
2024-02-04 13:58:11.355 [main] INFO PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2024-02-04 13:58:11.355 [main] INFO JobContainer - DataX jobContainer starts job.
2024-02-04 13:58:11.356 [main] INFO JobContainer - Set jobId = 0
2024-02-04 13:58:11.370 [job-0] INFO JobContainer - jobContainer starts to do prepare ...
2024-02-04 13:58:11.371 [job-0] INFO JobContainer - DataX Reader.Job [streamreader] do prepare work .
2024-02-04 13:58:11.371 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do prepare work .
2024-02-04 13:58:11.371 [job-0] INFO JobContainer - jobContainer starts to do split ...
2024-02-04 13:58:11.371 [job-0] INFO JobContainer - Job set Max-Byte-Speed to 10485760 bytes.
2024-02-04 13:58:11.372 [job-0] INFO JobContainer - DataX Reader.Job [streamreader] splits to [5] tasks.
2024-02-04 13:58:11.372 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] splits to [5] tasks.
2024-02-04 13:58:11.390 [job-0] INFO JobContainer - jobContainer starts to do schedule ...
2024-02-04 13:58:11.406 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups.
2024-02-04 13:58:11.408 [job-0] INFO JobContainer - Running by standalone Mode.
2024-02-04 13:58:11.415 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [5] channels for [5] tasks.
2024-02-04 13:58:11.418 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to 2000000.
2024-02-04 13:58:11.418 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated.
2024-02-04 13:58:11.427 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] attemptCount[1] is started
2024-02-04 13:58:11.429 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] attemptCount[1] is started
2024-02-04 13:58:11.431 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2024-02-04 13:58:11.433 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] attemptCount[1] is started
2024-02-04 13:58:11.442 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] attemptCount[1] is started
2024-02-04 13:58:11.643 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[212]ms
2024-02-04 13:58:11.643 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] is successed, used[210]ms
2024-02-04 13:58:11.743 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] is successed, used[306]ms
2024-02-04 13:58:11.744 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] is successed, used[317]ms
2024-02-04 13:58:11.844 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] is successed, used[415]ms
2024-02-04 13:58:11.844 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] completed it's tasks.
2024-02-04 13:58:21.423 [job-0] INFO StandAloneJobContainerCommunicator - Total 500000 records, 13000000 bytes | Speed 1.24MB/s, 50000 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.150s | All Task WaitReaderTime 0.751s | Percentage 100.00%
2024-02-04 13:58:21.423 [job-0] INFO AbstractScheduler - Scheduler accomplished all tasks.
2024-02-04 13:58:21.423 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do post work.
2024-02-04 13:58:21.424 [job-0] INFO JobContainer - DataX Reader.Job [streamreader] do post work.
2024-02-04 13:58:21.424 [job-0] INFO JobContainer - DataX jobId [0] completed successfully.
2024-02-04 13:58:21.424 [job-0] INFO HookInvoker - No hook invoked, because base dir not exists or is a file: /home/datax/datax/hook
2024-02-04 13:58:21.425 [job-0] INFO JobContainer -
[total cpu info] =>
averageCpu | maxDeltaCpu | minDeltaCpu
-1.00% | -1.00% | -1.00%

[total gc info] =>
NAME | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
PS MarkSweep | 0 | 0 | 0 | 0.000s | 0.000s | 0.000s
PS Scavenge | 0 | 0 | 0 | 0.000s | 0.000s | 0.000s

2024-02-04 13:58:21.426 [job-0] INFO JobContainer - PerfTrace not enable!
2024-02-04 13:58:21.426 [job-0] INFO StandAloneJobContainerCommunicator - Total 500000 records, 13000000 bytes | Speed 1.24MB/s, 50000 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.150s | All Task WaitReaderTime 0.751s | Percentage 100.00%
2024-02-04 13:58:21.426 [job-0] INFO JobContainer -
任务启动时刻 : 2024-02-04 13:58:11
任务结束时刻 : 2024-02-04 13:58:21
任务总计耗时 : 10s
任务平均流量 : 1.24MB/s
记录写入速度 : 50000rec/s
读出记录总数 : 500000
读写失败总数 : 0

1.3 Unlock the locked sync user test

# Re-run the sync task
Check the log:

2024-02-04 14:18:45 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-04 14:18:45.935 [job-0] ERROR RetryUtil - Exception when calling callable, 即将尝试执行第1次重试.本次重试计划等待[1000]ms,实际等待[1000]ms, 异常Msg:[DataX无法连接对应的数据库,可能原因是:1) 配置的ip/port/database/jdbc错误,无法连接。2) 配置的username/password错误,鉴权失败。请和DBA确认该数据库的连接信息是否正确。]
2024-02-04 14:18:45 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-04 14:18:45.960 [job-0] WARN DBUtil - test connection of [jdbc:oracle:thin:@//172.17.0.32:1521/xe] failed, for Code:[DBUtilErrorCode-10], Description:[连接数据库失败. 请检查您的 账号、密码、数据库名称、IP、Port或者向 DBA 寻求帮助(注意网络环境).]. - 具体错误信息为:java.sql.SQLException: ORA-28000: the account is locked
2024-02-04 14:18:45 [AnalysisStatistics.analysisStatisticsLog-53] .
2024-02-04 14:18:47 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-04 14:18:47.961 [job-0] ERROR RetryUtil - Exception when calling callable, 即将尝试执行第2次重试.本次重试计划等待[2000]ms,实际等待[2000]ms, 异常Msg:[DataX无法连接对应的数据库,可能原因是:1) 配置的ip/port/database/jdbc错误,无法连接。2) 配置的username/password错误,鉴权失败。请和DBA确认该数据库的连接信息是否正确。]
2024-02-04 14:18:47 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-04 14:18:47.988 [job-0] WARN DBUtil - test connection of [jdbc:oracle:thin:@//172.17.0.32:1521/xe] failed, for Code:[DBUtilErrorCode-10], Description:[连接数据库失败. 请检查您的 账号、密码、数据库名称、IP、Port或者向 DBA 寻求帮助(注意网络环境).]. - 具体错误信息为:java.sql.SQLException: ORA-28000: the account is locked

# The test user has been locked out; unlock it
1. Check the account status:
select username,account_status from dba_users;

USERNAME ACCOUNT_STATUS
---------------------------------------- --------------------------------
ORACLE_OCM EXPIRED & LOCKED
LHR OPEN
SYSDG EXPIRED & LOCKED
SPATIAL_CSW_ADMIN_USR EXPIRED & LOCKED
TEST LOCKED(TIMED)

2. Check the failed-login limit:
select resource_name,resource_type,limit from dba_profiles where resource_name='FAILED_LOGIN_ATTEMPTS' and profile='DEFAULT';

3. Unlock the account:
alter user test account unlock;

4.1 Raise the limit to 30 failed attempts:
alter profile default limit failed_login_attempts 30;

4.2 Or make it unlimited (not recommended, for security reasons):
alter profile default limit failed_login_attempts unlimited;
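After unlocking, confirm the account is open before re-running the job (run as the oracle user inside the container):

# Expect ACCOUNT_STATUS = OPEN for TEST
sqlplus -s "/ as sysdba" <<'EOF'
select username, account_status from dba_users where username = 'TEST';
EOF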

II. Reconfigure the data source and sync task in the datax-web UI

2.1 Re-run the sync task from the datax-web UI
Task Management -> Single-table Sync -> View Log
Error:
2024-02-04 14:42:44 [AnalysisStatistics.analysisStatisticsLog-53] Caused by: java.lang.Exception: DataX无法连接对应的数据库,可能原因是:1) 配置的ip/port/database/jdbc错误,无法连接。2) 配置的username/password错误,鉴权失败。请和DBA确认该数据库的连接信息是否正确。
2024-02-04 14:42:44 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.DBUtil$2.call(DBUtil.java:71)
2024-02-04 14:42:44 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.DBUtil$2.call(DBUtil.java:51)
2024-02-04 14:42:44 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.util.RetryUtil$Retry.call(RetryUtil.java:164)
2024-02-04 14:42:44 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.util.RetryUtil$Retry.doRetry(RetryUtil.java:111)
2024-02-04 14:42:44 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.util.RetryUtil.executeWithRetry(RetryUtil.java:30)
2024-02-04 14:42:44 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.DBUtil.chooseJdbcUrl(DBUtil.java:51)
2024-02-04 14:42:44 [AnalysisStatistics.analysisStatisticsLog-53] ... 11 more
2024-02-04 14:42:44 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-04 14:42:44 [JobThread.run-165]
----------- datax-web job execute end(finish) -----------
----------- ReturnT:ReturnT [code=500, msg=command exit value(1) is failed, content=null]
2024-02-04 14:42:44 [TriggerCallbackThread.callbackLog-186]
----------- datax-web job callback finish.

2.2 Fix the Oracle data source
Data Source Management -> Edit -> Test Connection
A success prompt confirms the data source is configured correctly.

2.3 Rebuild the task to generate a new JSON file

Task Management -> Task Build:

{
"job": {
"setting": {
"speed": {
"channel": 3,
"byte": 1048576
},
"errorLimit": {
"record": 0,
"percentage": 0.02
}
},
"content": [
{
"reader": {
"name": "oraclereader",
"parameter": {
"username": "OhlJ4g2KfCRznayQNh0eng==",
"password": "XCYVpFosvZBBWobFzmLWvA==",
"column": [
"\"ID\""
],
"splitPk": "",
"connection": [
{
"table": [
"T1_ORACLE"
],
"jdbcUrl": [
"jdbc:oracle:thin:@//172.17.0.32:1521/xe"
]
}
]
}
},
"writer": {
"name": "mysqlwriter",
"parameter": {
"username": "yRjwDFuoPKlqya9h9H2Amg==",
"password": "XCYVpFosvZBBWobFzmLWvA==",
"column": [
"`id`"
],
"connection": [
{
"table": [
"t1_mysql"
],
"jdbcUrl": "jdbc:mysql://172.17.0.29:3306/test"
}
]
}
}
}
]
}
}

2.4 Task Management -> Task Management -> Actions -> Edit: paste the generated JSON into the text box, then click Confirm.

2.5 Task Management -> Task Management -> Actions -> Execute Once

2.6 Task Management -> Task Management -> Actions -> View Log

2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-04 14:48:50.590 [job-0] INFO JobContainer -
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] [total cpu info] =>
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] averageCpu | maxDeltaCpu | minDeltaCpu
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] -1.00% | -1.00% | -1.00%
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] [total gc info] =>
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] NAME | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] PS MarkSweep | 1 | 1 | 1 | 0.030s | 0.030s | 0.030s
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] PS Scavenge | 1 | 1 | 1 | 0.021s | 0.021s | 0.021s
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-04 14:48:50.590 [job-0] INFO JobContainer - PerfTrace not enable!
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-04 14:48:50.591 [job-0] INFO StandAloneJobContainerCommunicator - Total 1 records, 1 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-04 14:48:50.591 [job-0] INFO JobContainer -
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 任务启动时刻 : 2024-02-04 14:48:39
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 任务结束时刻 : 2024-02-04 14:48:50
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 任务总计耗时 : 11s
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 任务平均流量 : 0B/s
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 记录写入速度 : 0rec/s
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 读出记录总数 : 1
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] 读写失败总数 : 0
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-04 14:48:50 [AnalysisStatistics.analysisStatisticsLog-53] Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
2024-02-04 14:48:50 [JobThread.run-165]
----------- datax-web job execute end(finish) -----------
----------- ReturnT:ReturnT [code=200, msg=LogStatistics{taskStartTime=2024-02-04 14:48:39, taskEndTime=2024-02-04 14:48:50, taskTotalTime=11s, taskAverageFlow=0B/s, taskRecordWritingSpeed=0rec/s, taskRecordReaderNum=1, taskRecordWriteFailNum=0}, content=null]
2024-02-04 14:48:50 [TriggerCallbackThread.callbackLog-186]
----------- datax-web job callback finish.

Part 5: Sync Testing
5.1 Single-table sync test from TDSQL to Oracle 12c

I. Non-distributed instance: incremental sync
II. Distributed (sharded) table sync

I. Non-distributed instance, incremental test:
mysql:

CREATE TABLE `t1_mysql1` (
`id` int(11) NOT NULL,
PRIMARY KEY (`id`)
);

insert into t1_mysql1 values(1),(2),(3);

select * from t1_mysql1;

oracle:

[root@**** ~]# docker exec -it 4044f830d363 bash
[root@4044f830d363 /]#su - oracle
Last login: Sun Feb 4 06:24:12 UTC 2024 on pts/3
[oracle@4044f830d363 ~]$

[oracle@4044f830d363 ~]$ sqlplus /nolog

SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 5 07:28:27 2024

Copyright (c) 1982, 2016, Oracle. All rights reserved.

SQL> conn test/123456;
Connected.
SQL> select * from tab;

TNAME TABTYPE CLUSTERID
---------------------------------------- ------- ----------
EMPLOYEE_DEPARTMENT TABLE
T1_ORACLE TABLE

SQL> select * from T1_ORACLE;

ID
----------
1

Test result:

SQL> select * from T1_ORACLE;

ID
----------
1
2
3
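The jump from one row to three is the incremental run picking up the two new ids. The filter that drives it can be checked by hand with a throw-away job; the sketch below uses mysqlreader's where parameter with illustrative values (the credentials and the cut-off id are placeholders, not what datax-web actually generated):

# Read only rows with id > 1 from the source table and print them
cat > /tmp/inc_check.json <<'EOF'
{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [{
      "reader": {
        "name": "mysqlreader",
        "parameter": {
          "username": "root",
          "password": "123456",
          "column": ["id"],
          "where": "id > 1",
          "connection": [{
            "table": ["t1_mysql1"],
            "jdbcUrl": ["jdbc:mysql://172.17.0.29:3306/test"]
          }]
        }
      },
      "writer": { "name": "streamwriter", "parameter": { "print": true } }
    }]
  }
}
EOF
python /home/datax/datax/bin/datax.py /tmp/inc_check.json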

II. Distributed (sharded) table sync test:

Source MySQL environment initialization:

Instance details:
set1:mysql_10.87.5.49_4012
jdbc:mysql://10.87.5.49:4012/test

set2:mysql_10.87.5.51_4002
jdbc:mysql://10.87.5.51:4002/test

[root@VM-0-12-centos ~]# mysql -udump -p'dump@123' -h10.87.*.* -P15*** -c

create database test;
use test;

MySQL [test]> create table t1_mysql ( a int key , b int, c char(20) ) shardkey=a;
Query OK, 0 rows affected (0.63 sec)

Insert data:
insert into t1_mysql(a,b,c) values(1,1,'name1');
insert into t1_mysql(a,b,c) values(2,2,'name2');
insert into t1_mysql(a,b,c) values(3,3,'name3');
insert into t1_mysql(a,b,c) values(4,4,'name4');
insert into t1_mysql(a,b,c) values(5,5,'name5');
insert into t1_mysql(a,b,c) values(6,6,'name6');
insert into t1_mysql(a,b,c) values(7,7,'name7');
insert into t1_mysql(a,b,c) values(8,8,'name8');
insert into t1_mysql(a,b,c) values(9,9,'name9');
insert into t1_mysql(a,b,c) values(10,10,'name10');
insert into t1_mysql(a,b,c) values(11,11,'name11');
insert into t1_mysql(a,b,c) values(12,12,'name12');
insert into t1_mysql(a,b,c) values(13,13,'name13');
insert into t1_mysql(a,b,c) values(14,14,'name14');
insert into t1_mysql(a,b,c) values(15,15,'name15');
insert into t1_mysql(a,b,c) values(16,16,'name16');
insert into t1_mysql(a,b,c) values(17,17,'name17');
insert into t1_mysql(a,b,c) values(18,18,'name18');
insert into t1_mysql(a,b,c) values(19,19,'name19');
insert into t1_mysql(a,b,c) values(20,20,'name20');
insert into t1_mysql(a,b,c) values(21,21,'name21');
insert into t1_mysql(a,b,c) values(22,22,'name22');
insert into t1_mysql(a,b,c) values(23,23,'name23');
insert into t1_mysql(a,b,c) values(24,24,'name24');
insert into t1_mysql(a,b,c) values(25,25,'name25');
insert into t1_mysql(a,b,c) values(26,26,'name26');
insert into t1_mysql(a,b,c) values(27,27,'name27');
insert into t1_mysql(a,b,c) values(28,28,'name28');
insert into t1_mysql(a,b,c) values(29,29,'name29');
insert into t1_mysql(a,b,c) values(30,30,'name30');
insert into t1_mysql(a,b,c) values(31,31,'name31');
insert into t1_mysql(a,b,c) values(32,32,'name32');
insert into t1_mysql(a,b,c) values(33,33,'name33');
insert into t1_mysql(a,b,c) values(34,34,'name34');
insert into t1_mysql(a,b,c) values(35,35,'name35');
insert into t1_mysql(a,b,c) values(36,36,'name36');
insert into t1_mysql(a,b,c) values(37,37,'name37');
insert into t1_mysql(a,b,c) values(38,38,'name38');
insert into t1_mysql(a,b,c) values(39,39,'name39');
insert into t1_mysql(a,b,c) values(40,40,'name40');
insert into t1_mysql(a,b,c) values(41,41,'name41');
insert into t1_mysql(a,b,c) values(42,42,'name42');
insert into t1_mysql(a,b,c) values(43,43,'name43');
insert into t1_mysql(a,b,c) values(44,44,'name44');
insert into t1_mysql(a,b,c) values(45,45,'name45');
insert into t1_mysql(a,b,c) values(46,46,'name46');
insert into t1_mysql(a,b,c) values(47,47,'name47');
insert into t1_mysql(a,b,c) values(48,48,'name48');
insert into t1_mysql(a,b,c) values(49,49,'name49');
insert into t1_mysql(a,b,c) values(50,50,'name50');
insert into t1_mysql(a,b,c) values(51,51,'name51');
insert into t1_mysql(a,b,c) values(52,52,'name52');
insert into t1_mysql(a,b,c) values(53,53,'name53');
insert into t1_mysql(a,b,c) values(54,54,'name54');
insert into t1_mysql(a,b,c) values(55,55,'name55');
insert into t1_mysql(a,b,c) values(56,56,'name56');
insert into t1_mysql(a,b,c) values(57,57,'name57');
insert into t1_mysql(a,b,c) values(58,58,'name58');
insert into t1_mysql(a,b,c) values(59,59,'name59');
insert into t1_mysql(a,b,c) values(60,60,'name60');
insert into t1_mysql(a,b,c) values(61,61,'name61');
insert into t1_mysql(a,b,c) values(62,62,'name62');
insert into t1_mysql(a,b,c) values(63,63,'name63');
insert into t1_mysql(a,b,c) values(64,64,'name64');
insert into t1_mysql(a,b,c) values(65,65,'name65');
insert into t1_mysql(a,b,c) values(66,66,'name66');
insert into t1_mysql(a,b,c) values(67,67,'name67');
insert into t1_mysql(a,b,c) values(68,68,'name68');
insert into t1_mysql(a,b,c) values(69,69,'name69');
insert into t1_mysql(a,b,c) values(70,70,'name70');
insert into t1_mysql(a,b,c) values(71,71,'name71');
insert into t1_mysql(a,b,c) values(72,72,'name72');
insert into t1_mysql(a,b,c) values(73,73,'name73');
insert into t1_mysql(a,b,c) values(74,74,'name74');
insert into t1_mysql(a,b,c) values(75,75,'name75');
insert into t1_mysql(a,b,c) values(76,76,'name76');
insert into t1_mysql(a,b,c) values(77,77,'name77');
insert into t1_mysql(a,b,c) values(78,78,'name78');
insert into t1_mysql(a,b,c) values(79,79,'name79');
insert into t1_mysql(a,b,c) values(80,80,'name80');
insert into t1_mysql(a,b,c) values(81,81,'name81');
insert into t1_mysql(a,b,c) values(82,82,'name82');
insert into t1_mysql(a,b,c) values(83,83,'name83');
insert into t1_mysql(a,b,c) values(84,84,'name84');
insert into t1_mysql(a,b,c) values(85,85,'name85');
insert into t1_mysql(a,b,c) values(86,86,'name86');
insert into t1_mysql(a,b,c) values(87,87,'name87');
insert into t1_mysql(a,b,c) values(88,88,'name88');
insert into t1_mysql(a,b,c) values(89,89,'name89');
insert into t1_mysql(a,b,c) values(90,90,'name90');
insert into t1_mysql(a,b,c) values(91,91,'name91');
insert into t1_mysql(a,b,c) values(92,92,'name92');
insert into t1_mysql(a,b,c) values(93,93,'name93');
insert into t1_mysql(a,b,c) values(94,94,'name94');
insert into t1_mysql(a,b,c) values(95,95,'name95');
insert into t1_mysql(a,b,c) values(96,96,'name96');
insert into t1_mysql(a,b,c) values(97,97,'name97');
insert into t1_mysql(a,b,c) values(98,98,'name98');
insert into t1_mysql(a,b,c) values(99,99,'name99');
insert into t1_mysql(a,b,c) values(100,100,'name100');
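The 100 statements above could equally be generated from the shell. A minimal sketch, assuming the same proxy connection as the session above (the masked -h/-P values stand in for the real proxy address):

# emit INSERT statements for a = 1..100 and pipe them into the proxy
for i in $(seq 1 100); do
  echo "insert into t1_mysql(a,b,c) values($i,$i,'name$i');"
done | mysql -udump -p'dump@123' -h10.87.*.* -P15*** -c test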

mysql> select count(*) from t1_mysql;
+----------+
| count(*) |
+----------+
| 100 |
+----------+
1 row in set (0.00 sec)

mysql> /*proxy*/show status;
+-----------------------------+---------------------------------------------------------------------------------------------+
| status_name | value |
+-----------------------------+---------------------------------------------------------------------------------------------+
| cluster | group_1618898494_22 |
| set_1618898567_1:ip | 10.87.5.49:4012;s1@10.87.5.47:4012@1@IDC_WH_HBYB_01@0,s2@10.87.5.48:4012@1@IDC_WH_HBYB_02@0 |
| set_1618898567_1:hash_range | 0---31 |
| set_1618898758_3:ip | 10.87.5.51:4002;s1@10.87.5.50:4002@1@IDC_WH_HBYB_04@0,s2@10.87.5.52:4002@1@IDC_WH_HBYB_06@0 |
| set_1618898758_3:hash_range | 32---63 |
| set | set_1618898567_1,set_1618898758_3 |
+-----------------------------+---------------------------------------------------------------------------------------------+
6 rows in set (0.00 sec)

mysql> /*sets:allsets*/ select count(*) from t1_mysql;
+----------+------------------+
| count(*) | info |
+----------+------------------+
| 48 | set_1618898567_1 |
| 52 | set_1618898758_3 |
+----------+------------------+
2 rows in set (0.00 sec)

Target Oracle 12c environment initialization:

Create the target table:

create table t2_oracle ( a int primary key , b int, c varchar(20) );

Create the data source mysql_10.87.5.49_4012.

Build the task.

Task management - run the sync task. (A sketch of the underlying DataX job JSON follows.)
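datax-web assembles the task through the UI, but under the hood each task is a plain DataX job JSON. A minimal sketch of what the mysqlreader-to-oraclewriter job for this data source could look like is shown below; the Oracle credentials are not listed in this document, so <user> and <password> are placeholders, while the jdbcUrl, table, and column names follow the DDL and the logs. One such job is built per set, differing only in the reader's jdbcUrl.

{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [{
      "reader": {
        "name": "mysqlreader",
        "parameter": {
          "username": "dump",
          "password": "dump@123",
          "column": ["a", "b", "c"],
          "connection": [{
            "table": ["t1_mysql"],
            "jdbcUrl": ["jdbc:mysql://10.87.5.49:4012/test"]
          }]
        }
      },
      "writer": {
        "name": "oraclewriter",
        "parameter": {
          "username": "<user>",
          "password": "<password>",
          "column": ["A", "B", "C"],
          "connection": [{
            "table": ["T2_ORACLE"],
            "jdbcUrl": "jdbc:oracle:thin:@//172.17.0.32:1521/xe"
          }]
        }
      }
    }]
  }
}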

Check the log: the sync completed successfully.
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.432 [job-0] INFO OriginalConfPretreatmentUtil - table:[T2_ORACLE] all columns:[
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] A,B,C
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] ].
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.464 [job-0] INFO OriginalConfPretreatmentUtil - Write data [
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] INSERT INTO %s ("A","B","C") VALUES(?,?,?)
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] ], which jdbcUrl like:[jdbc:oracle:thin:@//172.17.0.32:1521/xe]
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.465 [job-0] INFO JobContainer - jobContainer starts to do prepare ...
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.465 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do prepare work .
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.465 [job-0] INFO JobContainer - DataX Writer.Job [oraclewriter] do prepare work .
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.465 [job-0] INFO JobContainer - jobContainer starts to do split ...
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.466 [job-0] INFO JobContainer - Job set Max-Byte-Speed to 1048576 bytes.
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.468 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] splits to [1] tasks.
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.468 [job-0] INFO JobContainer - DataX Writer.Job [oraclewriter] splits to [1] tasks.
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.483 [job-0] INFO JobContainer - jobContainer starts to do schedule ...
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.485 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups.
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.487 [job-0] INFO JobContainer - Running by standalone Mode.
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.491 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks.
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.495 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to 2000000.
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.495 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated.
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.502 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.505 [0-0-0-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select `a`,`b`,`c` from t1_mysql
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] ] jdbcUrl:[jdbc:mysql://10.87.5.51:4002/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.559 [0-0-0-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select `a`,`b`,`c` from t1_mysql
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] ] jdbcUrl:[jdbc:mysql://10.87.5.51:4002/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.703 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[202]ms
2024-02-05 17:14:44 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:44.703 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] completed it's tasks.
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.504 [job-0] INFO StandAloneJobContainerCommunicator - Total 52 records, 505 bytes | Speed 50B/s, 5 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.504 [job-0] INFO AbstractScheduler - Scheduler accomplished all tasks.
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.505 [job-0] INFO JobContainer - DataX Writer.Job [oraclewriter] do post work.
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.505 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do post work.
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.505 [job-0] INFO JobContainer - DataX jobId [0] completed successfully.
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.506 [job-0] INFO HookInvoker - No hook invoked, because base dir not exists or is a file: /home/datax/datax/hook
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.507 [job-0] INFO JobContainer -
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] [total cpu info] =>
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] averageCpu | maxDeltaCpu | minDeltaCpu
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] -1.00% | -1.00% | -1.00%
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] [total gc info] =>
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] NAME | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] PS MarkSweep | 1 | 1 | 1 | 0.034s | 0.034s | 0.034s
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] PS Scavenge | 1 | 1 | 1 | 0.024s | 0.024s | 0.024s
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.507 [job-0] INFO JobContainer - PerfTrace not enable!
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.507 [job-0] INFO StandAloneJobContainerCommunicator - Total 52 records, 505 bytes | Speed 50B/s, 5 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:14:54.508 [job-0] INFO JobContainer -
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 任务启动时刻 : 2024-02-05 17:14:43
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 任务结束时刻 : 2024-02-05 17:14:54
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 任务总计耗时 : 11s
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 任务平均流量 : 50B/s
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 记录写入速度 : 5rec/s
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 读出记录总数 : 52
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] 读写失败总数 : 0
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-05 17:14:54 [AnalysisStatistics.analysisStatisticsLog-53] Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
2024-02-05 17:14:54 [JobThread.run-165]
----------- datax-web job execute end(finish) -----------
----------- ReturnT:ReturnT [code=200, msg=LogStatistics{taskStartTime=2024-02-05 17:14:43, taskEndTime=2024-02-05 17:14:54, taskTotalTime=11s, taskAverageFlow=50B/s, taskRecordWritingSpeed=5rec/s, taskRecordReaderNum=52, taskRecordWriteFailNum=0}, content=null]
2024-02-05 17:14:54 [TriggerCallbackThread.callbackLog-186]
----------- datax-web job callback finish.

Create the data source mysql_10.87.5.51_4002.

Build the task (identical to the previous job except for the reader's jdbcUrl).

Task management - run the sync task.

Check the log: the task completed successfully.

2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.780 [job-0] INFO OriginalConfPretreatmentUtil - Write data [
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] INSERT INTO %s ("A","B","C") VALUES(?,?,?)
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] ], which jdbcUrl like:[jdbc:oracle:thin:@//172.17.0.32:1521/xe]
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.781 [job-0] INFO JobContainer - jobContainer starts to do prepare ...
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.781 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do prepare work .
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.781 [job-0] INFO JobContainer - DataX Writer.Job [oraclewriter] do prepare work .
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.782 [job-0] INFO JobContainer - jobContainer starts to do split ...
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.782 [job-0] INFO JobContainer - Job set Max-Byte-Speed to 1048576 bytes.
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.784 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] splits to [1] tasks.
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.785 [job-0] INFO JobContainer - DataX Writer.Job [oraclewriter] splits to [1] tasks.
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.799 [job-0] INFO JobContainer - jobContainer starts to do schedule ...
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.802 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups.
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.804 [job-0] INFO JobContainer - Running by standalone Mode.
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.808 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks.
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.812 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to 2000000.
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.812 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated.
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.821 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.824 [0-0-0-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select `a`,`b`,`c` from t1_mysql
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] ] jdbcUrl:[jdbc:mysql://10.87.5.49:4012/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:29.882 [0-0-0-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select `a`,`b`,`c` from t1_mysql
2024-02-05 17:18:29 [AnalysisStatistics.analysisStatisticsLog-53] ] jdbcUrl:[jdbc:mysql://10.87.5.49:4012/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2024-02-05 17:18:30 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:30.021 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[201]ms
2024-02-05 17:18:30 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:30.022 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] completed it's tasks.
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.820 [job-0] INFO StandAloneJobContainerCommunicator - Total 48 records, 471 bytes | Speed 47B/s, 4 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.820 [job-0] INFO AbstractScheduler - Scheduler accomplished all tasks.
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.820 [job-0] INFO JobContainer - DataX Writer.Job [oraclewriter] do post work.
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.820 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do post work.
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.820 [job-0] INFO JobContainer - DataX jobId [0] completed successfully.
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.821 [job-0] INFO HookInvoker - No hook invoked, because base dir not exists or is a file: /home/datax/datax/hook
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.822 [job-0] INFO JobContainer -
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] [total cpu info] =>
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] averageCpu | maxDeltaCpu | minDeltaCpu
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] -1.00% | -1.00% | -1.00%
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] [total gc info] =>
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] NAME | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] PS MarkSweep | 1 | 1 | 1 | 0.038s | 0.038s | 0.038s
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] PS Scavenge | 1 | 1 | 1 | 0.024s | 0.024s | 0.024s
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.822 [job-0] INFO JobContainer - PerfTrace not enable!
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.823 [job-0] INFO StandAloneJobContainerCommunicator - Total 48 records, 471 bytes | Speed 47B/s, 4 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 2024-02-05 17:18:39.823 [job-0] INFO JobContainer -
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 任务启动时刻 : 2024-02-05 17:18:28
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 任务结束时刻 : 2024-02-05 17:18:39
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 任务总计耗时 : 11s
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 任务平均流量 : 47B/s
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 记录写入速度 : 4rec/s
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 读出记录总数 : 48
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] 读写失败总数 : 0
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53]
2024-02-05 17:18:39 [AnalysisStatistics.analysisStatisticsLog-53] Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
2024-02-05 17:18:39 [JobThread.run-165]
----------- datax-web job execute end(finish) -----------
----------- ReturnT:ReturnT [code=200, msg=LogStatistics{taskStartTime=2024-02-05 17:18:28, taskEndTime=2024-02-05 17:18:39, taskTotalTime=11s, taskAverageFlow=47B/s, taskRecordWritingSpeed=4rec/s, taskRecordReaderNum=48, taskRecordWriteFailNum=0}, content=null]
2024-02-05 17:18:39 [TriggerCallbackThread.callbackLog-186]
----------- datax-web job callback finish.

Target Oracle: the query returns 100 rows (52 + 48 from the two sets):
SQL> select count(*) from t2_oracle;

COUNT(*)
----------
100

5.2 TDSQL to MySQL 5.7 single-table (non-sharded) sync

1). Create the individual tasks
Source: generate the test data with sysbench
[root@tdsql lua]# cd /root/sysbench/sysbench-master/src/lua
[root@tdsql lua]# ../sysbench oltp_common_noshard.lua --threads=2 --mysql-user=dump --mysql-password="dump@123" --mysql-host=10.87.5.54 --mysql-port=15002 --mysql-db=sbtests --report-interval=1 --table-size=100000 --tables=10 prepare
sysbench 1.1.0 (using bundled LuaJIT 2.1.0-beta3)

Initializing worker threads...

Creating table 'sbtest2'...
Creating table 'sbtest1'...
Inserting 100000 records into 'sbtest2'
Inserting 100000 records into 'sbtest1'
Creating a secondary index on 'sbtest2'...
Creating a secondary index on 'sbtest1'...
Creating table 'sbtest4'...
Creating table 'sbtest3'...
Inserting 100000 records into 'sbtest4'
Inserting 100000 records into 'sbtest3'
Creating a secondary index on 'sbtest4'...
Creating a secondary index on 'sbtest3'...
Creating table 'sbtest6'...
Creating table 'sbtest5'...
Inserting 100000 records into 'sbtest6'
Inserting 100000 records into 'sbtest5'
Creating a secondary index on 'sbtest6'...
Creating a secondary index on 'sbtest5'...
Creating table 'sbtest8'...
Creating table 'sbtest7'...
Inserting 100000 records into 'sbtest8'
Inserting 100000 records into 'sbtest7'
Creating a secondary index on 'sbtest8'...
Creating a secondary index on 'sbtest7'...
Creating table 'sbtest10'...
Creating table 'sbtest9'...
Inserting 100000 records into 'sbtest10'
Inserting 100000 records into 'sbtest9'
Creating a secondary index on 'sbtest10'...
Creating a secondary index on 'sbtest9'...
[root@tdsql lua]# mysql -u'dump' -p'dump@123' -h10.87.5.54 -P15002 -c
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 8683521
Server version: 5.7.30-33-V2.0R631D001-v17-20200901-1756-log Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sbtests |
| sys |
| sysdb |
| test |
| xa |
+--------------------+
8 rows in set (0.00 sec)

MySQL [(none)]>
MySQL [(none)]> use sbtests;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [sbtests]> show tables;
+-------------------+
| Tables_in_sbtests |
+-------------------+
| sbtest1 |
| sbtest10 |
| sbtest2 |
| sbtest3 |
| sbtest4 |
| sbtest5 |
| sbtest6 |
| sbtest7 |
| sbtest8 |
| sbtest9 |
+-------------------+
10 rows in set (0.00 sec)

MySQL [sbtests]> show create table sbtest1 \G
*************************** 1. row ***************************
Table: sbtest1
Create Table: CREATE TABLE `sbtest1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`k` int(11) NOT NULL DEFAULT '0',
`c` char(120) COLLATE utf8_bin NOT NULL DEFAULT '',
`pad` char(60) COLLATE utf8_bin NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=100001 DEFAULT CHARSET=utf8 COLLATE=utf8_bin
1 row in set (0.00 sec)

MySQL [sbtests]> select count(*) from sbtest1;
+----------+
| count(*) |
+----------+
| 100000 |
+----------+
1 row in set (0.02 sec)

[root@tdsql lua]# mysql -u'dump' -p'dump@123' -h10.87.5.54 -P15002 -c
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 8519682
Server version: 5.7.30-33-V2.0R631D001-v17-20200901-1756-log Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> /*proxy*/show status;
+-----------------------------+---------------------------------------------------------------------------------------------+
| status_name | value |
+-----------------------------+---------------------------------------------------------------------------------------------+
| cluster | group_1618898494_22 |
| set_1618898567_1:ip | 10.87.5.49:4012;s1@10.87.5.47:4012@1@IDC_WH_HBYB_01@0,s2@10.87.5.48:4012@1@IDC_WH_HBYB_02@0 |
| set_1618898567_1:hash_range | 0---31 |
| set_1618898758_3:ip | 10.87.5.51:4002;s1@10.87.5.50:4002@1@IDC_WH_HBYB_04@0,s2@10.87.5.52:4002@1@IDC_WH_HBYB_06@0 |
| set_1618898758_3:hash_range | 32---63 |
| set | set_1618898567_1,set_1618898758_3 |
+-----------------------------+---------------------------------------------------------------------------------------------+
6 rows in set (0.00 sec)

MySQL [(none)]> /*sets:set_1618898567_1*/ select count(*) from sbtests.sbtest4;
+----------+------------------+
| count(*) | info |
+----------+------------------+
| 100000 | set_1618898567_1 |
+----------+------------------+
1 row in set (0.02 sec)

Create the table structures on the target side.
1). Export the table-definition SQL with the MySQL Workbench tool; the steps are:
Data Export - select the schema and its table objects - choose "Dump Structure Only" - set the export directory - check "Include Create Schema" - Export
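A hypothetical command-line equivalent of the Workbench export uses mysqldump; --no-data dumps only the table definitions and --databases adds the CREATE DATABASE statement (sbtests_schema.sql is an arbitrary output name):

# dump the schema only, no rows
mysqldump -udump -p'dump@123' -h10.87.5.54 -P15002 --no-data --databases sbtests > sbtests_schema.sql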

2). Create the table structures on the target (the tail of replaying the exported dump is shown):
mysql> SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
Query OK, 0 rows affected (0.00 sec)

mysql> /*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
Query OK, 0 rows affected (0.00 sec)

mysql>
mysql> /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
Query OK, 0 rows affected (0.00 sec)

mysql> /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
Query OK, 0 rows affected (0.00 sec)

mysql> /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
Query OK, 0 rows affected (0.00 sec)

mysql> /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
Query OK, 0 rows affected (0.00 sec)

mysql> /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
Query OK, 0 rows affected (0.00 sec)

mysql> /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
Query OK, 0 rows affected (0.00 sec)

mysql> -- Dump completed on 2024-03-04 11:55:35
mysql> exit
Bye
root@84c6627f5440:/# mysql -u root -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7584
Server version: 5.7.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| datax_web_db |
| mysql |
| performance_schema |
| sbtests |
| sys |
| test |
+--------------------+
7 rows in set (0.00 sec)

mysql> exit
Bye
root@84c6627f5440:/# exit
exit

3. Create batch tasks in DataX-Web
Project management - add a project
Data source management - add the data sources
Task management - DataX task templates - add a template
Batch task build (see the job sketch after this list)
Task management - query - run the tasks
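Batch building generates one DataX job per table. As a rough, hypothetical sketch (the target MySQL host is never shown in this document, so <target-host> stands in for it; the other values follow the transcripts above), the generated job for sbtest1 could look like:

{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [{
      "reader": {
        "name": "mysqlreader",
        "parameter": {
          "username": "dump",
          "password": "dump@123",
          "column": ["id", "k", "c", "pad"],
          "connection": [{
            "table": ["sbtest1"],
            "jdbcUrl": ["jdbc:mysql://10.87.5.54:15002/sbtests"]
          }]
        }
      },
      "writer": {
        "name": "mysqlwriter",
        "parameter": {
          "username": "root",
          "password": "123456",
          "column": ["id", "k", "c", "pad"],
          "connection": [{
            "table": ["sbtest1"],
            "jdbcUrl": "jdbc:mysql://<target-host>:3306/sbtests"
          }]
        }
      }
    }]
  }
}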

4. Log in to the target database and verify the data

[root@mysql ~]# docker exec -it mysql bash
root@84c6627f5440:/# mysql -u root -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7601
Server version: 5.7.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| datax_web_db |
| mysql |
| performance_schema |
| sbtests |
| sys |
| test |
+--------------------+
7 rows in set (0.01 sec)

mysql> use sbtests;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+-------------------+
| Tables_in_sbtests |
+-------------------+
| sbtest1 |
| sbtest10 |
| sbtest2 |
| sbtest3 |
| sbtest4 |
| sbtest5 |
| sbtest6 |
| sbtest7 |
| sbtest8 |
| sbtest9 |
+-------------------+
10 rows in set (0.00 sec)

mysql> select count(*) from sbtest1;
+----------+
| count(*) |
+----------+
| 100000 |
+----------+
1 row in set (0.01 sec)

mysql> select count(*) from sbtest2;
+----------+
| count(*) |
+----------+
| 100000 |
+----------+
1 row in set (0.02 sec)

mysql> select count(*) from sbtest3;
+----------+
| count(*) |
+----------+
| 100000 |
+----------+
1 row in set (0.01 sec)
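The spot checks above cover sbtest1 through sbtest3. A minimal shell sketch that checks all ten tables in one pass, assuming the same container and root credentials as above:

# print the row count for sbtest1..sbtest10 (-N suppresses column headers)
for t in $(seq 1 10); do
  mysql -uroot -p123456 -N -e "select 'sbtest$t', count(*) from sbtests.sbtest$t"
done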

The tests above show that DataX supports both full and incremental data synchronization, against distributed (sharded) as well as non-distributed databases; the results are offered for reference only.
