About the author: Zhang Ji, database architect.
This exercise covers deploying a three-zone OceanBase cluster: from a central control machine, log in to the OceanBase nodes remotely, then deploy and start the observer and obproxy processes.
Since I happen to have 7 physical machines on hand, this exercise uses OBD to deploy an OceanBase cluster directly in a 2:2:2 architecture.
Machine information is as follows:
Machine roles are divided as follows:
Environment Setup
Kernel Parameter Changes
Edit the configuration file: vi /etc/sysctl.conf
Add the following content:
net.core.somaxconn = 2048
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.ip_local_port_range = 3500 65535
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_slow_start_after_idle = 0
vm.swappiness = 0
vm.min_free_kbytes = 2097152
vm.max_map_count = 655360
fs.aio-max-nr = 1048576
Apply the configuration:
sysctl -p
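A flattened sysctl file can easily pick up duplicate keys when edited by hand, and sysctl -p silently applies the last occurrence. Below is a minimal sketch of a duplicate-key check; the function name `find_dup_keys` and the demo file path /tmp/sysctl_demo.conf are my own, not from the original post:

```shell
#!/bin/sh
# Report parameter names that appear more than once in a sysctl-style file.
find_dup_keys() {
    # strip comments and blank lines, take the key left of '=', count repeats
    sed -e 's/#.*//' -e '/^[[:space:]]*$/d' "$1" \
        | awk -F'=' '{gsub(/[[:space:]]/, "", $1); seen[$1]++}
                     END {for (k in seen) if (seen[k] > 1) print k}'
}

# demo on an inline fragment containing an intentional duplicate
cat > /tmp/sysctl_demo.conf <<'EOF'
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_max_syn_backlog = 16384
EOF
find_dup_keys /tmp/sysctl_demo.conf   # prints the duplicated key name
```

Run it against /etc/sysctl.conf before `sysctl -p` to catch copy-paste slips.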
Session Limit Changes
The limits that affect OceanBase Database processes include the maximum thread stack size (Stack), the maximum number of open file handles (Open Files), and the core file size (Core File Size).
Edit the configuration file: vi /etc/security/limits.conf
These changes are made at the global level. Note that after the change, sessions that are already logged in must log out and back in for it to take effect.
Set the session-level maximum stack size to unlimited, the maximum number of file handles to 655360, and the core file size to unlimited. Update /etc/security/limits.conf if any existing value is lower than these settings.
* soft nofile 655360
* hard nofile 655360
* soft nproc 655360
* hard nproc 655360
* soft core unlimited
* hard core unlimited
* soft stack unlimited
* hard stack unlimited
To verify the configuration: log out of the current session, log back in, and run the following command to check whether the settings took effect:
ulimit -a
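Instead of eyeballing the `ulimit -a` output, the checks can be scripted. This is a small sketch; the function name `check_limit` and its output format are my own:

```shell
#!/bin/sh
# Compare a current limit value against a required numeric minimum.
# A current value of "unlimited" always passes.
check_limit() {  # usage: check_limit <name> <current> <required>
    name=$1; cur=$2; req=$3
    if [ "$cur" = "unlimited" ] || [ "$cur" -ge "$req" ] 2>/dev/null; then
        echo "$name OK ($cur)"
    else
        echo "$name TOO LOW ($cur < $req)"
    fi
}

# check the live session (results depend on the machine)
check_limit nofile "$(ulimit -n)" 655360
check_limit stack  "$(ulimit -s)" 655360
```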
Disable SELinux
Change the SELINUX option in the SELinux configuration file. Note: you must use one of the three values shown in the comments. If you mistype the value, the operating system will fail to boot after a restart, and you will have to enter single-user mode to fix it.
Edit the configuration file: vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
The configuration file change only takes effect after a host reboot; run the following command as well to make it take effect immediately:
setenforce 0
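Because a typo in the SELINUX= value can leave the machine unbootable, it is worth validating the file before rebooting. A minimal sketch; the function names and the demo file path /tmp/selinux_demo are my own:

```shell
#!/bin/sh
# Validate the SELINUX= value in an selinux config file.
selinux_value() {
    # last uncommented SELINUX= assignment wins
    sed -n 's/^SELINUX=\([a-z]*\)$/\1/p' "$1" | tail -n 1
}
check_selinux_config() {
    case "$(selinux_value "$1")" in
        enforcing|permissive|disabled) echo "valid" ;;
        *) echo "INVALID" ;;
    esac
}

# demo on an inline fragment
cat > /tmp/selinux_demo <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=disabled
EOF
check_selinux_config /tmp/selinux_demo   # prints "valid"
```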
Disable the Firewall
Check the firewall status:
systemctl status firewalld
If it is inactive, nothing needs to be done. If it is active, disable it permanently:
systemctl disable firewalld
systemctl stop firewalld
systemctl status firewalld
Check whether ntpd is being used for time synchronization.
systemctl status ntpd
Unit ntpd.service could not be found.
If you see the message above, ntpd is not in use. If an ntpd service does exist, uninstall the ntpd package and check the chrony service instead.
Check the chrony Service
Check time synchronization activity:
chronyc activity
List the time servers:
chronyc sources
Check the synchronization status:
chronyc sources -v
Check clock tracking against the time server:
chronyc tracking
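The "System time" line in `chronyc tracking` output reports the current offset from NTP time, which can be checked automatically. Below is a sketch that parses captured output; the function name `offset_ok`, the threshold, and the embedded sample output are my own illustrations:

```shell
#!/bin/sh
# Extract the "System time" offset (seconds) from `chronyc tracking` output
# stored in a file, and flag drift beyond a threshold.
offset_ok() {  # usage: offset_ok <tracking-output-file> <max-offset-seconds>
    awk -v max="$2" '/^System time/ {
        off = $4 + 0                     # e.g. "0.000020390"
        if (off <= max + 0) print "in sync"; else print "drifting"
    }' "$1"
}

# demo on embedded sample output, so no live chronyd is needed
cat > /tmp/tracking_demo <<'EOF'
Reference ID    : 0A900001 (10.144.0.1)
System time     : 0.000020390 seconds fast of NTP time
EOF
offset_ok /tmp/tracking_demo 0.1   # prints "in sync"
```

On a live node you would feed it `chronyc tracking` output instead of the sample file.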
Install the chrony Service
If it is not installed, you can install it via YUM. You can also download the corresponding RPM package and install it manually.
My servers already have it installed; the following installation method is taken from the community tutorial and is for reference only:
yum -y install chrony
chrony Configuration Notes
The chrony service daemon is chronyd, and chronyc is the command-line tool used to monitor chronyd's performance and configuration parameters. chrony's main configuration file is /etc/chrony.conf. Configure it as follows:
vi /etc/chrony.conf
# server is followed by the address of a time synchronization server. Here the
# public servers of the pool.ntp.org project are used; configured via server
# lines, you can in theory add as many time servers as you like. Alternatively,
# use Alibaba Cloud's NTP servers.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server ntp.cloud.aliyuncs.com minpoll 4 maxpoll 10 iburst
server ntp.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp1.aliyun.com minpoll 4 maxpoll 10 iburst
server ntp1.cloud.aliyuncs.com minpoll 4 maxpoll 10 iburst
server ntp10.cloud.aliyuncs.com minpoll 4 maxpoll 10 iburst

# In a test environment with no time synchronization server, pick one machine
# and configure it as the time server. If the chosen machine is this one,
# uncomment the server line below.
#server 127.127.1.0

# Calculate the rate at which the server gains or loses time and record it in
# a file, so the best time compensation can be applied after a system restart.
driftfile /var/lib/chrony/drift

# chronyd slows down or speeds up time adjustment as needed. In some cases the
# system clock may drift so fast that adjustment would take too long. This
# directive forces chronyd to step the system clock when the offset exceeds a
# threshold, but only during the first N clock updates after chronyd starts
# (a negative value disables the limit).
makestep 1.0 3

# Enable a kernel mode in which the system time is copied to the real-time
# clock (RTC) every 11 minutes.
rtcsync

# Enable hardware timestamping on all interfaces that support it, via the
# hwtimestamp directive.
#hwtimestamp eth0
#hwtimestamp eth1
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust the
# system clock.
#minsources 2

# Allow or deny NTP connections from a host, subnet, or network to this
# machine acting as a clock server.
#allow 192.168.0.0/16
#deny 192.168/16

# Serve time even if not synchronized to a time source.
local stratum 10

# Specify the file containing the NTP authentication keys.
#keyfile /etc/chrony.keys

# Specify the directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
The simplest possible configuration file is:
server 127.127.1.0
allow 10.144.0.0/16
local stratum 10
Configure the Installation User
Add a regular user:
groupadd -g 2000 admin
useradd -u 2000 -g 2000 admin
passwd admin
Enter a password when prompted.
Grant the admin user sudo privileges:
[root@localhost ~]# visudo
After the following content:
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
add this line:
admin ALL=(ALL) ALL
Initialize the Directories
Check the disk partition layout:
[root@localhost ~]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   1.8T  0 disk
sdb               8:16   0   1.8T  0 disk
sdc               8:32   0 446.1G  0 disk
├─sdc1            8:33   0     1G  0 part /boot/efi
├─sdc2            8:34   0     1G  0 part /boot
└─sdc3            8:35   0 444.1G  0 part
  ├─vg00-lvroot 253:0    0   100G  0 lvm  /
  └─vg00-lvswap 253:1    0    16G  0 lvm  [SWAP]
loop0             7:0    0   387M  0 loop /usr/hxm/loadrunner
Check the disk partitions and mounts:
[root@localhost ~]# lsblk -f
NAME            FSTYPE      LABEL       UUID                                   MOUNTPOINT
sda             LVM2_member             4bEcx7-m6bx-mL41-vKEO-DvOC-zEgE-H32Add
sdb
sdc
├─sdc1          vfat                    0320-6ED4                              /boot/efi
├─sdc2          xfs                     93739afe-f19e-436e-aefb-be9522931fdf   /boot
└─sdc3          LVM2_member             22LfZ1-VwqX-MMKK-97L0-1l0T-iDlG-YPXHZ5
  ├─vg00-lvroot xfs         lvroot      c2918725-45ec-4415-a7c1-d10598630178   /
  └─vg00-lvswap swap        lvswap      b9ce8c9c-03b2-4ceb-9044-945493566db5   [SWAP]
loop0           udf         T7330-15010 2010-08-25-10-57-00-0                  /usr/hxm/loadrunner
Check disk usage:
[root@localhost ~]# df -hl
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 126G     0  126G   0% /dev
tmpfs                    126G  4.0K  126G   1% /dev/shm
tmpfs                    126G  116M  126G   1% /run
tmpfs                    126G     0  126G   0% /sys/fs/cgroup
/dev/mapper/vg00-lvroot  100G   43G   58G  43% /
/dev/sdc2               1014M  227M  788M  23% /boot
/dev/sdc1               1022M   12M 1011M   2% /boot/efi
tmpfs                     26G   12K   26G   1% /run/user/42
tmpfs                     26G     0   26G   0% /run/user/0
/dev/loop0               386M  386M     0 100% /usr/hxm/loadrunner
Check which filesystem types the system supports
Type mkfs and press Tab twice to list the filesystem types the system supports:
[root@localhost ~]# mkfs
mkfs         mkfs.btrfs   mkfs.cramfs  mkfs.ext2    mkfs.ext3    mkfs.ext4
mkfs.fat     mkfs.minix   mkfs.msdos   mkfs.vfat    mkfs.xfs
Format the disks
[root@localhost ~]# mkfs -t ext4 /dev/sda
mke2fs 1.42.9 (28-Dec-2013)
/dev/sda is entire device, not just one partition!
Proceed anyway? (y,n) y
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
117211136 inodes, 468843606 blocks
23442180 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2617245696
14308 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mkfs -t ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
117211136 inodes, 468843606 blocks
23442180 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2617245696
14308 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Check the disk partitions and mounts:
[root@localhost ~]# lsblk -f
NAME            FSTYPE      LABEL       UUID                                   MOUNTPOINT
sda             ext4                    90551db9-8abf-4b2e-80f7-27cb6d3139ce
sdb             ext4                    31120181-6143-4385-bc62-8841e21941db
sdc
├─sdc1          vfat                    0320-6ED4                              /boot/efi
├─sdc2          xfs                     93739afe-f19e-436e-aefb-be9522931fdf   /boot
└─sdc3          LVM2_member             22LfZ1-VwqX-MMKK-97L0-1l0T-iDlG-YPXHZ5
  ├─vg00-lvroot xfs         lvroot      c2918725-45ec-4415-a7c1-d10598630178   /
  └─vg00-lvswap swap        lvswap      b9ce8c9c-03b2-4ceb-9044-945493566db5   [SWAP]
loop0           udf         T7330-15010 2010-08-25-10-57-00-0                  /usr/hxm/loadrunner
Create the directories
[root@localhost ~]# mkdir -p /data/1
[root@localhost ~]# mkdir -p /data/2
Mount the directories
mount /dev/sda /data/1
This command mounts this machine's sda disk onto the /data/1 directory.
[root@localhost ~]# mount /dev/sda /data/1
[root@localhost ~]# mount /dev/sdb /data/2
Check the disk partitions and mounts:
You can see that sda and sdb have each been successfully mounted onto /data/1 and /data/2.
[root@localhost ~]# lsblk -f
NAME            FSTYPE      LABEL       UUID                                   MOUNTPOINT
sda             ext4                    90551db9-8abf-4b2e-80f7-27cb6d3139ce   /data/1
sdb             ext4                    31120181-6143-4385-bc62-8841e21941db   /data/2
sdc
├─sdc1          vfat                    0320-6ED4                              /boot/efi
├─sdc2          xfs                     93739afe-f19e-436e-aefb-be9522931fdf   /boot
└─sdc3          LVM2_member             22LfZ1-VwqX-MMKK-97L0-1l0T-iDlG-YPXHZ5
  ├─vg00-lvroot xfs         lvroot      c2918725-45ec-4415-a7c1-d10598630178   /
  └─vg00-lvswap swap        lvswap      b9ce8c9c-03b2-4ceb-9044-945493566db5   [SWAP]
loop0           udf         T7330-15010 2010-08-25-10-57-00-0                  /usr/hxm/loadrunner
Verify that the mounts succeeded:
[root@localhost /]# ls /data/1
lost+found
[root@localhost /]# ls /data/2
lost+found
The presence of lost+found shows that the mounts succeeded.
Configure Automatic Mounting (Permanent Mounts)
Check the current mount configuration:
[root@localhost /]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat Dec 18 00:52:03 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg00-lvroot /                       xfs     defaults        0 0
UUID=93739afe-f19e-436e-aefb-be9522931fdf /boot xfs     defaults        0 0
UUID=0320-6ED4          /boot/efi               vfat    defaults,uid=0,gid=0,umask=0077,shortname=winnt 0 0
/dev/mapper/vg00-lvswap swap                    swap    defaults        0 0
Here we do not use UUIDs; instead we state explicitly which device is mounted on which directory.
Run vim /etc/fstab to open the file for editing:
Add entries for /data/1 and /data/2:
#
# /etc/fstab
# Created by anaconda on Sat Dec 18 00:52:03 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg00-lvroot /                       xfs     defaults        0 0
UUID=93739afe-f19e-436e-aefb-be9522931fdf /boot xfs     defaults        0 0
UUID=0320-6ED4          /boot/efi               vfat    defaults,uid=0,gid=0,umask=0077,shortname=winnt 0 0
/dev/mapper/vg00-lvswap swap                    swap    defaults        0 0
/dev/sda                /data/1                 ext4    defaults        0 0
/dev/sdb                /data/2                 ext4    defaults        0 0
This mounts this machine's sda disk on /data/1 and sdb on /data/2; the two trailing fields can be left as 0 0.
Save and exit, then run the mount -a command (-a means "all": mount every filesystem listed in fstab):
[root@localhost /]# mount -a
[root@localhost /]#
Automatic mounting is now configured.
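Since fstab entries all follow the same six-field pattern, generating them with a helper avoids typos. A minimal sketch; the function name `fstab_line` is mine:

```shell
#!/bin/sh
# Emit a six-field fstab line for a device/mountpoint/fstype triple,
# using "defaults 0 0" for the remaining fields as in the entries above.
fstab_line() {  # usage: fstab_line <device> <mountpoint> <fstype>
    printf '%s %s %s defaults 0 0\n' "$1" "$2" "$3"
}

fstab_line /dev/sda /data/1 ext4
fstab_line /dev/sdb /data/2 ext4
```

The output can be appended to /etc/fstab (after a backup) and validated with `mount -a`.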
Change the directory ownership:
[root@localhost /]# chown -R admin.admin /data/1 && chown -R admin.admin /data/2
[root@localhost /]#
Install the OBD Automated Deployment Tool
OBD, short for OceanBase Deployer, is the command-line automated deployment tool for OceanBase Community Edition. Depending on whether the control machine can reach the public network, there are offline and online installation methods; choose whichever fits your situation. My environment is an internal network, so I chose offline installation.
Download the Software Packages
The packages can be downloaded from the official download page.
Transfer the downloaded packages to the designated directory on the control machine.
obproxy-3.2.0-1.el7.x86_64.rpm
Install OBD Offline
Log in to the control machine. I used the root user for the installation here; normally this should be done as the application user. Choose as appropriate.
The ob-deploy software installs under /usr/obd by default. This may vary slightly between versions; you can check the location with the command below.
[root@localhost 3.1.2_ce]# rpm -ivh ob-deploy-1.2.0-15.el7.x86_64.rpm
warning: ob-deploy-1.2.0-15.el7.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID e9b4a7aa: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:ob-deploy-1.2.0-15.el7           ################################# [100%]
Installation of obd finished successfully
Please source /etc/profile.d/obd.sh to enable it
[root@localhost 3.1.2_ce]#
Verify the installation
Afterwards you can switch to the application user, for example admin, and check whether obd was installed successfully.
[admin@localhost 3.1.2_ce]$ obd cluster list
Local deploy is empty
Inspect the .obd/ directory:
[admin@localhost 3.1.2_ce]$ ls ~/.obd/
cluster lock log mirror plugins version
About the creation of the ~/.obd directory
Right after obd is installed, ~/.obd does not exist; it is only created the first time any obd command is run.
The reason ~/.obd is not created at install time is that obd can run under any user, so the installer cannot know in advance in which user's home directory to create ~/.obd.
obd Commands
Use -h to view help for the obd command.
[admin@localhost 3.1.2_ce]$ obd -h
Usage: obd <command> [options]

Available commands:
cluster        Deploy and manage a cluster.
mirror         Manage a component repository for OBD.
repo           Manage local repository for OBD.
test           Run test for a running deployment.
update         Update OBD.

Options:
  --version      show program's version number and exit
  -h, --help     Show help and exit.
  -v, --verbose  Activate verbose output.
Add the Packages to the Offline Repository
Remove the remote repository
Use the following command.
Note: the command below must be run as the OS user that deploys and runs OBD, here the admin user.
/bin/rm -rf ~/.obd/mirror/remote/OceanBase.repo
Use the following command to copy the packages from earlier into the local repository:
[admin@localhost 3.1.2_ce]$ obd mirror clone /home/admin/soft/3.1.2_ce/*.rpm
name: libobclient
version: 2.0.0
release:2.el7
arch: x86_64
md5: f73cae67e2ff5be0682ac2803aba33a7ed26430e
add /home/admin/soft/3.1.2_ce/libobclient-2.0.0-2.el7.x86_64.rpm to local mirror
name: obagent
version: 1.1.0
release:1.el7
arch: x86_64
md5: d2416fadeadba35944872467843d55da0999f298
add /home/admin/soft/3.1.2_ce/obagent-1.1.0-1.el7.x86_64.rpm to local mirror
name: obclient
version: 2.0.0
release:2.el7
arch: x86_64
md5: 1d2c3ee31f40b9d2fbf97f653f549d896b7e7060
add /home/admin/soft/3.1.2_ce/obclient-2.0.0-2.el7.x86_64.rpm to local mirror
name: ob-deploy
version: 1.2.0
release:15.el7
arch: x86_64
md5: b1cfa5fb95b6d7b06927e4ddbcdd7651f19311a9
add /home/admin/soft/3.1.2_ce/ob-deploy-1.2.0-15.el7.x86_64.rpm to local mirror
name: obproxy
version: 3.2.0
release:1.el7
arch: x86_64
md5: 8d5c6978f988935dc3da1dbec208914668dcf3b2
add /home/admin/soft/3.1.2_ce/obproxy-3.2.0-1.el7.x86_64.rpm to local mirror
name: oceanbase-ce
version: 3.1.2
release:10000392021123010.el7
arch: x86_64
md5: 7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed
add /home/admin/soft/3.1.2_ce/oceanbase-ce-3.1.2-10000392021123010.el7.x86_64.rpm to local mirror
name: oceanbase-ce-libs
version: 3.1.2
release:10000392021123010.el7
arch: x86_64
md5: 94fff0ab31de053051dba66039e3185fa390cad5
add /home/admin/soft/3.1.2_ce/oceanbase-ce-libs-3.1.2-10000392021123010.el7.x86_64.rpm to local mirror
name: oceanbase-ce-utils
version: 3.1.2
release:10000392021123010.el7
arch: x86_64
md5: 6ca7db146fee526f4201508f9bd30901e487b7c5
add /home/admin/soft/3.1.2_ce/oceanbase-ce-utils-3.1.2-10000392021123010.el7.x86_64.rpm to local mirror
List the RPMs in the repository:
[admin@localhost 3.1.2_ce]$ obd mirror list local
+----------------------------------------------------------------------------------------------------------+
|                                             local Package List                                            |
+--------------------+---------+-----------------------+--------+------------------------------------------+
| name               | version | release               | arch   | md5                                      |
+--------------------+---------+-----------------------+--------+------------------------------------------+
| libobclient        | 2.0.0   | 2.el7                 | x86_64 | f73cae67e2ff5be0682ac2803aba33a7ed26430e |
| obagent            | 1.1.0   | 1.el7                 | x86_64 | d2416fadeadba35944872467843d55da0999f298 |
| obclient           | 2.0.0   | 2.el7                 | x86_64 | 1d2c3ee31f40b9d2fbf97f653f549d896b7e7060 |
| ob-deploy          | 1.2.0   | 15.el7                | x86_64 | b1cfa5fb95b6d7b06927e4ddbcdd7651f19311a9 |
| obproxy            | 3.2.0   | 1.el7                 | x86_64 | 8d5c6978f988935dc3da1dbec208914668dcf3b2 |
| oceanbase-ce       | 3.1.2   | 10000392021123010.el7 | x86_64 | 7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed |
| oceanbase-ce-libs  | 3.1.2   | 10000392021123010.el7 | x86_64 | 94fff0ab31de053051dba66039e3185fa390cad5 |
| oceanbase-ce-utils | 3.1.2   | 10000392021123010.el7 | x86_64 | 6ca7db146fee526f4201508f9bd30901e487b7c5 |
+--------------------+---------+-----------------------+--------+------------------------------------------+
Install the OceanBase Cluster
Edit the Deployment YAML File
obd supports a wide variety of yaml parameter configurations for installation. For production environments there is an autodeploy mode that adapts the parameters to the available resources. I am doing performance testing here, so I chose autodeploy mode.
Download the template file from the official site and edit it. Download address: https://github.com/oceanbase/obdeploy/blob/master/example/autodeploy/distributed-with-obproxy-example.yaml
## Only need to configure when remote login is required
user:
  username: admin
  password: admin
#  key_file: your ssh-key file path if need
#  port: your ssh port, default 22
#  timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: obce_zone1_01
      # Please don't use hostname, only IP can be supported
      ip: 10.144.2.106
    - name: obce_zone1_02
      ip: 10.144.2.107
    - name: obce_zone2_01
      ip: 10.144.2.108
    - name: obce_zone2_02
      ip: 10.144.2.109
    - name: obce_zone3_01
      ip: 10.144.2.110
    - name: obce_zone3_02
      ip: 10.144.2.111
  global:
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data/1/oceanbase-ce
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /data/2/oceanbase-ce
    # Please set devname as the network adaptor's name whose ip is in the setting of severs.
    # if set severs as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
    devname: bond0
    # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    # mysql_port: 2881
    # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    # rpc_port: 2882
    # Defines the zone for an observer. The default value is zone1.
    # zone: zone1
    # The maximum running memory for an observer. When ignored, autodeploy calculates this value based on the current server available resource.
    # memory_limit: 58G
    # The percentage of the maximum available memory to the total memory. This value takes effect only when memory_limit is 0. The default value is 80.
    # memory_limit_percentage: 80
    # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G. Autodeploy calculates this value based on the current server available resource.
    # system_memory: 22G
    # The size of a data file. When ignored, autodeploy calculates this value based on the current server available resource.
    # datafile_size: 200G
    # The percentage of the data_dir space to the total disk space. This value takes effect only when datafile_size is 0. The default value is 90.
    # datafile_disk_percentage: 90
    # System log level. The default value is INFO.
    # syslog_level: INFO
    # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true. The default value for autodeploy mode is false.
    # enable_syslog_wf: false
    # Enable auto system log recycling or not. The default value is false. The default value for autodeploy mode is on.
    # enable_syslog_recycle: true
    # The maximum number of reserved log files before enabling auto recycling. When set to 0, no logs are deleted. The default value for autodeploy mode is 4.
    # max_syslog_file_count: 4
    # Cluster name for OceanBase Database. The default value is obcluster. When you deploy OceanBase Database and obproxy, this value must be the same as the cluster_name for obproxy.
    # appname: obcluster
    # Password for root. The default value is empty.
    # root_password:
    # Password for proxyro. proxyro_password must be the same as observer_sys_password. The default value is empty.
    # proxyro_password:
  obce_zone1_01:
    zone: zone1
  obce_zone1_02:
    zone: zone1
  obce_zone2_01:
    zone: zone2
  obce_zone2_02:
    zone: zone2
  obce_zone3_01:
    zone: zone3
  obce_zone3_02:
    zone: zone3
obproxy:
  depends:
    - oceanbase-ce
  servers:
    - 10.144.2.106
    - 10.144.2.107
    - 10.144.2.108
    - 10.144.2.109
    - 10.144.2.110
    - 10.144.2.111
  global:
    # The working directory for obproxy. Obproxy is started under this directory. This is a required field.
    home_path: /home/admin/obproxy
    # External port. The default value is 2883.
    # listen_port: 2883
    # The Prometheus port. The default value is 2884.
    # prometheus_listen_port: 2884
    # rs_list is the root server list for observers. The default root server is the first server in the zone.
    # The format for rs_list is observer_ip:observer_mysql_port;observer_ip:observer_mysql_port.
    # Ignore this value in autodeploy mode.
    # rs_list: 127.0.0.1:2881
    # Cluster name for the proxy OceanBase Database. The default value is obcluster. This value must be set to the same with the appname for OceanBase Database.
    # cluster_name: obcluster
    # Password for obproxy system tenant. The default value is empty.
    # obproxy_sys_password:
    # Password for proxyro. proxyro_password must be the same with proxyro_password. The default value is empty.
    # observer_sys_password:
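Before running autodeploy, a quick sanity check on the edited file can catch copy-paste mistakes such as a duplicated server IP. This is only a grep/awk sketch, not a real YAML parser; the function names and the demo file path /tmp/deploy_demo.yaml are my own:

```shell
#!/bin/sh
# Count "ip:" server entries in a deploy yaml and report any IP listed twice.
server_count() {
    grep -cE '^[[:space:]]*ip:' "$1"
}
dup_ips() {
    grep -E '^[[:space:]]*ip:' "$1" \
        | awk '{seen[$2]++} END {for (ip in seen) if (seen[ip] > 1) print ip}'
}

# demo on a small fragment of the server list
cat > /tmp/deploy_demo.yaml <<'EOF'
oceanbase-ce:
  servers:
    - name: obce_zone1_01
      ip: 10.144.2.106
    - name: obce_zone1_02
      ip: 10.144.2.107
EOF
server_count /tmp/deploy_demo.yaml   # 2
dup_ips /tmp/deploy_demo.yaml        # empty when all IPs are unique
```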
Run the Installation
[admin@localhost ~]$ obd cluster autodeploy obtest -c distributed-with-obproxy-example.yaml
oceanbase-ce-3.1.2 already installed.
obproxy-3.2.0 already installed.
Cluster param config check ok
Open ssh connection ok
Generate observer configuration ok
Generate obproxy configuration ok
oceanbase-ce-3.1.2 already installed.
obproxy-3.2.0 already installed.
+-------------------------------------------------------------------------------------------+
|                                           Packages                                         |
+--------------+---------+-----------------------+------------------------------------------+
| Repository   | Version | Release               | Md5                                      |
+--------------+---------+-----------------------+------------------------------------------+
| oceanbase-ce | 3.1.2   | 10000392021123010.el7 | 7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed |
| obproxy      | 3.2.0   | 1.el7                 | 8d5c6978f988935dc3da1dbec208914668dcf3b2 |
+--------------+---------+-----------------------+------------------------------------------+
Repository integrity check ok
Parameter check ok
Open ssh connection ok
Remote oceanbase-ce-3.1.2-7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed repository install ok
Remote oceanbase-ce-3.1.2-7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed repository lib check ok
Remote obproxy-3.2.0-8d5c6978f988935dc3da1dbec208914668dcf3b2 repository install ok
Remote obproxy-3.2.0-8d5c6978f988935dc3da1dbec208914668dcf3b2 repository lib check ok
Cluster status check ok
Initializes observer work home ok
Initializes obproxy work home ok
obtest deployed
Get local repositories and plugins ok
Open ssh connection ok
Load cluster param plugin ok
Check before start observer ok
Check before start obproxy ok
Start observer ok
observer program health check ok
Connect to observer ok
Initialize cluster
Cluster bootstrap ok
Wait for observer init ok
+------------------------------------------------+
|                    observer                    |
+--------------+---------+------+-------+--------+
| ip           | version | port | zone  | status |
+--------------+---------+------+-------+--------+
| 10.144.2.106 | 3.1.2   | 2881 | zone1 | active |
| 10.144.2.107 |         | 0    | zone1 | active |
| 10.144.2.108 | 3.1.2   | 2881 | zone2 | active |
| 10.144.2.109 |         | 0    | zone2 | active |
| 10.144.2.110 | 3.1.2   | 2881 | zone3 | active |
| 10.144.2.111 |         | 0    | zone3 | active |
+--------------+---------+------+-------+--------+
Start obproxy ok
obproxy program health check ok
Connect to obproxy ok
Initialize cluster
+------------------------------------------------+
|                     obproxy                    |
+--------------+------+-----------------+--------+
| ip           | port | prometheus_port | status |
+--------------+------+-----------------+--------+
| 10.144.2.106 | 2883 | 2884            | active |
| 10.144.2.107 | 2883 | 2884            | active |
| 10.144.2.108 | 2883 | 2884            | active |
| 10.144.2.109 | 2883 | 2884            | active |
| 10.144.2.110 | 2883 | 2884            | active |
| 10.144.2.111 | 2883 | 2884            | active |
+--------------+------+-----------------+--------+
obtest running
Check the Cluster
[admin@localhost ~]$ obd cluster list
+------------------------------------------------------------+
|                         Cluster List                        |
+--------+---------------------------------+-----------------+
| Name   | Configuration Path              | Status (Cached) |
+--------+---------------------------------+-----------------+
| obtest | /home/admin/.obd/cluster/obtest | running         |
+--------+---------------------------------+-----------------+
Create a Business Tenant and Test Users
Create a regular tenant
Use OBD to create a regular tenant.
[admin@localhost ~]$ obd cluster tenant create obtest -n tpcc
Get local repositories and plugins ok
[WARN] No such create_tenant plugin for obproxy-3.2.0
Open ssh connection ok
Connect to observer ok
Create tenant tpcc ok
Connect to OceanBase
Check the newly created resource unit:
MySQL [oceanbase]> SELECT unit_config_id,name,max_cpu,min_cpu,max_memory,min_memory,max_disk_size FROM __all_unit_config;
+----------------+-----------------+---------+---------+--------------+--------------+---------------+
| unit_config_id | name            | max_cpu | min_cpu | max_memory   | min_memory   | max_disk_size |
+----------------+-----------------+---------+---------+--------------+--------------+---------------+
|              1 | sys_unit_config |       5 |     2.5 |  17179869184 |  12884901888 | 1431297851392 |
|           1001 | tpcc_unit       |       9 |       9 | 141733920768 | 141733920768 | 1431297851392 |
+----------------+-----------------+---------+---------+--------------+--------------+---------------+
2 rows in set (0.002 sec)
Check the servers in the cluster:
MySQL [oceanbase]> select a.zone,concat(a.svr_ip,':',a.svr_port) observer, cpu_total, (cpu_total-cpu_assigned) cpu_free, round(mem_total/1024/1024/1024) mem_total_gb, round((mem_total-mem_assigned)/1024/1024/1024) mem_free_gb, usec_to_time(b.last_offline_time) last_offline_time, usec_to_time(b.start_service_time) start_service_time, b.status, usec_to_time(b.stop_time) stop_time, b.build_version
    -> from __all_virtual_server_stat a join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port)
    -> order by a.zone, a.svr_ip
    -> ;
+-------+-------------------+-----------+----------+--------------+-------------+----------------------------+----------------------------+--------+----------------------------+------------------------------------------------------------------------------------------+
| zone  | observer          | cpu_total | cpu_free | mem_total_gb | mem_free_gb | last_offline_time          | start_service_time         | status | stop_time                  | build_version                                                                            |
+-------+-------------------+-----------+----------+--------------+-------------+----------------------------+----------------------------+--------+----------------------------+------------------------------------------------------------------------------------------+
| zone1 | 10.144.2.106:2882 |        14 |      2.5 |          149 |           5 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:20.552658 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29)   |
| zone1 | 10.144.2.107:2882 |        14 |        5 |          150 |          18 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:24.739132 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29)   |
| zone2 | 10.144.2.108:2882 |        14 |      2.5 |          150 |           6 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:22.918302 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29)   |
| zone2 | 10.144.2.109:2882 |        14 |        5 |          150 |          18 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:25.136243 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29)   |
| zone3 | 10.144.2.110:2882 |        14 |      2.5 |          149 |           5 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:23.249400 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29)   |
| zone3 | 10.144.2.111:2882 |        14 |        5 |          148 |          16 | 1970-01-01 08:00:00.000000 | 2022-01-14 00:56:25.437576 | active | 1970-01-01 08:00:00.000000 | 3.1.2_10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d(Dec 30 2021 02:47:29)   |
+-------+-------------------+-----------+----------+--------------+-------------+----------------------------+----------------------------+--------+----------------------------+------------------------------------------------------------------------------------------+
6 rows in set (0.005 sec)
Check the resource pools:
MySQL [oceanbase]> select t1.name resource_pool_name, t2.name unit_config_name, t2.max_cpu, t2.min_cpu, t2.max_memory/1024/1024/1024 max_mem_gb, t2.min_memory/1024/1024/1024 min_mem_gb, t3.unit_id, t3.zone, concat(t3.svr_ip,':',t3.svr_port ) observer,t4.tenant_id, t4.tenant_name
    -> from __all_resource_pool t1 join __all_unit_config t2 on (t1.unit_config_id=t2.unit_config_id)
    -> join __all_unit t3 on (t1.resource_pool_id = t3.resource_pool_id )
    -> left join __all_tenant t4 on (t1.tenant_id=t4.tenant_id)
    -> order by t1.resource_pool_id, t2.unit_config_id , t3.unit_id
    -> ;
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
| resource_pool_name | unit_config_name | max_cpu | min_cpu | max_mem_gb       | min_mem_gb       | unit_id | zone  | observer          | tenant_id | tenant_name |
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       1 | zone1 | 10.144.2.106:2882 |         1 | sys         |
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       2 | zone2 | 10.144.2.108:2882 |         1 | sys         |
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       3 | zone3 | 10.144.2.110:2882 |         1 | sys         |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1001 | zone1 | 10.144.2.107:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1002 | zone1 | 10.144.2.106:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1003 | zone2 | 10.144.2.109:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1004 | zone2 | 10.144.2.108:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1005 | zone3 | 10.144.2.111:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1006 | zone3 | 10.144.2.110:2882 |      1001 | tpcc        |
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
9 rows in set (0.001 sec)
Check the cluster's available resources:
MySQL [oceanbase]> select a.zone,concat(a.svr_ip,':',a.svr_port) observer, cpu_total, cpu_assigned, (cpu_total-cpu_assigned) cpu_free, mem_total/1024/1024/1024 mem_total_gb, mem_assigned/1024/1024/1024 mem_assign_gb, (mem_total-mem_assigned)/1024/1024/1024 mem_free_gb
    -> from __all_virtual_server_stat a join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port)
    -> order by a.zone, a.svr_ip
    -> ;
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
| zone  | observer          | cpu_total | cpu_assigned | cpu_free | mem_total_gb     | mem_assign_gb    | mem_free_gb     |
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
| zone1 | 10.144.2.106:2882 |        14 |         11.5 |      2.5 | 149.000000000000 | 144.000000000000 |  5.000000000000 |
| zone1 | 10.144.2.107:2882 |        14 |            9 |        5 | 150.000000000000 | 132.000000000000 | 18.000000000000 |
| zone2 | 10.144.2.108:2882 |        14 |         11.5 |      2.5 | 150.000000000000 | 144.000000000000 |  6.000000000000 |
| zone2 | 10.144.2.109:2882 |        14 |            9 |        5 | 150.000000000000 | 132.000000000000 | 18.000000000000 |
| zone3 | 10.144.2.110:2882 |        14 |         11.5 |      2.5 | 149.000000000000 | 144.000000000000 |  5.000000000000 |
| zone3 | 10.144.2.111:2882 |        14 |            9 |        5 | 148.000000000000 | 132.000000000000 | 16.000000000000 |
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
6 rows in set (0.004 sec)
查看一下资源分配细节。
MySQL [oceanbase]> select t1.name resource_pool_name, t2.name unit_config_name, t2.max_cpu, t2.min_cpu,
    ->   t2.max_memory/1024/1024/1024 max_mem_gb, t2.min_memory/1024/1024/1024 min_mem_gb,
    ->   t3.unit_id, t3.zone, concat(t3.svr_ip,':',t3.svr_port) observer, t4.tenant_id, t4.tenant_name
    -> from __all_resource_pool t1
    -> join __all_unit_config t2 on (t1.unit_config_id=t2.unit_config_id)
    -> join __all_unit t3 on (t1.resource_pool_id=t3.resource_pool_id)
    -> left join __all_tenant t4 on (t1.tenant_id=t4.tenant_id)
    -> order by t1.resource_pool_id, t2.unit_config_id, t3.unit_id;
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
| resource_pool_name | unit_config_name | max_cpu | min_cpu | max_mem_gb       | min_mem_gb       | unit_id | zone  | observer          | tenant_id | tenant_name |
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       1 | zone1 | 10.144.2.106:2882 |         1 | sys         |
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       2 | zone2 | 10.144.2.108:2882 |         1 | sys         |
| sys_pool           | sys_unit_config  |       5 |     2.5 |  16.000000000000 |  12.000000000000 |       3 | zone3 | 10.144.2.110:2882 |         1 | sys         |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1001 | zone1 | 10.144.2.107:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1002 | zone1 | 10.144.2.106:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1003 | zone2 | 10.144.2.109:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1004 | zone2 | 10.144.2.108:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1005 | zone3 | 10.144.2.111:2882 |      1001 | tpcc        |
| tpcc_pool          | tpcc_unit        |       9 |       9 | 132.000000000000 | 132.000000000000 |    1006 | zone3 | 10.144.2.110:2882 |      1001 | tpcc        |
+--------------------+------------------+---------+---------+------------------+------------------+---------+-------+-------------------+-----------+-------------+
9 rows in set (0.006 sec)
Query the remaining available resources on each server
MySQL [oceanbase]> select a.zone, concat(a.svr_ip,':',a.svr_port) observer,
    ->   cpu_total, cpu_assigned, (cpu_total-cpu_assigned) cpu_free,
    ->   mem_total/1024/1024/1024 mem_total_gb, mem_assigned/1024/1024/1024 mem_assign_gb,
    ->   (mem_total-mem_assigned)/1024/1024/1024 mem_free_gb
    -> from __all_virtual_server_stat a
    -> join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port)
    -> order by a.zone, a.svr_ip;
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
| zone  | observer          | cpu_total | cpu_assigned | cpu_free | mem_total_gb     | mem_assign_gb    | mem_free_gb     |
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
| zone1 | 10.144.2.106:2882 |        14 |         11.5 |      2.5 | 149.000000000000 | 144.000000000000 |  5.000000000000 |
| zone1 | 10.144.2.107:2882 |        14 |            9 |        5 | 150.000000000000 | 132.000000000000 | 18.000000000000 |
| zone2 | 10.144.2.108:2882 |        14 |         11.5 |      2.5 | 150.000000000000 | 144.000000000000 |  6.000000000000 |
| zone2 | 10.144.2.109:2882 |        14 |            9 |        5 | 150.000000000000 | 132.000000000000 | 18.000000000000 |
| zone3 | 10.144.2.110:2882 |        14 |         11.5 |      2.5 | 149.000000000000 | 144.000000000000 |  5.000000000000 |
| zone3 | 10.144.2.111:2882 |        14 |            9 |        5 | 148.000000000000 | 132.000000000000 | 16.000000000000 |
+-------+-------------------+-----------+--------------+----------+------------------+------------------+-----------------+
6 rows in set (0.001 sec)
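Note that `cpu_assigned` and `mem_assigned` equal the sums of the `min_cpu` and `min_memory` of the units placed on each server: 10.144.2.106, for example, hosts one sys unit (2.5 CPU / 12 GB) and one tpcc unit (9 CPU / 132 GB), giving the 11.5 CPU / 144 GB shown above. The sums can be recomputed directly from the internal tables. This is a sketch that assumes the same OceanBase 3.x internal tables queried earlier; verify the column semantics on your own version before relying on it:

```sql
-- Sketch: recompute per-server assigned resources by summing the
-- min values of every unit config placed on that server.
select t3.svr_ip, t3.svr_port,
       sum(t2.min_cpu)                   as cpu_assigned_calc,
       sum(t2.min_memory)/1024/1024/1024 as mem_assigned_gb_calc
from __all_resource_pool t1
join __all_unit_config t2 on (t1.unit_config_id = t2.unit_config_id)
join __all_unit t3 on (t1.resource_pool_id = t3.resource_pool_id)
group by t3.svr_ip, t3.svr_port
order by t3.svr_ip;
```

The results should line up row by row with the `cpu_assigned` and `mem_assign_gb` columns in the query above.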
View the tenants that were created successfully
MySQL [oceanbase]> select tenant_id, tenant_name, primary_zone from __all_tenant;
+-----------+-------------+-------------------+
| tenant_id | tenant_name | primary_zone      |
+-----------+-------------+-------------------+
|         1 | sys         | zone1;zone2,zone3 |
|      1001 | tpcc        | RANDOM            |
+-----------+-------------+-------------------+
2 rows in set (0.000 sec)
Create a business user
Log in as the root user of the business tenant:
[admin@localhost ~]$ mysql -h10.144.2.106 -uroot@tpcc -P2883 -p -c -A oceanbase
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.6.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]>
Then create an ordinary user:
MySQL [oceanbase]> create user benchmarksql@'%' identified by 'benchmarksql';
Query OK, 0 rows affected (0.03 sec)
View the newly created user:
MySQL [oceanbase]> SELECT user FROM mysql.user;
+--------------+
| user         |
+--------------+
| root         |
| ORAAUDITOR   |
| benchmarksql |
+--------------+
3 rows in set (0.06 sec)
Grant privileges
MySQL [oceanbase]> grant all privileges on *.* to benchmarksql@'%';
Query OK, 0 rows affected (0.02 sec)
View the grants:
MySQL [oceanbase]> show grants for benchmarksql;
+-----------------------------------------------+
| Grants for benchmarksql@%                     |
+-----------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'benchmarksql' |
+-----------------------------------------------+
1 row in set (0.01 sec)
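`GRANT ALL PRIVILEGES ON *.*` is convenient for a benchmark environment, but the grant can usually be narrowed to just the database the user will work in. A sketch of that alternative (it assumes the `benchmark` database created in the next step; in MySQL mode a grant on a not-yet-existing database is accepted):

```sql
-- Narrower alternative (sketch): limit the grant to the benchmark database
-- instead of the whole cluster.
grant all privileges on benchmark.* to benchmarksql@'%';
show grants for benchmarksql;
```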
Create the business database
Log in with the new user. Log out of the sys tenant session, then log in with the newly created business user and tenant.
[admin@localhost ~]$ obclient -h10.144.2.106 -ubenchmarksql@tpcc -P2883 -pbenchmarksql -c -A oceanbase
Welcome to the OceanBase.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.6.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]>
Create the database:
MySQL [oceanbase]> create database benchmark;
Query OK, 1 row affected (0.018 sec)
Create a second business user
Log in as the root user of the business tenant:
[admin@localhost ~]$ mysql -h10.144.2.106 -uroot@tpcc -P2883 -p -c -A oceanbase
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.6.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]>
Then create an ordinary user:
MySQL [oceanbase]> create user transfer@'%' identified by 'transfer';
Query OK, 0 rows affected (0.03 sec)
View the newly created user:
MySQL [oceanbase]> SELECT user FROM mysql.user;
+--------------+
| user         |
+--------------+
| root         |
| ORAAUDITOR   |
| benchmarksql |
| transfer     |
+--------------+
4 rows in set (0.02 sec)
Grant privileges
MySQL [oceanbase]> grant all privileges on *.* to transfer@'%';
Query OK, 0 rows affected (0.02 sec)
View the grants:
MySQL [oceanbase]> show grants for transfer;
+-------------------------------------------+
| Grants for transfer@%                     |
+-------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'transfer' |
+-------------------------------------------+
1 row in set (0.01 sec)
Create the business database
Log in with the new user
Log out of the sys tenant session, then log in with the newly created business user and tenant.
[admin@localhost ~]$ obclient -h10.144.2.106 -utransfer@tpcc -P2883 -ptransfer -c -A oceanbase
Welcome to the OceanBase.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.6.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]>
Create the database:
MySQL [oceanbase]> create database transfer;
Query OK, 1 row affected (0.026 sec)
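As a final check, the two business databases should now be visible from either business-user session (a quick verification sketch):

```sql
-- Verify the newly created databases are visible; benchmark and transfer
-- should both appear in the list.
show databases;
```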
Appendix:
Exercises:
Exercise 1 (required): Experience OceanBase with Docker
Exercise 2 (required): Manually deploy an OceanBase cluster
Exercise 3 (optional): Deploy a three-replica OceanBase cluster with OBD
Exercise 4 (required): Migrate MySQL data to an OceanBase cluster
Exercise 5 (optional): Run performance tests against OceanBase
Exercise 6 (required): Examine OceanBase execution plans
If you have not submitted your homework yet, hurry up!
You can take home an OBCP exam voucher for free:
Option 1: complete the four required exercises
Option 2: complete any one exercise and score above 80 on the final exam
Many students have already started answering.
Join the DingTalk group (group ID 3582 5151) to learn and discuss together.