1. For this lab I prepared two virtual machines, each with MySQL 8.0.24 already installed.
Run the following statements on every node:
SET SQL_LOG_BIN=0;
alter user user() identified by "root";
CREATE USER 'root'@'%' IDENTIFIED BY 'root';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
Also make sure the firewall and SELinux are disabled on every node.
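On Red Hat 7 this is typically done as follows (a minimal sketch assuming firewalld is the active firewall and the stock /etc/selinux/config):
systemctl stop firewalld
systemctl disable firewalld
setenforce 0                                                        # effective immediately, until reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config # persists across reboots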
The IPs and hostnames of the two machines are:

IP            | HOSTNAME | OS
192.168.56.28 | mysql80  | Red Hat 7.6
192.168.56.29 | mysql80b | Red Hat 7.6
2. Edit /etc/hosts and add the IPs and hostnames of both machines:
vi /etc/hosts
192.168.56.28 mysql80
192.168.56.29 mysql80b
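A quick sanity check that the names resolve on each node (a simple sketch, not part of the original steps):
ping -c 1 mysql80
ping -c 1 mysql80b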
3. Set up SSH mutual trust between all nodes
Upload the script I provided (sshUserSetup.sh).
Then run:
sh sshUserSetup.sh -user root -hosts "mysql80 mysql80b" -advanced -noPromptPassphrase
Test the mutual trust (do this on every node):
ssh mysql80 date
ssh mysql80b date
Answer yes at the prompt, then enter the root password.
Mutual trust is now in place.
4. Edit my.cnf and add the content below.
While editing, delete the existing server-id parameter from the file.
# Note: loose-group_replication_group_name must be identical on every node (generate one UUID and reuse it, as in the sketch below), while server-id and loose-group_replication_local_address must be different on each node.
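One way to generate the group UUID once and paste the same value into both configs (a sketch using the root account created earlier):
mysql -uroot -proot -e "SELECT UUID();"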
Node 1:
vi /etc/my.cnf
#GTID#
gtid_mode = ON
enforce_gtid_consistency = ON
server-id = 1
transaction_isolation = READ-COMMITTED
log-slave-updates = 1
binlog_checksum = NONE
master_info_repository = TABLE
relay_log_info_repository = TABLE
relay-log = /data/log-bin/relay-bin-3306
relay-log-index = /data/log-bin/relay.index
plugin_load="group_replication=group_replication.so"
slave-net-timeout = 60
log_slave_updates = ON
#GROUP REPLICATION#
transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name = "bb8d01ee-5762-11ee-ae08-080027a0241f"
loose-group_replication_start_on_boot = OFF
loose-group_replication_local_address = "192.168.56.28:33006"
loose-group_replication_group_seeds = "192.168.56.28:33006,192.168.56.29:33006"
loose-group_replication_bootstrap_group = OFF
loose-group_replication_single_primary_mode = ON
loose-group_replication_enforce_update_everywhere_checks = OFF
loose-group_replication_ip_whitelist = "127.0.0.1/8,192.168.56.0/24"
Node 2:
#GTID#
gtid_mode = ON
enforce_gtid_consistency = ON
server-id = 2
transaction_isolation = READ-COMMITTED
log-slave-updates = 1
binlog_checksum = NONE
master_info_repository = TABLE
relay_log_info_repository = TABLE
relay-log = /data/log-bin/relay-bin-3306
relay-log-index = /data/log-bin/relay.index
plugin_load="group_replication=group_replication.so"
slave-net-timeout = 60
log_slave_updates = ON
#GROUP REPLICATION#
transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name = "bb8d01ee-5762-11ee-ae08-080027a0241f"
loose-group_replication_start_on_boot = OFF
loose-group_replication_local_address = "192.168.56.29:33006"
loose-group_replication_group_seeds = "192.168.56.28:33006,192.168.56.29:33006"
loose-group_replication_bootstrap_group = OFF
loose-group_replication_single_primary_mode = ON
loose-group_replication_enforce_update_everywhere_checks = OFF
loose-group_replication_ip_whitelist = "127.0.0.1/8,192.168.56.0/24"
Restart MySQL:
service mysqld restart
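After the restart, a quick spot check that the settings took effect and the plugin loaded (a sketch using the root account from this lab):
mysql -uroot -proot -e "SHOW VARIABLES LIKE 'gtid_mode';"
mysql -uroot -proot -e "SHOW VARIABLES LIKE 'group_replication_local_address';"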
Parameter explanations:
# GTID must be enabled. GTID (Global Transaction ID) uniquely identifies each transaction, so it can be located and tracked when it is committed on the primary or applied on a replica.
gtid_mode=ON
# Enforce GTID consistency
enforce_gtid_consistency=ON
# binlog format: MGR requires ROW. Even without MGR, ROW is generally the best choice. My config file already contains binlog_format=row, so it is not added again here.
# server-id must be unique; give each node its own number, with no duplicates.
server-id = 1
# MGR uses optimistic locking, so the official documentation recommends the READ COMMITTED isolation level to reduce lock granularity.
transaction_isolation = READ-COMMITTED
# During failure recovery the cluster members cross-check binlog data, so each server must log the transactions it has received from other members and already applied; GTIDs are used to tell which transactions have been executed.
log-slave-updates=1
# binlog checksum rule: 5.6 and later default to CRC32, older versions use NONE, but MGR requires NONE.
binlog_checksum=NONE
# For safety, MGR requires the replica metadata to be stored in tables instead of files, otherwise it reports an error.
master_info_repository=TABLE
# Same as above; the two settings go together.
relay_log_info_repository=TABLE
# Replica network timeout: if a node receives no packets within this many seconds, it assumes it has lost contact with the cluster, reconnects, and catches up on the data the primary produced in the meantime.
slave-net-timeout
# Data a replica applies from its source is not written to the replica's own binlog by default, so a replica that also serves as the source for other replicas needs log-slave-updates in its config.
log_slave_updates = ON
# Group Replication settings
# Algorithm used to hash transaction write sets; the official documentation recommends XXHASH64.
transaction_write_set_extraction = XXHASH64
# The name of this group. It must be unique and identical on every member; you can generate it with SELECT UUID().
# It distinguishes the different groups on the same network, and it is also the UUID used in this group's GTIDs.
loose-group_replication_group_name = "b92c59d7-623c-11eb-b9ea-08002786ab56"
# IP whitelist. By default only 127.0.0.1 is allowed, so connections from external hosts are rejected; set it according to your security needs. Optional, not required.
loose-group_replication_ip_whitelist = "127.0.0.1/8,192.168.56.0/24"
# Whether Group Replication starts automatically when the server starts. Auto-start is not recommended, to avoid edge cases during failure recovery that could disturb data consistency.
loose-group_replication_start_on_boot = OFF
# This node's MGR address and port (host:port). This is the MGR communication port, not the database port.
loose-group_replication_local_address = "192.168.56.28:33006"
# The servers that belong to this MGR group (seed members), as IP:port pairs; again the MGR port, not the database port.
loose-group_replication_group_seeds = "192.168.56.28:33006,192.168.56.29:33006"
# Bootstrap mode, used to create the group and add the first member. Only used when building the MGR for the first time or rebuilding it, and only on one node in the cluster.
loose-group_replication_bootstrap_group = OFF
# Whether to run in single-primary mode. If ON, one instance is the primary and handles reads and writes while the others are read-only; if OFF, the group runs in multi-primary mode.
loose-group_replication_single_primary_mode = ON
# In multi-primary mode this forces every instance to check whether an operation is allowed; in single-primary mode it must be OFF.
loose-group_replication_enforce_update_everywhere_checks = OFF
# Load the Group Replication plugin and its library
plugin_load="group_replication=group_replication.so"
# Member weight. Each node can be given its own weight; when the primary goes down, the remaining member with the highest weight is elected as the next primary, and if the weights are equal the next primary is chosen by comparing server_uuid values (see the sketch after this list).
loose-group_replication_member_weight
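The weight can also be set at runtime on the node you prefer as the next primary (a sketch; 80 is an arbitrary value, the default is 50):
mysql -uroot -proot -e "SET GLOBAL group_replication_member_weight = 80;"
mysql -uroot -proot -e "SHOW VARIABLES LIKE 'group_replication_member_weight';"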
Log in to MySQL:
mysql -uroot -proot
Now configure the single-primary (one primary, multiple secondaries) cluster mode. Run the following on both nodes:
SET SQL_LOG_BIN=0;
CREATE USER repl@"%" IDENTIFIED BY "repl";
ALTER USER "repl"@"%" IDENTIFIED WITH sha256_password BY "repl";
GRANT REPLICATION SLAVE ON . TO repl@"%";
GRANT BACKUP_ADMIN ON . TO repl@"%";
CREATE USER repl@"localhost" IDENTIFIED BY "repl";
ALTER USER "repl"@"localhost" IDENTIFIED WITH sha256_password BY "repl";
GRANT REPLICATION SLAVE ON . TO repl@"localhost";
GRANT BACKUP_ADMIN ON . TO repl@"localhost";
CREATE USER repl@"127.0.0.1" IDENTIFIED BY "repl";
ALTER USER "repl"@"127.0.0.1" IDENTIFIED WITH sha256_password BY "repl";
GRANT REPLICATION SLAVE ON . TO repl@"127.0.0.1";
GRANT BACKUP_ADMIN ON . TO repl@"127.0.0.1";
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;
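A quick check that the replication accounts were created with the expected authentication plugin (a sketch):
mysql -uroot -proot -e "SELECT user, host, plugin FROM mysql.user WHERE user='repl';"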
Install the plugin:
install PLUGIN group_replication SONAME "group_replication.so";
If you see ERROR 1125 (HY000): Function "group_replication" already exists, the plugin is already installed.
Confirm it:
show plugins;
| group_replication | ACTIVE | GROUP REPLICATION | group_replication.so | GPL |
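An equivalent check that filters down to the single plugin (a sketch):
mysql -uroot -proot -e "SELECT PLUGIN_NAME, PLUGIN_STATUS FROM information_schema.plugins WHERE PLUGIN_NAME='group_replication';"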
Run on both nodes (with three nodes, run on all three):
CHANGE MASTER TO MASTER_USER="repl", MASTER_PASSWORD="repl" FOR CHANNEL "group_replication_recovery";
Run on the MASTER (the node that bootstraps the group):
SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;
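Before starting the second node, it is worth confirming that the bootstrap node is already ONLINE and acting as PRIMARY (a sketch; MEMBER_ROLE is available in MySQL 8.0):
mysql -uroot -proot -e "SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members;"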
Node 2:
Run the following on the secondary node (read the notes below before running):
START GROUP_REPLICATION;
# Check whether it started successfully; ONLINE means success (run this on both nodes):
SELECT * FROM performance_schema.replication_group_members;
Troubleshooting case 1:
A node stays in RECOVERING for a long time.
Running the following SQL then shows the error below:
select * from performance_schema.replication_connection_status\G;
The key part of the error message is:
message: Authentication plugin "caching_sha2_password" reported error: Authentication requires secure connection.
Fix it as follows (on the secondary node):
stop group_replication;
set global group_replication_recovery_get_public_key=on;
start group_replication;
# What this option does: it lets the replica securely obtain the primary's public key during recovery.
Alternatively, fix it with the following statement:
ALTER USER "xxx"@"xxxx" IDENTIFIED WITH sha256_password BY "xxxxx";  -- fill in xxx according to your own setup.
Troubleshooting case 2:
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.
[MY-011526] [Repl] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: 4bfbf21d-62b4-11eb-a0df-08002702977e:1-5 > Group transactions: 47afd6cd-62b4-11eb-ba85-080027c50940:1-5, b92c59d7-623c-11eb-b9ea-08002786ab56:1-4'
Solution:
reset master;
START GROUP_REPLICATION;
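Note that RESET MASTER discards this node's local binary logs and GTID history, so before running it you may want to compare the local GTID set with what the group has already received (a sketch; the applier channel row exists once group replication has been started at least once):
mysql -uroot -proot -e "SELECT @@GLOBAL.gtid_executed;"
mysql -uroot -proot -e "SELECT received_transaction_set FROM performance_schema.replication_connection_status WHERE channel_name='group_replication_applier';"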
5. Verify synchronization
Create a database on the primary (first check which node is the primary):
create database test;
Switch to the newly created test database:
use test;
Then create a table:
create table test (word varchar(100) primary key);
Insert a row:
insert into test values ('MYSQL-GOOD');
Query the data and check that every node can see it:
select * from test.test;
Node 1: (query output)
Node 2: (query output)
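To compare both nodes from a single shell, the SSH trust set up earlier can be reused (a sketch assuming the root/root MySQL credentials from this lab):
for h in mysql80 mysql80b; do
  echo "== $h =="
  ssh $h "mysql -uroot -proot -e 'SELECT * FROM test.test;'"
done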
Test complete.