Notes from a Planned dr_auto_sync Switchover Test

July 1, 2024

1. Background

TiDB is an open-source distributed relational database with strong replication and disaster-recovery capabilities. Its dr_auto_sync (DR auto-sync) replication mode keeps a primary and a secondary data center synchronized, so the cluster can be switched from the primary data center to the secondary one while preserving service continuity and data consistency. This post records one planned dr_auto_sync switchover test.


2. Test Environment

  • TiDB cluster version: 7.1.5 (replace with your actual version)
  • Primary data center: DC1, with 2 TiDB nodes, 2 TiKV nodes (voters), and 2 PD nodes
  • Secondary data center: DC2, with 2 TiDB nodes, 2 TiKV nodes (follower/learner), and 1 PD node

The cluster node topology is as follows:

[Figure: cluster node topology]

The cluster replica configuration is as follows:

All Region leaders run in the dc1 data center.

[Figure: replica configuration]

3. Pre-Switchover Preparation

Check cluster status

Grafana -> dr_test_overview -> service port status

[Screenshot: Grafana service port status panel]

Check primary/secondary replication status

Grafana -> dr_test_pd -> dr_autosync

[Screenshot: Grafana dr_autosync panel]

Prepare the switchover rule file

rules_reverse.json

[
{
  "group_id": "pd",
  "group_index": 0,
  "group_override": false,
  "rules": [
    {
      "group_id": "pd",
      "id": "voters",
      "start_key": "",
      "end_key": "",
      "role": "voter",
      "count": 2,
      "location_labels": ["dc", "logic", "rack", "host"],
      "label_constraints": [{"key": "dc", "op": "in", "values": ["dc2"]}]
    },
    {
      "group_id": "pd",
      "id": "followers",
      "start_key": "",
      "end_key": "",
      "role": "follower",
      "count": 1,
      "location_labels": ["dc", "logic", "rack", "host"],
      "label_constraints": [{"key": "logic", "op": "in", "values": ["logic2"]}]
    },
    {
      "group_id": "pd",
      "id": "learners",
      "start_key": "",
      "end_key": "",
      "role": "learner",
      "count": 1,
      "location_labels": ["dc", "logic", "rack", "host"],
      "label_constraints": [{"key": "logic", "op": "in", "values": ["logic1"]}]
    }
  ]
}
]
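Before saving the bundle, it is worth a quick sanity check that the role counts add up to the intended total replica count (4 replicas across the two DCs in this topology). A small offline Python sketch, illustrative only:

```python
import json

def check_bundle(bundle, expected_total=4):
    """Validate a PD placement-rule bundle: sum the per-rule replica counts."""
    group = bundle[0]  # the single "pd" rule group
    counts = {rule["id"]: rule["count"] for rule in group["rules"]}
    total = sum(counts.values())
    assert total == expected_total, f"replica count mismatch: {total} != {expected_total}"
    return counts

# Sample mirroring rules_reverse.json (abridged to the fields used above)
bundle = [{
    "group_id": "pd",
    "rules": [
        {"id": "voters", "role": "voter", "count": 2},
        {"id": "followers", "role": "follower", "count": 1},
        {"id": "learners", "role": "learner", "count": 1},
    ],
}]
print(check_bundle(bundle))  # {'voters': 2, 'followers': 1, 'learners': 1}
```

In a real run you would `json.load()` the actual rules_reverse.json instead of the inline sample.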

4. Switchover Process

Transfer the PD leader

[tidb@tidb1 dr_auto_sync]$ tiup ctl:v7.1.5 pd -u http://10.211.55.20:2379 member leader transfer pd-10.211.55.23-2379
Starting component ctl: /home/tidb/.tiup/components/ctl/v7.1.5/ctl pd -u http://10.211.55.20:2379 member leader transfer pd-10.211.55.23-2379
Success!
[tidb@tidb1 dr_auto_sync]$ tiup ctl:v7.1.5 pd -u http://10.211.55.20:2379 member leader show
Starting component ctl: /home/tidb/.tiup/components/ctl/v7.1.5/ctl pd -u http://10.211.55.20:2379 member leader show
{
"name": "pd-10.211.55.23-2379",
"member_id": 11296243912059560318,
"peer_urls": [
  "http://10.211.55.23:2380"
],
"client_urls": [
  "http://10.211.55.23:2379"
]
}
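Rather than eyeballing `member leader show`, the check can be scripted. The sketch below assumes PD's HTTP API serves the current leader at `/pd/api/v1/leader` (the endpoint pd-ctl queries under the hood); the pure helper keeps the comparison testable offline:

```python
import json
from urllib.request import urlopen

def pd_leader(pd_url="http://10.211.55.20:2379"):
    """Fetch the current PD leader member from the PD HTTP API (assumed endpoint)."""
    with urlopen(f"{pd_url}/pd/api/v1/leader") as resp:
        return json.load(resp)

def leader_is(expected, payload):
    """Pure comparison on the response payload, so the logic is testable offline."""
    return payload.get("name") == expected

# Offline example using the response captured in the transcript above
payload = {"name": "pd-10.211.55.23-2379", "member_id": 11296243912059560318}
assert leader_is("pd-10.211.55.23-2379", payload)
print("leader verified:", payload["name"])
```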

Adjust PD leader priority

Raise the leader priority of the PD node in dc2 and lower the priority of the PD node in dc1:

[tidb@tidb1 dr_auto_sync]$ tiup ctl:v7.1.5 pd -u http://10.211.55.20:2379 member leader_priority pd-10.211.55.20-2379 1
Starting component ctl: /home/tidb/.tiup/components/ctl/v7.1.5/ctl pd -u http://10.211.55.20:2379 member leader_priority pd-10.211.55.20-2379 1
Success!
[tidb@tidb1 dr_auto_sync]$ 
[tidb@tidb1 dr_auto_sync]$ tiup ctl:v7.1.5 pd -u http://10.211.55.20:2379 member leader_priority pd-10.211.55.23-2379 5
Starting component ctl: /home/tidb/.tiup/components/ctl/v7.1.5/ctl pd -u http://10.211.55.20:2379 member leader_priority pd-10.211.55.23-2379 5
Success!

Update the placement rules

[tidb@tidb1 dr_auto_sync]$ tiup ctl:v7.1.5 pd -u http://10.211.55.20:2379 config placement-rules rule-bundle save --in="/home/tidb/dr_auto_sync/rules_reverse.json" 
Starting component ctl: /home/tidb/.tiup/components/ctl/v7.1.5/ctl pd -u http://10.211.55.20:2379 config placement-rules rule-bundle save --in=/home/tidb/dr_auto_sync/rules_reverse.json
"Update rules and groups successfully."
​
[tidb@tidb1 dr_auto_sync]$ tiup ctl:v7.1.5 pd -u http://10.211.55.20:2379 config placement-rules show 
Starting component ctl: /home/tidb/.tiup/components/ctl/v7.1.5/ctl pd -u http://10.211.55.20:2379 config placement-rules show
[
{
  "group_id": "pd",
  "id": "followers",
  "start_key": "",
  "end_key": "",
  "role": "follower",
  "is_witness": false,
  "count": 1,
  "label_constraints": [
    {
      "key": "logic",
      "op": "in",
      "values": [
        "logic2"
      ]
    }
  ],
  "location_labels": [
    "dc",
    "logic",
    "rack",
    "host"
  ],
  "create_timestamp": 1719635156
},
{
  "group_id": "pd",
  "id": "learners",
  "start_key": "",
  "end_key": "",
  "role": "learner",
  "is_witness": false,
  "count": 1,
  "label_constraints": [
    {
      "key": "logic",
      "op": "in",
      "values": [
        "logic1"
      ]
    }
  ],
  "location_labels": [
    "dc",
    "logic",
    "rack",
    "host"
  ],
  "version": 1
},
{
  "group_id": "pd",
  "id": "voters",
  "start_key": "",
  "end_key": "",
  "role": "voter",
  "is_witness": false,
  "count": 2,
  "label_constraints": [
    {
      "key": "dc",
      "op": "in",
      "values": [
        "dc2"
      ]
    }
  ],
  "location_labels": [
    "dc",
    "logic",
    "rack",
    "host"
  ],
  "version": 1
}
]

5. Post-Switchover Verification

Check primary/secondary replication status

As expected, primary/secondary replication is back in the sync state.

[Screenshot: dr_autosync replication status after switchover]
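The same state can be confirmed from the command line instead of Grafana. The sketch below assumes PD reports DR auto-sync status at `/pd/api/v1/replication_mode/status`; the payload shape here is an assumption, and the parser works on a captured response:

```python
def drautosync_state(status):
    """Extract the DR auto-sync state from a PD replication-mode status payload."""
    if status.get("mode") != "dr-auto-sync":
        return None  # cluster is not running in DR auto-sync mode
    return status["dr-auto-sync"].get("state")

# Sample payload (shape assumed from PD's replication_mode/status endpoint);
# in a live check you would fetch it with urllib or curl from the PD address.
status = {"mode": "dr-auto-sync", "dr-auto-sync": {"label_key": "dc", "state": "sync"}}
assert drautosync_state(status) == "sync"
print("replication state:", drautosync_state(status))
```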

Check replica distribution

All Region leaders have moved to the dc2 data center.

[Screenshot: replica leader distribution after switchover]
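Leader placement can also be verified without the dashboard by aggregating `leader_count` per `dc` label from `pd-ctl store` JSON output. A sketch on a sample payload (the leader counts are illustrative, not from this test):

```python
from collections import Counter

def leaders_by_dc(stores):
    """Sum leader_count per 'dc' label from `pd-ctl store` output."""
    tally = Counter()
    for entry in stores["stores"]:
        labels = entry["store"].get("labels", [])
        dc = next((l["value"] for l in labels if l["key"] == "dc"), "unknown")
        tally[dc] += entry["status"]["leader_count"]
    return dict(tally)

# Sample shaped like `tiup ctl:v7.1.5 pd ... store` output (values illustrative)
stores = {"stores": [
    {"store": {"labels": [{"key": "dc", "value": "dc1"}]}, "status": {"leader_count": 0}},
    {"store": {"labels": [{"key": "dc", "value": "dc2"}]}, "status": {"leader_count": 58}},
    {"store": {"labels": [{"key": "dc", "value": "dc2"}]}, "status": {"leader_count": 61}},
]}
print(leaders_by_dc(stores))  # {'dc1': 0, 'dc2': 119}
```

After this switchover the dc1 total should be 0, matching the screenshot above.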

Check data consistency

[tidb@tidb1 dr_auto_sync]$ mysql -h10.211.55.20 -P4000 -uroot -proot
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 405
Server version: 5.7.25-TiDB-v7.1.5 TiDB Server (Apache License 2.0) Enterprise Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MySQL [test]> select count(*) from test_part;
+----------+
| count(*) |
+----------+
|  1190420 |
+----------+
1 row in set (0.78 sec)
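A single count on one side is a weak check; recording per-table counts before the switchover and diffing them afterwards catches obvious divergence. A minimal sketch (the helper is hypothetical; the table name and count are from this test):

```python
def diff_counts(before, after):
    """Return tables whose row counts changed across the switchover."""
    return {t: (before[t], after.get(t)) for t in before if before[t] != after.get(t)}

# Counts taken via `select count(*)` before and after the switchover
before = {"test_part": 1190420}
after = {"test_part": 1190420}
assert diff_counts(before, after) == {}  # no divergence detected
print("row counts match")
```

For a stronger guarantee, TiDB's `ADMIN CHECKSUM TABLE` can replace plain counts in the same workflow.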