MongoDB 4.4.8 Sharded Cluster Setup

July 15, 2023

Based on the geekdemo course.

A sharded cluster on 4-core / 8 GB nodes:

  • Configure name resolution and the shard directories
  • Create the shard replica sets and initialize them
  • Create the config replica set and initialize it
  • Initialize the sharded cluster
  • Add the first shard
  • Create a sharded collection
  • Add the second shard

The architecture is as follows:

image-20210807190701985.png

Distribution across the nodes:

image-20210807190800494.png

Installation

mkdir /data/db && cd /data/
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.4.8.tgz
tar xf mongodb-linux-x86_64-rhel70-4.4.8.tgz 
ln -sn mongodb-linux-x86_64-rhel70-4.4.8 mongodb
export PATH=$PATH:/data/mongodb/bin
echo "export PATH=$PATH:/data/mongodb/bin" >  /etc/profile.d/mongodb.sh
source /etc/profile.d/mongodb.sh
[root@Node-172_16_100_91 /data]# mongod -version
db version v4.4.8
Build Info: {
    "version": "4.4.8",
    "gitVersion": "83b8bb8b6b325d8d8d3dfd2ad9f744bdad7d6ca0",
    "openSSLVersion": "OpenSSL 1.0.1e-fips 11 Feb 2013",
    "modules": [],
    "allocator": "tcmalloc",
    "environment": {
        "distmod": "rhel70",
        "distarch": "x86_64",
        "target_arch": "x86_64"
    }
}

Configure name resolution

Use /etc/hosts entries to simulate DNS records:

172.16.100.91 mongodb_instance1 mongo1.linuxea.com mongo2.linuxea.com
172.16.100.92 mongodb_instance2 mongo3.linuxea.com mongo4.linuxea.com
172.16.100.93 mongodb_instance3 mongo5.linuxea.com mongo6.linuxea.com

Before adding these entries, set the hostname on each node to mongodb_instance1, mongodb_instance2, and mongodb_instance3 respectively (run the command below on each node with its own name):

for i in static pretty transient; do hostnamectl set-hostname mongodb_instance1 --$i; done

Then add the hosts entries:

echo "172.16.100.91 mogodb_instance1 mongo1.linuxea.com mongo2.linuxea.com" >> /etc/hosts
echo "172.16.100.92 mogodb_instance2 mongo3.linuxea.com mongo4.linuxea.com" >> /etc/hosts
echo "172.16.100.93 mogodb_instance3 mongo5.linuxea.com mongo6.linuxea.com" >> /etc/hosts

Create the directories

Run on members 1, 3, and 5:

mkdir -p /data/shard1/ /data/config/

Run on members 2, 4, and 6:

mkdir -p /data/shard2/ /data/mongos/

Building the cluster

On all three nodes, create the user and set file ownership:

groupadd -r -g 490 mongodb
useradd -M -u 490 -g mongodb -s /sbin/nologin -c 'mongodb server' mongodb

Initialize kernel parameters:

echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
chown -R mongodb.mongodb /data/shard*
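
Note that the transparent huge page settings above do not survive a reboot. One way to persist them (a sketch, not part of the original walkthrough) is a small oneshot systemd unit:

# /etc/systemd/system/disable-thp.service (hypothetical unit name)
[Unit]
Description=Disable transparent huge pages (MongoDB recommendation)

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled && echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable --now disable-thp.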

Replica set

Run the following command on member1/member3/member5, and note these parameters:

  • shardsvr: marks this mongod as part of a sharded cluster rather than an ordinary replica set;
  • wiredTigerCacheSizeGB: the amount of cache MongoDB may use. The default is (RAM - 1GB) / 2;
  • setting it above the default is not recommended, as it risks OOM;
  • since this test runs multiple instances on one server, we configure smaller values;
  • bind_ip: in production, binding a public IP is strongly discouraged; here we bind all addresses for demo convenience. For the same reason, production should enable authentication with --auth, which is omitted in this demo;

Run the following command on all three nodes:

sudo -u mongodb /data/mongodb/bin/mongod --bind_ip 0.0.0.0 --replSet shard1 --dbpath /data/shard1 --logpath /data/shard1/mongod.log --port 27010 --fork --shardsvr --wiredTigerCacheSizeGB 2
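
For reference, the same flags can also be expressed as a YAML config file and started with mongod -f <file>; a minimal sketch (this walkthrough itself sticks to command-line flags, and the file path is hypothetical):

# /data/shard1/mongod.conf
systemLog:
  destination: file
  path: /data/shard1/mongod.log
  logAppend: true
storage:
  dbPath: /data/shard1
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
net:
  bindIp: 0.0.0.0
  port: 27010
replication:
  replSetName: shard1
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true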

Open the firewall:

firewall-cmd --permanent --zone=public   --add-rich-rule='rule family="ipv4" source address="172.16.100.0/24" port port="27010" protocol="tcp" accept'
firewall-cmd --reload
firewall-cmd --list-all

Once this is running on all three nodes, combine them into a replica set.

Build the shard1 replica set from these three instances:

  • Connect to any one of the instances, for example mongo1.linuxea.com:
  mongo --host mongo1.linuxea.com:27010
  • Initialize the shard1 replica set with the following configuration:
  rs.initiate({
      _id: "shard1",
      "members" : [
          {
              "_id": 0,
              "host" : "mongo1.linuxea.com:27010"
          },
          {
              "_id": 1,
              "host" : "mongo3.linuxea.com:27010"
          },
          {
              "_id": 2,
              "host" : "mongo5.linuxea.com:27010"
          }
      ]
  });

Run it:

> rs.initiate({
...     _id: "shard1",
...     "members" : [
...         {
...             "_id": 0,
...             "host" : "mongo1.linuxea.com:27010"
...         },
...         {
...             "_id": 1,
...             "host" : "mongo3.linuxea.com:27010"
...         },
...         {
...             "_id": 2,
...             "host" : "mongo5.linuxea.com:27010"
...         }
...     ]
... });
{ "ok" : 1 }
shard1:SECONDARY> 

After a short wait, the shard1:SECONDARY> prompt will switch to shard1:PRIMARY>. Use rs.status() to check the state:

shard1:PRIMARY> rs.status()
{
    "set" : "shard1",
    "date" : ISODate("2021-08-07T14:54:37.331Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "majorityVoteCount" : 2,
    "writeMajorityCount" : 2,
    "votingMembersCount" : 3,
    "writableVotingMembersCount" : 3,
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1628348069, 1),
            "t" : NumberLong(1)
        },
        "lastCommittedWallTime" : ISODate("2021-08-07T14:54:29.354Z"),
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1628348069, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityWallTime" : ISODate("2021-08-07T14:54:29.354Z"),
        "appliedOpTime" : {
            "ts" : Timestamp(1628348069, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1628348069, 1),
            "t" : NumberLong(1)
        },
        "lastAppliedWallTime" : ISODate("2021-08-07T14:54:29.354Z"),
        "lastDurableWallTime" : ISODate("2021-08-07T14:54:29.354Z")
    },
    "lastStableRecoveryTimestamp" : Timestamp(1628348049, 2),
    "electionCandidateMetrics" : {
        "lastElectionReason" : "electionTimeout",
        "lastElectionDate" : ISODate("2021-08-07T14:54:08.916Z"),
        "electionTerm" : NumberLong(1),
        "lastCommittedOpTimeAtElection" : {
            "ts" : Timestamp(0, 0),
            "t" : NumberLong(-1)
        },
        "lastSeenOpTimeAtElection" : {
            "ts" : Timestamp(1628348037, 1),
            "t" : NumberLong(-1)
        },
        "numVotesNeeded" : 2,
        "priorityAtElection" : 1,
        "electionTimeoutMillis" : NumberLong(10000),
        "numCatchUpOps" : NumberLong(0),
        "newTermStartDate" : ISODate("2021-08-07T14:54:09.234Z"),
        "wMajorityWriteAvailabilityDate" : ISODate("2021-08-07T14:54:10.227Z")
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "mongo1.linuxea.com:27010",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 204,
            "optime" : {
                "ts" : Timestamp(1628348069, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2021-08-07T14:54:29Z"),
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1628348048, 1),
            "electionDate" : ISODate("2021-08-07T14:54:08Z"),
            "configVersion" : 1,
            "configTerm" : -1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "mongo3.linuxea.com:27010",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 39,
            "optime" : {
                "ts" : Timestamp(1628348069, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1628348069, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2021-08-07T14:54:29Z"),
            "optimeDurableDate" : ISODate("2021-08-07T14:54:29Z"),
            "lastHeartbeat" : ISODate("2021-08-07T14:54:36.949Z"),
            "lastHeartbeatRecv" : ISODate("2021-08-07T14:54:36.137Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncSourceHost" : "mongo1.linuxea.com:27010",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1,
            "configTerm" : -1
        },
        {
            "_id" : 2,
            "name" : "mongo5.linuxea.com:27010",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 39,
            "optime" : {
                "ts" : Timestamp(1628348069, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1628348069, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2021-08-07T14:54:29Z"),
            "optimeDurableDate" : ISODate("2021-08-07T14:54:29Z"),
            "lastHeartbeat" : ISODate("2021-08-07T14:54:36.949Z"),
            "lastHeartbeatRecv" : ISODate("2021-08-07T14:54:36.048Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncSourceHost" : "mongo1.linuxea.com:27010",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1,
            "configTerm" : -1
        }
    ],
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1628348069, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1628348069, 1)
}
shard1:PRIMARY> 

config replica set

  • configsvr: marks this instance as a config server

The config servers are built in much the same way as shard1. Run the following on member1/member3/member5:

  • Start the config instances:

    Set directory ownership:

    chown -R mongodb.mongodb /data/config/

    Start:

    sudo -u mongodb /data/mongodb/bin/mongod --bind_ip 0.0.0.0 --replSet config --dbpath /data/config --logpath /data/config/mongod.log --port 27019 --fork --configsvr --wiredTigerCacheSizeGB 1

    Open the firewall:

    firewall-cmd --permanent --zone=public   --add-rich-rule='rule family="ipv4" source address="172.16.100.0/24" port port="27019" protocol="tcp" accept'
    firewall-cmd --reload
    firewall-cmd --list-all
  • Connect to member1:

    mongo --host mongo1.linuxea.com:27019
  • Initialize the config replica set:

    rs.initiate({
        _id: "config",
        "members" : [
            {
                "_id": 0,
                "host" : "mongo1.linuxea.com:27019"
            },
            {
                "_id": 1,
                "host" : "mongo3.linuxea.com:27019"
            },
            {
                "_id": 2,
                "host" : "mongo5.linuxea.com:27019"
            }
        ]
    });

Run it:

> rs.initiate({
...     _id: "config",
...     "members" : [
...         {
...             "_id": 0,
...             "host" : "mongo1.linuxea.com:27019"
...         },
...         {
...             "_id": 1,
...             "host" : "mongo3.linuxea.com:27019"
...         },
...         {
...             "_id": 2,
...             "host" : "mongo5.linuxea.com:27019"
...         }
...     ]
... });
{
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1628348518, 1),
        "electionId" : ObjectId("000000000000000000000000")
    },
    "lastCommittedOpTime" : Timestamp(0, 0)
}

After a short wait, the prompt changes from config:SECONDARY> to config:PRIMARY>:

config:SECONDARY> 
config:SECONDARY> 
config:SECONDARY> 
config:PRIMARY> 
config:PRIMARY> 
config:PRIMARY> 

mongos setup

mongos setup is relatively simple. We run three mongos instances on member2/member4/member6. Note the following parameter:

  • configdb: the connection string of the config server replica set;

Start the build:

Set directory ownership:

chown -R mongodb.mongodb /data/mongos/
  • Start the mongos processes:

    sudo -u mongodb /data/mongodb/bin/mongos --bind_ip 0.0.0.0 --logpath /data/mongos/mongos.log --port 27017 --configdb config/mongo1.linuxea.com:27019,mongo3.linuxea.com:27019,mongo5.linuxea.com:27019 --fork

    Open the firewall:

    firewall-cmd --permanent --zone=public   --add-rich-rule='rule family="ipv4" source address="172.16.100.0/24" port port="27017" protocol="tcp" accept'
    firewall-cmd --reload
    firewall-cmd --list-all
  • Connect to any mongos; here we use member1:

    mongo --host mongo1.linuxea.com:27017
  • Add shard1 to the cluster:

    sh.addShard("shard1/mongo1.linuxea.com:27010,mongo3.linuxea.com:27010,mongo5.linuxea.com:27010");

Run it:

mongos> sh.addShard("shard1/mongo1.linuxea.com:27010,mongo3.linuxea.com:27010,mongo5.linuxea.com:27010");
{
    "shardAdded" : "shard1",
    "ok" : 1,
    "operationTime" : Timestamp(1628349200, 3),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1628349200, 3),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
mongos> 

Check:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("610ea073af74e0f6aa13bd64")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.linuxea.com:27010,mongo3.linuxea.com:27010,mongo5.linuxea.com:27010",  "state" : 1 }
  active mongoses:
        "4.4.8" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
mongos> 
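
If you only want the shard list rather than the full sh.status() report, the listShards admin command is a more compact check (an optional aside):

// returns the shards array (_id, host, state) registered in the cluster
mongos> db.adminCommand({ listShards: 1 })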

Create a sharded collection

So far we have built a sharded cluster with only one shard. Before continuing, let's test it.

  • Connect to the sharded cluster:

    mongo --host mongo1.linuxea.com:27017
    sh.status();
    mongos> sh.status()
    --- Sharding Status --- 
      sharding version: {
          "_id" : 1,
          "minCompatibleVersion" : 5,
          "currentVersion" : 6,
          "clusterId" : ObjectId("610ea073af74e0f6aa13bd64")
      }
      shards:
            {  "_id" : "shard1",  "host" : "shard1/mongo1.linuxea.com:27010,mongo3.linuxea.com:27010,mongo5.linuxea.com:27010",  "state" : 1 }
      active mongoses:
            "4.4.8" : 3
      autosplit:
            Currently enabled: yes
      balancer:
            Currently enabled:  yes
            Currently running:  no
            Failed balancer rounds in last 5 attempts:  0
            Migration Results for the last 24 hours: 
                    No recent migrations
      databases:
            {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                    config.system.sessions
                            shard key: { "_id" : 1 }
                            unique: false
                            balancing: true
                            chunks:
                                    shard1    1024
                            too many chunks to print, use verbose if you want to force print
  • Create a sharded collection:

    foo is the database name

    sh.enableSharding("foo");  // enable sharding on the foo database
    sh.shardCollection("foo.bar", {_id: 'hashed'});  // shard foo.bar, using a hashed _id as the shard key
    sh.status();

Run it:

mongos> sh.enableSharding("foo");
{
    "ok" : 1,
    "operationTime" : Timestamp(1628349504, 4),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1628349504, 4),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

mongos> sh.shardCollection("foo.bar", {_id: 'hashed'}); 
{
    "collectionsharded" : "foo.bar",
    "collectionUUID" : UUID("0bec2da5-e75b-4de4-82be-292584633af9"),
    "ok" : 1,
    "operationTime" : Timestamp(1628349791, 2),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1628349791, 2),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
mongos> 
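
As an aside: for an empty collection with a hashed shard key, sh.shardCollection() also accepts an options document that can pre-create a given number of chunks. A sketch (foo.bar2 is a hypothetical collection, not part of this walkthrough):

// 3rd argument: unique; 4th argument: options
// numInitialChunks only applies to empty collections with hashed shard keys
sh.shardCollection("foo.bar2", {_id: "hashed"}, false, {numInitialChunks: 4});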

Check the state with sh.status():

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("610ea073af74e0f6aa13bd64")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/mongo1.linuxea.com:27010,mongo3.linuxea.com:27010,mongo5.linuxea.com:27010",  "state" : 1 }
  active mongoses:
        "4.4.8" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1    1024
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "foo",  "primary" : "shard1",  "partitioned" : true,  "version" : {  "uuid" : UUID("eb727625-7da5-4da6-a162-25ce2ee2973d"),  "lastMod" : 1 } }
                foo.bar
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1    2
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong(0) } on : shard1 Timestamp(1, 0) 
                        { "_id" : NumberLong(0) } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 1) 
  • Write some test data:

    use foo
    for (var i = 0; i < 10000; i++) {
        db.bar.insert({i: i});
    }

Run it to insert 10,000 documents:

mongos> use foo
switched to db foo
mongos> for (var i = 0; i < 10000; i++) {
...     db.bar.insert({i: i});
... }
WriteResult({ "nInserted" : 1 })
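
A side note: the loop above issues 10,000 separate inserts. A batched write is much faster over the network; a sketch using insertMany:

// build the documents in memory, then send them in a single batch
var docs = [];
for (var i = 0; i < 10000; i++) {
    docs.push({i: i});
}
db.bar.insertMany(docs);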

Looking again at sh.status(), the databases section now includes a foo entry.

"_id" : "foo", "primary": 表示每个分片集群里面都有一个主shard,有些操作会在primary上进行

partitioned: true: sharding is enabled on this database, and its sharded collections are listed beneath it.

"_id" : "foo",  "primary" : "shard1",  "partitioned" : true, 
mongos> sh.status()
--- Sharding Status --- 
..............
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1    1024
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "foo",  "primary" : "shard1",  "partitioned" : true,  "version" : {  "uuid" : UUID("eb727625-7da5-4da6-a162-25ce2ee2973d"),  "lastMod" : 1 } }
                foo.bar
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1    2
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong(0) } on : shard1 Timestamp(1, 0) 
                        { "_id" : NumberLong(0) } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 1) 

Add a new shard to the cluster

Create the replica set

Next we build shard2, add it to the cluster, and observe the effect.

Build shard2 the same way as shard1. Run the following on member2/member4/member6:

Set ownership:

chown -R mongodb.mongodb /data/shard2/

Start the instances and open the firewall:

sudo -u mongodb /data/mongodb/bin/mongod --bind_ip 0.0.0.0 --replSet shard2 --dbpath /data/shard2 --logpath /data/shard2/mongod.log --port 27011 --fork --shardsvr --wiredTigerCacheSizeGB 1
firewall-cmd --permanent --zone=public   --add-rich-rule='rule family="ipv4" source address="172.16.100.0/24" port port="27011" protocol="tcp" accept'
firewall-cmd --reload
firewall-cmd --list-all

Build the shard2 replica set from these three instances:

  • Connect to any one of the instances, for example mongo2.linuxea.com:

    mongo --host mongo2.linuxea.com:27011
  • Initialize the shard2 replica set with the following configuration:

    rs.initiate({
        _id: "shard2",
        "members" : [
            {
                "_id": 0,
                "host" : "mongo2.linuxea.com:27011"
            },
            {
                "_id": 1,
                "host" : "mongo4.linuxea.com:27011"
            },
            {
                "_id": 2,
                "host" : "mongo6.linuxea.com:27011"
            }
        ]
    });

Run it:

> rs.initiate({
...     _id: "shard2",
...     "members" : [
...         {
...             "_id": 0,
...             "host" : "mongo2.linuxea.com:27011"
...         },
...         {
...             "_id": 1,
...             "host" : "mongo4.linuxea.com:27011"
...         },
...         {
...             "_id": 2,
...             "host" : "mongo6.linuxea.com:27011"
...         }
...     ]
... });
{ "ok" : 1 }

Then, after a short wait, shard2:SECONDARY> switches to shard2:PRIMARY>:

shard2:SECONDARY> 
shard2:SECONDARY> 
shard2:SECONDARY> 
shard2:PRIMARY> 
shard2:PRIMARY> 

If you do not see the prompt change, look for the PRIMARY in rs.status():

shard2:PRIMARY> rs.status()
{
..................
    "members" : [
        {
            "_id" : 0,
            "name" : "mongo2.linuxea.com:27011",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 110,
            "optime" : {
                "ts" : Timestamp(1628350780, 1),
                "t" : NumberLong(1)
................
        },
        {
            "_id" : 1,
            "name" : "mongo4.linuxea.com:27011",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 66,
            "optime" : {
.............
        },
        {
            "_id" : 2,
            "name" : "mongo6.linuxea.com:27011",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 66,
            "optime" : {
                "ts" : Timestamp(1628350780, 1),
                "t" : NumberLong(1)
..........
}
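
Rather than scanning the full output, a shell one-liner can pull out just the primary (an optional aside):

// prints the host:port of the current PRIMARY
rs.status().members.filter(function(m) { return m.stateStr === "PRIMARY"; })[0].name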

Add the second shard

  • Connect to any mongos. Here we use mongo1:

    mongo --host mongo1.linuxea.com:27017
  • Add shard2 to the cluster:

    sh.addShard("shard2/mongo2.linuxea.com:27011,mongo4.linuxea.com:27011,mongo6.linuxea.com:27011");
    mongos> sh.addShard("shard2/mongo2.linuxea.com:27011,mongo4.linuxea.com:27011,mongo6.linuxea.com:27011");
    {
        "shardAdded" : "shard2",
        "ok" : 1,
        "operationTime" : Timestamp(1628351344, 3),
        "$clusterTime" : {
            "clusterTime" : Timestamp(1628351344, 4),
            "signature" : {
                "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                "keyId" : NumberLong(0)
            }
        }
    }
  • Observe sh.status():

    sh.status();
    mongos> sh.status()
    --- Sharding Status --- 
    .......
      shards:
            {  "_id" : "shard1",  "host" : "shard1/mongo1.linuxea.com:27010,mongo3.linuxea.com:27010,mongo5.linuxea.com:27010",  "state" : 1 }
            {  "_id" : "shard2",  "host" : "shard2/mongo2.linuxea.com:27011,mongo4.linuxea.com:27011,mongo6.linuxea.com:27011",  "state" : 1 }
      active mongoses:
    ........
                            unique: false
                            balancing: true
                            chunks:
                                    shard1    1
                                    shard2    1
                            { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(2, 0) 
                            { "_id" : NumberLong(0) } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(2, 1) 
    mongos> 

Notice that one of the two chunks originally on shard1 has been balanced over to shard2. This is MongoDB's automatic balancing at work.
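
To see how the documents themselves are spread across the shards, getShardDistribution is a handy check:

mongos> use foo
mongos> db.bar.getShardDistribution()   // per-shard document counts and data sizes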

Everything can now be operated through mongos.

Restarting the cluster

Start the config servers first:

sudo -u mongodb /data/mongodb/bin/mongod --bind_ip 0.0.0.0 --replSet config --dbpath /data/config --logpath /data/config/mongod.log --port 27019 --fork --configsvr --wiredTigerCacheSizeGB 1

Then start the shard1 and shard2 replica set members (start however many instances each node hosts):

sudo -u mongodb /data/mongodb/bin/mongod --bind_ip 0.0.0.0 --replSet shard1 --dbpath /data/shard1 --logpath /data/shard1/mongod.log --port 27010 --fork --shardsvr --wiredTigerCacheSizeGB 2
sudo -u mongodb /data/mongodb/bin/mongod --bind_ip 0.0.0.0 --replSet shard2 --dbpath /data/shard2 --logpath /data/shard2/mongod.log --port 27011 --fork --shardsvr --wiredTigerCacheSizeGB 1

Finally, start mongos:

sudo -u mongodb /data/mongodb/bin/mongos --bind_ip 0.0.0.0 --logpath /data/mongos/mongos.log --port 27017 --configdb config/mongo1.linuxea.com:27019,mongo3.linuxea.com:27019,mongo5.linuxea.com:27019 --fork
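
To avoid restarting everything by hand, each instance could be wrapped in a systemd unit; a hypothetical sketch for a shard1 member (one unit per instance, adjusted per node):

# /etc/systemd/system/mongod-shard1.service (hypothetical unit name)
[Unit]
Description=MongoDB shard1 member
After=network.target

[Service]
Type=forking
User=mongodb
Group=mongodb
ExecStart=/data/mongodb/bin/mongod --bind_ip 0.0.0.0 --replSet shard1 --dbpath /data/shard1 --logpath /data/shard1/mongod.log --port 27010 --fork --shardsvr --wiredTigerCacheSizeGB 2

[Install]
WantedBy=multi-user.target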
