Introduction to RGW
The Ceph Object Gateway, also known simply as the object gateway (RGW), exposes object storage through the S3 and Swift protocols; both access objects over plain HTTP. User objects are processed by RGW and ultimately stored in the backend RADOS cluster.
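Because both protocols are plain HTTP, you can probe a running gateway with nothing more than curl. A minimal sketch (the host and the default civetweb port 7480 here are only illustrative assumptions):

```
# An unauthenticated S3-style request; a live gateway answers with an
# XML ListAllMyBucketsResult owned by "anonymous"
curl http://192.168.31.20:7480
```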
RGW exposes three basic logical data entities:
1. Users
RGW is compatible with AWS S3 and OpenStack Swift. User data includes:
2. Buckets
A bucket is a container for objects: a first-level management unit introduced to make it easy to manage and operate on a class of objects sharing the same attributes. Bucket information includes:
3. Objects
In RGW, application objects map to RADOS objects. An application object can be uploaded either as a whole or in multiple parts, and the mapping from application object to RADOS objects differs between the two methods.
3.1 Whole-object upload
In a whole-object upload, RGW stripes the application object by the stripe size into a head RADOS object plus tail objects; the head object also stores the object's metadata and manifest.
3.2 Multipart upload
In a multipart upload, RGW splits each part of the application object into multiple RADOS objects according to the stripe size; the first RADOS object of each part is named:
When all parts have been uploaded, RGW reads the per-part information, mainly each part's manifest, from the temporary multipart-upload object in the data_extra_pool and assembles them into a single manifest; it then generates a new RADOS object, the head obj, which stores the metadata of the multipart-uploaded application object together with the manifests of all parts.
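To see this mapping in practice, one approach is to force a multipart upload and then list the backing RADOS objects. A sketch, assuming the default data pool name default.rgw.buckets.data; the bucket and file names are placeholders:

```
# Upload with a forced 15 MB part size so the multipart path is taken
s3cmd put --multipart-chunk-size-mb=15 ./big.bin s3://some-bucket/

# List the RADOS objects backing the bucket data; the head object plus the
# per-part objects for big.bin should appear here
rados -p default.rgw.buckets.data ls | grep big.bin
```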
RGW High-Availability Architecture
Adding an RGW Node and Updating the Config File
Ceph cluster information:
| IP | Roles | Hostname | Notes |
| --- | --- | --- | --- |
| 192.168.31.20 | mon, mgr, mds (active), osd, rgw | ceph-01 | rgw port changed to 81 |
| 192.168.31.21 | mon, mgr, mds, osd, rgw | ceph-02 | new rgw node added here |
| 192.168.31.22 | mon, mgr (active), mds, osd | ceph-03 | |
| 192.168.31.23 | osd | ceph-04 | |
| 192.168.31.120 | VIP | | |
Our Ceph cluster currently has only a single rgw node, ceph-01. We will now add rgw on ceph-02 so that, together with ceph-01, we get a highly available setup.
```
[root@ceph-01 ~]# ceph -s
  cluster:
    id:     c8ae7537-8693-40df-8943-733f82049642
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 37h)
    mgr: ceph-03(active, since 3d), standbys: ceph-02, ceph-01
    mds: cephfs-abcdocker:1 {0=ceph-01=up:active} 2 up:standby
    osd: 4 osds: 4 up (since 3d), 4 in (since 3w)
    rgw: 1 daemon active (ceph-01)

  task status:

  data:
    pools:   9 pools, 384 pgs
    objects: 3.16k objects, 11 GiB
    usage:   36 GiB used, 144 GiB / 180 GiB avail
    pgs:     384 active+clean
```
ceph-01 already runs rgw, so we only need to install rgw on ceph-02, and then install keepalived together with haproxy (or nginx) on both ceph-01 and ceph-02 to proxy rgw.
Install rgw on ceph-02
```
# ceph-deploy is installed on ceph-01, so run this from the ceph-deploy directory
[root@ceph-01 ~]# cd ceph-deploy

# Create the rgw daemon via ceph-deploy; the steps mirror the initial cluster setup
# (ceph-02 is the hostname of the node being added)
[root@ceph-01 ceph-deploy]# ceph-deploy rgw create ceph-02
```
ceph -s now shows the new rgw daemon:
```
[root@ceph-01 ceph-deploy]# ceph -s
  cluster:
    id:     c8ae7537-8693-40df-8943-733f82049642
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 11h)
    mgr: ceph-03(active, since 4d), standbys: ceph-02, ceph-01
    mds: cephfs-abcdocker:1 {0=ceph-01=up:active} 2 up:standby
    osd: 4 osds: 4 up (since 4d), 4 in (since 3w)
    rgw: 2 daemons active (ceph-01, ceph-02)   # ceph-02 has been added

  task status:

  data:
    pools:   9 pools, 384 pgs
    objects: 3.16k objects, 11 GiB
    usage:   36 GiB used, 144 GiB / 180 GiB avail
    pgs:     384 active+clean

  io:
    client: 3.2 KiB/s rd, 0 B/s wr, 3 op/s rd, 2 op/s wr
```
By default rgw listens on port 7480; next we change this default port from 7480 to 81.
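Before changing anything, you can confirm the current listener on an rgw node, for example:

```
# radosgw should still be listening on the civetweb default port 7480
lsof -i:7480
curl 127.0.0.1:7480
```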
Change the port on ceph-01 and ceph-02
```
# Edit ceph.conf in the ceph-deploy directory; it can then be distributed
# to every node with a single command
[root@ceph-01 ceph-deploy]# vim /root/ceph-deploy/ceph.conf

[client.rgw.ceph-01]
rgw_frontends = "civetweb port=81"

# Duplicate the ceph-01 section for ceph-02; both ports are set to 81
[client.rgw.ceph-02]
rgw_frontends = "civetweb port=81"
```
After editing, push the config file to the ceph-01 and ceph-02 nodes:
```
[root@ceph-01 ceph-deploy]# ceph-deploy --overwrite-conf config push ceph-01 ceph-02

# Now restart radosgw manually; both ceph-01 and ceph-02 need the restart
[root@ceph-01 ceph-deploy]# systemctl restart ceph-radosgw.target
[root@ceph-01 ceph-deploy]# lsof -i:81
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
radosgw 12755 ceph   45u  IPv4 1308899      0t0  TCP *:81 (LISTEN)

# Restart on ceph-02 as well
[root@ceph-01 ceph-deploy]# ssh ceph-02
Last login: Wed Feb 23 15:30:58 2022 from ceph-01
[root@ceph-02 ~]# systemctl restart ceph-radosgw.target
[root@ceph-02 ~]# lsof -i:81
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
radosgw 21845 ceph   45u  IPv4 823286      0t0  TCP *:81 (LISTEN)
[root@ceph-02 ~]# curl 127.0.0.1:81
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
```
Keepalived Setup
Install keepalived on the ceph-01 and ceph-02 nodes
```
[root@ceph-01 ~]# yum install -y keepalived
[root@ceph-02 ~]# yum install -y keepalived
```
Back up the default config file on ceph-01 and ceph-02
```
[root@ceph-01 ~]# mv /etc/keepalived/keepalived.conf{,.bak_2022-03-16}
[root@ceph-02 ~]# mv /etc/keepalived/keepalived.conf{,.bak_2022-03-16}
```
Configure ceph-01
VIP: 192.168.31.120, network interface: eth0
```
cat >/etc/keepalived/keepalived.conf<<EOF
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight -2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.31.120
    }
    track_script {
        chk_haproxy
    }
}
EOF
```
Configure ceph-02
```
cat >/etc/keepalived/keepalived.conf<<EOF
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight -2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.31.120
    }
    track_script {
        chk_haproxy
    }
}
EOF
```
Start keepalived on ceph-01 and ceph-02
systemctl enable --now keepalived
Once started, test keepalived
```
[root@ceph-01 ~]# ping 192.168.31.120
PING 192.168.31.120 (192.168.31.120) 56(84) bytes of data.
64 bytes from 192.168.31.120: icmp_seq=1 ttl=64 time=0.138 ms
64 bytes from 192.168.31.120: icmp_seq=2 ttl=64 time=0.134 ms
64 bytes from 192.168.31.120: icmp_seq=3 ttl=64 time=0.123 ms
^C
--- 192.168.31.120 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.123/0.131/0.138/0.014 ms
```
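Ping alone does not tell you which node is answering. To check which node currently holds the VIP, inspect the interface on each node (eth0, as configured above); only the MASTER should carry it:

```
# Expect the VIP on eth0 of ceph-01 (MASTER) and no output on ceph-02 (BACKUP)
ip addr show eth0 | grep 192.168.31.120
```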
Proxying RGW with HAProxy
Next install haproxy; run this on both ceph-01 and ceph-02:
yum install -y haproxy
Overwrite the configuration on ceph-01 and ceph-02. Adjust the parameters to your environment; in particular, the IPs and hostnames must match your nodes.
```
cat >/etc/haproxy/haproxy.cfg<<EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend http_web
    bind *:80
    mode http
    default_backend rgw

backend rgw
    balance roundrobin
    mode http
    # "check" enables health checks so a dead rgw is taken out of rotation
    server ceph-01 192.168.31.20:81 check
    server ceph-02 192.168.31.21:81 check
EOF
```
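Before starting the service it is worth validating the file; haproxy can check its own configuration syntax:

```
# Exits non-zero and points at the offending line on syntax errors
haproxy -c -f /etc/haproxy/haproxy.cfg
```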
Enable and start haproxy on ceph-01 and ceph-02
systemctl enable --now haproxy
Check the service
```
[root@ceph-01 ~]# lsof -i:80
COMMAND   PID    USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
haproxy 16030 haproxy    5u  IPv4 1330793      0t0  TCP *:http (LISTEN)
[root@ceph-01 ~]# curl 127.0.0.1
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
```
haproxy is now fully configured; finally, test through the VIP:
```
# curl the VIP address to test
[root@ceph-01 ~]# curl 192.168.31.120
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
```
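To verify that failover actually works, you can simulate a failure of the MASTER. A sketch: stopping haproxy on ceph-01 makes the chk_haproxy script fail, keepalived lowers ceph-01's priority from 100 to 98, and ceph-02 (priority 99) takes over the VIP:

```
# On ceph-01: simulate a failure
[root@ceph-01 ~]# systemctl stop haproxy

# The VIP should still answer, now served by ceph-02
[root@ceph-01 ~]# curl 192.168.31.120

# Restore ceph-01; with default preemption it reclaims MASTER once haproxy is back
[root@ceph-01 ~]# systemctl start haproxy
```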
RGW Client Changes
For creating buckets from an rgw client, refer to the article below:
Ceph-deploy: Quick Ceph Cluster Deployment
Earlier we used the s3cmd command; now update the rgw address it points to, replacing ceph-01's address with the VIP:
sed -i 's/192.168.31.20/192.168.31.120/g' /root/.s3cfg
List the buckets to verify
```
[root@ceph-01 ~]# sed -i 's/192.168.31.20/192.168.31.120/g' /root/.s3cfg
[root@ceph-01 ~]# vim .s3cfg   # optionally double-check the result
[root@ceph-01 ~]# s3cmd ls
2022-01-27 07:31  s3://ceph-s3-bucket
2022-01-27 10:07  s3://s3cmd-abcdocker-demo
```
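As a final check you can exercise the whole path through the VIP; a sketch with a throwaway bucket (the names here are only examples):

```
# Create a bucket, upload a file, list it, then clean up
s3cmd mb s3://ha-check
echo hello > /tmp/ha-check.txt
s3cmd put /tmp/ha-check.txt s3://ha-check/
s3cmd ls s3://ha-check
s3cmd del s3://ha-check/ha-check.txt
s3cmd rb s3://ha-check
```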
Related articles:
- Ceph-deploy: Quick Ceph Cluster Deployment
- Everyday Ceph Cluster Commands
- Ceph OSD Expansion and Shrinking
- Ceph RBD Backup and Restore