RAC Migration Project in Practice, Part 3: Replacing the RAC Storage

December 24, 2023

This follows on from the previous two articles:

RAC Migration Project in Practice, Part 1: Building RAC-to-RAC Data Guard

RAC Migration Project in Practice, Part 2: RAC-to-RAC Switchover / IP Change / Data Guard Recovery

This is the final part of the series and covers the last step: moving the ASM storage from the original production array to another array.

The whole process is simply an online replacement of ASM disks: add the new disks, drop the old ones, and let ASM rebalance. The same approach can be used for online storage migration in production.

My test environment uses iSCSI to simulate the storage operations.

1. Current environment

Virtual iSCSI storage IPs: 192.168.57.200 / 192.168.58.200

RAC1 iSCSI NIC: 192.168.57.10

RAC2 iSCSI NIC: 192.168.58.10

Volume      OS device   UDEV device       Size  ASM disk group  Notes
----------  ----------  ----------------  ----  --------------  ----------------------
votedisk01  /dev/sdb    /dev/asm-diskb    5G    VOTEDISK        existing disk
data01      /dev/sdc    /dev/asm-diskc    20G   DATA            existing disk
votedisk02  /dev/sdd    /dev/asm-diskd    5G    VOTEDISK        new disk added here
data02      /dev/sde    /dev/asm-diske    20G   DATA            new disk added here

The following script files are used throughout this article.

Script to generate the UDEV disk rules:

for i in b c;
do echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"";
done

The resulting ASM rules file:

[root@rac1 ~]# cat /etc/udev/rules.d/80-asm.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360003ff44dc75adca0aca6bc521f7e08", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360003ff44dc75adcbc1a135921716933", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
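Each RESULT value is the SCSI WWID reported by scsi_id. If needed, it can be checked by hand for any single device; for example, for /dev/sdb the WWID should match the asm-diskb rule above:

[root@rac1 ~]# /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
360003ff44dc75adca0aca6bc521f7e08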

Place the following script files in the /home/grid directory.

The first script shows, for each ASM disk, its disk group, LUN path, disk name, total size, and free size. Save it as list-asmdisk1.sql.

SET LINESIZE 200
SET PAGESIZE 100
col groupname format a25 heading 'Disk Group'
col path format a35 heading 'LUN Path'
col diskname format a20 heading 'Disk Name'
col sector_size format 9,999 heading 'Sector |Size'
col block_size format 99,999 heading 'Block |Size'
col state format a10 heading 'State'
col au format 9,999,999 heading 'AU Size'
col total_gig format 999,999 heading 'Group |Total |GB'
col dtotal_gig format 999,999 heading 'Disk |Total |GB'
col free_gig format 999,999 heading 'Group |Free |GB'
col dfree_gig format 999,999 heading 'Disk |Free |GB'
select
g.name groupname,
d.path,
d.name diskname,
d.total_mb/1024 dtotal_gig,
d.free_mb/1024 dfree_gig
from
v$asm_diskgroup g, v$asm_disk d
where
d.group_number = g.group_number
order by
g.name, d.disk_number;

Lists all ASM disks. Save this script as list-asmdisk2.sql.

col name for a40;
set line 200;
col path for a30;
select name,path,state,HEADER_STATUS from v$asm_disk;

Shows the ASM disk group information. Save this script as list-diskgroup.sql.

select state,name,type,total_mb, free_mb from v$asm_diskgroup_stat ;

Shows the disk rebalance progress. Save this script as list-reblance.sql.

select * from v$asm_operation;
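A slightly more focused variant of the rebalance check can also be handy while an operation is running; this is only a convenience sketch, not one of the scripts used later in the article:

set line 200
select group_number, operation, state, power, sofar, est_work, est_rate, est_minutes, error_code
from v$asm_operation;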

2. Mapping the new disks

Create the iSCSI disks votedisk02 and data02 on the storage side.

Rescan the iSCSI storage on both nodes to discover the two newly added disks, sdd and sde. Command output below:

--RAC1: check current disks
[root@rac1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 120G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 119.5G 0 part
├─VolGroup-lv_root (dm-0) 253:0 0 103.5G 0 lvm /
└─VolGroup-lv_swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
sdb 8:16 0 5G 0 disk
sdc 8:32 0 20G 0 disk
--RAC1: rescan to discover the new disks
[root@rac1 ~]# iscsiadm -m session --rescan
Rescanning session [sid: 2, target: iqn.1991-05.com.microsoft:tgt-rac-target, portal: 192.168.58.200,3260]
[root@rac1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 120G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 119.5G 0 part
├─VolGroup-lv_root (dm-0) 253:0 0 103.5G 0 lvm /
└─VolGroup-lv_swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
sdb 8:16 0 5G 0 disk
sdc 8:32 0 20G 0 disk
sdd 8:48 0 5G 0 disk
sde 8:64 0 20G 0 disk

--RAC2: check current disks
[root@rac2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 120G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 119.5G 0 part
├─VolGroup-lv_root (dm-0) 253:0 0 103.5G 0 lvm /
└─VolGroup-lv_swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
sdb 8:16 0 5G 0 disk
sdc 8:32 0 20G 0 disk
--RAC2: rescan to discover the new disks
[root@rac2 ~]# iscsiadm -m session --rescan
Rescanning session [sid: 2, target: iqn.1991-05.com.microsoft:tgt-rac-target, portal: 192.168.57.200,3260]
[root@rac2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 120G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 119.5G 0 part
├─VolGroup-lv_root (dm-0) 253:0 0 103.5G 0 lvm /
└─VolGroup-lv_swap (dm-1) 253:1 0 16G 0 lvm [SWAP]
sdb 8:16 0 5G 0 disk
sdc 8:32 0 20G 0 disk
sdd 8:48 0 5G 0 disk
sde 8:64 0 20G 0 disk

Generate the UDEV rules for the new disks

[root@rac2 dev]# for i in d e;
> do echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"";
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360003ff44dc75adc80f5f5ea50872cb9", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360003ff44dc75adc83ef9200280817fb", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"

Append the generated lines to the ASM rules file; this must be done on both nodes.

[root@rac1 ~]# cat /etc/udev/rules.d/80-asm.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360003ff44dc75adca0aca6bc521f7e08", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360003ff44dc75adcbc1a135921716933", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360003ff44dc75adc80f5f5ea50872cb9", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="360003ff44dc75adc83ef9200280817fb", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
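Instead of copying the output by hand, the same loop can append the lines directly to the rules file; a minimal sketch (run as root on each node, then re-check the file as above):

for i in d e; do
  echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done >> /etc/udev/rules.d/80-asm.rules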

Make the rules take effect by running start_udev on both nodes:

[root@rac1 ~]# start_udev
Starting udev: [ OK ]
[root@rac1 ~]# ls -l /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Dec 24 19:50 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Dec 24 19:50 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Dec 24 19:50 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Dec 24 19:50 /dev/asm-diske
[root@rac2 dev]# start_udev
Starting udev: [ OK ]
[root@rac2 dev]# ls -l /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Dec 24 19:51 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Dec 24 19:51 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Dec 24 19:51 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Dec 24 19:51 /dev/asm-diske

3. Adding the new disks to the existing disk groups

The following steps only need to be run on one node.

As the grid user, log in to the ASM instance with sqlplus / as sysasm and check the disk groups. There are currently two disk groups, DATA and VOTEDISK, both using external redundancy.

SQL> @list-diskgroup.sql

STATE NAME TYPE TOTAL_MB FREE_MB
----------- ------------------------------ ------ ---------- ----------
MOUNTED DATA EXTERN 20480 17440
MOUNTED VOTEDISK EXTERN 5120 4724

Check the ASM disks. The newly added /dev/asm-diskd and /dev/asm-diske both show a HEADER_STATUS of CANDIDATE, which means they are available for use.

SQL> @list-asmdisk1.sql

Disk Disk
Total Free
Disk Group LUN Path Disk Name GB GB
------------------------- ----------------------------------- -------------------- -------- --------
DATA /dev/asm-diskc DATA_0001 20 17
VOTEDISK /dev/asm-diskb VOTEDISK_0000 5 5
SQL> @list-asmdisk2.sql

NAME LUN Path State HEADER_STATU
---------------------------------------- ------------------------------ ---------- ------------
/dev/asm-diskd NORMAL CANDIDATE
/dev/asm-diske NORMAL CANDIDATE
DATA_0001 /dev/asm-diskc NORMAL MEMBER
VOTEDISK_0000 /dev/asm-diskb NORMAL MEMBER

Add the new disks to the disk groups and drop the old ones:

--Method 1: the statement returns as soon as it is accepted, but the disks are only truly replaced once the background rebalance finishes (check with @list-reblance.sql, or the GV$ variant shown after this block; no rows returned means the rebalance is complete).
alter diskgroup votedisk add disk '/dev/asm-diskd' drop disk VOTEDISK_0000 rebalance power 11;
alter diskgroup data add disk '/dev/asm-diske' drop disk DATA_0001 rebalance power 11;
--Method 2: with the WAIT keyword, the statement only returns after the add/drop and the rebalance have fully completed.
alter diskgroup votedisk add disk '/dev/asm-diskd' drop disk VOTEDISK_0000 rebalance power 11 wait;
alter diskgroup data add disk '/dev/asm-diske' drop disk DATA_0001 rebalance power 11 wait;
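When method 1 is used in a RAC environment, it can be convenient to poll GV$ASM_OPERATION so the activity is visible no matter which ASM instance is performing it; a small hedged variant of list-reblance.sql:

select inst_id, group_number, operation, state, power, sofar, est_work, est_minutes
from gv$asm_operation;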

I used method 2. The execution log is below; note that after the drop, the old disk's HEADER_STATUS becomes FORMER.

SQL> alter diskgroup votedisk add disk '/dev/asm-diskd' drop disk VOTEDISK_0000 rebalance power 11 wait;

Diskgroup altered.

SQL> @list-asmdisk1.sql

Disk Disk
Total Free
Disk Group LUN Path Disk Name GB GB
------------------------- ----------------------------------- -------------------- -------- --------
DATA /dev/asm-diskc DATA_0001 20 17
VOTEDISK /dev/asm-diskd VOTEDISK_0001 5 5

SQL> @list-asmdisk2.sql

NAME LUN Path State HEADER_STATU
---------------------------------------- ------------------------------ ---------- ------------
/dev/asm-diske NORMAL CANDIDATE
/dev/asm-diskb NORMAL FORMER
DATA_0001 /dev/asm-diskc NORMAL MEMBER
VOTEDISK_0001 /dev/asm-diskd NORMAL MEMBER

SQL> alter diskgroup data add disk '/dev/asm-diske' drop disk DATA_0001 rebalance power 11 wait;

Diskgroup altered.

SQL> @list-asmdisk2.sql

NAME LUN Path State HEADER_STATU
---------------------------------------- ------------------------------ ---------- ------------
/dev/asm-diskc NORMAL FORMER
/dev/asm-diskb NORMAL FORMER
DATA_0000 /dev/asm-diske NORMAL MEMBER
VOTEDISK_0001 /dev/asm-diskd NORMAL MEMBER
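Since the VOTEDISK disk group carries the clusterware voting file (the alert log below shows it being relocated), it is worth double-checking its location afterwards. A quick check, not part of the original procedure; the output should now show the voting file on /dev/asm-diskd:

[grid@rac1 ~]$ crsctl query css votedisk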

ASM instance alert log output during the execution:

---ASM instance alert log
[grid@rac1 trace]$ tail -f alert_+ASM1.log
Sun Dec 24 20:07:04 2023
---Output for the first command
SQL> alter diskgroup votedisk add disk '/dev/asm-diskd' drop disk VOTEDISK_0000 rebalance power 11 wait
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (2,1) to disk (/dev/asm-diskd)
NOTE: requesting all-instance membership refresh for group=2
NOTE: initializing header on grp 2 disk VOTEDISK_0001
NOTE: requesting all-instance disk validation for group=2
Sun Dec 24 20:07:07 2023
NOTE: skipping rediscovery for group 2/0x68c934c4 (VOTEDISK) on local instance.
NOTE: requesting all-instance disk validation for group=2
NOTE: skipping rediscovery for group 2/0x68c934c4 (VOTEDISK) on local instance.
NOTE: Attempting voting file relocation on diskgroup VOTEDISK
NOTE: Failed voting file relocation on diskgroup VOTEDISK
NOTE: initiating PST update: grp = 2
Sun Dec 24 20:07:11 2023
GMON updating group 2 at 9 for pid 38, osid 17044
NOTE: PST update grp = 2 completed successfully
NOTE: membership refresh pending for group 2/0x68c934c4 (VOTEDISK)
GMON querying group 2 at 10 for pid 18, osid 8087
NOTE: cache opening disk 1 of grp 2: VOTEDISK_0001 path:/dev/asm-diskd
NOTE: Attempting voting file refresh on diskgroup VOTEDISK
NOTE: Refresh completed on diskgroup VOTEDISK
. Found 1 voting file(s).
NOTE: Voting file relocation is required in diskgroup VOTEDISK
NOTE: Attempting voting file relocation on diskgroup VOTEDISK
NOTE: Failed voting file relocation on diskgroup VOTEDISK
GMON querying group 2 at 11 for pid 18, osid 8087
SUCCESS: refreshed membership for 2/0x68c934c4 (VOTEDISK)
NOTE: starting rebalance of group 2/0x68c934c4 (VOTEDISK) at power 11
Sun Dec 24 20:07:17 2023
Starting background process ARB0
Sun Dec 24 20:07:17 2023
ARB0 started with pid=40, OS id=17266
NOTE: assigning ARB0 to group 2/0x68c934c4 (VOTEDISK) with 11 parallel I/Os
cellip.ora not found.
NOTE: F1X0 copy 1 relocating from 0:2 to 1:2 for diskgroup 2 (VOTEDISK)
Sun Dec 24 20:07:32 2023
NOTE: Attempting voting file refresh on diskgroup VOTEDISK
NOTE: Refresh completed on diskgroup VOTEDISK
. Found 1 voting file(s).
NOTE: Voting file relocation is required in diskgroup VOTEDISK
NOTE: Attempting voting file relocation on diskgroup VOTEDISK
NOTE: voting file allocation on grp 2 disk VOTEDISK_0001
NOTE: voting file deletion on grp 2 disk VOTEDISK_0000
NOTE: Successful voting file relocation on diskgroup VOTEDISK
Sun Dec 24 20:07:43 2023
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 2/0x68c934c4 (VOTEDISK)
Sun Dec 24 20:07:45 2023
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=2
Sun Dec 24 20:07:48 2023
GMON updating for reconfiguration, group 2 at 12 for pid 40, osid 17349
NOTE: group 2 PST updated.
SUCCESS: grp 2 disk VOTEDISK_0000 emptied
NOTE: erasing header on grp 2 disk VOTEDISK_0000
NOTE: process _x000_+asm1 (17349) initiating offline of disk 0.3916022838 (VOTEDISK_0000) with mask 0x7e in group 2
NOTE: initiating PST update: grp = 2, dsk = 0/0xe969c436, mask = 0x6a, op = clear
GMON updating disk modes for group 2 at 13 for pid 40, osid 17349
NOTE: group VOTEDISK: updated PST location: disk 0001 (PST copy 0)
NOTE: PST update grp = 2 completed successfully
NOTE: initiating PST update: grp = 2, dsk = 0/0xe969c436, mask = 0x7e, op = clear
GMON updating disk modes for group 2 at 14 for pid 40, osid 17349
NOTE: cache closing disk 0 of grp 2: VOTEDISK_0000
NOTE: PST update grp = 2 completed successfully
GMON updating for reconfiguration, group 2 at 15 for pid 40, osid 17349
NOTE: cache closing disk 0 of grp 2: (not open) VOTEDISK_0000
NOTE: group 2 PST updated.
NOTE: membership refresh pending for group 2/0x68c934c4 (VOTEDISK)
NOTE: Attempting voting file refresh on diskgroup VOTEDISK
NOTE: Refresh completed on diskgroup VOTEDISK
. Found 1 voting file(s).
NOTE: Voting file relocation is required in diskgroup VOTEDISK
NOTE: Attempting voting file relocation on diskgroup VOTEDISK
NOTE: Successful voting file relocation on diskgroup VOTEDISK
GMON querying group 2 at 16 for pid 18, osid 8087
GMON querying group 2 at 17 for pid 18, osid 8087
NOTE: Disk VOTEDISK_0000 in mode 0x0 marked for de-assignment
SUCCESS: refreshed membership for 2/0x68c934c4 (VOTEDISK)
Sun Dec 24 20:07:54 2023
NOTE: Attempting voting file refresh on diskgroup VOTEDISK
NOTE: Refresh completed on diskgroup VOTEDISK
. Found 1 voting file(s).
Sun Dec 24 20:07:55 2023
SUCCESS: alter diskgroup votedisk add disk '/dev/asm-diskd' drop disk VOTEDISK_0000 rebalance power 11 wait
Sun Dec 24 20:08:59 2023
---Output for the second command
SQL> alter diskgroup data add disk '/dev/asm-diske' drop disk DATA_0001 rebalance power 11 wait
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (1,0) to disk (/dev/asm-diske)
NOTE: requesting all-instance membership refresh for group=1
NOTE: initializing header on grp 1 disk DATA_0000
NOTE: requesting all-instance disk validation for group=1
Sun Dec 24 20:09:01 2023
NOTE: skipping rediscovery for group 1/0x68b934c3 (DATA) on local instance.
NOTE: requesting all-instance disk validation for group=1
NOTE: skipping rediscovery for group 1/0x68b934c3 (DATA) on local instance.
NOTE: initiating PST update: grp = 1
Sun Dec 24 20:09:07 2023
GMON updating group 1 at 18 for pid 38, osid 17044
NOTE: PST update grp = 1 completed successfully
NOTE: membership refresh pending for group 1/0x68b934c3 (DATA)
GMON querying group 1 at 19 for pid 18, osid 8087
NOTE: cache opening disk 0 of grp 1: DATA_0000 path:/dev/asm-diske
Sun Dec 24 20:09:12 2023
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Sun Dec 24 20:09:18 2023
GMON querying group 1 at 20 for pid 18, osid 8087
SUCCESS: refreshed membership for 1/0x68b934c3 (DATA)
NOTE: starting rebalance of group 1/0x68b934c3 (DATA) at power 11
Starting background process ARB0
Sun Dec 24 20:09:22 2023
ARB0 started with pid=28, OS id=17581
NOTE: assigning ARB0 to group 1/0x68b934c3 (DATA) with 11 parallel I/Os
cellip.ora not found.
NOTE: F1X0 copy 1 relocating from 1:321 to 0:2 for diskgroup 1 (DATA)
Sun Dec 24 20:09:25 2023
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Sun Dec 24 20:11:50 2023
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 1/0x68b934c3 (DATA)
Sun Dec 24 20:11:50 2023
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=1
Sun Dec 24 20:11:53 2023
GMON updating for reconfiguration, group 1 at 21 for pid 40, osid 17349
NOTE: group 1 PST updated.
SUCCESS: grp 1 disk DATA_0001 emptied
NOTE: erasing header on grp 1 disk DATA_0001
NOTE: process _x000_+asm1 (17349) initiating offline of disk 1.3916022837 (DATA_0001) with mask 0x7e in group 1
NOTE: initiating PST update: grp = 1, dsk = 1/0xe969c435, mask = 0x6a, op = clear
GMON updating disk modes for group 1 at 22 for pid 40, osid 17349
NOTE: group DATA: updated PST location: disk 0000 (PST copy 0)
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp = 1, dsk = 1/0xe969c435, mask = 0x7e, op = clear
GMON updating disk modes for group 1 at 23 for pid 40, osid 17349
NOTE: cache closing disk 1 of grp 1: DATA_0001
NOTE: PST update grp = 1 completed successfully
GMON updating for reconfiguration, group 1 at 24 for pid 40, osid 17349
NOTE: cache closing disk 1 of grp 1: (not open) DATA_0001
NOTE: group 1 PST updated.
NOTE: membership refresh pending for group 1/0x68b934c3 (DATA)
GMON querying group 1 at 25 for pid 18, osid 8087
GMON querying group 1 at 26 for pid 18, osid 8087
NOTE: Disk DATA_0001 in mode 0x0 marked for de-assignment
SUCCESS: refreshed membership for 1/0x68b934c3 (DATA)
NOTE: Attempting voting file refresh on diskgroup DATA
NOTE: Refresh completed on diskgroup DATA. No voting file found.
Sun Dec 24 20:12:00 2023
SUCCESS: alter diskgroup data add disk '/dev/asm-diske' drop disk DATA_0001 rebalance power 11 wait

--Database instance alert log output
[oracle@rac1 trace]$ tail -f alert_orcl1.log
Sun Dec 24 20:09:09 2023
SUCCESS: disk DATA_0000 (0.3916022843) added to diskgroup DATA
Sun Dec 24 20:09:12 2023
Starting background process SMCO
Sun Dec 24 20:09:12 2023
SMCO started with pid=30, OS id=17575
Sun Dec 24 20:11:54 2023
NOTE: disk 1 (DATA_0001) in group 1 (DATA) is offline for reads
NOTE: disk 1 (DATA_0001) in group 1 (DATA) is offline for writes
SUCCESS: disk DATA_0001 (1.3916022837) dropped from diskgroup DATA
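All database files have been moved to the new LUN transparently. As an optional sanity check from the database side (not in the original procedure), confirm that the files still resolve under the +DATA disk group:

SQL> select name from v$datafile;
SQL> select member from v$logfile;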

4. Cleaning up the old disks

There is still a minor issue here: in my simulated environment, after unmapping the volumes on the storage side the disks could be deleted, but iSCSI rescans still misbehaved.

I have tested this on real production FC-SAN storage and the problem does not occur there, so treat the results in this section as a reference only.

Check the disk sessions before deleting:

[root@rac1 ~]# /etc/init.d/iscsi status
iSCSI Transport Class version 2.0-870
version 6.2.0-873.26.el6
Target: iqn.1991-05.com.microsoft:tgt-rac-target (non-flash)
Current Portal: 192.168.58.200:3260,1
Persistent Portal: 192.168.58.200:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:rac1
Iface IPaddress: 192.168.58.10
Iface HWaddress:
Iface Netdev:
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username:
password: ********
username_in:
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: No
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 18 State: running
scsi18 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running
scsi18 Channel 00 Id 0 Lun: 1
Attached scsi disk sdc State: running
scsi18 Channel 00 Id 0 Lun: 2
Attached scsi disk sdd State: running
scsi18 Channel 00 Id 0 Lun: 3
Attached scsi disk sde State: running

Delete the old disks on both nodes:

echo 1 > /sys/block/sdb/device/delete
echo 1 > /sys/block/sdc/device/delete

Unmap the volumes on the storage side and remove the obsolete entries from the UDEV rules file.
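A minimal sketch of cleaning the stale entries out of the rules file on both nodes (the sed patterns assume the device names used in this lab):

sed -i '/asm-diskb/d;/asm-diskc/d' /etc/udev/rules.d/80-asm.rules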

Restart udev:

[root@rac2 ~]# start_udev
Starting udev: [ OK ]

Log in to the ASM instance again to verify:

[root@rac2 ~]# su - grid
[grid@rac2 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.4.0 Production on Sun Dec 24 21:02:03 2023
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> @list-asmdisk2.sql
NAME PATH STATE HEADER_STATU
VOTEDISK_0001 /dev/asm-diskd NORMAL MEMBER
DATA_0000 /dev/asm-diske NORMAL MEMBER
SQL> exit

5. Additional notes

5.1 Notes on the columns shown in v$asm_operation

Each entry below lists the column as COLUMN (datatype), followed by its description.

GROUP_NUMBER (NUMBER)
Disk group number (primary key). This is a foreign key to the V$ASM_DISKGROUP view.

OPERATION (CHAR(5))
Type of operation:

  • REBAL - a rebalance is pending or running for this group.
  • REMIRROR - a remirror operation is pending for this group.
  • SCRUB - this group is waiting to be scrubbed.

Starting with Oracle Database 12c, new queries should use the PASS column instead of this one.

PASS (VARCHAR2(9))
Type of operation:

  • COMPACT - Oracle ASM packs user data more tightly together, improving performance by reducing seek distances.
  • PREPARE - completes the work corresponding to a prepare SQL operation. This phase is only enabled for FLEX and EXTENDED redundancy disk groups and requires COMPATIBLE.ASM set to 12.2 or higher.
  • REBALANCE - a rebalance is pending or running for this group.
  • REBUILD - restores the redundancy of forcing disks. Forcing disks are disks that were dropped with the FORCE option.
  • RESILVER - appears in Oracle Exadata environments when write-back flash cache is enabled.
  • RESYNC - a resync operation is running to bring one or more Oracle ASM disks online.
  • SCRUBBING - the disk group is being scrubbed.

Starting with Oracle Database 12c, new queries should use this column instead of OPERATION.

STATE (VARCHAR2(4))
State of the operation:

  • WAIT - no operation is running for the group.
  • EST - the amount of work the rebalance has to do is being estimated.
  • RUN - an operation is running for the group.
  • REAP - the operation is being run down.
  • DONE - shows a completed pass.
  • ERRS - the operation stopped because of errors.

The estimate is computed in the background, in parallel with the actual work, so the transition from the EST state to the RUN state can be very fast.

POWER (NUMBER)
Power requested for the operation, as specified by the ASM_POWER_LIMIT initialization parameter or by the command syntax; or, for scrubbing, the power specified by the power option of the scrub SQL syntax.

ACTUAL (NUMBER)
Power actually allocated to the operation.

SOFAR (NUMBER)
Number of allocation units moved so far by the operation; or, for scrubbing, the number of allocation units scrubbed so far.

EST_WORK (NUMBER)
Estimated number of allocation units the operation has to move; or, for scrubbing, the estimated number of allocation units to be scrubbed.

EST_RATE (NUMBER)
Estimated number of allocation units being moved per minute by the operation.

EST_MINUTES (NUMBER)
Estimated amount of time remaining for the operation, in minutes.

ERROR_CODE (VARCHAR2(44))
Oracle external error code; NULL if there is no error.

CON_ID (NUMBER)
ID of the container to which the data pertains. Possible values:

  • 0: rows containing data that pertains to the entire CDB; this value is also used for rows in non-CDBs.
  • 1: rows containing data that pertains only to the root.
  • n: where n is the applicable container ID for the row.

For this view, the value is always 0.

5.2 About rebalance power

The higher the value, the faster the rebalance runs. Avoid setting it too high during business hours, to limit the impact on production.
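If a rebalance is already running, its power can also be changed on the fly without restarting it, for example to lower it during business hours (disk group name taken from this environment):

SQL> alter diskgroup data rebalance power 2;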

5.3 Reference

online-oracle-database-storage-migration-with-oracle-asm

https://ronekins.com/2021/09/28/online-oracle-database-storage-migration-with-oracle-asm/
