Preface
HP-UX (Hewlett-Packard Unix) is a Unix-like operating system developed by Hewlett-Packard. Since its debut in the 1980s, HP-UX has evolved continuously in technology and features, adapting to a variety of hardware platforms and enterprise computing needs. The following is an overview of HP-UX's history:
1980s: Origins and Early Development
- 1983: HP-UX 1.0 released. Based on UNIX System III, it initially targeted the HP 9000 series.
- 1984: HP-UX 2.0 released, based on UNIX System V, with improved memory management and file system performance.
1990s: Expansion and Innovation
- 1991: HP-UX 8.0 released, introducing multiprocessor support and enhanced graphical user interface capabilities.
- 1993: HP-UX 9.0 released, further improving multitasking and file system performance.
- 1995: HP-UX 10.0 released. This major release brought many new features, including the Logical Volume Manager (LVM), an enhanced user interface, improved security, and support for symmetric multiprocessing (SMP).
- 1997: HP-UX 10.20 released, adding support for 64-bit computing and larger memory addressing.
2000s: 64-bit Architecture and Enterprise Features
- 2000: HP-UX 11i released, delivering stronger reliability, availability, and serviceability (RAS) and introducing mission-critical enterprise improvements such as dynamic partitions and virtual partitions.
- 2003: HP-UX 11i v2 (11.23) released, supporting Integrity servers and offering stronger security and manageability.
- 2007: HP-UX 11i v3 (11.31) released, further improving virtualization, security, performance, and management.
2010s: Continuous Improvement and Modernization
- 2010: HP-UX 11i v3 2010 update released, enhancing cloud computing and big data support.
- 2013: HP-UX 11i v3 2013 update released, supporting the latest hardware and further improving virtualization and security.
- 2017: HPE announced continued long-term support for HP-UX, committing to enhanced features and security updates for existing customers.
2020s: Transition to Long-Term Support
- 2020 and beyond: Although no major new releases are planned, HP-UX continues to serve mission-critical enterprise workloads. HPE still delivers patches, security updates, and technical support to keep existing HP-UX environments running reliably.
As x86 machines have caught up with midrange Unix servers in performance, HP midrange systems have become increasingly rare; you may still spot them in parts of the financial industry, but far less often than before. This article documents adding ASM disks to an Oracle RAC cluster on an HP-UX midrange server. The sheer length of the write-up hints at how involved the task is. It is shared here for reference, and as a glimpse of how early DBAs worked: cautiously, as if treading on thin ice.
Environment:
OS: HP-UX 11.31
Database: Oracle 10.2.0.4, 3-node RAC
Storage: HP EVA 6100 FC disks
Cluster: HP Serviceguard (HPCM)
1. Carve two 300 GB RAID5 FC vdisks on the EVA and present them to all nodes of the sfc12rc RAC.
2. Rescan the OS and identify the new disks on all nodes:
ioscan -N -fCdisk --list disks using agile (persistent) device names
ioscan -N -m lun --map LUNs to their lunpaths and device files
ioscan -fnCdisk --list disks with legacy device file names
Use smh to check disk properties: SMH -> Disks and File Systems -> Disks -> Details
VD name UUID LUN sfc12rc1 sfc12rc2 sfc12rc3
----------------------------------------------------------------------------------------------------------------
VD_12_ORA_DATA_13 6001-4380-02a5-7554-0001-0000-0096-0000 18 disk98 disk97 disk99
VD_12_ORA_DATA_14 6001-4380-02a5-7554-0001-0000-0099-0000 19 disk103 disk103 disk104
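Before touching LVM it is worth verifying the mapping above, since the same LUN gets a different diskNN instance on each node. A minimal cross-node check (a sketch; remsh access between the nodes is assumed, and rcp is already used later in this procedure):
remsh sfc12rc1 scsimgr get_attr -D /dev/rdisk/disk98 -a wwid
remsh sfc12rc2 scsimgr get_attr -D /dev/rdisk/disk97 -a wwid
remsh sfc12rc3 scsimgr get_attr -D /dev/rdisk/disk99 -a wwid
# all three should report the UUID of VD_12_ORA_DATA_13 (LUN 18), possibly
# as a raw hex string; repeat with disk103/disk103/disk104 for LUN 19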
3. Stop the Oracle cluster and the OS cluster.
Use sqlplus to shut down the database on each node, sfc12rc1 through sfc12rc3:
--sqlplus / as sysdba, then shutdown immediate on each node.
Stop all Oracle cluster services:
crs_stop -all
crsctl stop crs
Stop the OS cluster:
cmviewcl
cmhaltcl -f
cmviewcl
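Before halting the OS cluster, it is safer to confirm that Clusterware really stopped on every node. A small sketch, assuming remsh access and crsctl on root's PATH as in the sessions below:
for h in sfc12rc1 sfc12rc2 sfc12rc3
do
    echo "== $h =="
    remsh $h crsctl check crs   # should report the daemons as down once CRS is stopped
done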
Session log:
sfc12rc1:/# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
sfc12rc1:/# crsctl stop crs --stop Oracle Clusterware on this node
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
sfc12rc1:/#
sfc12rc1:/#
sfc12rc1:/# cmviewcl -- view information about a high availability cluster
CLUSTER STATUS
clu_RACSFC12 up
NODE STATUS STATE
sfc12rc1 up running
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP1 up running enabled sfc12rc1
pkgSFC12DB1 up running disabled sfc12rc1
NODE STATUS STATE
sfc12rc2 up running
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP2 up running enabled sfc12rc2
pkgSFC12DB2 up running disabled sfc12rc2
NODE STATUS STATE
sfc12rc3 up running
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP3 up running enabled sfc12rc3
pkgSFC12DB3 up running disabled sfc12rc3
sfc12rc1:/# cmhaltcl --attempt to halt the cluster; it refuses while packages are running
Package pkgSFC12IP1 is still running on sfc12rc1.
Package pkgSFC12IP2 is still running on sfc12rc2.
Package pkgSFC12IP3 is still running on sfc12rc3.
Package pkgSFC12DB1 is still running on sfc12rc1.
Package pkgSFC12DB2 is still running on sfc12rc2.
Package pkgSFC12DB3 is still running on sfc12rc3.
Use the -f option to forcefully halt the cluster/node including halting packages.
sfc12rc1:/# cmhaltcl -f
Disabling all packages from starting on nodes to be halted.
Warning: Do not modify or enable packages until the halt operation is completed.
Disabling automatic failover for failover packages to be halted.
Halting package pkgSFC12IP1
Successfully halted package pkgSFC12IP1
Halting package pkgSFC12DB1
Successfully halted package pkgSFC12DB1
Halting package pkgSFC12IP2
Successfully halted package pkgSFC12IP2
Halting package pkgSFC12DB2
Successfully halted package pkgSFC12DB2
Halting package pkgSFC12IP3
Successfully halted package pkgSFC12IP3
Halting package pkgSFC12DB3
Successfully halted package pkgSFC12DB3
This operation may take some time.
Waiting for nodes to halt ... done
Successfully halted all nodes specified.
Halt operation complete.
sfc12rc1:/# cmviewcl --confirm the OS cluster is down
CLUSTER STATUS
clu_RACSFC12 down
NODE STATUS STATE
sfc12rc1 down unknown
sfc12rc2 down unknown
sfc12rc3 down unknown
UNOWNED_PACKAGES
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP1 down halted enabled unowned
pkgSFC12IP2 down halted enabled unowned
pkgSFC12IP3 down halted enabled unowned
pkgSFC12DB1 down halted disabled unowned
pkgSFC12DB2 down halted disabled unowned
pkgSFC12DB3 down halted disabled unowned
sfc12rc1:/#
----------------------------------------------------------------------
4. Create the VG and LVs on sfc12rc1, then export the VG map file.
sfc12rc1:
pvcreate /dev/rdisk/disk98
pvcreate /dev/rdisk/disk103
mkdir /dev/vg_ora_data04
mknod /dev/vg_ora_data04/group c 64 0x070000
vgcreate -l 10 -s 32 vg_ora_data04 /dev/disk/disk98
vgextend vg_ora_data04 /dev/disk/disk103
vgdisplay vg_ora_data04
vgdisplay -v vg_ora_data04
lvcreate -l 9597 -n lvdata05 vg_ora_data04
lvcreate -l 9597 -n lvdata06 vg_ora_data04
mkdir -p /tmp/20111231 --on every node, sfc12rc1 through sfc12rc3
vgexport -p -v -s -m /tmp/20111231/vg_ora_data04.map vg_ora_data04
rcp /tmp/20111231/vg_ora_data04.map sfc12rc2:/tmp/20111231
rcp /tmp/20111231/vg_ora_data04.map sfc12rc3:/tmp/20111231
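Why -l 9597 rather than the full 9599: with a 32 MB PE size (vgcreate -s 32), each 300 GB vdisk yields 9599 physical extents, and 2 PEs per PV are left for the system (see the note in the session below), so each LV gets 9597 extents. A one-line sanity check of the arithmetic:
echo "9597 * 32" | bc   # 307104 MB, matching LV Size in the lvdisplay output below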
sfc12rc1:/# pvcreate /dev/rdisk/disk98 --create physical volume
Physical volume "/dev/rdisk/disk98" has been successfully created.
sfc12rc1:/# pvcreate /dev/rdisk/disk103 --create physical volume
Physical volume "/dev/rdisk/disk103" has been successfully created.
sfc12rc1:/# ls -lrt /dev/vg_ora_data04
/dev/vg_ora_data04 not found
sfc12rc1:/# mkdir /dev/vg_ora_data04
sfc12rc1:/# mknod /dev/vg_ora_data04/group c 64 0x070000
--the minor number just needs to differ from every other VG's
sfc12rc1:/# ls -lrt /dev/vg_ora_data04
total 0
crw-r--r-- 1 root sys 64 0x070000 Dec 31 08:53 group
sfc12rc1:/# vgdisplay vg_ora_data04 --display information about LVM volume groups
--- Volume groups ---
VG Name /dev/vg_ora_data04
VG Write Access read/write
VG Status available
Max LV 10
Cur LV 0
Open LV 0
Max PV 16
Cur PV 2
Act PV 2
Max PE per PV 9599
VGDA 4
PE Size (Mbytes) 32
Total PE 19198
Alloc PE 0
Free PE 19198
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 4914688m
VG Max Extents 153584
sfc12rc1:/# vgdisplay -v vg_ora_data04
--- Volume groups ---
VG Name /dev/vg_ora_data04
VG Write Access read/write
VG Status available
Max LV 10
Cur LV 0
Open LV 0
Max PV 16
Cur PV 2
Act PV 2
Max PE per PV 9599
VGDA 4
PE Size (Mbytes) 32
Total PE 19198
Alloc PE 0
Free PE 19198
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 4914688m
VG Max Extents 153584
--- Physical volumes ---
PV Name /dev/disk/disk98
PV Status available
Total PE 9599
Free PE 9599
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk103
PV Status available
Total PE 9599
Free PE 9599
Autoswitch On
Proactive Polling On
sfc12rc1:/#
sfc12rc1:/# lvcreate -l 9597 -n lvdata05 vg_ora_data04
--create logical volume; Max PE per PV is 9599, but 2 PEs are left reserved for the system
Logical volume "/dev/vg_ora_data04/lvdata05" has been successfully created with
character device "/dev/vg_ora_data04/rlvdata05".
Logical volume "/dev/vg_ora_data04/lvdata05" has been successfully extended.
Volume Group configuration for /dev/vg_ora_data04 has been saved in /etc/lvmconf/vg_ora_data04.conf
sfc12rc1:/# lvcreate -l 9597 -n lvdata06 vg_ora_data04
Logical volume "/dev/vg_ora_data04/lvdata06" has been successfully created with
character device "/dev/vg_ora_data04/rlvdata06".
Logical volume "/dev/vg_ora_data04/lvdata06" has been successfully extended.
Volume Group configuration for /dev/vg_ora_data04 has been saved in /etc/lvmconf/vg_ora_data04.conf
sfc12rc1:/#
sfc12rc1:/# lvdisplay /dev/vg_ora_data04/lvdata05 --show LV information
--- Logical volumes ---
LV Name /dev/vg_ora_data04/lvdata05
VG Name /dev/vg_ora_data04
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 307104
Current LE 9597
Allocated PE 9597
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default
sfc12rc1:/#
sfc12rc1:/# mkdir -p /tmp/20111231
sfc12rc1:/# vgexport -p -v -s -m /tmp/20111231/vg_ora_data04.map vg_ora_data04
--write the VG configuration to a map file (-p previews without removing the VG)
Beginning the export process on Volume Group "vg_ora_data04".
vgexport: Volume group "vg_ora_data04" is still active.
/dev/disk/disk98
/dev/disk/disk103
vgexport: Preview of vgexport on volume group "vg_ora_data04" succeeded.
sfc12rc1:/#
sfc12rc1:/# rcp /tmp/20111231/vg_ora_data04.map sfc12rc2:/tmp/20111231
sfc12rc1:/# rcp /tmp/20111231/vg_ora_data04.map sfc12rc3:/tmp/20111231
--copy the VG map file to the other two nodes
sfc12rc1:/#
5. Import the VG map file on sfc12rc2:
sfc12rc2:
mkdir /dev/vg_ora_data04
mknod /dev/vg_ora_data04/group c 64 0x070000
vgimport -v -m /tmp/20111231/vg_ora_data04.map vg_ora_data04 /dev/disk/disk97 \
/dev/disk/disk103
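The group device file must use major number 64 and a minor number unique among the VGs on each node; this procedure also keeps the minor (0x070000) identical on all three nodes, which makes the configuration easier to audit. A quick cross-node check (a sketch, assuming remsh):
for h in sfc12rc1 sfc12rc2 sfc12rc3
do
    echo "== $h =="
    remsh $h ls -l /dev/vg_ora_data04/group   # expect: crw-r--r-- ... 64 0x070000 ... group
done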
sfc12rc2:/# mkdir /dev/vg_ora_data04
sfc12rc2:/# mknod /dev/vg_ora_data04/group c 64 0x070000
sfc12rc2:/# ls -lrt /dev/vg_ora_data04
total 0
crw-r--r-- 1 root sys 64 0x070000 Dec 31 09:03 group
sfc12rc2:/# vgimport -v -m /tmp/20111231/vg_ora_data04.map vg_ora_data04 /dev/disk/disk97 \
> /dev/disk/disk103
--import the VG configuration on this node (the VG stays deactivated here; shared VGs are activated by the cluster, which is why the vgdisplay calls below fail)
Beginning the import process on Volume Group "vg_ora_data04".
vgimport: Warning: Volume Group belongs to different CPU ID.
Can not determine if Volume Group is in use on another system. Continuing.
Logical volume "/dev/vg_ora_data04/lvdata05" has been successfully created
with lv number 1.
Logical volume "/dev/vg_ora_data04/lvdata06" has been successfully created
with lv number 2.
vgimport: Volume group "/dev/vg_ora_data04" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
sfc12rc2:/# vgdisplay vg_ora_data04
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "vg_ora_data04".
sfc12rc2:/# vgdisplay -v vg_ora_data04
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "vg_ora_data04".
sfc12rc2:/# lvdisplay /dev/vg_ora_data04/lvdata05
lvdisplay: Couldn't query logical volume "/dev/vg_ora_data04/lvdata05":
Volume group not activated.
lvdisplay: Cannot display logical volume "/dev/vg_ora_data04/lvdata05".
sfc12rc2:/#
----------------------------------------------------------------------
6. Import the VG map file on sfc12rc3:
sfc12rc3:
mkdir /dev/vg_ora_data04
mknod /dev/vg_ora_data04/group c 64 0x070000
vgimport -v -m /tmp/20111231/vg_ora_data04.map vg_ora_data04 /dev/disk/disk99 \
/dev/disk/disk104
sfc12rc3:/#
sfc12rc3:/# mkdir /dev/vg_ora_data04
sfc12rc3:/# mknod /dev/vg_ora_data04/group c 64 0x070000
sfc12rc3:/# vgimport -v -m /tmp/20111231/vg_ora_data04.map vg_ora_data04 /dev/disk/disk99 \
> /dev/disk/disk104
--import the VG configuration on this node
Beginning the import process on Volume Group "vg_ora_data04".
vgimport: Warning: Volume Group belongs to different CPU ID.
Can not determine if Volume Group is in use on another system. Continuing.
Logical volume "/dev/vg_ora_data04/lvdata05" has been successfully created
with lv number 1.
Logical volume "/dev/vg_ora_data04/lvdata06" has been successfully created
with lv number 2.
vgimport: Volume group "/dev/vg_ora_data04" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
sfc12rc3:/#
----------------------------------------------------------------------
7. Modify the cluster configuration file and apply it on sfc12rc1:
sfc12rc1:
cd /etc/cmcluster/
Back up and edit /etc/cmcluster/cluster_sfc12.ascii.
Below the existing VG entries, add the new volume group:
OPS_VOLUME_GROUP /dev/vg_ora_data04
cmcheckconf -v -C /etc/cmcluster/cluster_sfc12.ascii
cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii
rcp -p cluster_sfc12.ascii sfc12rc2:/etc/cmcluster/
rcp -p cluster_sfc12.ascii sfc12rc3:/etc/cmcluster/
cd pkgSFC12DB1
Modify the pkgSFC12DB1 configuration:
back up pkgSFC12DB1.cntl, then add to it:
VG[5]=vg_ora_data04
cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \
/etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.conf
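Before running cmcheckconf, a quick grep confirms that both edits actually landed (a sketch; file paths as used in this procedure):
grep OPS_VOLUME_GROUP /etc/cmcluster/cluster_sfc12.ascii
# expect the existing entries plus: OPS_VOLUME_GROUP /dev/vg_ora_data04
grep '^VG\[' /etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.cntl
# expect VG[0] through VG[5], the last being VG[5]=vg_ora_data04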
sfc12rc1:/# cd /etc/cmcluster/
sfc12rc1:/etc/cmcluster# ls -lrt
total 208
-r-------- 1 bin bin 118 Mar 16 2007 cmknowncmds
drwxr-xr-x 2 bin bin 8192 Oct 22 2008 cfs
dr-xr-xr-x 2 bin bin 96 Oct 22 2008 examples
dr-xr-xr-x 4 root root 96 Oct 22 2008 modules
dr-xr-xr-x 5 bin bin 96 Jan 7 2009 scripts
dr-xr-xr-x 2 bin bin 8192 Jan 7 2009 lib
-rw-r--r-- 1 root sys 11 Jan 15 2009 mapfile
---------- 1 root root 0 Jan 15 2009 config.lck
drwxr-xr-x 2 root sys 96 Jan 15 2009 pkgSFC12IP1
drwxr-xr-x 2 root sys 96 Feb 9 2009 pkgSFC12IP3
drwxr-xr-x 2 root sys 96 Feb 20 2009 pkgSFC12IP2
-rw-r--r-- 1 root sys 10458 Jun 29 2009 cluster_sfc12.ascii20100921
drwx------ 2 root sys 8192 Sep 21 2010 pkgSFC12DB1
-rw-r--r-- 1 root sys 10495 Oct 2 2010 cluster_sfc12.ascii
-rw------- 1 root root 30916 Dec 31 08:45 cmclconfig
-rw------- 1 root root 0 Dec 31 08:45 cmclconfig.tmp
sfc12rc1:/etc/cmcluster# cp cluster_sfc12.ascii cluster_sfc12.ascii20111231
--back up a system file before modifying it
sfc12rc1:/etc/cmcluster# vi /etc/cmcluster/cluster_sfc12.ascii
--append the new volume group entry at the end of the file: OPS_VOLUME_GROUP /dev/vg_ora_data04
sfc12rc1:/etc/cmcluster#
sfc12rc1:/etc/cmcluster# cmcheckconf -v -C /etc/cmcluster/cluster_sfc12.ascii
--check the modified file for errors
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12
cmcheckconf: Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration.
sfc12rc1:/etc/cmcluster#
sfc12rc1:/etc/cmcluster# cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii
--apply the changes
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12
Modify the cluster configuration ([y]/n)? y
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
sfc12rc1:/etc/cmcluster#
sfc12rc1:/etc/cmcluster# rcp -p cluster_sfc12.ascii sfc12rc2:/etc/cmcluster/
sfc12rc1:/etc/cmcluster# rcp -p cluster_sfc12.ascii sfc12rc3:/etc/cmcluster/
--copy the updated cluster_sfc12.ascii to the other two nodes
sfc12rc1:/etc/cmcluster#
sfc12rc1:/etc/cmcluster# cd pkgSFC12DB1
sfc12rc1:/etc/cmcluster/pkgSFC12DB1# ls -lrt
total 464
-rwx------ 1 root sys 26764 Jan 15 2009 pkgSFC12DB1.conf
-rwx------ 1 root sys 64281 Jun 29 2009 pkgSFC12DB1.cntl20100921
-rwx------ 1 root sys 64301 Oct 2 2010 pkgSFC12DB1.cntl
-rw-r--r-- 1 root root 67729 Dec 31 08:47 pkgSFC12DB1.cntl.log
sfc12rc1:/etc/cmcluster/pkgSFC12DB1# cp pkgSFC12DB1.cntl pkgSFC12DB1.cntl20111231
sfc12rc1:/etc/cmcluster/pkgSFC12DB1# vi pkgSFC12DB1.cntl
#VG[0]=""
VG[0]=vg_ora_vote
VG[1]=vg_ora_data01
VG[2]=vg_ora_arch01
VG[3]=vg_ora_data02
VG[4]=vg_ora_data03
VG[5]=vg_ora_data04 --newly added
sfc12rc1:/etc/cmcluster/pkgSFC12DB1#
sfc12rc1:/etc/cmcluster/pkgSFC12DB1# cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \
> /etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.conf
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Parsing package file: /etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.conf.
/etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.conf: A legacy package is being used.
Package pkgSFC12DB1 already exists. It will be modified.
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12
Modifying the package configuration for package pkgSFC12DB1.
Modify the cluster configuration ([y]/n)? y
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
sfc12rc1:/etc/cmcluster/pkgSFC12DB1#
---------------------------------------------------------------------
8. Modify the package control file and apply the configuration on sfc12rc2:
sfc12rc2:
cd /etc/cmcluster/pkgSFC12DB2
Modify the pkgSFC12DB2 configuration:
back up pkgSFC12DB2.cntl, then add to it:
VG[5]=vg_ora_data04
cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \
/etc/cmcluster/pkgSFC12DB2/pkgSFC12DB2.conf
sfc12rc2:/# cd /etc/cmcluster/pkgSFC12DB2
sfc12rc2:/etc/cmcluster/pkgSFC12DB2# ls -lrt
total 448
-rwx------ 1 root sys 26764 Jan 15 2009 pkgSFC12DB2.conf
-rwx------ 1 root sys 64281 Jun 29 2009 pkgSFC12DB2.cntl20100921
-rwx------ 1 root sys 64301 Oct 2 2010 pkgSFC12DB2.cntl
-rw-r--r-- 1 root root 65051 Dec 31 08:47 pkgSFC12DB2.cntl.log
sfc12rc2:/etc/cmcluster/pkgSFC12DB2# vi pkgSFC12DB2.cntl
#VG[0]=""
VG[0]=vg_ora_vote
VG[1]=vg_ora_data01
VG[2]=vg_ora_arch01
VG[3]=vg_ora_data02
VG[4]=vg_ora_data03
VG[5]=vg_ora_data04 --newly added
sfc12rc2:/etc/cmcluster/pkgSFC12DB2#
sfc12rc2:/etc/cmcluster/pkgSFC12DB2# cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \
> /etc/cmcluster/pkgSFC12DB2/pkgSFC12DB2.conf
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Parsing package file: /etc/cmcluster/pkgSFC12DB2/pkgSFC12DB2.conf.
/etc/cmcluster/pkgSFC12DB2/pkgSFC12DB2.conf: A legacy package is being used.
Package pkgSFC12DB2 already exists. It will be modified.
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12
Modifying the package configuration for package pkgSFC12DB2.
Modify the cluster configuration ([y]/n)? y
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
sfc12rc2:/etc/cmcluster/pkgSFC12DB2#
---------------------------------------------------------------------
9. Modify the package control file and apply the configuration on sfc12rc3:
sfc12rc3:
cd /etc/cmcluster/pkgSFC12DB3
Modify the pkgSFC12DB3 configuration:
back up pkgSFC12DB3.cntl, then add to it:
VG[5]=vg_ora_data04
cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \
/etc/cmcluster/pkgSFC12DB3/pkgSFC12DB3.conf
sfc12rc3:/# cd /etc/cmcluster/pkgSFC12DB3
sfc12rc3:/etc/cmcluster/pkgSFC12DB3# ls -lrt
total 448
-rwx------ 1 root sys 26764 Jan 15 2009 pkgSFC12DB3.conf
-rwx------ 1 root sys 64281 Jun 29 2009 pkgSFC12DB3.cntl20100921
-rwx------ 1 root sys 64301 Oct 2 2010 pkgSFC12DB3.cntl
-rw-r--r-- 1 root root 62722 Dec 31 08:47 pkgSFC12DB3.cntl.log
sfc12rc3:/etc/cmcluster/pkgSFC12DB3# cp pkgSFC12DB3.cntl pkgSFC12DB3.cntl20111231
sfc12rc3:/etc/cmcluster/pkgSFC12DB3# vi pkgSFC12DB3.cntl
#VG[0]=""
VG[0]=vg_ora_vote
VG[1]=vg_ora_data01
VG[2]=vg_ora_arch01
VG[3]=vg_ora_data02
VG[4]=vg_ora_data03
VG[5]=vg_ora_data04 --newly added
sfc12rc3:/etc/cmcluster/pkgSFC12DB3# cmapplyconf -v -C /etc/cmcluster/cluster_sfc12.ascii -P \
> /etc/cmcluster/pkgSFC12DB3/pkgSFC12DB3.conf
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster_sfc12.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 20 devices on node sfc12rc1
Found 20 devices on node sfc12rc2
Found 20 devices on node sfc12rc3
Analysis of 60 devices should take approximately 5 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 8 volume groups on node sfc12rc1
Found 8 volume groups on node sfc12rc2
Found 8 volume groups on node sfc12rc3
Analysis of 24 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster clu_RACSFC12 is an existing cluster
Parsing package file: /etc/cmcluster/pkgSFC12DB3/pkgSFC12DB3.conf.
/etc/cmcluster/pkgSFC12DB3/pkgSFC12DB3.conf: A legacy package is being used.
Package pkgSFC12DB3 already exists. It will be modified.
Cluster clu_RACSFC12 is an existing cluster
Checking for inconsistencies
Maximum configured packages parameter is 150.
Configuring 6 package(s).
144 package(s) can be added to this cluster.
200 access policies can be added to this cluster.
Modifying configuration on node sfc12rc1
Modifying configuration on node sfc12rc2
Modifying configuration on node sfc12rc3
Modifying the cluster configuration for cluster clu_RACSFC12
Modifying node sfc12rc1 in cluster clu_RACSFC12
Modifying node sfc12rc2 in cluster clu_RACSFC12
Modifying node sfc12rc3 in cluster clu_RACSFC12
Modifying the package configuration for package pkgSFC12DB3.
Modify the cluster configuration ([y]/n)? y
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation
sfc12rc3:/etc/cmcluster/pkgSFC12DB3#
--------------------------------------------------------------------
10. Start the OS cluster (run from sfc12rc1):
vgchange -a n vg_ora_data04 --make sure the VG is deactivated; the cluster packages will activate it
cmruncl
cmviewcl
cmrunpkg -n sfc12rc1 pkgSFC12DB1
cmrunpkg -n sfc12rc2 pkgSFC12DB2
cmrunpkg -n sfc12rc3 pkgSFC12DB3
cmviewcl
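If a DB package fails to come up, its control log is the first place to look; once the package is running, the shared VG should show as activated on that node. A couple of hedged checks (run on sfc12rc1):
vgdisplay vg_ora_data04   # after package start the VG should report as available
tail -20 /etc/cmcluster/pkgSFC12DB1/pkgSFC12DB1.cntl.log   # per-package start/halt log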
sfc12rc1:/#
sfc12rc1:/# vgchange -a n vg_ora_data04 --deactivate the VG before cluster start
Volume group "vg_ora_data04" has been successfully changed.
sfc12rc1:/# vgdisplay /dev/vg_ora_data04
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vg_ora_data04".
sfc12rc1:/# cmruncl --start the cluster
cmruncl: Validating network configuration...
cmruncl: Network validation complete
cmruncl: Validating cluster lock disk .... Done
Waiting for cluster to form ....... done
Cluster successfully formed.
Check the syslog files on all nodes in the cluster to verify that no warnings occurred during startup.
sfc12rc1:/# cmviewcl --view cluster status
CLUSTER STATUS
clu_RACSFC12 up
NODE STATUS STATE
sfc12rc1 up running
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP1 up running enabled sfc12rc1
NODE STATUS STATE
sfc12rc2 up running
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP2 up running enabled sfc12rc2
NODE STATUS STATE
sfc12rc3 up running
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP3 up running enabled sfc12rc3
UNOWNED_PACKAGES
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12DB1 down halted disabled unowned
pkgSFC12DB2 down halted disabled unowned
pkgSFC12DB3 down halted disabled unowned
sfc12rc1:/# cmrunpkg -n sfc12rc1 pkgSFC12DB1 --the DB packages have AUTO_RUN disabled and must be started manually
Running package pkgSFC12DB1 on node sfc12rc1
Successfully started package pkgSFC12DB1 on node sfc12rc1
cmrunpkg: All specified packages are running
sfc12rc1:/# cmrunpkg -n sfc12rc2 pkgSFC12DB2
Running package pkgSFC12DB2 on node sfc12rc2
Successfully started package pkgSFC12DB2 on node sfc12rc2
cmrunpkg: All specified packages are running
sfc12rc1:/# cmrunpkg -n sfc12rc3 pkgSFC12DB3
Running package pkgSFC12DB3 on node sfc12rc3
Successfully started package pkgSFC12DB3 on node sfc12rc3
cmrunpkg: All specified packages are running
sfc12rc1:/# cmviewcl --show cluster status
CLUSTER STATUS
clu_RACSFC12 up
NODE STATUS STATE
sfc12rc1 up running
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP1 up running enabled sfc12rc1
pkgSFC12DB1 up running disabled sfc12rc1
NODE STATUS STATE
sfc12rc2 up running
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP2 up running enabled sfc12rc2
pkgSFC12DB2 up running disabled sfc12rc2
NODE STATUS STATE
sfc12rc3 up running
PACKAGE STATUS STATE AUTO_RUN NODE
pkgSFC12IP3 up running enabled sfc12rc3
pkgSFC12DB3 up running disabled sfc12rc3
sfc12rc1:/#
-------------------------------------------------------------------------------
11. Change the raw LV ownership and permissions, then start the Oracle cluster on each node, sfc12rc1 through sfc12rc3:
chown oracle:dba /dev/vg_ora_data04/rlvdata05
chmod 660 /dev/vg_ora_data04/rlvdata05
chown oracle:dba /dev/vg_ora_data04/rlvdata06
chmod 660 /dev/vg_ora_data04/rlvdata06
ls -lrt /dev/vg_ora_data04/rlvdata*
vi /apps/oracle/admin/+ASM/pfile/init.ora
append ",'/dev/vg_ora_data04/rlvdata05','/dev/vg_ora_data04/rlvdata06'" to asm_diskstring
crsctl start crs
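Before starting CRS, a quick loop verifies that the new raw devices were appended to asm_diskstring on every node (a sketch; pfile path as above, remsh assumed):
for h in sfc12rc1 sfc12rc2 sfc12rc3
do
    echo "== $h =="
    remsh $h grep -i asm_diskstring /apps/oracle/admin/+ASM/pfile/init.ora
done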
sfc12rc1:/#
sfc12rc1:/# chown oracle:dba /dev/vg_ora_data04/rlvdata05 --change owner
sfc12rc1:/# chmod 660 /dev/vg_ora_data04/rlvdata05 --change mode
sfc12rc1:/# chown oracle:dba /dev/vg_ora_data04/rlvdata06
sfc12rc1:/# chmod 660 /dev/vg_ora_data04/rlvdata06
sfc12rc1:/# ls -lrt /dev/vg_ora_data04/rlvdata*
crw-rw---- 1 oracle dba 64 0x070001 Dec 31 08:57 /dev/vg_ora_data04/rlvdata05
crw-rw---- 1 oracle dba 64 0x070002 Dec 31 08:57 /dev/vg_ora_data04/rlvdata06
sfc12rc1:/#
sfc12rc1:/# crsctl start crs --start crs
Attempting to start CRS stack
The CRS stack will be started shortly
sfc12rc1:/# crsctl check crs --check crs status
CSS appears healthy
CRS appears healthy
EVM appears healthy
sfc12rc1:/#
--------------------------------------------------------------------------------
12. Check that every node sees the new disks, then add the two disks to the DGDATA disk group:
column name format a20
select name,state,type,total_mb,free_mb,unbalanced from v$asm_diskgroup;
select name,path,total_mb,free_mb,MOUNT_STATUS,HEADER_STATUS,MODE_STATUS from v$asm_disk order by 1,2;
alter diskgroup dgdata add disk '/dev/vg_ora_data04/rlvdata05';
alter diskgroup dgdata add disk '/dev/vg_ora_data04/rlvdata06';
--
alter diskgroup dgdata rebalance power 11;
select * from v$asm_operation;
select name,total_mb,free_mb,unbalanced from v$asm_diskgroup;
select group_number,name,path,total_mb,free_mb,header_status from v$asm_disk order by 1,2;
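The power 11 rebalance runs asynchronously; v$asm_operation shows its progress and an EST_MINUTES estimate, and returns no rows once it completes. A small polling sketch (assumes the shell environment already points at the local +ASM instance, as in the idle> session below; interrupt it once no rows come back):
while true
do
    sqlplus -S / as sysdba <<'EOF'
select operation, state, power, sofar, est_work, est_minutes from v$asm_operation;
EOF
    sleep 60
done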
idle> column name format a20
idle> select name,state,type,total_mb,free_mb,unbalanced from v$asm_diskgroup;
NAME STATE TYPE TOTAL_MB FREE_MB UN
-------------------- ---------------------- ------------ ---------- ---------- --
DGARCH MOUNTED EXTERN 307104 304314 N
DGDATA MOUNTED EXTERN 1228416 168358 N
idle> select name,path,total_mb,free_mb,MOUNT_STATUS,HEADER_STATUS,MODE_STATUS from v$asm_disk order by 1,2;
NAME PATH TOTAL_MB FREE_MB MOUNT_STATUS HEADER_STATUS MODE_STATUS
-------------------- ------------------------------ ---------- ---------- -------------- ------------------------ --------------
DGARCH_0000 /dev/vg_ora_arch01/rlvarch01 307104 304314 CACHED MEMBER ONLINE
DGDATA_0000 /dev/vg_ora_data01/rlvdata01 307104 42084 CACHED MEMBER ONLINE
DGDATA_0001 /dev/vg_ora_data02/rlvdata02 307104 42090 CACHED MEMBER ONLINE
DGDATA_0002 /dev/vg_ora_data03/rlvdata04 307104 42096 CACHED MEMBER ONLINE
DGDATA_0003 /dev/vg_ora_data03/rlvdata03 307104 42088 CACHED MEMBER ONLINE
/dev/vg_ora_data04/rlvdata05 307104 0 CLOSED CANDIDATE ONLINE
/dev/vg_ora_data04/rlvdata06 307104 0 CLOSED CANDIDATE ONLINE
7 rows selected.
idle>
idle> alter diskgroup dgdata add disk '/dev/vg_ora_data04/rlvdata05';
--add disk to diskgroup
Diskgroup altered.
idle> alter diskgroup dgdata add disk '/dev/vg_ora_data04/rlvdata06';
--add disk to diskgroup
Diskgroup altered.
idle> alter diskgroup dgdata rebalance power 11; --rebalance diskgroup
Diskgroup altered.
idle>
idle>
idle> select * from v$asm_operation;
GROUP_NUMBER OPERATION  STATE         POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES
------------ ---------- -------- ---------- ---------- ---------- ---------- ---------- -----------
           2 REBAL      RUN              11         11         35     353309       6160          57
idle> select name,total_mb,free_mb,unbalanced from v$asm_diskgroup;
NAME                   TOTAL_MB    FREE_MB UN
-------------------- ---------- ---------- --
DGARCH                   307104     304314 N
DGDATA                  1842624     782558 N
idle> select group_number,name,path,total_mb,free_mb,header_status from v$asm_disk order by 1,2;
GROUP_NUMBER NAME                 PATH                             TOTAL_MB    FREE_MB HEADER_STATUS
------------ -------------------- ------------------------------ ---------- ---------- ------------------------
           1 DGARCH_0000          /dev/vg_ora_arch01/rlvarch01       307104     304314 MEMBER
           2 DGDATA_0000          /dev/vg_ora_data01/rlvdata01       307104      42095 MEMBER
           2 DGDATA_0001          /dev/vg_ora_data02/rlvdata02       307104      42102 MEMBER
           2 DGDATA_0002          /dev/vg_ora_data03/rlvdata04       307104      42107 MEMBER
           2 DGDATA_0003          /dev/vg_ora_data03/rlvdata03       307104      42098 MEMBER
           2 DGDATA_0004          /dev/vg_ora_data04/rlvdata05       307104     307078 MEMBER
           2 DGDATA_0005          /dev/vg_ora_data04/rlvdata06       307104     307078 MEMBER
7 rows selected.
idle>
-------------------------------------------------------------------------------------------------------------
--if necessary, run on each node, sfc12rc1 through sfc12rc3, and check that the init file is correct:
alter system set asm_diskstring='/dev/vg_ora_arch01/rlvarch01','/dev/vg_ora_data01/rlvdata01','/dev/vg_ora_data02/rlvdata02','/dev/vg_ora_data03/rlvdata03','/dev/vg_ora_data03/rlvdata04','/dev/vg_ora_data04/rlvdata05','/dev/vg_ora_data04/rlvdata06';