###########sample 1
OCR corruption messages are reported in crsd.log, and the automatic OCR backup is failing. ocrcheck complains "Device/File integrity check failed":
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3372
Available space (kbytes) : 258748
ID : 1423232882
Device/File Name : +DBFS_DG
Device/File integrity check failed <<<<<<<<<<<<<<<<<<<<<<<<
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check failed
Logical corruption check bypassed due to insufficient quorum
alert<racnode1>.log shows:
[crsd(77158)]CRS-1006:The OCR location +DBFS_DG is inaccessible. Details in /u01/app/11.2.0.3/grid/log/racnode1/crsd/crsd.log.
2014-07-28 19:12:18.023:
[/u01/app/11.2.0.3/grid/bin/orarootagent.bin(77413)]CRS-5822:Agent '/u01/app/11.2.0.3/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:2:6} in /u01/app/11.2.0.3/grid/log/racnode1/agent/crsd/orarootagent_root/orarootagent_root.log.
2014-07-28 19:12:47.718:
[ohasd(40904)]CRS-2765:Resource 'ora.crsd' has failed on server 'racnode1'.
2014-07-28 19:12:54.369:
[crsd(9099)]CRS-1012:The OCR service started on node racnode1.
2014-07-28 19:12:55.702:
[crsd(9099)]CRS-1201:CRSD started on node racnode1.
2014-07-29 03:45:36.471:
[crsd(9099)]CRS-1006:The OCR location +DBFS_DG is inaccessible. Details in /u01/app/11.2.0.3/grid/log/racnode1/crsd/crsd.log.
crsd.log shows:
2014-07-31 07:13:09.244: [ OCRASM][2175183168]proprasmres: Block from mirror #1 is same as buffer passed
2014-07-31 07:13:09.254: [ OCRASM][2175183168]proprasmres: Block from mirror #2 is same as buffer passed
2014-07-31 07:13:09.278: [ OCRASM][2175183168]proprasmres: Total 2 mirrors detected
2014-07-31 07:13:09.278: [ OCRASM][2175183168]proprasmres: Block from mirror #1 same as block from mirror #2
2014-07-31 07:13:09.278: [ OCRASM][2175183168]proprasmres: 2 mirrors found in this disk group.
2014-07-31 07:13:09.278: [ OCRASM][2175183168]proprasmres: The buffer passed matches the buffers read from all 2 mirrors.
2014-07-31 07:13:09.278: [ OCRASM][2175183168]proprasmres: Need to invoke checkdg. The buffer passed matches with buffer from all mirrors.
2014-07-31 07:13:09.488: [ OCRASM][2175183168]proprasmres: Successfully returned after calling Check DG.
2014-07-31 07:13:09.488: [ OCRRAW][2175183168]proprior:1 ASM re silver returned [22]
2014-07-31 07:13:09.488: [ OCRRAW][2175183168]gst: Dev/Page/Block [0/843/1904] is CORRUPT (header) <<<
2014-07-31 07:13:09.488: [ OCRRAW][2175183168]rbkp:2: Problem [26]. Could not read the free list
2014-07-31 07:13:09.488: [ OCRRAW][2175183168]gst:could not read fcl page 1
2014-07-31 07:13:09.488: [ OCRRAW][2175183168]rbkp:2: Problem [26]. Could not read the free list
2014-07-31 07:13:09.488: [ OCRRAW][2175183168]gst:could not read fcl page 2
2014-07-31 07:13:09.488: [ OCRSRV][2175183168]th_snap:6''':Failed corruption check reading device [+DBFS_DG]. Not taking backup. <<<
2014-07-31 07:13:09.488: [ OCRSRV][2175183168]th_snap:8:failed to take backup retval [0] corruption [1]
2014-07-31 07:13:36.549: [UiServer][2156271936] CS(0x2628b20)set Properties ( grid,0x7f747c01ff30)
CHANGES
No recent changes.
CAUSE
The OCR is corrupted; the root cause is unknown. The corruption also causes the automatic OCR backup to fail.
SOLUTION
Restoring the OCR from a good backup is the only way to move forward. Please refer to Note 1062983.1 How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems for details. Here are the simplified steps to restore the OCR only:
1. Locate the latest automatic OCR backup; check all nodes in the cluster.
2. Make sure the Grid Infrastructure is shut down on all nodes (as the root user).
3. Start the CRS stack in exclusive mode on the node where the OCR backup is located.
4. Restore the latest OCR backup
# <GRID_HOME>/bin/ocrconfig -restore backup00.ocr << replace backup00.ocr with the proper OCR backup file name
5. Shut down and restart Grid Infrastructure (on all nodes)
# <GRID_HOME>/bin/crsctl start crs
6. Rerun the ocrcheck command to verify that it now reports "Device/File integrity check succeeded"
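The six steps above can be sketched as one command sequence. This is a hedged sketch only: the Grid home path matches the logs in this note, but the backup file path, cluster name, and backup name are placeholders to be replaced with the values your own "ocrconfig -showbackup" reports.

```shell
# Run everything as root. GRID_HOME is an assumption for this environment.
export GRID_HOME=/u01/app/11.2.0.3/grid

# 1. Locate the latest automatic OCR backup (check every node).
$GRID_HOME/bin/ocrconfig -showbackup

# 2. Stop Grid Infrastructure on ALL nodes.
$GRID_HOME/bin/crsctl stop crs -f

# 3. Start the stack in exclusive mode on the node holding the backup
#    (the -nocrs flag exists from 11.2.0.2 onwards; omit it on 11.2.0.1).
$GRID_HOME/bin/crsctl start crs -excl -nocrs

# 4. Restore the chosen backup (path and file name are placeholders).
$GRID_HOME/bin/ocrconfig -restore $GRID_HOME/cdata/<clustername>/backup00.ocr

# 5. Stop the exclusive-mode stack, then restart normally on all nodes.
$GRID_HOME/bin/crsctl stop crs -f
$GRID_HOME/bin/crsctl start crs

# 6. Verify the OCR is healthy again.
$GRID_HOME/bin/ocrcheck
```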
###########sample 2
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
The goal of this note is to provide steps to add, remove, replace or move an Oracle Cluster Registry (OCR) and/or Voting Disk in Oracle Clusterware 10gR2, 11gR1 and 11gR2 environments. It also provides steps to move OCR / voting and ASM devices from raw device to block device. For Oracle Clusterware 12c, please refer to Document 1558920.1 Software Patch Level and 12c Grid Infrastructure OCR Backup/Restore.
This article is intended for DBAs and Support Engineers who need to modify or move OCR and voting disk files, and for customers who have an existing clustered environment deployed on a storage array and want to migrate to a new storage array with minimal downtime.
Typically, one would simply cp or dd the files once the new storage has been presented to the hosts. In this case, it is a little more difficult because:
1. The Oracle Clusterware has the OCR and voting disks open and is actively using them (both primaries and mirrors).
2. There is an API provided for this function (ocrconfig and crsctl), which is the appropriate interface, rather than typical cp and/or dd commands.
It is highly recommended to take a backup of the voting disk, and OCR device before making any changes.
SOLUTION
Prepare the disks
For OCR or voting disk addition or replacement, the new disks need to be prepared first. Please refer to the Clusterware/Grid Infrastructure installation guide for each platform for the disk requirements and preparation.
1. Size
For 10.1:
OCR device minimum size (each): 100M
Voting disk minimum size (each): 20M
For 10.2:
OCR device minimum size (each): 256M
Voting disk minimum size (each): 256M
For 11.1:
OCR device minimum size (each): 280M
Voting disk minimum size (each): 280M
For 11.2:
OCR device minimum size (each): 300M
Voting disk minimum size (each): 300M
2. For raw or block device (pre 11.2)
Please refer to the Clusterware installation guide for each platform for more details.
On windows platform the new raw device link is created via $CRS_HOME\bin\GUIOracleOBJManager.exe, for example:
\\.\VOTEDSK2
\\.\OCR2
3. For ASM disks (11.2+)
On Windows platform, please refer to Document 331796.1 How to setup ASM on Windows
On Linux platform, please refer to Document 580153.1 How To Setup ASM on Linux Using ASMLIB Disks, Raw Devices or Block Devices?
For other platforms, please refer to the Clusterware/Grid Infrastructure installation guide on OTN (Chapter: Oracle Automatic Storage Management Storage Configuration).
4. For cluster file system
If the OCR is on a cluster file system, the new OCR or OCRMIRROR file must be touched before the add/replace command can be issued. Otherwise PROT-21: Invalid parameter (for 10.2/11.1) or PROT-30: The Oracle Cluster Registry location to be added is not accessible (for 11.2) will occur.
# touch /cluster_fs/ocrdisk.dat
# touch /cluster_fs/ocrmirror.dat
# chown root:oinstall /cluster_fs/ocrdisk.dat /cluster_fs/ocrmirror.dat
# chmod 640 /cluster_fs/ocrdisk.dat /cluster_fs/ocrmirror.dat
It is not required to pre-touch voting disk file on cluster file system.
After the delete command is issued, the OCR/voting files on the cluster file system need to be removed manually.
5. Permissions
For OCR device:
chown root:oinstall <OCR device>
chmod 640 <OCR device>
For Voting device:
chown <crs/grid>:oinstall <Voting device>
chmod 644 <Voting device>
For ASM disks used for OCR/Voting disk:
chown griduser:asmadmin <asm disks>
chmod 660 <asm disks>
6. Redundancy
For Voting disks (never use an even number of voting disks):
External redundancy requires a minimum of 1 voting disk (or 1 failure group)
Normal redundancy requires a minimum of 3 voting disks (or 3 failure groups)
High redundancy requires a minimum of 5 voting disks (or 5 failure groups)
An insufficient number of failure groups with respect to the redundancy requirement can cause voting disk creation to fail. For example: ORA-15274: Not enough failgroups (3) to create voting files
For OCR:
10.2 and 11.1: maximum 2 OCR devices, OCR and OCRMIRROR
11.2+: up to 5 OCR devices can be added.
For more information, please refer to platform specific Oracle® Grid Infrastructure Installation Guide.
ADD/REMOVE/REPLACE/MOVE OCR Device
Please ensure CRS is running on ALL cluster nodes during this operation; otherwise the change will not be reflected on the node where CRS is down, and CRS will have problems starting on that node. The "ocrconfig -repair" option will then be required to fix the ocr.loc file on the down node.
For 11.2+ with OCR on an ASM diskgroup, due to unpublished Bug 8604794 - FAIL TO CHANGE OCR LOCATION TO DG WITH 'OCRCONFIG -REPAIR -REPLACE', "ocrconfig -repair" to change the OCR location to a different ASM diskgroup does not currently work. The workaround is to manually edit /etc/oracle/ocr.loc or /var/opt/oracle/ocr.loc, or the Windows registry key HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\ocr, to point to the desired diskgroup.
If there is any issue with OLR, please refer to How to restore OLR in 11.2 Grid Infrastructure Note 1193643.1.
Make sure there is a recent copy of the OCR file before making any changes:
ocrconfig -showbackup
If there is no recent backup copy of the OCR file, an export can be taken of the current OCR file. Use the following command to generate an export of the online OCR file:
In 10.2
# ocrconfig -export <OCR export_filename> -s online
In 11.1 and 11.2
# ocrconfig -manualbackup
node1 2008/08/06 06:11:58 /crs/cdata/crs/backup_20080807_003158.ocr
To recover using this file, the following command can be used:
From 11.2+, please also refer How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems Document 1062983.1
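Depending on which kind of backup exists, the recovery command differs. A brief sketch (run as root with the clusterware stack down on all nodes; file paths are placeholders, and the restore file name reuses the sample manualbackup output above):

```shell
# Recover from a logical export taken with 'ocrconfig -export'
# (/backup/ocr_export.dmp is a placeholder file name):
ocrconfig -import /backup/ocr_export.dmp

# Or recover from a physical backup taken automatically or with
# 'ocrconfig -manualbackup' (on 11.2, start the stack in exclusive
# mode first, as described in the restore section below):
ocrconfig -restore /crs/cdata/crs/backup_20080807_003158.ocr
```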
To see whether the OCR is healthy, run ocrcheck, which should return output like the following:
# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 497928
Used space (kbytes) : 312
Available space (kbytes) : 497616
ID : 576761409
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2
Device/File integrity check succeeded
Cluster registry integrity check succeeded
For 11.1+, ocrcheck run as the root user should also show:
Logical corruption check succeeded
1. To add an OCRMIRROR device when only OCR device is defined:
To add an OCR mirror device, provide the full path including file name.
10.2 and 11.1:
# ocrconfig -replace ocrmirror <filename>
eg:
# ocrconfig -replace ocrmirror /dev/raw/raw2
# ocrconfig -replace ocrmirror /dev/sdc1
# ocrconfig -replace ocrmirror /cluster_fs/ocrdisk.dat
> ocrconfig -replace ocrmirror \\.\OCRMIRROR2 - for Windows
11.2+: From 11.2 onwards, up to 4 OCR mirrors can be added
# ocrconfig -add <filename>
eg:
# ocrconfig -add +OCRVOTE2
# ocrconfig -add /cluster_fs/ocrdisk.dat
2. To remove an OCR device
To remove an OCR device:
10.2 and 11.1:
# ocrconfig -replace ocr
11.2+:
# ocrconfig -delete <filename>
eg:
# ocrconfig -delete +OCRVOTE1
* Once an OCR device is removed, the ocrmirror device automatically becomes the OCR device.
* Removing the OCR device is not allowed if only one OCR device is defined; the command will return PROT-16.
To remove an OCR mirror device:
10.2 and 11.1:
# ocrconfig -replace ocrmirror
11.2+:
# ocrconfig -delete <ocrmirror filename>
eg:
# ocrconfig -delete +OCRVOTE2
After removal, the old OCR/OCRMIRROR files can be deleted if they are on a cluster file system.
3. To replace or move the location of an OCR device
Note: If an OCR device is replaced with a device of a different size, the size of the new device will not be reflected until the clusterware is restarted.
10.2 and 11.1:
To replace the OCR device with <filename>, provide the full path including file name.
# ocrconfig -replace ocr <filename>
eg:
# ocrconfig -replace ocr /dev/sdd1
$ ocrconfig -replace ocr \\.\OCR2 - for Windows
To replace the OCR mirror device with <filename>, provide the full path including file name.
# ocrconfig -replace ocrmirror <filename>
eg:
# ocrconfig -replace ocrmirror /dev/raw/raw4
# ocrconfig -replace ocrmirror \\.\OCRMIRROR2 - for Windows
11.2+:
The same command is used to replace either the OCR or an OCR mirror (at least 2 OCR devices must exist for the replace command to work):
eg:
# ocrconfig -replace /cluster_file/ocr.dat -replacement +OCRVOTE
# ocrconfig -replace +CRS -replacement +OCRVOTE
4. To restore an OCR when clusterware is down
When the OCR is not accessible, the CRSD process will not start, so the clusterware stack will not start completely. Restoring access to the OCR device and restoring good OCR content are both required.
To view the automatic OCR backup:
To restore the OCR backup:
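The two steps above can be sketched as follows (run as root; the backup file name is a placeholder to be taken from the "ocrconfig -showbackup" output):

```shell
# View the automatic OCR backups on this cluster:
ocrconfig -showbackup

# Restore the chosen backup while the clusterware stack is down
# (on 11.2, start the stack in exclusive mode first):
ocrconfig -restore /crs/cdata/crs/backup00.ocr
```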
For 11.2+: If OCR is located in ASM disk and ASM disk is also lost, please check out:
How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems Document 1062983.1
How to Restore OCR After the 1st ASM Diskgroup is Lost on Windows Document 1294915.1
If there is no valid backup of the OCR, reinitializing the OCR and voting disks is required.
For 10.2 and 11.1:
Please refer to How to Recreate OCR/Voting Disk Accidentally Deleted Document 399482.1
For 11.2+:
Deconfiguring the clusterware stack and rerunning root.sh on all nodes is required.
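A hedged sketch of the 11.2 deconfig/reconfig sequence; <GRID_HOME> is a placeholder, and the exact script options can vary by patch level:

```shell
# As root, on every node except the last one:
<GRID_HOME>/crs/install/rootcrs.pl -deconfig -force

# As root, on the last node (this also clears the OCR/voting contents):
<GRID_HOME>/crs/install/rootcrs.pl -deconfig -force -lastnode

# Then rerun root.sh on all nodes, one node at a time:
<GRID_HOME>/root.sh
```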
ADD/DELETE/MOVE Voting Disk
Note: For 11.2+, when using ASM disks for OCR and voting, the commands are the same on Windows and Unix platforms.
For pre 11.2, to take a backup of voting disk:
$ dd if=voting_disk_name of=backup_file_name
For Windows:
ocopy \\.\votedsk1 o:\backup\votedsk1.bak
For 11.2+, it is no longer required to back up the voting disk manually. The voting disk data is automatically backed up in the OCR as part of any configuration change. Oracle Clusterware automatically backs up the voting disk files when their contents change in the following ways:
- Configuration parameters, for example misscount, have been added or modified
- After performing voting disk add or delete operations
The voting disk contents are restored from a backup automatically when a new voting disk is added or replaced.
For 10gR2 release
Shut down the Oracle Clusterware (crsctl stop crs as root) on all nodes before making any modification to the voting disk. Determine the current voting disk location using:
crsctl query css votedisk
1. To add a Voting Disk, provide the full path including file name:
# crsctl add css votedisk <VOTEDISK_LOCATION> -force
eg:
# crsctl add css votedisk /dev/raw/raw1 -force
# crsctl add css votedisk /cluster_fs/votedisk.dat -force
> crsctl add css votedisk \\.\VOTEDSK2 -force - for windows
2. To delete a Voting Disk, provide the full path including file name:
# crsctl delete css votedisk <VOTEDISK_LOCATION> -force
eg:
# crsctl delete css votedisk /dev/raw/raw1 -force
# crsctl delete css votedisk /cluster_fs/votedisk.dat -force
> crsctl delete css votedisk \\.\VOTEDSK1 -force - for windows
3. To move a Voting Disk, provide the full path including file name, add a device first before deleting the old one:
# crsctl add css votedisk <NEW_LOCATION> -force
# crsctl delete css votedisk <OLD_LOCATION> -force
eg:
# crsctl add css votedisk /dev/raw/raw4 -force
# crsctl delete css votedisk /dev/raw/raw1 -force
After modifying the voting disk, start the Oracle Clusterware stack on all nodes
# crsctl start crs
Verify the voting disk location using
# crsctl query css votedisk
For 11gR1 release
Starting with 11.1.0.6, the following commands can be performed online (while CRS is up and running).
1. To add a Voting Disk, provide the full path including file name:
eg:
# crsctl add css votedisk /dev/raw/raw1
# crsctl add css votedisk /cluster_fs/votedisk.dat
> crsctl add css votedisk \\.\VOTEDSK2 - for windows
2. To delete a Voting Disk, provide the full path including file name:
eg:
# crsctl delete css votedisk /dev/raw/raw1
# crsctl delete css votedisk /cluster_fs/votedisk.dat
> crsctl delete css votedisk \\.\VOTEDSK1 - for windows
3. To move a Voting Disk, provide the full path including file name; add the new device first, then delete the old one:
# crsctl add css votedisk <NEW_LOCATION>
# crsctl delete css votedisk <OLD_LOCATION>
eg:
# crsctl add css votedisk /dev/raw/raw4
# crsctl delete css votedisk /dev/raw/raw1
Verify the voting disk location:
# crsctl query css votedisk
For 11gR2 release and later
From 11.2, the votedisk can be stored on either an ASM diskgroup or a cluster file system. The following commands can only be executed while Grid Infrastructure is running. As the grid user:
1. To add a Voting Disk
a. When the votedisk is on a cluster file system:
$ crsctl add css votedisk <cluster_fs/filename>
b. When the votedisk is on an ASM diskgroup, no add option is available.
The number of voting disks is determined by the diskgroup redundancy. If more copies are desired, move the votedisk to a diskgroup with higher redundancy. See step 4.
If a votedisk has been removed from a normal or high redundancy diskgroup for an abnormal reason, it can be added back using:
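The add-back is done at the ASM level rather than with crsctl. A sketch assuming a votedisk diskgroup named OCRVOTE and an ASMLIB disk label NEWDISK1 (both hypothetical names for this example):

```shell
# As the grid user, connect to the ASM instance with SYSASM privilege
# and force-add the disk back into the votedisk diskgroup:
sqlplus / as sysasm <<'EOF'
alter diskgroup OCRVOTE add disk 'ORCL:NEWDISK1' force;
EOF
```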
2. To delete a Voting Disk
a. When the votedisk is on a cluster file system:
$ crsctl delete css votedisk <cluster_fs/filename>
or
$ crsctl delete css votedisk <vdiskGUID> (vdiskGUID is the File Universal Id from 'crsctl query css votedisk')
b. When the votedisk is on ASM, no delete option is available; one can only replace the existing votedisk diskgroup with another ASM diskgroup.
3. To move a Voting Disk on cluster file system, add the new file first, then delete the old one:
$ crsctl add css votedisk <new_cluster_fs/filename>
$ crsctl delete css votedisk <old_cluster_fs/filename>
or
$ crsctl delete css votedisk <vdiskGUID>
4. To move voting disk on ASM from one diskgroup to another diskgroup due to redundancy change or disk location change
Example here is moving from external redundancy +OCRVOTE diskgroup to normal redundancy +CRS diskgroup
1. Create the new ASM diskgroup (+CRS in this example) with the desired redundancy and the required number of failure groups.
2. $ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 5e391d339a594fc7bf11f726f9375095 (ORCL:ASMDG02) [+OCRVOTE]
Located 1 voting disk(s).
3. $ crsctl replace votedisk +CRS
Successful addition of voting disk 941236c324454fc0bfe182bd6ebbcbff.
Successful addition of voting disk 07d2464674ac4fabbf27f3132d8448b0.
Successful addition of voting disk 9761ccf221524f66bff0766ad5721239.
Successful deletion of voting disk 5e391d339a594fc7bf11f726f9375095.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
4. $ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 941236c324454fc0bfe182bd6ebbcbff (ORCL:CRSD1) [CRS]
2. ONLINE 07d2464674ac4fabbf27f3132d8448b0 (ORCL:CRSD2) [CRS]
3. ONLINE 9761ccf221524f66bff0766ad5721239 (ORCL:CRSD3) [CRS]
Located 3 voting disk(s).
5. To move voting disk between ASM diskgroup and cluster file system
a. Move from ASM diskgroup to cluster file system:
$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 6e5850d12c7a4f62bf6e693084460fd9 (ORCL:CRSD1) [CRS]
2. ONLINE 56ab5c385ce34f37bf59580232ea815f (ORCL:CRSD2) [CRS]
3. ONLINE 4f4446a59eeb4f75bfdfc4be2e3d5f90 (ORCL:CRSD3) [CRS]
Located 3 voting disk(s).
$ crsctl replace votedisk /rac_shared/oradata/vote.test3
Now formatting voting disk: /rac_shared/oradata/vote.test3.
CRS-4256: Updating the profile
Successful addition of voting disk 61c4347805b64fd5bf98bf32ca046d6c.
Successful deletion of voting disk 6e5850d12c7a4f62bf6e693084460fd9.
Successful deletion of voting disk 56ab5c385ce34f37bf59580232ea815f.
Successful deletion of voting disk 4f4446a59eeb4f75bfdfc4be2e3d5f90.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 61c4347805b64fd5bf98bf32ca046d6c (/rac_shared/oradata/vote.disk) []
Located 1 voting disk(s).
b. Move from cluster file system to ASM diskgroup:
$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 61c4347805b64fd5bf98bf32ca046d6c (/rac_shared/oradata/vote.disk) []
Located 1 voting disk(s).
$ crsctl replace votedisk +CRS
CRS-4256: Updating the profile
Successful addition of voting disk 41806377ff804fc1bf1d3f0ec9751ceb.
Successful addition of voting disk 94896394e50d4f8abf753752baaa5d27.
Successful addition of voting disk 8e933621e2264f06bfbb2d23559ba635.
Successful deletion of voting disk 61c4347805b64fd5bf98bf32ca046d6c.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
[oragrid@auw2k4 crsconfig]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 41806377ff804fc1bf1d3f0ec9751ceb (ORCL:CRSD1) [CRS]
2. ONLINE 94896394e50d4f8abf753752baaa5d27 (ORCL:CRSD2) [CRS]
3. ONLINE 8e933621e2264f06bfbb2d23559ba635 (ORCL:CRSD3) [CRS]
Located 3 voting disk(s).
6. To verify:
$ crsctl query css votedisk
For online OCR/Voting diskgroup change
For disk storage migration, if using an ASM diskgroup and the size and diskgroup redundancy remain the same, one can add a failure group containing the new storage and drop the failure group containing the old storage to achieve the change online.
For more information, please refer to How to Swap Voting Disks Across Storage in a Diskgroup (Doc ID 1558007.1) and Exact Steps To Migrate ASM Diskgroups To Another SAN/Disk-Array/DAS/etc Without Downtime. (Doc ID 837308.1)
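The online failure-group swap described above can be sketched as follows; the diskgroup name OCRVOTE, the failure group names, and the disk labels are all hypothetical placeholders:

```shell
# As the grid user with SYSASM privilege on the ASM instance.
sqlplus / as sysasm <<'EOF'
-- Add a failure group on the new storage array:
alter diskgroup OCRVOTE add failgroup FG_NEW disk 'ORCL:NEWSAN1';
-- Drop the failure group on the old storage; ASM rebalances online:
alter diskgroup OCRVOTE drop disks in failgroup FG_OLD rebalance power 8;
EOF
```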
For Voting disk maintenance in Extended Cluster
Please refer to Oracle White paper: Oracle Clusterware 11g Release 2 (11.2) – Using standard NFS to support a third voting file for extended cluster configurations
If there is any issue using asmca tool, please refer to How to Manually Add NFS voting disk to an Extended Cluster using ASM in 11.2 Note 1421588.1 for detailed commands.
#######sample 1 Replacing the voting disk and OCR in 10g
Part 1: Replacing the voting disk
Note: the voting disk replacement is performed while CRS and the RDBMS are up and in a normal state.
1. raw4 will be used for the new voting disk and raw5 for the new OCR disk; note their permissions:
[root@node1 ~]# ls -l /dev/raw/raw4
crw-rw---- 1 oracle dba 162, 4 Feb 28 15:22 /dev/raw/raw4
[root@node1 ~]# ls -l /dev/raw/raw5
crw-rw---- 1 oracle dba 162, 5 Feb 28 15:22 /dev/raw/raw5
2. Check the current voting disk in the system:
[root@node1 oracle]# crsctl query css votedisk
0. 0 /dev/raw/raw2
located 1 votedisk(s).
3. Add the new voting disk to the system:
[root@node1 oracle]# crsctl add css votedisk /dev/raw/raw4
Cluster is not in a ready state for online disk addition
[root@node1 oracle]# crsctl add css votedisk /dev/raw/raw4 -force
Now formatting voting disk: /dev/raw/raw4
successful addition of votedisk /dev/raw/raw4.
4. Delete the old voting disk from the system:
[root@node1 oracle]# crsctl delete css votedisk /dev/raw/raw2
Cluster is not in a ready state for online disk removal
[root@node1 oracle]# crsctl delete css votedisk /dev/raw/raw2 -force
successful deletion of votedisk /dev/raw/raw2.
5. The voting disk has been replaced successfully:
[root@node1 oracle]# crsctl query css votedisk
0. 0 /dev/raw/raw4
located 1 votedisk(s).
Part 2: Replacing the OCR
Replacing the OCR requires CRS to be shut down. The brief steps are:
1. Stop CRS on both nodes:
[root@node1 oracle]# crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
2. Check the OCR:
[root@node1 oracle]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 511744
Used space (kbytes) : 3832
Available space (kbytes) : 507912
ID : 1127674663
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
3. Export the OCR contents:
[root@node1 oracle]# ocrconfig -export /tmp/ocrfile.dmp
[root@node1 oracle]# ls -l /tmp/ocrfile.dmp
-rw-r--r-- 1 root root 85125 Feb 28 15:40 /tmp/ocrfile.dmp
4. On both nodes, modify /etc/oracle/ocr.loc, replacing the OCR location with the new OCR disk:
[root@node1 oracle]# cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/raw/raw5
local_only=FALSE
5. Import the OCR contents into the new disk:
[root@node1 oracle]# ocrconfig -import /tmp/ocrfile.dmp
6. Check whether the new OCR location has taken effect:
[root@node1 oracle]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 511744
Used space (kbytes) : 3832
Available space (kbytes) : 507912
ID : 1884769518
Device/File Name : /dev/raw/raw5
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded