Recreating the OCR — based on MOS Note 399482.1: How to Recreate OCR/Voting Disk Accidentally Deleted

谯嘉懿
2023-12-01



MOS documents the detailed steps for recreating the OCR when no backup exists; below I only post my own commands and output for reference.

1. Check the current status and record the ASM and database configuration. In this environment the database is orcl and the two ASM instances are +ASM1 and +ASM2.
[oracle@rac1 ~]$ crs_stat -t       
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.orcl.db    application    ONLINE    ONLINE    rac1        
ora....l1.inst application    ONLINE    ONLINE    rac1        
ora....l2.inst application    ONLINE    ONLINE    rac2        
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora....SM2.asm application    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2  
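Before wiping the OCR it is worth saving the `crs_stat` output to a file, so the resource registrations can be reconstructed in step 10. A minimal sketch (the helper and file names are my own choices):

```shell
#!/bin/sh
# resource_names: print column 1 (the resource name) of saved
# `crs_stat -t` output, skipping the two header lines.
# Usage: crs_stat -t > crs_stat.before.txt
#        resource_names crs_stat.before.txt
resource_names() {
    awk 'NR > 2 { print $1 }' "$1"
}
```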
2. Stop CRS (on both nodes)
[root@rac1 bin]# ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources 
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued
[root@rac2 bin]# ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources 
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued
3. Back up the CRS home directory (on both nodes)
[root@rac1 10.2.0]# cp -a crs_1 crs_2
[root@rac2 10.2.0]# cp -a crs_1 crs_2
4. Run rootdelete.sh on all nodes
[root@rac1 install]# pwd
/u01/app/oracle/product/10.2.0/crs_1/install
[root@rac1 install]# ./rootdelete.sh 
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Error while stopping resources. Possible cause: CRSD is down.
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'


    
[root@rac2 install]# ./rootdelete.sh 
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Error while stopping resources. Possible cause: CRSD is down.
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'


5. Run rootdeinstall.sh on the node where the installation was originally performed
[root@rac1 install]# ./rootdeinstall.sh 


Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 2.76236 seconds, 3.8 MB/s


6. Check that no CRS processes remain (no output means they are down)
[root@rac1 install]# ps -e | grep -i 'ocssd'
[root@rac1 install]# ps -e | grep -i 'crsd.bin'
[root@rac1 install]# ps -e | grep -i 'evmd.bin'
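The per-daemon checks can be looped; a sketch using the 10.2 clusterware daemon names from the steps above (the function name is mine):

```shell
#!/bin/sh
# check_daemons: for each 10.2 clusterware daemon, print
# "<name> down" when it is absent from the process list,
# "<name> still running" otherwise.
check_daemons() {
    for d in ocssd crsd.bin evmd.bin; do
        if ps -e | grep -i "$d" | grep -v grep > /dev/null; then
            echo "$d still running"
        else
            echo "$d down"
        fi
    done
}
check_daemons
```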
7. Run root.sh (on the first node)
[root@rac1 crs_1]# ./root.sh 
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured


Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Run root.sh on the remaining nodes; this test environment has only two nodes.
[root@rac2 crs_1]# ./root.sh 
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured


Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.


NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]
Node 2 fails at the vipca step because no network interfaces are registered in the OCR. Register the public and private subnets:
[root@rac2 bin]# /oifcfg setif -global eth0/192.168.56.0:public
-bash: /oifcfg: No such file or directory
[root@rac2 bin]# ./oifcfg setif -global eth0/192.168.56.0:public
[root@rac2 bin]# ./oifcfg setif -global eth1/10.0.0.0:cluster_interconnect
[root@rac2 bin]# ./oifcfg getif
eth0  192.168.56.0  global  public
eth1  10.0.0.0  global  cluster_interconnect
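A quick sanity check on the interface registration, parsing saved `oifcfg getif` output (the helper name is my own):

```shell
#!/bin/sh
# iface_ok: check saved `oifcfg getif` output for both interface
# roles; prints "ok" only when a public and a cluster_interconnect
# registration are present.
# Usage: oifcfg getif > getif.out; iface_ok getif.out
iface_ok() {
    grep -q ' public$' "$1" && grep -q ' cluster_interconnect$' "$1" && echo ok
}
```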


The subnets are taken from the /etc/hosts file:
[root@rac2 bin]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               rac2 localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
#pub
192.168.56.167 rac2
192.168.56.166 rac1
#priv
10.0.0.2 rac1-priv
10.0.0.3 rac2-priv
#vip
192.168.56.168 rac1-vip
192.168.56.169 rac2-vip


Then run vipca as root (over VNC) to configure the VIP network.


[root@rac2 bin]# ./crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2 
8. Configure the listeners with netca
First move the existing listener.ora aside:
[oracle@rac1 admin]$ mv listener.ora listener.ora.bak
[oracle@rac2 admin]$ mv listener.ora listener.ora.bak
After netca completes, the listener on node 2 fails to start:
[oracle@rac2 admin]$ lsnrctl status


LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 04-MAY-2016 03:20:10


Copyright (c) 1991, 2005, Oracle.  All rights reserved.


Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused
[oracle@rac2 admin]$ lsnrctl start
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 04-MAY-2016 03:20:16


Copyright (c) 1991, 2005, Oracle.  All rights reserved.


Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...


TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
Error listening on: (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNS-12542: TNS:address already in use
 TNS-12560: TNS:protocol adapter error
  TNS-00512: Address already in use
   Linux Error: 98: Address already in use
Check which process is holding the port, kill it, then restart the listener:
 [root@rac2 bin]# lsof -i :1521 | grep "\(LISTEN\)" 
tnslsnr 18960 oracle    8u  IPv4 283282      0t0  TCP rac2-vip:ncube-lm (LISTEN)
tnslsnr 18960 oracle   10u  IPv4 283285      0t0  TCP rac2:ncube-lm (LISTEN)
[root@rac2 bin]# kill -9 18960
[root@rac2 bin]# lsof -i :1521 | grep "\(LISTEN\)" 
[root@rac2 bin]# su - oracle
[oracle@rac2 ~]$ lsnrctl start


LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 04-MAY-2016 03:20:55


Copyright (c) 1991, 2005, Oracle.  All rights reserved.


Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...


TNSLSNR for Linux: Version 10.2.0.1.0 - Production
System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac2)(PORT=1521)))


Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date                04-MAY-2016 03:20:55
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac2)(PORT=1521)))
The listener supports no services
The command completed successfully
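The lsof lookup above can be scripted; this sketch only extracts the tnslsnr PID from saved `lsof -i :1521` output (the helper name is mine), leaving the kill to the operator:

```shell
#!/bin/sh
# listener_pid: pull the tnslsnr PID (column 2) out of saved
# `lsof -i :<port>` output, so the stale listener can be killed.
# Usage: lsof -i :1521 > lsof.out; kill "$(listener_pid lsof.out)"
listener_pid() {
    awk '$1 == "tnslsnr" { print $2; exit }' "$1"
}
```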
Cluster resource status at this point:
[oracle@rac2 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2  


9. Configure ONS
[oracle@rac1 bin]$ ./racgons add_config rac1:6251 rac2:6251
[oracle@rac1 bin]$ onsctl ping    
Number of onsconfiguration retrieved, numcfg = 2
onscfg[0]
   {node = rac1, port = 6251}
Adding remote host rac1:6251
onscfg[1]
   {node = rac2, port = 6251}
Adding remote host rac2:6251
ons is not running ...
[oracle@rac1 bin]$ onsctl start
[oracle@rac1 bin]$ onsctl ping
Number of onsconfiguration retrieved, numcfg = 2
onscfg[0]
   {node = rac1, port = 6251}
Adding remote host rac1:6251
onscfg[1]
   {node = rac2, port = 6251}
Adding remote host rac2:6251
ons is running ..


10. Add the asm, database, instance, and service resources to the cluster
asm (register the ASM instance resources; for -o you can use $ORACLE_HOME or spell out the full path)
[oracle@rac1 bin]$ ./srvctl add asm -n rac1 -i +ASM1 -o $ORACLE_HOME
[oracle@rac1 bin]$ ./srvctl add asm -n rac2 -i +ASM2 -o /u01/app/oracle/product/10.2.0/db_1
database (add the database resource)
[oracle@rac1 bin]$ ./srvctl add database -d orcl -o /u01/app/oracle/product/10.2.0/db_1
instance (add the instance resources)
[oracle@rac1 bin]$ ./srvctl add instance -d orcl -i orcl1 -n rac1
[oracle@rac1 bin]$ ./srvctl add instance -d orcl -i orcl2 -n rac2
service (add the service)
[oracle@rac1 bin]$ ./srvctl add service -d orcl -s oltp -r orcl1,orcl2 -P BASIC
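The srvctl registrations above can be collected into one script. All values (the ORACLE_HOME path, node, instance, and service names) are this lab's; the srvctl guard is my own addition so the sketch fails cleanly outside an Oracle environment:

```shell
#!/bin/sh
# register_resources: one-shot re-registration of the ASM,
# database, instance, and service resources after the rebuild.
register_resources() {
    command -v srvctl > /dev/null 2>&1 || { echo "srvctl not in PATH"; return 1; }
    OH=/u01/app/oracle/product/10.2.0/db_1
    srvctl add asm -n rac1 -i +ASM1 -o "$OH"
    srvctl add asm -n rac2 -i +ASM2 -o "$OH"
    srvctl add database -d orcl -o "$OH"
    srvctl add instance -d orcl -i orcl1 -n rac1
    srvctl add instance -d orcl -i orcl2 -n rac2
    srvctl add service -d orcl -s oltp -r orcl1,orcl2 -P BASIC
}
```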
Cluster resource status at this point; the resources just added show as OFFLINE:
[oracle@rac1 bin]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.orcl.db    application    OFFLINE   OFFLINE               
ora....oltp.cs application    OFFLINE   OFFLINE               
ora....cl1.srv application    OFFLINE   OFFLINE               
ora....cl2.srv application    OFFLINE   OFFLINE               
ora....l1.inst application    OFFLINE   OFFLINE               
ora....l2.inst application    OFFLINE   OFFLINE               
ora....SM1.asm application    OFFLINE   OFFLINE               
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora....SM2.asm application    OFFLINE   OFFLINE               
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2 
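A quick way to tell when everything has been brought back: count the OFFLINE rows in saved `crs_stat -t` output (the helper name is mine); zero means all resources are online.

```shell
#!/bin/sh
# count_offline: number of resources whose State column reads
# OFFLINE in saved `crs_stat -t` output, skipping the header.
# Usage: crs_stat -t > crs.txt; count_offline crs.txt
count_offline() {
    awk 'NR > 2 && $4 == "OFFLINE" { n++ } END { print n + 0 }' "$1"
}
```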


11. Start all resources, then check and verify
[oracle@rac1 bin]$ ./srvctl start asm -n rac1
[oracle@rac1 bin]$ ./srvctl start asm -n rac2
[oracle@rac1 bin]$ ./srvctl start database -d orcl
[oracle@rac1 bin]$ ./srvctl start service -d orcl
[oracle@rac1 bin]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.orcl.db    application    ONLINE    ONLINE    rac1        
ora....oltp.cs application    ONLINE    ONLINE    rac1        
ora....cl1.srv application    ONLINE    ONLINE    rac1        
ora....cl2.srv application    ONLINE    ONLINE    rac2        
ora....l1.inst application    ONLINE    ONLINE    rac1        
ora....l2.inst application    ONLINE    ONLINE    rac2        
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora....SM2.asm application    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2  
Verification:
[oracle@rac1 bin]$ ./cluvfy stage -post crsinst -n rac1,rac2


Performing post-checks for cluster services setup 


Checking node reachability...
Node reachability check passed from node "rac1".




Checking user equivalence...
User equivalence check passed for user "oracle".


Checking Cluster manager integrity... 




Checking CSS daemon...
Daemon status check passed for "CSS daemon".


Cluster manager integrity check passed.


Checking cluster integrity... 




Cluster integrity check passed




Checking OCR integrity...


Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.


Uniqueness check for OCR device passed.


Checking the version of OCR...
OCR of correct Version "2" exists.


Checking data integrity of OCR...
Data integrity check for OCR passed.


OCR integrity check passed.


Checking CRS integrity...


Checking daemon liveness...
Liveness check passed for "CRS daemon".


Checking daemon liveness...
Liveness check passed for "CSS daemon".


Checking daemon liveness...
Liveness check passed for "EVM daemon".


Checking CRS health...
CRS health check passed.


CRS integrity check passed.


Checking node application existence... 




Checking existence of VIP node application (required)
Check passed. 


Checking existence of ONS node application (optional)
Check passed. 


Checking existence of GSD node application (optional)
Check passed. 




Post-check for cluster services setup was successful. 

