REMOVE / DELETE A RAC NODE FROM AN 11gR2 CLUSTER
In today's demonstration I am going to show you how to remove a node from an existing cluster, including the case where the node is already dead. Here we have a 3-node RAC 11gR2 (11.2.0.3) Grid Infrastructure and cluster database.
SERVER   : rac121.ora.com, rac122.ora.com, rac123.ora.com
OS       : Red Hat Enterprise Linux Server release 6.5 (x64)
INSTANCE : orcl1, orcl2, orcl3
Database : orcl
In this scenario I am going to delete node 3 (rac123.ora.com) from my existing cluster.
ON NODE 1
Get the current status of the existing cluster.
[oracle@rac121 rpm]$ ps -ef | grep pmon
oracle    3160 18490  0 14:11 pts/0    00:00:00 grep pmon
oracle   11068     1  0 06:47 ?        00:00:02 asm_pmon_+ASM1
oracle   27759     1  0 08:03 ?        00:00:02 ora_pmon_orcl1
[oracle@rac121 rpm]$ . oraenv
ORACLE_SID = [+ASM1] ?
[oracle@rac121 rpm]$ olsnodes -s -n -t -i
rac121  1  rac121-vip  Active  Unpinned
rac122  2  rac122-vip  Active  Unpinned
rac123  3  rac123-vip  Active  Unpinned
For this demonstration I have stopped and disabled the cluster services on node 3, since I am simulating the removal of a dead node from my existing cluster.
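For reference, the stop and disable on node 3 were done along these lines (a minimal sketch, run via sudo on rac123 with $ORACLE_HOME pointing at the Grid home; these two commands are not part of the captured transcript):
[oracle@rac123 ~]$ sudo $ORACLE_HOME/bin/crsctl stop crs      # stop the clusterware stack on this node only
[oracle@rac123 ~]$ sudo $ORACLE_HOME/bin/crsctl disable crs   # keep it from restarting at boot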
Now check the status after the components have been stopped and disabled on node 3 (rac123).
[oracle@rac121 rpm]$ olsnodes -s -n -t -i
rac121  1  rac121-vip  Active    Unpinned
rac122  2  rac122-vip  Active    Unpinned
rac123  3  rac123-vip  Inactive  Unpinned
Before starting the node removal process, make sure we take an OCR backup so we have something to fall back on if anything goes wrong.
[oracle@rac121 rpm]$ sudo $ORACLE_HOME/bin/ocrconfig -showbackup
rac121  2016/04/01 11:03:02  /grid/app/11.2.0.3/grid/cdata/rac-cluster/backup00.ocr
rac121  2016/04/01 11:03:02  /grid/app/11.2.0.3/grid/cdata/rac-cluster/day.ocr
rac121  2016/04/01 11:03:02  /grid/app/11.2.0.3/grid/cdata/rac-cluster/week.ocr
rac121  2016/04/01 14:29:20  /grid/app/11.2.0.3/grid/cdata/rac-cluster/backup_20160401_142920.ocr
[oracle@rac121 rpm]$ sudo $ORACLE_HOME/bin/ocrconfig -manualbackup
rac121  2016/04/01 14:31:20  /grid/app/11.2.0.3/grid/cdata/rac-cluster/backup_20160401_143120.ocr
rac121  2016/04/01 14:29:20  /grid/app/11.2.0.3/grid/cdata/rac-cluster/backup_20160401_142920.ocr
[oracle@rac121 rpm]$ sudo $ORACLE_HOME/bin/ocrconfig -showbackup
rac121  2016/04/01 11:03:02  /grid/app/11.2.0.3/grid/cdata/rac-cluster/backup00.ocr
rac121  2016/04/01 11:03:02  /grid/app/11.2.0.3/grid/cdata/rac-cluster/day.ocr
rac121  2016/04/01 11:03:02  /grid/app/11.2.0.3/grid/cdata/rac-cluster/week.ocr
rac121  2016/04/01 14:31:20  /grid/app/11.2.0.3/grid/cdata/rac-cluster/backup_20160401_143120.ocr
rac121  2016/04/01 14:29:20  /grid/app/11.2.0.3/grid/cdata/rac-cluster/backup_20160401_142920.ocr
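Should anything go wrong later, the OCR can be restored from one of these backups. A rough sketch, using the manual backup taken above (the clusterware stack must be stopped on every node before the restore; this is not part of the captured transcript):
[oracle@rac121 rpm]$ sudo $ORACLE_HOME/bin/crsctl stop crs                 # run on each node first
[oracle@rac121 rpm]$ sudo $ORACLE_HOME/bin/ocrconfig -restore /grid/app/11.2.0.3/grid/cdata/rac-cluster/backup_20160401_143120.ocr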
Get
the RDBMS instance status on this cluster.
[oracle@rac121 rpm]$ srvctl status database -d orcl
Instance orcl1 is running on node rac121
Instance orcl2 is running on node rac122
Instance orcl3 is not running on node rac123
[oracle@rac121 rpm]$ crsctl check cluster -all
**************************************************************
rac121:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac122:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@rac121 rpm]$ olsnodes -n
rac121  1
rac122  2
rac123  3
Modify or relocate the service before removing the database instance from node 3 (rac123). In our case the service was originally configured with all three instances as preferred, but because of the issue on rac123 the database is only running on rac121/rac122.
[oracle@rac121 rpm]$ srvctl status service -d orcl
Service crm_orcl is running on instance(s) orcl1,orcl2
Modify the service definition for the orcl database so it references two instances instead of three.
[oracle@rac121 rpm]$ srvctl modify service -d orcl -s crm_orcl -n -i "orcl1,orcl2"
[oracle@rac121 rpm]$ srvctl status service -d orcl
Service crm_orcl is running on instance(s) orcl1,orcl2
[oracle@rac121 rpm]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE
SERVER
STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DG_DATA.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.DG_DATA2.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.DG_FLASH.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.DG_OCR.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.LISTENER.lsnr
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.asm
ONLINE ONLINE
rac121 Started
ONLINE ONLINE
rac122 Started
ora.gsd
OFFLINE OFFLINE rac121
OFFLINE OFFLINE rac122
ora.net1.network
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.ons
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1
ONLINE ONLINE rac122
ora.LISTENER_SCAN2.lsnr
1
ONLINE ONLINE rac121
ora.LISTENER_SCAN3.lsnr
1
ONLINE ONLINE rac121
ora.cvu
1
ONLINE ONLINE rac121
ora.oc4j
1
ONLINE ONLINE rac121
ora.orcl.crm_orcl.svc
1
ONLINE ONLINE rac121
2
ONLINE ONLINE rac122
ora.orcl.db
1
ONLINE ONLINE rac121 Open
2
ONLINE ONLINE rac122
Open
3
ONLINE OFFLINE Instance
Shutdown
ora.rac121.vip
1
ONLINE ONLINE rac121
ora.rac122.vip
1
ONLINE ONLINE rac122
ora.rac123.vip
1
ONLINE INTERMEDIATE rac121 FAILED OVER
ora.scan1.vip
1
ONLINE ONLINE rac122
ora.scan2.vip
1
ONLINE ONLINE rac121
ora.scan3.vip
1
ONLINE ONLINE rac121
Remove the instance with DBCA from either of the two remaining nodes (rac121/rac122). I am using rac121 to remove the database instance.
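If you prefer not to go through the DBCA GUI, the same instance removal can be done silently; a sketch of the equivalent command (the SYS password below is a placeholder, not a value from this setup):
[oracle@rac121 ~]$ dbca -silent -deleteInstance -nodeList rac123 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword <sys_password>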
Once the instance has been removed through DBCA, validate the instance configuration in the OCR.
[oracle@rac121 rpm]$ srvctl status database -d orcl
Instance orcl1 is running on node rac121
Instance orcl2 is running on node rac122
[oracle@rac121 rpm]$ srvctl config database -d orcl -v
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0.3/DB_1
Oracle user: oracle
Spfile: +DG_DATA2/orcl/spfileorcl.ora
Domain: ora.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: DG_DATA2
Mount point paths:
Services: crm_orcl
Type: RAC
Database is administrator managed
[oracle@rac121 rpm]$ srvctl status database -d orcl -v
Instance orcl1 is running on node rac121 with online services crm_orcl. Instance status: Open.
Instance orcl2 is running on node rac122 with online services crm_orcl. Instance status: Open.
[oracle@rac121
rpm]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET
STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DG_DATA.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.DG_DATA2.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.DG_FLASH.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.DG_OCR.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.LISTENER.lsnr
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.asm
ONLINE ONLINE
rac121 Started
ONLINE ONLINE
rac122 Started
ora.gsd
OFFLINE OFFLINE rac121
OFFLINE OFFLINE rac122
ora.net1.network
ONLINE
ONLINE rac121
ONLINE ONLINE
rac122
ora.ons
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1
ONLINE ONLINE rac122
ora.LISTENER_SCAN2.lsnr
1
ONLINE ONLINE rac121
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE
rac121
ora.cvu
1
ONLINE ONLINE rac121
ora.oc4j
1
ONLINE ONLINE rac121
ora.orcl.crm_orcl.svc
1
ONLINE ONLINE rac121
2
ONLINE ONLINE rac122
ora.orcl.db
1
ONLINE ONLINE rac121 Open
2
ONLINE ONLINE rac122 Open
ora.rac121.vip
1
ONLINE ONLINE rac121
ora.rac122.vip
1
ONLINE ONLINE rac122
ora.rac123.vip
1
ONLINE INTERMEDIATE rac121 FAILED OVER
ora.scan1.vip
1
ONLINE ONLINE rac122
ora.scan2.vip
1
ONLINE ONLINE rac121
ora.scan3.vip
1
ONLINE ONLINE rac121
DBCA should remove the public redo thread, the redo log groups, and the undo tablespace belonging to node 3 (rac123). We need to validate whether these are actually gone; if not, we will remove them manually (see the sketch after the validation output below).
[oracle@rac121 rpm]$ . oraenv
ORACLE_SID = [orcl1] ? orcl1
The Oracle base has been changed from /grid/app/oracle to /u01/app/oracle
[oracle@rac121 rpm]$ sqlplus / as sysdba

SQL> ALTER DATABASE DISABLE THREAD 3 ;

Database altered.

SQL> COL MEMBER FOR A80
SQL> SET LIN200 PAGES 200
SQL> SELECT * FROM V$LOGFILE ;
GROUP# STATUS  TYPE    MEMBER                                          IS_
------ ------- ------- ----------------------------------------------- ---
     2         ONLINE  +DG_DATA2/orcl/onlinelog/group_2.262.908006225  NO
     1         ONLINE  +DG_DATA2/orcl/onlinelog/group_1.261.908006223  NO
     3         ONLINE  +DG_DATA2/orcl/onlinelog/group_3.269.908006557  NO
     4         ONLINE  +DG_DATA2/orcl/onlinelog/group_4.270.908006557  NO
SQL> SELECT INST_ID, GROUP#, THREAD#, SEQUENCE#, MEMBERS FROM GV$LOG ;

   INST_ID     GROUP#    THREAD#  SEQUENCE#    MEMBERS
---------- ---------- ---------- ---------- ----------
         2          1          1          9          1
         2          2          1          8          1
         2          3          2          1          1
         2          4          2          2          1
         1          1          1          9          1
         1          2          1          8          1
         1          3          2          1          1
         1          4          2          2          1
SQL> SELECT THREAD#, STATUS, ENABLED, GROUPS, INSTANCE, OPEN_TIME, CURRENT_GROUP#, SEQUENCE# FROM GV$THREAD ;

   THREAD# STATUS ENABLED  GROUPS INSTANCE OPEN_TIME
---------- ------ -------- ------ -------- ---------
         1 OPEN   PUBLIC        2 orcl1    01-APR-16
         2 OPEN   PUBLIC        2 orcl2    01-APR-16
         1 OPEN   PUBLIC        2 orcl1    01-APR-16
         2 OPEN   PUBLIC        2 orcl2    01-APR-16
TABLESPACE_NAME  ALLOCATED (MB)  USED (MB)  FREE (MB)   USED (%)   FREE (%)
---------------  --------------  ---------  ---------  ---------  ---------
EXAMPLE                  313.13     310.13          3      99.04        .96
SYSTEM                      720     711.69       8.31      98.85       1.15
TEMP                         33         31          2      93.94       6.06
SYSAUX                      640     600.44      39.56      93.82       6.18
USERS                         5       4.06        .94       81.2      18.75
UNDOTBS2                     25      11.12      13.88      44.48       55.5
UNDOTBS1                    120      22.37      97.63      18.64      81.35

7 rows selected.
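In this run everything belonging to thread 3 was already cleaned up by DBCA. If any of it had remained, the manual cleanup would look roughly like this (the group numbers and the UNDOTBS3 tablespace name are assumptions for illustration; check GV$LOG and DBA_TABLESPACES first):
SQL> ALTER DATABASE DISABLE THREAD 3 ;
SQL> ALTER DATABASE DROP LOGFILE GROUP 5 ;   -- repeat for each redo log group of thread 3
SQL> ALTER DATABASE DROP LOGFILE GROUP 6 ;
SQL> DROP TABLESPACE UNDOTBS3 INCLUDING CONTENTS AND DATAFILES ;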
Finally ... the redo log groups, undo tablespace, and public thread for node 3 are gone. Now move to the server we want to decommission (rac123).
ON NODE 3
As a privileged user, re-enable the clusterware autostart on rac123 (it was disabled earlier).
[oracle@rac123 ~]$ sudo $ORACLE_HOME/bin/crsctl enable has
CRS-4622: Oracle High Availability Services autostart is enabled.
[oracle@rac123 ~]$ sudo $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
[oracle@rac123 ~]$ crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services
[oracle@rac123 ~]$ ps -ef | grep d.bin
oracle    3953 25219  0 15:53 pts/0    00:00:00 grep d.bin
Deconfigure the clusterware stack on rac123.
[oracle@rac123 ~]$ sudo /grid/app/11.2.0.3/grid/crs/install/rootcrs.pl -deconfig -force
Using configuration parameter file: /grid/app/11.2.0.3/grid/crs/install/crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Successfully deconfigured Oracle clusterware stack on this node
[oracle@rac123 ~]$
[oracle@rac123 ~]$
[oracle@rac123 ~]$
ON NODE 1
Move to node 1 (rac121) and delete the cluster node (rac123) from the entire cluster.
[oracle@rac121 rpm]$ . oraenv
ORACLE_SID = [orcl1] ? +ASM1
The Oracle base has been changed from /u01/app/oracle to /grid/app/oracle
[oracle@rac121 rpm]$ cd $ORACLE_HOME/bin
[oracle@rac121 bin]$ sudo $ORACLE_HOME/bin/crsctl delete node -n rac123
CRS-4661: Node rac123 successfully deleted.
Update
the Oracle Inventory
[oracle@rac121 bin]$ cd ../oui/bin/
[oracle@rac121 bin]$ pwd
/grid/app/11.2.0.3/grid/oui/bin
UPDATE ORACLE INVENTORY ON NODE RAC121
FOR GRID HOME
[oracle@rac121 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/grid/app/11.2.0.3/grid "CLUSTER_NODES={rac121,rac122}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 7861 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/app/oraInventory
'UpdateNodeList' was successful.
FOR DB HOME
[oracle@rac121 bin]$ /u01/app/oracle/product/11.2.0.3/DB_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/DB_1 "CLUSTER_NODES={rac121,rac122}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 7860 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/app/oraInventory
'UpdateNodeList' was successful.
UPDATE ORACLE INVENTORY ON NODE RAC122
FOR GRID HOME
[oracle@rac122 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/grid/app/11.2.0.3/grid "CLUSTER_NODES={rac121,rac122}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 7999 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/app/oraInventory
'UpdateNodeList' was successful.
FOR DB HOME
[oracle@rac122 bin]$ /u01/app/oracle/product/11.2.0.3/DB_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/DB_1 "CLUSTER_NODES={rac121,rac122}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 7999 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/app/oraInventory
'UpdateNodeList' was successful.
UPDATE ORACLE INVENTORY ON NODE RAC123
FOR GRID HOME
[oracle@rac123 ~]$ cd /grid/app/11.2.0.3/grid/oui/bin
[oracle@rac123 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/grid/app/11.2.0.3/grid "CLUSTER_NODES={rac123}" CRS=TRUE -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 7998 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/app/oraInventory
'UpdateNodeList' was successful.
FOR DB HOME
[oracle@rac123 bin]$ /u01/app/oracle/product/11.2.0.3/DB_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/DB_1 "CLUSTER_NODES={rac123}" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 7998 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/app/oraInventory
'UpdateNodeList' was successful.
Validate the cluster
node status in OCR.
[oracle@rac121 bin]$ olsnodes -s -n -t -i
rac121  1  rac121-vip  Active  Unpinned
rac122  2  rac122-vip  Active  Unpinned
[oracle@rac121 bin]$ crsctl check cluster -all
**************************************************************
rac121:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac122:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@rac121
bin]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE
SERVER
STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DG_DATA.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.DG_DATA2.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.DG_FLASH.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.DG_OCR.dg
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.LISTENER.lsnr
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.asm
ONLINE ONLINE
rac121 Started
ONLINE ONLINE
rac122 Started
ora.gsd
OFFLINE OFFLINE rac121
OFFLINE OFFLINE rac122
ora.net1.network
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
ora.ons
ONLINE ONLINE
rac121
ONLINE ONLINE
rac122
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1
ONLINE ONLINE rac122
ora.LISTENER_SCAN2.lsnr
1
ONLINE ONLINE rac121
ora.LISTENER_SCAN3.lsnr
1
ONLINE ONLINE rac121
ora.cvu
1
ONLINE ONLINE rac121
ora.oc4j
1
ONLINE ONLINE rac121
ora.orcl.crm_orcl.svc
1
ONLINE ONLINE rac121
2
ONLINE ONLINE rac122
ora.orcl.db
1
ONLINE ONLINE rac121 Open
2
ONLINE ONLINE rac122 Open
ora.rac121.vip
1
ONLINE ONLINE rac121
ora.rac122.vip
1
ONLINE ONLINE rac122
ora.rac123.vip
1 ONLINE
INTERMEDIATE rac121
FAILED OVER
ora.scan1.vip
1
ONLINE ONLINE rac122
ora.scan2.vip
1
ONLINE ONLINE rac121
ora.scan3.vip
1
ONLINE ONLINE rac121
Remove the VIP that is still associated with rac123. First, stop the VIP for rac123.
[oracle@rac121 bin]$ srvctl stop vip -i rac123-vip -f
[oracle@rac121 bin]$ crsctl stat res -t
....
ora.orcl.db
1
ONLINE ONLINE rac121 Open
2
ONLINE ONLINE rac122 Open
ora.rac121.vip
1
ONLINE ONLINE rac121
ora.rac122.vip
1
ONLINE ONLINE rac122
ora.rac123.vip
1 OFFLINE OFFLINE
ora.scan1.vip
1
ONLINE ONLINE rac122
ora.scan2.vip
1
ONLINE ONLINE rac121
ora.scan3.vip
1
ONLINE ONLINE rac121
Remove VIP for
rac123
[oracle@rac121 bin]$ sudo $ORACLE_HOME/bin/srvctl remove vip -i rac123-vip -f
REMOVE
ORACLE DATABASE HOME ON NODE3 (RAC123)
Deinstall the Oracle RDBMS home on rac123.
[oracle@rac123 bin]$ /u01/app/oracle/product/11.2.0.3/DB_1/deinstall/deinstall
Checking for
required files and bootstrapping ...
Please wait ...
Location of logs
/grid/app/oraInventory/logs/
############ ORACLE
DEINSTALL & DECONFIG TOOL START ############
#########################
CHECK OPERATION START #########################
## [START] Install
check configuration ##
Checking for
existence of the Oracle home location /u01/app/oracle/product/11.2.0.3/DB_1
Oracle Home type
selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected
for deinstall is: /u01/app/oracle
Checking for
existence of central inventory location /grid/app/oraInventory
Checking for
existence of the Oracle Grid Infrastructure home
The following nodes
are part of this cluster: rac123
Checking for
sufficient temp space availability on node(s) : 'rac123'
## [END] Install
check configuration ##
Network
Configuration check config START
Network
de-configuration trace file location:
/grid/app/oraInventory/logs/netdc_check2016-04-01_06-52-44-PM.log
Network
Configuration check config END
Database Check
Configuration START
Database
de-configuration trace file location:
/grid/app/oraInventory/logs/databasedc_check2016-04-01_06-52-45-PM.log
Use comma as
separator when specifying list of values as input
Specify the list of
database names that are configured in this Oracle home []:
Database Check
Configuration END
Enterprise Manager
Configuration Assistant START
EMCA
de-configuration trace file location:
/grid/app/oraInventory/logs/emcadc_check2016-04-01_06-53-56-PM.log
Enterprise Manager
Configuration Assistant END
Oracle Configuration
Manager check START
OCM check log file
location : /grid/app/oraInventory/logs//ocm_check4164.log
Oracle Configuration
Manager check END
#########################
CHECK OPERATION END #########################
#######################
CHECK OPERATION SUMMARY #######################
Oracle Grid
Infrastructure Home is:
The cluster node(s)
on which the Oracle home deinstallation will be performed are:rac123
WARNING !
Deinstall utility
has detected that Oracle Clusterware processes are not running on the local
node, hence entries of Real Application Cluster Databases that are being
deinstalled will not be removed from Oracle Cluster Registry (OCR). For
complete deinstallation of Real Application Cluster Databases, it is
recommended to run the deinstall utility with Oracle Clusterware running.
Oracle Home selected
for deinstall is: /u01/app/oracle/product/11.2.0.3/DB_1
Inventory Location
where the Oracle home registered is: /grid/app/oraInventory
No Enterprise
Manager configuration to be updated for any database(s)
No Enterprise
Manager ASM targets to update
No Enterprise
Manager listener targets to migrate
Checking the config
status for CCR
Oracle Home exists
with CCR directory, but CCR is not configured
CCR check is
finished
Do
you want to continue (y - yes, n - no)? [n]: Y
A log of this
session will be written to:
'/grid/app/oraInventory/logs/deinstall_deconfig2016-04-01_06-52-42-PM.out'
Any error messages
from this session will be written to:
'/grid/app/oraInventory/logs/deinstall_deconfig2016-04-01_06-52-42-PM.err'
########################
CLEAN OPERATION START ########################
Enterprise Manager
Configuration Assistant START
EMCA
de-configuration trace file location:
/grid/app/oraInventory/logs/emcadc_clean2016-04-01_06-53-56-PM.log
Updating Enterprise
Manager ASM targets (if any)
Updating Enterprise
Manager listener targets (if any)
Enterprise Manager
Configuration Assistant END
Database
de-configuration trace file location:
/grid/app/oraInventory/logs/databasedc_clean2016-04-01_06-55-00-PM.log
Network
Configuration clean config START
Network
de-configuration trace file location:
/grid/app/oraInventory/logs/netdc_clean2016-04-01_06-55-00-PM.log
De-configuring Local
Net Service Names configuration file...
Local Net Service
Names configuration file de-configured successfully.
De-configuring
backup files...
Backup files
de-configured successfully.
The network
configuration has been cleaned up successfully.
Network
Configuration clean config END
Oracle Configuration
Manager clean START
OCM clean log file
location : /grid/app/oraInventory/logs//ocm_clean4164.log
Oracle Configuration
Manager clean END
Setting the force
flag to false
Setting the force
flag to cleanup the Oracle Base
Oracle Universal
Installer clean START
Detach Oracle home
'/u01/app/oracle/product/11.2.0.3/DB_1' from the central inventory on the local
node : Done
Delete directory
'/u01/app/oracle/product/11.2.0.3/DB_1' on the local node : Done
The Oracle Base
directory '/u01/app/oracle' will not be removed on local node. The directory is
not empty.
Oracle Universal
Installer cleanup was successful.
Oracle Universal
Installer clean END
## [START] Oracle
install clean ##
Clean install
operation removing temporary directory '/tmp/deinstall2016-04-01_06-52-26PM' on
node 'rac123'
## [END] Oracle
install clean ##
#########################
CLEAN OPERATION END #########################
#######################
CLEAN OPERATION SUMMARY #######################
Cleaning the config
for CCR
As CCR is not
configured, so skipping the cleaning of CCR configuration
CCR clean is
finished
Successfully
detached Oracle home '/u01/app/oracle/product/11.2.0.3/DB_1' from the central
inventory on the local node.
Successfully deleted
directory '/u01/app/oracle/product/11.2.0.3/DB_1' on the local node.
Oracle Universal
Installer cleanup was successful.
Oracle deinstall
tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE
DEINSTALL & DECONFIG TOOL END #############
[oracle@rac123 bin]$
REMOVE GRID INFRASTRUCTURE ON NODE RAC123
Now deinstall the Grid Infrastructure binaries from rac123 using the -local option. If we do not use the -local option, the deinstall will deconfigure the entire cluster.
[oracle@rac123 bin]$ /grid/app/11.2.0.3/grid/deinstall/deinstall -local
Checking for
required files and bootstrapping ...
Please wait ...
Location of logs
/tmp/deinstall2016-04-01_06-59-35PM/logs/
############ ORACLE
DEINSTALL & DECONFIG TOOL START ############
#########################
CHECK OPERATION START #########################
## [START] Install
check configuration ##
Checking for
existence of the Oracle home location /grid/app/11.2.0.3/grid
Oracle Home type
selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected
for deinstall is: /grid/app/oracle
Checking for
existence of central inventory location /grid/app/oraInventory
Checking for
existence of the Oracle Grid Infrastructure home
The following nodes
are part of this cluster: rac123
Checking for
sufficient temp space availability on node(s) : 'rac123'
## [END] Install
check configuration ##
Traces log file:
/tmp/deinstall2016-04-01_06-59-35PM/logs//crsdc.log
Enter an address or
the name of the virtual IP used on node "rac123"[rac123-vip]
>
The following
information can be collected by running "/sbin/ifconfig -a" on node
"rac123"
Enter the IP netmask
of Virtual IP "192.168.1.158" on node
"rac123"[255.255.255.0]
>
Enter the network
interface name on which the virtual IP address "192.168.1.158" is
active
>
Enter an address or
the name of the virtual IP[]
>
Network
Configuration check config START
Network
de-configuration trace file location:
/tmp/deinstall2016-04-01_06-59-35PM/logs/netdc_check2016-04-01_07-00-21-PM.log
Specify all RAC
listeners (do not include SCAN listener)
that are to be de-configured [LISTENER,LISTENER_SCAN2]:
Network
Configuration check config END
Asm Check
Configuration START
ASM de-configuration
trace file location:
/tmp/deinstall2016-04-01_06-59-35PM/logs/asmcadc_check2016-04-01_07-01-24-PM.log
#########################
CHECK OPERATION END #########################
#######################
CHECK OPERATION SUMMARY #######################
Oracle Grid
Infrastructure Home is:
The cluster node(s)
on which the Oracle home deinstallation will be performed are:rac123
Since -local option
has been specified, the Oracle home will be deinstalled only on the local node,
'rac123', and the global configuration will be removed.
Oracle Home selected
for deinstall is: /grid/app/11.2.0.3/grid
Inventory Location
where the Oracle home registered is: /grid/app/oraInventory
Following RAC
listener(s) will be de-configured: LISTENER,LISTENER_SCAN2
Option -local will
not modify any ASM configuration.
Do you want to
continue (y - yes, n - no)? [n]: y
A log of this
session will be written to:
'/tmp/deinstall2016-04-01_06-59-35PM/logs/deinstall_deconfig2016-04-01_06-59-48-PM.out'
Any error messages
from this session will be written to:
'/tmp/deinstall2016-04-01_06-59-35PM/logs/deinstall_deconfig2016-04-01_06-59-48-PM.err'
########################
CLEAN OPERATION START ########################
ASM de-configuration
trace file location:
/tmp/deinstall2016-04-01_06-59-35PM/logs/asmcadc_clean2016-04-01_07-01-51-PM.log
ASM Clean
Configuration END
Network Configuration
clean config START
Network
de-configuration trace file location:
/tmp/deinstall2016-04-01_06-59-35PM/logs/netdc_clean2016-04-01_07-01-51-PM.log
De-configuring RAC
listener(s): LISTENER,LISTENER_SCAN2
De-configuring
listener: LISTENER
Stopping listener on node
"rac123": LISTENER
Warning: Failed to stop listener. Listener
may not be running.
Listener
de-configured successfully.
De-configuring
listener: LISTENER_SCAN2
Stopping listener on node
"rac123": LISTENER_SCAN2
Warning: Failed to stop listener. Listener
may not be running.
Listener
de-configured successfully.
De-configuring
Naming Methods configuration file...
Naming Methods
configuration file de-configured successfully.
De-configuring
backup files...
Backup files
de-configured successfully.
The network
configuration has been cleaned up successfully.
Network
Configuration clean config END
---------------------------------------->
The deconfig command
below can be executed in parallel on all the remote nodes. Execute the command
on the local node after the execution
completes on all the remote nodes.
Run the following
command as the root user or the administrator on node "rac123".
/tmp/deinstall2016-04-01_06-59-35PM/perl/bin/perl
-I/tmp/deinstall2016-04-01_06-59-35PM/perl/lib
-I/tmp/deinstall2016-04-01_06-59-35PM/crs/install
/tmp/deinstall2016-04-01_06-59-35PM/crs/install/rootcrs.pl -force -deconfig -paramfile
"/tmp/deinstall2016-04-01_06-59-35PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Press Enter after
you finish running the above commands
<----------------------------------------
Remove the
directory: /tmp/deinstall2016-04-01_06-59-35PM on node:
Setting the force
flag to false
Setting the force
flag to cleanup the Oracle Base
Oracle Universal
Installer clean START
Detach Oracle home
'/grid/app/11.2.0.3/grid' from the central inventory on the local node : Done
Delete directory
'/grid/app/11.2.0.3/grid' on the local node : Done
Delete directory
'/grid/app/oraInventory' on the local node : Done
Delete directory
'/grid/app/oracle' on the local node : Done
Oracle Universal
Installer cleanup was successful.
Oracle Universal
Installer clean END
## [START] Oracle
install clean ##
Clean install
operation removing temporary directory '/tmp/deinstall2016-04-01_06-59-35PM' on
node 'rac123'
## [END] Oracle
install clean ##
#########################
CLEAN OPERATION END #########################
#######################
CLEAN OPERATION SUMMARY #######################
Following RAC
listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN2
Oracle Clusterware
is stopped and successfully de-configured on node "rac123"
Oracle Clusterware
is stopped and de-configured successfully.
Successfully
detached Oracle home '/grid/app/11.2.0.3/grid' from the central inventory on
the local node.
Successfully deleted
directory '/grid/app/11.2.0.3/grid' on the local node.
Successfully deleted
directory '/grid/app/oraInventory' on the local node.
Successfully deleted
directory '/grid/app/oracle' on the local node.
Oracle Universal
Installer cleanup was successful.
Run 'rm -rf
/etc/oraInst.loc' as root on node(s) 'rac123' at the end of the session.
Run 'rm -rf
/opt/ORCLfmap' as root on node(s) 'rac123' at the end of the session.
Oracle deinstall
tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE
DEINSTALL & DECONFIG TOOL END #############
Run the command below to complete the deinstallation.
[oracle@rac123
~]$ sudo /tmp/deinstall2016-04-01_06-59-35PM/perl/bin/perl
-I/tmp/deinstall2016-04-01_06-59-35PM/perl/lib
-I/tmp/deinstall2016-04-01_06-59-35PM/crs/install
/tmp/deinstall2016-04-01_06-59-35PM/crs/install/rootcrs.pl -force -deconfig -paramfile
"/tmp/deinstall2016-04-01_06-59-35PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration
parameter file:
/tmp/deinstall2016-04-01_06-59-35PM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve
Oracle Clusterware home.
Start Oracle
Clusterware stack and try again.
CRS-4047: No Oracle
Clusterware components configured.
CRS-4000: Command
Stop failed, or completed with errors.
################################################################
# You must kill
processes or reboot the system to properly #
# cleanup the
processes started by Oracle clusterware
#
################################################################
Either
/etc/oracle/olr.loc does not exist or is not readable
Make sure the file
exists and it has read and execute access
Either
/etc/oracle/olr.loc does not exist or is not readable
Make sure the file
exists and it has read and execute access
Failure in execution
(rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package
cvuqdisk is not installed
Successfully
deconfigured Oracle clusterware stack on this node
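As an optional last check, you can verify the node removal from one of the surviving nodes with the cluster verification utility; a quick sketch (not part of the captured transcript):
[oracle@rac121 ~]$ cluvfy stage -post nodedel -n rac123 -verbose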
Hope this helps.... :)