Hello Friends,
Introduction
As an Oracle DBA, you may one day need to change every IP address in a Real Application Clusters (RAC) environment. Common triggers are a data center migration, a company-wide network re-IP project, or strict security compliance requirements. Whatever the cause, re-IPing a RAC environment is one of the most invasive operations you can perform on a cluster, because it touches the network, the backbone of the Grid Infrastructure.
The success of a re-IP depends entirely on careful planning, precise execution, and thorough testing. Rushing the process or skipping a single step can leave you with a non-functional cluster and a long, painful restoration ahead. This guide lays out a safe, structured, and proven method for this high-risk operation. It requires a significant outage, so schedule the downtime well in advance.
Pre-Migration Checklist
Before you even think about executing the procedure, complete this checklist. Preparation is your best insurance policy.
Verify Current Configuration: Document the entire existing setup
Apply Patches: Ensure all Oracle Grid Infrastructure and Database patches are applied
Secure Backups: Take verified, recent backups of the cluster and databases
Document Everything: Create a detailed worksheet of current and new IP configurations
Schedule Downtime: Coordinate a definitive maintenance window
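Before editing anything, it helps to capture the current state into files you can diff against after the migration. Below is a minimal sketch; the capture helper, the output directory, and the PRODDB name are my own choices, so adapt them to your cluster:

```shell
#!/bin/bash
# Save the output of each documentation command under a timestamped directory.
# The capture helper and the PRODDB name are illustrative -- adapt to your site.
OUTDIR="${OUTDIR:-/tmp/rac_reip_inventory.$(date +%Y%m%d_%H%M%S)}"
mkdir -p "$OUTDIR"

capture() {                      # capture <label> <command...>
    local label="$1"; shift
    "$@" > "$OUTDIR/$label.out" 2>&1 || true   # keep going even if a command fails
    echo "$OUTDIR/$label.out"                  # print the file just written
}

capture srvctl_config_db   srvctl config database -d PRODDB
capture crsctl_stat_res    crsctl stat res -t
capture srvctl_config_scan srvctl config scan
capture nodeapps_status    srvctl status nodeapps
capture oifcfg_getif       oifcfg getif
capture etc_hosts          cat /etc/hosts
```

Run it on every node before Phase 1; after the migration, run it again and diff the two directories to spot anything you forgot to change.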
Documenting the Current Environment
Let's document our current environment before making any changes:
[grid@dm01db01 ~]$ srvctl config database -d PRODDB
Database unique name: PRODDB
Database name: PRODDB
Oracle home: /u01/app/oracle/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/PRODDB/PARAMETERFILE/spfile.272.1234567890
Password file: +DATA/PRODDB/PASSWORD/pwdprod.256.1234567890
Domain: database.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA,RECO
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oper
Database instances: PRODDB1,PRODDB2
Configured nodes: dm01db01,dm01db02
Database is administrator managed
[grid@dm01db01 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
ora.asm
               ONLINE  ONLINE       dm01db01                 Started,STABLE
               ONLINE  ONLINE       dm01db02                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
ora.ons
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.cvu
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.proddb.db
      1        ONLINE  ONLINE       dm01db01                 Open,HOME=/u01/app/oracle/product/19.0.0/dbhome_1,STABLE
      2        ONLINE  ONLINE       dm01db02                 Open,HOME=/u01/app/oracle/product/19.0.0/dbhome_1,STABLE
ora.dm01db01.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.dm01db02.vip
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
[grid@dm01db01 ~]$ srvctl config scan
SCAN name: cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /cluster-scan/192.168.1.105
SCAN VIP name: scan2, IP: /cluster-scan/192.168.1.106
SCAN VIP name: scan3, IP: /cluster-scan/192.168.1.107
[grid@dm01db01 ~]$ srvctl status nodeapps
VIP dm01db01-vip is enabled
VIP dm01db01-vip is running on node: dm01db01
VIP dm01db02-vip is enabled
VIP dm01db02-vip is running on node: dm01db02
Network is enabled
Network is running on node: dm01db01
Network is running on node: dm01db02
ONS is enabled
ONS daemon is running on node: dm01db01
ONS daemon is running on node: dm01db02
Current Network Configuration
/etc/hosts (Current):
[root@dm01db01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# Public Network - Current
192.168.1.101 dm01db01.database.com dm01db01
192.168.1.102 dm01db02.database.com dm01db02
# Private Network
10.10.10.101 dm01db01-priv.database.com dm01db01-priv
10.10.10.102 dm01db02-priv.database.com dm01db02-priv
# Virtual IPs - Current
192.168.1.111 dm01db01-vip.database.com dm01db01-vip
192.168.1.112 dm01db02-vip.database.com dm01db02-vip
# SCAN IPs - Current
192.168.1.105 cluster-scan.database.com cluster-scan
192.168.1.106 cluster-scan.database.com cluster-scan
192.168.1.107 cluster-scan.database.com cluster-scan
New Network Configuration:
New Public Network: 192.168.2.0/24
New VIP Network: 192.168.2.0/24
New SCAN IPs: 192.168.2.105, 192.168.2.106, 192.168.2.107
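Since only the third octet changes in this example, the hosts-file edits in Phase 2 can be previewed mechanically before you touch anything. A small sketch; reip_hosts is a hypothetical helper that assumes a plain prefix swap and leaves the 10.10.10.x interconnect lines alone:

```shell
# Rewrite the public/VIP/SCAN subnet prefix in hosts-style lines read from stdin.
# reip_hosts is an illustrative helper, not an Oracle tool.
reip_hosts() {                   # reip_hosts <old_prefix> <new_prefix>
    local old="$1" new="$2"
    # Escape the dots in the old prefix so sed matches them literally.
    sed -E "s/^${old//./\\.}\./${new}./"
}

# Usage: preview the rewrite before editing /etc/hosts by hand:
#   reip_hosts 192.168.1 192.168.2 < /etc/hosts
```

Lines that do not start with the old prefix (comments, localhost, the private interconnect) pass through unchanged, which is exactly what the Phase 2 edit requires.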
The Step-by-Step Procedure
Warning: Execute these steps in the exact order specified. Run commands as the grid user unless the prompt shows root (#) or the oracle user.
Phase 1: Graceful Shutdown
Stop all database instances (as oracle user):
[oracle@dm01db01 ~]$ srvctl stop database -d PRODDB -o immediate
Stop ASM instances across all nodes:
[grid@dm01db01 ~]$ srvctl stop asm -node dm01db01 -force
[grid@dm01db01 ~]$ srvctl stop asm -node dm01db02 -force
Stop the Clusterware stack across all nodes:
************************Node1******************************
[root@dm01db01 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dm01db01'
CRS-2673: Attempting to stop 'ora.cvu' on 'dm01db01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'dm01db01'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'dm01db01'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'dm01db01' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'dm01db01' succeeded
CRS-2677: Stop of 'ora.cvu' on 'dm01db01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dm01db01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
****************************Node2***********************
[root@dm01db02 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dm01db02'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'dm01db02'
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'dm01db02'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'dm01db02' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'dm01db02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dm01db02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Phase 2: Operating System Network Reconfiguration
Update /etc/hosts on all nodes:
[root@dm01db01 ~]# cp /etc/hosts /etc/hosts.backup.$(date +%Y%m%d)
[root@dm01db01 ~]# vi /etc/hosts
# Modified /etc/hosts file:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# OLD IPs - Commented out for rollback
#192.168.1.101 dm01db01.database.com dm01db01
#192.168.1.102 dm01db02.database.com dm01db02
#192.168.1.111 dm01db01-vip.database.com dm01db01-vip
#192.168.1.112 dm01db02-vip.database.com dm01db02-vip
#192.168.1.105 cluster-scan.database.com cluster-scan
#192.168.1.106 cluster-scan.database.com cluster-scan
#192.168.1.107 cluster-scan.database.com cluster-scan
# NEW IPs - Public Network
192.168.2.101 dm01db01.database.com dm01db01
192.168.2.102 dm01db02.database.com dm01db02
# Private Network (unchanged)
10.10.10.101 dm01db01-priv.database.com dm01db01-priv
10.10.10.102 dm01db02-priv.database.com dm01db02-priv
# NEW Virtual IPs
192.168.2.111 dm01db01-vip.database.com dm01db01-vip
192.168.2.112 dm01db02-vip.database.com dm01db02-vip
# NEW SCAN IPs
192.168.2.105 cluster-scan.database.com cluster-scan
192.168.2.106 cluster-scan.database.com cluster-scan
192.168.2.107 cluster-scan.database.com cluster-scan
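After editing, it is worth verifying on every node that no uncommented line still carries the old public subnet. A hedged sketch; check_hosts is my own helper, not an Oracle utility:

```shell
# Fail if any active (uncommented) /etc/hosts entry still starts with the old prefix.
# check_hosts is an illustrative helper.
check_hosts() {                  # check_hosts <old_prefix> <hosts_file>
    if grep -qE "^[[:space:]]*${1//./\\.}\." "$2"; then
        echo "FAIL: active $1.x entries remain in $2"
        return 1
    fi
    echo "OK: no active $1.x entries in $2"
}

# Usage on each node:
#   check_hosts 192.168.1 /etc/hosts
```

Commented-out rollback lines begin with #, so they are ignored; only a live old-subnet entry makes the check fail.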
Reconfigure network interfaces and update DNS:
[root@dm01db01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Change IPADDR to new public IP
IPADDR=192.168.2.101
NETMASK=255.255.255.0
GATEWAY=192.168.2.1
[root@dm01db01 ~]# systemctl restart network
Contact your network team to update DNS records for SCAN and VIPs to point to the new IP addresses.
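Once the network team reports the change, verify that the SCAN name resolves to exactly three addresses in the new subnet before proceeding. A sketch; scan_dns_ok is my own filter, and the nslookup parsing in the usage line is approximate and may need adjusting for your resolver's output format:

```shell
# Count how many resolved addresses fall in the new subnet; all three must match.
# scan_dns_ok is an illustrative helper reading one IP per line on stdin.
scan_dns_ok() {                  # scan_dns_ok <new_prefix> <expected_count>
    local n
    n=$(grep -c "^$1\." || true)
    if [ "$n" -eq "$2" ]; then echo OK; else echo "FAIL ($n of $2 in $1.x)"; fi
}

# Usage (skip the resolver's own "Address: ...#53" line):
#   nslookup cluster-scan.database.com \
#       | awk '/^Address:/ && $2 !~ /#/ {print $2}' \
#       | scan_dns_ok 192.168.2 3
```

Do not continue to Phase 3 until this reports OK from every node; a stale DNS cache here is a common cause of SCAN listener problems later.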
Phase 3: Grid Infrastructure Reconfiguration
Verify and update the public interface definition with oifcfg. Note: oifcfg reads and writes the OCR and GPnP profile, so the Clusterware stack must be running on the node where you issue these commands; if the full stack is still down from Phase 1, start it on node 1 first (crsctl start crs as root) and then run the oifcfg commands.
[grid@dm01db01 ~]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
[grid@dm01db01 ~]$ oifcfg delif -global eth0
[grid@dm01db01 ~]$ oifcfg setif -global eth0/192.168.2.0:public
[grid@dm01db01 ~]$ oifcfg getif
eth0  192.168.2.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
Start Clusterware on node 1 only:
[root@dm01db01 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'dm01db01'
CRS-2672: Attempting to start 'ora.evmd' on 'dm01db01'
CRS-2676: Start of 'ora.mdnsd' on 'dm01db01' succeeded
CRS-2676: Start of 'ora.evmd' on 'dm01db01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'dm01db01'
CRS-2676: Start of 'ora.gpnpd' on 'dm01db01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'dm01db01'
CRS-2676: Start of 'ora.gipcd' on 'dm01db01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'dm01db01'
CRS-2676: Start of 'ora.cssdmonitor' on 'dm01db01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'dm01db01'
CRS-2672: Attempting to start 'ora.diskmon' on 'dm01db01'
CRS-2676: Start of 'ora.diskmon' on 'dm01db01' succeeded
CRS-2676: Start of 'ora.cssd' on 'dm01db01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'dm01db01'
CRS-2676: Start of 'ora.ctssd' on 'dm01db01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'dm01db01'
CRS-2676: Start of 'ora.asm' on 'dm01db01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'dm01db01'
CRS-2676: Start of 'ora.storage' on 'dm01db01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'dm01db01'
CRS-2676: Start of 'ora.crsd' on 'dm01db01' succeeded
Reconfigure Node Virtual IPs:
[grid@dm01db01 ~]$ srvctl stop vip -node dm01db01 -force
[grid@dm01db01 ~]$ srvctl stop vip -node dm01db02 -force
[grid@dm01db01 ~]$ srvctl remove vip -vip dm01db01-vip -force
[grid@dm01db01 ~]$ srvctl remove vip -vip dm01db02-vip -force
[grid@dm01db01 ~]$ srvctl add vip -node dm01db01 -address dm01db01-vip/255.255.255.0/eth0
[grid@dm01db01 ~]$ srvctl add vip -node dm01db02 -address dm01db02-vip/255.255.255.0/eth0
[grid@dm01db01 ~]$ srvctl start vip -node dm01db01
[grid@dm01db01 ~]$ srvctl start vip -node dm01db02
Reconfigure SCAN IPs:
[grid@dm01db01 ~]$ srvctl stop scan_listener
[grid@dm01db01 ~]$ srvctl stop scan
[grid@dm01db01 ~]$ srvctl remove scan_listener
[grid@dm01db01 ~]$ srvctl remove scan -force
[grid@dm01db01 ~]$ srvctl add scan -scanname cluster-scan
[grid@dm01db01 ~]$ srvctl add scan_listener
[grid@dm01db01 ~]$ srvctl start scan
[grid@dm01db01 ~]$ srvctl start scan_listener
Phase 4: Cluster Restart and Validation
Start Clusterware on remaining nodes:
[root@dm01db02 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'dm01db02'
CRS-2672: Attempting to start 'ora.evmd' on 'dm01db02'
CRS-2676: Start of 'ora.mdnsd' on 'dm01db02' succeeded
CRS-2676: Start of 'ora.evmd' on 'dm01db02' succeeded
...
Verify the Clusterware Stack:
[grid@dm01db01 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
ora.asm
               ONLINE  ONLINE       dm01db01                 Started,STABLE
               ONLINE  ONLINE       dm01db02                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
ora.ons
               ONLINE  ONLINE       dm01db01                 STABLE
               ONLINE  ONLINE       dm01db02                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.cvu
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.proddb.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  OFFLINE                               STABLE
ora.dm01db01.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.dm01db02.vip
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       dm01db02                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       dm01db01                 STABLE
[grid@dm01db01 ~]$ srvctl status nodeapps
VIP dm01db01-vip is enabled
VIP dm01db01-vip is running on node: dm01db01
VIP dm01db02-vip is enabled
VIP dm01db02-vip is running on node: dm01db02
Network is enabled
Network is running on node: dm01db01
Network is running on node: dm01db02
ONS is enabled
ONS daemon is running on node: dm01db01
ONS daemon is running on node: dm01db02
[grid@dm01db01 ~]$ srvctl config scan
SCAN name: cluster-scan, Network: 1/192.168.2.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /cluster-scan/192.168.2.105
SCAN VIP name: scan2, IP: /cluster-scan/192.168.2.106
SCAN VIP name: scan3, IP: /cluster-scan/192.168.2.107
Start ASM and Database:
[grid@dm01db01 ~]$ srvctl start asm -node dm01db01
[grid@dm01db01 ~]$ srvctl start asm -node dm01db02
[oracle@dm01db01 ~]$ srvctl start database -d PRODDB
Post-Migration Validation
Run these commands to confirm everything is working correctly:
[grid@dm01db01 ~]$ crsctl stat res -t
# Verify all resources are ONLINE
[grid@dm01db01 ~]$ srvctl config scan
SCAN name: cluster-scan, Network: 1/192.168.2.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /cluster-scan/192.168.2.105
SCAN VIP name: scan2, IP: /cluster-scan/192.168.2.106
SCAN VIP name: scan3, IP: /cluster-scan/192.168.2.107
[grid@dm01db01 ~]$ srvctl status nodeapps
VIP dm01db01-vip is enabled
VIP dm01db01-vip is running on node: dm01db01
VIP dm01db02-vip is enabled
VIP dm01db02-vip is running on node: dm01db02
[grid@dm01db01 ~]$ oifcfg getif
eth0  192.168.2.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
Test database connectivity using new SCAN IP:
[oracle@client ~]$ sqlplus system/password@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.105)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=PRODDB)))'
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Nov 1 10:00:00 2024
Version 19.15.0.0.0
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0
SQL> SELECT INSTANCE_NAME, HOST_NAME FROM V$INSTANCE;
INSTANCE_NAME    HOST_NAME
---------------  ---------------
PRODDB1          dm01db01
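Rather than testing a single SCAN IP, you can exercise each of the three VIPs individually. A sketch; conn_desc is an illustrative helper, and sqlplus is assumed to be available on the client:

```shell
# Build a full TNS connect descriptor for one host/port/service.
# conn_desc is an illustrative helper.
conn_desc() {                    # conn_desc <host> <port> <service>
    printf '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=%s))(CONNECT_DATA=(SERVICE_NAME=%s)))' "$1" "$2" "$3"
}

# Print one sqlplus command per new SCAN IP; run them from a client machine.
for ip in 192.168.2.105 192.168.2.106 192.168.2.107; do
    echo "sqlplus system@'$(conn_desc "$ip" 1521 PRODDB)'"
done
```

If any single SCAN VIP refuses connections while the others work, re-check the corresponding scan listener and DNS entry before declaring the migration complete.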
Rollback Plan
If critical issues are encountered, follow this rollback procedure immediately:
Stop all operations and shut down Clusterware:
[root@dm01db01 ~]# crsctl stop crs -f
[root@dm01db02 ~]# crsctl stop crs -f
Revert OS network configuration:
[root@dm01db01 ~]# cp /etc/hosts.backup.20251101 /etc/hosts
[root@dm01db01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Revert to original IP
IPADDR=192.168.1.101
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
[root@dm01db01 ~]# systemctl restart network
Contact the network team to revert the DNS records immediately.
Revert oifcfg configuration:
[grid@dm01db01 ~]$ oifcfg delif -global eth0
[grid@dm01db01 ~]$ oifcfg setif -global eth0/192.168.1.0:public
Start Clusterware on node 1:
[root@dm01db01 ~]# crsctl start crs
If you rolled back before the Phase 3 srvctl changes, the original VIP and SCAN configuration is still stored in the OCR and will be used once DNS and the OS network are reverted. If the VIPs or SCAN were already removed and re-added with the new addresses, repeat the corresponding srvctl remove/add steps with the original values.
Start remaining nodes and validate:
[root@dm01db02 ~]# crsctl start crs
[oracle@dm01db01 ~]$ srvctl start database -d PRODDB
