
RAC One Node, 11.2.0.4. Let's follow what happens when we relocate a RAC One Node database from one node to another. Reminder: in RAC One Node, only one instance is up at a time.

 

Situation before the RELOCATE:

oracle@node1:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST1 is running on node node1

Online relocation: INACTIVE

 

On our "node1" server, let's launch the RELOCATE (from node1 to node2):

oracle@node1:/home/oracle $ srvctl relocate database -d RAC1NODE_DB -n node2
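As a side note, in 11.2 `srvctl relocate database` also accepts `-w` to set the online relocation timeout in minutes (30 by default, which matches the `_shutdown_completion_timeout_mins=30` we will see in the alert.log below) and `-v` for verbose output. A minimal sketch, where the 15-minute timeout is just an illustrative value:

```shell
# Sketch: relocate RAC1NODE_DB to node2 with an explicit 15-minute timeout.
# -w <minutes> : online relocation timeout (default 30 in 11.2)
# -v           : verbose progress output
DB=RAC1NODE_DB
TARGET_NODE=node2
CMD="srvctl relocate database -d $DB -n $TARGET_NODE -w 15 -v"
echo "$CMD"
# On a real cluster, run the command directly instead of echoing it:
#   srvctl relocate database -d "$DB" -n "$TARGET_NODE" -w 15 -v
```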

 

On the other node, let's monitor the RELOCATE with SRVCTL STATUS DATABASE:

oracle@node2:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST2 is running on node node2

Instance RAC1NODE_INST1 is running on node node1

Online relocation: ACTIVE

Source instance: RAC1NODE_INST1 on node1

Destination instance: RAC1NODE_INST2 on node2

Here we can see that, for a moment while the online relocation is active, two instances are up. This does not last long, however.
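This intermediate state is easy to spot programmatically by parsing the `srvctl` output. A small sketch: here the status text is fed from a here-doc copying the output above, so it runs standalone; on a real cluster you would pipe `srvctl status database -d RAC1NODE_DB` in instead.

```shell
#!/bin/sh
# Count running instances and read the relocation state from srvctl output.
status="$(cat <<'EOF'
Instance RAC1NODE_INST2 is running on node node2
Instance RAC1NODE_INST1 is running on node node1
Online relocation: ACTIVE
Source instance: RAC1NODE_INST1 on node1
Destination instance: RAC1NODE_INST2 on node2
EOF
)"
# One line per running instance:
running=$(printf '%s\n' "$status" | grep -c '^Instance .* is running')
# The value after "Online relocation: " (ACTIVE or INACTIVE):
relocation=$(printf '%s\n' "$status" | awk -F': ' '/^Online relocation/ {print $2}')
echo "running=$running relocation=$relocation"
```

During the relocation window this reports two running instances with relocation ACTIVE; before and after, one instance and INACTIVE.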

 

oracle@node2:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST2 is running on node node2

Online relocation: ACTIVE

Source instance: RAC1NODE_INST1 on node1

Destination instance: RAC1NODE_INST2 on node2

Soon after, while the online relocation is still active, the database is reported as running on the destination node, and the source instance is already down.

 

Looking at the alert.log on node 1 (the instance that is going to be shut down):

2018-03-13 02:00:00.005000 +01:00

Closing scheduler window

Restoring Resource Manager plan DEFAULT_PLAN via scheduler window

Setting Resource Manager plan DEFAULT_PLAN via parameter

2018-03-13 03:10:51.311000 +01:00

Stopping background process CJQ0

2018-03-13 16:23:17.464000 +01:00

Reconfiguration started (old inc 2, new inc 4)

List of instances:

1 2 (myinst: 1)

Global Resource Directory frozen

Communication channels reestablished

Master broadcasted resource hash value bitmaps

Non-local Process blocks cleaned out

LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

Set master node info

Submitted all remote-enqueue requests

Dwn-cvts replayed, VALBLKs dubious

All grantable enqueues granted

Submitted all GCS remote-cache requests

Fix write in gcs resources

Reconfiguration complete

2018-03-13 16:23:20.133000 +01:00

minact-scn: Master returning as live inst:2 has inc# mismatch instinc:0 cur:4 errcnt:0

2018-03-13 16:23:27.330000 +01:00

ALTER SYSTEM SET service_names='RAC1NODE_DB' SCOPE=MEMORY SID='RAC1NODE_INST1';

Shutting down instance (transactional local)

Stopping background process SMCO

2018-03-13 16:23:28.504000 +01:00

Shutting down instance: further logons disabled

Stopping background process QMNC

2018-03-13 16:23:34.536000 +01:00

Stopping background process MMNL

2018-03-13 16:23:35.538000 +01:00

Stopping background process MMON

2018-03-13 16:23:36.540000 +01:00

Local transactions complete. Performing immediate shutdown

License high water mark = 45

2018-03-13 16:23:39.663000 +01:00

ALTER SYSTEM SET _shutdown_completion_timeout_mins=30 SCOPE=MEMORY;

ALTER DATABASE CLOSE NORMAL /* db agent *//* {2:52331:39577} */

SMON: disabling tx recovery

Stopping background process RCBG

2018-03-13 16:23:41.737000 +01:00

SMON: disabling cache recovery

NOTE: Deferred communication with ASM instance

NOTE: deferred map free for map id 16

Redo thread 1 internally disabled at seq 2772 (LGWR)

Shutting down archive processes

Archiving is disabled

ARCH shutting down

ARCH shutting down

ARCH shutting down

ARC2: Archival stopped

ARC0: Archival stopped

ARC3: Archival stopped

ARC1: Archiving disabled thread 1 sequence 2772

Archived Log entry 2792 added for thread 1 sequence 2772 ID 0xde13bfb6 dest 1:

ARCH shutting down

ARC1: Archival stopped

NOTE: Deferred communication with ASM instance

2018-03-13 16:23:42.816000 +01:00

Thread 1 closed at log sequence 2772

Successful close of redo thread 1

NOTE: Deferred communication with ASM instance

NOTE: deferred map free for map id 4

Completed: ALTER DATABASE CLOSE NORMAL /* db agent *//* {2:52331:39577} */

ALTER DATABASE DISMOUNT /* db agent *//* {2:52331:39577} */

Shutting down archive processes

Archiving is disabled

NOTE: Deferred communication with ASM instance

NOTE: deferred map free for map id 2

Completed: ALTER DATABASE DISMOUNT /* db agent *//* {2:52331:39577} */

ARCH: Archival disabled due to shutdown: 1089

Shutting down archive processes

Archiving is disabled

NOTE: force a map free for map id 2

NOTE: force a map free for map id 4

NOTE: force a map free for map id 16

2018-03-13 16:23:44.226000 +01:00

ARCH: Archival disabled due to shutdown: 1089

Shutting down archive processes

Archiving is disabled

NOTE: Shutting down MARK background process

Stopping background process VKTM

NOTE: force a map free for map id 4684

NOTE: force a map free for map id 4683

2018-03-13 16:23:45.900000 +01:00

freeing rdom 0

2018-03-13 16:23:48.283000 +01:00

Instance shutdown complete
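The whole transactional shutdown on node1 is quick here: roughly 21 seconds between "Shutting down instance (transactional local)" at 16:23:27 and "Instance shutdown complete" at 16:23:48. In the alert.log, each timestamp sits on its own line above the messages it covers, so the bracketing times can be pulled out with awk. A minimal sketch, fed from a reduced here-doc excerpt so it runs standalone; on a real system, point the awk at the actual alert.log:

```shell
#!/bin/sh
# Extract the timestamps bracketing the shutdown from an alert.log excerpt.
log="$(cat <<'EOF'
2018-03-13 16:23:27.330000 +01:00
Shutting down instance (transactional local)
2018-03-13 16:23:48.283000 +01:00
Instance shutdown complete
EOF
)"
# /^2018/ matches the timestamp lines of this excerpt; remember the
# time-of-day field, then print it when the event line is reached.
start=$(printf '%s\n' "$log" | awk '/^2018/ {ts=$2} /transactional local/ {print ts}')
end=$(printf '%s\n' "$log" | awk '/^2018/ {ts=$2} /shutdown complete/ {print ts}')
echo "shutdown started at $start, completed at $end"
```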

 

And the alert.log on node2 (the instance that is being started):

2018-03-13 16:23:09.103000 +01:00

Starting ORACLE instance (normal)

...

2018-03-13 16:23:37.540000 +01:00

minact-scn: Inst 2 is now the master inc#:4 mmon proc-id:3931 status:0x7

minact-scn status: grec-scn:0x0000.00000000 gmin-scn:0x0000.00000000 gcalc-scn:0x0000.00000000

minact-scn: Master returning as live inst:1 has inc# mismatch instinc:0 cur:4 errcnt:0

2018-03-13 16:23:46.352000 +01:00

Reconfiguration started (old inc 4, new inc 6)

List of instances:

2 (myinst: 2)

Global Resource Directory frozen

* dead instance detected - domain 0 invalid = TRUE

Communication channels reestablished

Master broadcasted resource hash value bitmaps

Non-local Process blocks cleaned out

LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived

Set master node info

Submitted all remote-enqueue requests

Dwn-cvts replayed, VALBLKs dubious

All grantable enqueues granted

Post SMON to start 1st pass IR

Instance recovery: looking for dead threads

Submitted all GCS remote-cache requests

Post SMON to start 1st pass IR

Fix write in gcs resources

Starting background process CJQ0

Reconfiguration complete

 

End result:

oracle@node1:/home/oracle $ srvctl status database -d RAC1NODE_DB

Instance RAC1NODE_INST2 is running on node node2

Online relocation: INACTIVE

We are back to normal: the online relocation is over (hence INACTIVE), and only one instance is up, the one on node2.
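This final check can also be scripted, for instance at the end of a maintenance procedure. A sketch, assuming we expect the instance to end up on node2; as before, the status text comes from a here-doc and would be replaced by the live `srvctl status database -d RAC1NODE_DB` call:

```shell
#!/bin/sh
# Verify the relocation finished and the instance runs on the expected node.
expected_node=node2
status="$(cat <<'EOF'
Instance RAC1NODE_INST2 is running on node node2
Online relocation: INACTIVE
EOF
)"
if printf '%s\n' "$status" | grep -q 'Online relocation: INACTIVE' &&
   printf '%s\n' "$status" | grep -q "is running on node $expected_node"; then
    result=OK
else
    result=KO
fi
echo "relocation check: $result"
```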