One thing to consider with your approach is the downtime required when you migrate the data from the old box to the new box.
Also, with respect to the Oracle 18c migration, you can have a look at the blog mentioned below; it has multiple links:
1. Install RedHat 7/8 on new Server
==> Please note that neither Oracle 18c nor 19c are currently certified for RHEL 8.
2. Install Oracle DB 18c RAC on this Server
==> Why 18c and not 19c (the Long Term Support release)?
3. Change DB 12.1.0.2 from non-CDB to CDB
==> I would not change the current database, but
(a) Create a new CDB on the 18c/19c-RAC
(b) Plug in the 12.1.0.2 non-CDB as a PDB
(c) Upgrade the PDB to 18c/19c (this is not done automatically)
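For illustration, the plug-in flow in (a)-(c) could be sketched roughly like this (the PDB name `mypdb` and the manifest path are placeholders, not from this thread):

```sql
-- On the 12.1 source (non-CDB), open read-only and generate the manifest:
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');
SHUTDOWN IMMEDIATE

-- On the target CDB: plug in, adapt the dictionary, then open the PDB
CREATE PLUGGABLE DATABASE mypdb USING '/tmp/noncdb.xml' NOCOPY;
ALTER SESSION SET CONTAINER = mypdb;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE mypdb OPEN;
```

After opening, check PDB_PLUG_IN_VIOLATIONS for anything left to fix; the dictionary upgrade to the CDB's release still has to be run separately, as noted in (c).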
If your database is small, using Data Pump (instead of plugging in the database) can be an option (reorganization of indexes and tables, getting rid of unnecessary database options and schemas, etc.).
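As a sketch, such a full Data Pump export could be driven by a parameter file like the following (directory object, file names, and parallelism are made-up examples):

```sql
-- full_exp.par – example expdp parameter file for "expdp system PARFILE=full_exp.par"
-- FULL=Y            : export the whole database
-- DIRECTORY=dp_dir  : a directory object that must already exist on the server
-- PARALLEL=4        : adjust to your CPU/I/O capacity
FULL=Y
DIRECTORY=dp_dir
DUMPFILE=full_%U.dmp
LOGFILE=exp_full.log
PARALLEL=4
```

The matching impdp run against the new PDB would use the same dump files; this is where the "reorganization" benefit comes from, since everything is recreated from scratch.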
The following MOS-Notes can be helpful:
- How to Convert Non-CDB to PDB (Doc ID 2288024.1)
- Best-practice Order for Multiple Changes Involving nonCDB to PDB Migration (Doc ID 2534697.1)
When thinking about it, I'm not sure if plugging a 12.1 non-CDB into an 18c CDB will work, because this involves two steps (converting the data dictionary from a non-CDB dictionary to a PDB dictionary, and upgrading the data dictionary), so maybe Transportable Tablespaces or Data Pump are the better options.
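For comparison, a Transportable Tablespaces run might look roughly like this (tablespace name, directory object, and datafile path are all placeholders):

```sql
-- Rough TTS sketch. 1) On the 12.1 source: make the tablespaces read-only
ALTER TABLESPACE users READ ONLY;

-- 2) Export only the metadata (OS command, shown here as a comment):
--    expdp system DIRECTORY=dp_dir TRANSPORT_TABLESPACES=users DUMPFILE=tts.dmp

-- 3) Copy the datafiles to the target server, then import the metadata there:
--    impdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp
--          TRANSPORT_DATAFILES='/u02/oradata/users01.dbf'

-- 4) Back to read-write on the target
ALTER TABLESPACE users READ WRITE;
```

The downtime window is dominated by the datafile copy, not by the metadata export/import, which is why TTS is attractive for bigger databases.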
All you need is 19c GI, a 12.1.0.2 DB home and a 19c DB home. (Right, it makes no sense to upgrade to 18c; your managers will say "are they really that 'not clever'? They migrated the DB to a new server, had XY hours of downtime, and now they want downtime again for 30 minutes to upgrade it once more.")
Oracle sells Multitenant as "unplug, plug, and you have a higher release"; sadly, it does not work like that. You still have to upgrade.
So a clever way might look something like this:
1. Install the OS, 19c GI and DB homes, and a 12.1.0.2 DB home (with the patches you currently have).
2. Create a physical standby, set flashback on on the standby, create a restore point, and start recovery to get in sync.
3. Convert it to a logical standby, upgrade it to 19c, and let it recover. In the meantime, prepare an empty 19c CDB.
4. Downtime begins: with the databases in sync, stop the primary (keep it as fallback) and convert the standby to primary.
5. Describe the non-CDB, unplug it, and plug it into the CDB with the COPY option (it takes more time, but then your datafiles can still be continuously recovered if the upgrade fails; you would simply start the 19c non-CDB).
6. Run noncdb_to_pdb and you are done.
If you try it out once, you might finish in 45 minutes; that might be the real downtime. But you need twice the space, so if it's a 100TB database you will probably "risk" the plug-in part with NOCOPY. Anyway, the test case is always available: after you have upgraded to 19c, you can flash back to the restore point and continue recovery, but you will probably need a lot of FRA (on Exadata called RECO). Try it first; then you will know how long it takes and whether it works.
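For reference, the key standby/flashback commands in that plan might look like this (the restore-point name is an example; exact syntax depends on your Data Guard setup):

```sql
-- On the standby, before the upgrade: enable flashback and set a fallback point
ALTER DATABASE FLASHBACK ON;
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- Convert the physical standby for a rolling upgrade (transient logical standby)
ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;

-- When downtime begins and the databases are in sync: make the standby primary
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Fallback at any time: rewind and resume recovery from the restore point
FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
```

The guaranteed restore point is what makes the rehearsal repeatable, at the cost of the FRA/RECO space mentioned above.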
Hello, thanks for the information. This helps me a lot. I think my questions are answered.
@all: Thanks for the information and links.
@Markus Flechtner: The database is too big for Data Pump, over 6TB.
@PS_orclNerd: I have no hardware and no storage space to create a standby database.
You have new servers. Or how do you plan to migrate it to the new systems? I don't think it's supported or certified to run a cluster on 4 nodes with 2 nodes on EL5 and 2 nodes on EL7. It might work, but you would need to extend the configuration to 4 nodes and connect these servers to the same networks.