We are upgrading and moving our database to a different server.
The move is from a 10g Release 1 database on Solaris to 11g Release 2 on Linux.
Our database uses Oracle Change Data Capture.
What is the best way to perform this migration? Ideally, unlike the approach below, it would avoid manual steps to drop and re-create CDC subscriptions, change tables, and so on.
Considerations for Exporting and Importing Change Data Capture Objects
Starting in Oracle Database 10g, Oracle Data Pump is the supported export and import utility for Change Data Capture. Change Data Capture change sources, change sets, change tables, and subscriptions are exported and imported by the Oracle Data Pump expdp and impdp commands with the following restrictions:
After a Data Pump full database import operation completes for a database containing AutoLog Change Data Capture objects, the following steps must be performed to restore these objects:
1. The publisher must manually drop the change tables with the SQL DROP TABLE command. This is needed because the tables are imported without the accompanying Change Data Capture metadata.
2. The publisher must re-create the AutoLog change sources, change sets, and change tables using the appropriate DBMS_CDC_PUBLISH procedures.
3. Subscribers must re-create their subscriptions to the AutoLog change sets.
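For illustration, the three steps might look roughly like the following in SQL*Plus. All object names here are invented, and the exact DBMS_CDC_PUBLISH parameter lists vary by release, so treat this as a sketch rather than a recipe:

```sql
-- Hypothetical names: CDC_PUB (publisher schema), CT_ORDERS (change table),
-- SRC_OLTP (change source), CHG_SET_DW (change set).

-- Step 1: drop the imported change table (it lacks CDC metadata after import)
DROP TABLE cdc_pub.ct_orders;

-- Step 2: re-create the AutoLog change source, change set, and change table
-- (parameter lists abbreviated; check DBMS_CDC_PUBLISH for your release)
BEGIN
  DBMS_CDC_PUBLISH.CREATE_AUTOLOG_CHANGE_SOURCE(
    change_source_name => 'SRC_OLTP',
    description        => 'AutoLog source for the OLTP database',
    source_database    => 'OLTPDB');
  DBMS_CDC_PUBLISH.CREATE_CHANGE_SET(
    change_set_name    => 'CHG_SET_DW',
    change_source_name => 'SRC_OLTP',
    stop_on_ddl        => 'N');
  DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
    owner             => 'CDC_PUB',
    change_table_name => 'CT_ORDERS',
    change_set_name   => 'CHG_SET_DW',
    source_schema     => 'APP',
    source_table      => 'ORDERS',
    column_type_list  => 'ORDER_ID NUMBER, STATUS VARCHAR2(20)',
    capture_values    => 'both',
    rs_id => 'y', row_id => 'n', user_id => 'n',
    timestamp => 'n', object_id => 'n',
    source_colmap => 'n', target_colmap => 'y',
    options_string => NULL);
END;
/
-- Step 3: subscribers re-create their subscriptions via DBMS_CDC_SUBSCRIBE
```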
I opened SR with Oracle Support, they are suggesting to perform full database import/export
"Change Data Capture change sources, change sets, change tables, and subscriptions are exported and imported by the Oracle Data Pump expdp and impdp commands with the following restrictions:
Change Data Capture objects are exported and imported only as part of full database export and import operations (those in which the expdp and impdp commands specify the FULL=y parameter). Schema-level import and export operations include some underlying objects (for example, the table underlying a change table), but not the Change Data Capture metadata needed for change data capture to occur."
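The full export/import that support suggested might look like this. The directory object, credentials, and file names below are placeholders, not from the thread:

```shell
# Placeholders: system/password credentials, DATA_PUMP_DIR directory object.
# Full export on the 10g source (Solaris):
expdp system/password FULL=y DIRECTORY=DATA_PUMP_DIR \
      DUMPFILE=full_db.dmp LOGFILE=full_exp.log

# Copy full_db.dmp to the 11g host, then full import on the Linux target:
impdp system/password FULL=y DIRECTORY=DATA_PUMP_DIR \
      DUMPFILE=full_db.dmp LOGFILE=full_imp.log
```

Remember that per the restriction quoted above, the AutoLog CDC objects still have to be rebuilt manually afterwards.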
CDC has different implementation methods:
You can use the query below to determine which implementation method is in use:
select SOURCE_NAME, SOURCE_DESCRIPTION, CREATED, SOURCE_TYPE, SOURCE_DATABASE, SOURCE_ENABLED from change_sources;
– Synchronous CDC: with this implementation method you capture changes
synchronously on the source database into change tables. This method uses
internal database triggers to enable CDC. Capturing the change is part of the
original transaction that introduces the change thus impacting the performance
of the transaction.
– Asynchronous Autolog CDC: this implementation method requires a staging
database separate from the source database. Asynchronous Autolog CDC uses
the database's redo transport services to transport redo log information from
the source database to the staging database. Changes are captured at the
staging database. The impact to the source system is minimal, but there is some
latency between the original transaction and the change being captured.
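Before migrating, it can also help to inventory the CDC objects that will need rebuilding. A hedged sketch using the publisher-visible CDC views (column names here are from memory and should be verified against your release's data dictionary documentation):

```sql
-- Inventory CDC objects before the migration (run as the publisher).
-- Verify view and column names against your release before relying on this.
SELECT set_name, change_source_name
  FROM change_sets;

SELECT change_set_name, change_table_schema, change_table_name,
       source_schema_name, source_table_name
  FROM change_tables;
```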
As suggested in the document, the same restriction applies (Change Data Capture objects are exported and imported only as part of full database export and import operations, not schema-level ones), with one additional restriction:
■ AutoLog change sources, change sets, and change tables are not supported.
See the documentation section "Re-Creating AutoLog Change Data Capture Objects After an Import Operation" for the recovery steps.
I've done this on 2 of my production DBs. We have three DBs:
1. 220.127.116.11, downstream DW DB
2. 10.2.0.4, source OLTP DB
3. 10.2.0.4, source OLTP DB
We are using asynchronous AutoLog archive CDC.
I wanted to upgrade DBs 2 and 3 to 11g. When I upgraded DB 2 everything went fine and I didn't change anything - testing showed it didn't impact CDC.
However, with DB 3 it broke in testing. Our workaround was to stop all traffic to DB 3, ship all outstanding data to the downstream DB (1), process it, then drop the subscriptions, change tables, change sets and change source. Then upgrade the DB (3) and rebuild CDC afterwards.
It's very easy in our case to rebuild CDC - we have written dynamic code to do this.
Hope that helps,
We decided to do something similar.
We'll back up the CDC change tables, drop the CDC subscriptions and change tables, migrate and upgrade the database, re-create the CDC subscriptions and change tables, and then restore the data from the backup tables into the CDC change tables.
We followed this article by Boris Knizhnik to create a script that automates subscribing a table to CDC.
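A minimal sketch of the backup-and-restore part, assuming a change table CT_ORDERS owned by a CDC_PUB schema (names invented for illustration; whether direct inserts into a change table are appropriate depends on your configuration):

```sql
-- Before dropping the CDC objects: keep a copy of any unprocessed change data
CREATE TABLE cdc_pub.ct_orders_bak AS
  SELECT * FROM cdc_pub.ct_orders;

-- After the migration, upgrade, and CDC rebuild: restore the saved rows
INSERT INTO cdc_pub.ct_orders
  SELECT * FROM cdc_pub.ct_orders_bak;
COMMIT;
```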
Are you sure you want to upgrade and migrate all in the same step? If there are issues, how will you know whether they come from the platform change or from the 10g-to-11g upgrade? I always like to change only one thing at a time, so that if there is a change in behaviour I know exactly what to look at.
The auto-generating scripts found at that site are useful; just be wary that he seems to be referring to 9i, so there may be some changes between 9i and 11g, and the article was published in 2003. Still, I'm sure the code works fine as long as you test it first.
Backing up the tables is a good idea in case something goes wrong, but why do you need to restore the data? Do you use the change tables as a long-term store of data? What happens to the data once it's been processed from the change table?
I only know my own setup, so I'd be intrigued to know how other people use it. In our configuration we read from the change tables via the subscriber views, process the data, and then purge the change tables, so in theory once the data in a change table has been processed it is no longer needed. That's why, when we drop all the CDC items and rebuild, we don't back up the change tables - we ensure they are empty before we start the rebuild process.
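For comparison, our read-then-purge subscriber cycle looks roughly like this (subscription and view names are placeholders):

```sql
-- Advance the subscription window to include newly captured changes
BEGIN
  DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW(subscription_name => 'ORDERS_SUB');
END;
/

-- Process the change data through the subscriber view
SELECT * FROM cdc_sub.orders_sub_view;

-- Once processed, purge the window so the change table rows can be cleaned up
BEGIN
  DBMS_CDC_SUBSCRIBE.PURGE_WINDOW(subscription_name => 'ORDERS_SUB');
END;
/
```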