Full database export/import may be the better option.
It's got very large tables, and the application only updates a fraction of them. Import/export is not an option. Thanks.
If database is in archivelog mode, you can use log mining.
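For reference, a minimal LogMiner session might look like the sketch below; the archived log file path and the schema name are placeholders, and in practice you would add every log file covering the window you want to mine.

```sql
-- Hedged sketch of a LogMiner session (log path and schema are placeholders)
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/arch/arch_1_100.arc', DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Pull the redo SQL for the schema of interest
SELECT scn, sql_redo
  FROM v$logmnr_contents
 WHERE seg_owner = 'SCOTT';

EXECUTE DBMS_LOGMNR.END_LOGMNR;
```

You could then replay the captured sql_redo statements on the target database, though a logical standby or Streams does essentially this for you.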
Please define what you mean by "very large" and "fraction".
Cloning the entire database periodically is an option. See MOS Doc 562556.1 (How to Manually Clone a Database to Another Node). Exp/Imp has already been suggested. Using Streams may also be an option - http://download.oracle.com/docs/cd/B10501_01/server.920/a96571/strmover.htm
Very true Srini. A logical standby may be one of the options. It all depends on the environment and requirements.
500 million rows, 70 GB, with updates of about 5 million rows a month.
Personally speaking, 70GB is a pretty small database :-) I would simply refresh/clone the database periodically as needed - the process should not take more than a few hours at most.
What about Transportable Tablespaces?
You just have to export/import the metadata that describes the tablespaces and copy the corresponding datafiles. The set of tablespaces must be self-contained and put in read-only mode during the metadata export and datafile copy. It can be much faster than export/import in Owner or Full mode.
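As a rough sketch of that flow in 9i with the original exp/imp tools (the tablespace, directory, and file names below are made up):

```sql
-- 1. Check the tablespace set is self-contained
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('APP_DATA', TRUE);
SELECT * FROM transport_set_violations;

-- 2. Put the tablespace in read-only mode
ALTER TABLESPACE app_data READ ONLY;

-- 3. Export only the metadata (run from the OS shell):
--    exp system/manager transport_tablespace=y tablespaces=app_data file=tts.dmp

-- 4. Copy the datafiles to the target host, then import the metadata there:
--    imp system/manager transport_tablespace=y datafiles='/u02/app_data01.dbf' file=tts.dmp

-- 5. Back on the source, restore read-write mode
ALTER TABLESPACE app_data READ WRITE;
```

Since only metadata goes through exp/imp, the elapsed time is dominated by the datafile copy.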
Here is a link about this method, which is available in Oracle 9i:
Hope this helps.
I have not tried this, but I wonder if using the query parameter would work. Something like:
query=schema.table:'"minus select * from schema.table as of scn (LAST_SCN_UPDATE)"'
Let's say your latest backup is as of SCN 123456 and the current SCN is 234567. You would use:
query=schema.table:'"minus select * from schema.table as of scn (123456)"'
The next time you would use 234567, and so on. Flashback query uses undo, so you would need lots of undo retention, but if that is not a problem, then maybe it would work.
Again, I have not tried this, so I would recommend trying it on a small test table to see what happens. The query parameter may not accept this type of query, but I think it would.
I had some time so I tried it. Here is what I did.
1. Got my current SCN:
select dbms_flashback.get_system_change_number from dual;
2. Added a row to the scott.emp table:
insert into scott.emp values (123,'dean', 'chief', 000, sysdate, 12567, 12566, 12);
select count(*) from scott.emp;
3. Ran this expdp command:
expdp system/manager tables=scott.emp directory=dpump_dir dumpfile=snew.dmp query=scott.emp:'"minus select * from scott.emp as of scn(596018)"'
. . exported "SCOTT"."EMP" 8.039 KB 1 rows
So, it only exported 1 row even though there were 15 rows in the table. I then ran the normal import and looked at the table: there were 2 rows with the ename of dean, since the imported row was appended to the existing data. I think this should work if you have enough undo to handle the flashback query.