Thank you for the doc.
I have seen the document, but it does not cover further options for this specific migration, such as TTS, Data Pump and others.
Usually I would prefer a manual upgrade, but I have a different endianness, a lower source database version, a high volume of data, and the source and target systems are in different geographic locations (with possibly high network latency).
My only options are exp/imp, or a two-step migration that first upgrades the database to at least 10g.
If you want to use cross-platform TTS and you have problems with network performance over the distance between the 9i database and the Exadata, you can use Data Guard (or manually apply archives) to an NFS storage system near the Exadata. If you do not have a Solaris machine near the Exadata machine, use a Solaris machine on your primary site (it is even possible to use your current 9i database server, but take care when you configure Data Guard on the same machine, because you can easily damage your primary database). When you start your downtime, the data is already on NFS storage near the Exadata and you do not have to bother with your 500 km network distance. Upgrade this 9i system on the new storage to 10g or 11g, then use a cross-platform TTS export to move the data from the NFS storage attached to the Exadata into ASM on the Exadata.
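After the intermediate upgrade, the cross-platform move could look roughly like this. This is only a sketch: the tablespace name DATA_TS, the directory object, the staging path and the ASM file name are made up, and the exact platform string must be checked against V$TRANSPORTABLE_PLATFORM on your systems:

```sql
-- On the upgraded (10g/11g) source: check the tablespace is self-contained
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('DATA_TS', TRUE);
SELECT * FROM transport_set_violations;   -- must return no rows
ALTER TABLESPACE data_ts READ ONLY;

-- Export only the tablespace metadata with Data Pump
-- $ expdp system DIRECTORY=dp_dir DUMPFILE=tts_data.dmp TRANSPORT_TABLESPACES=data_ts

-- Convert the big-endian Solaris datafiles for little-endian Linux with RMAN
-- RMAN> CONVERT TABLESPACE data_ts
--         TO PLATFORM 'Linux x86 64-bit'
--         FORMAT '/nfs/stage/%U';

-- On Exadata: plug the converted files in, then reopen
-- $ impdp system DIRECTORY=dp_dir DUMPFILE=tts_data.dmp
--         TRANSPORT_DATAFILES='+DATA/mydb/datafile/data_ts_01.dbf'
ALTER TABLESPACE data_ts READ WRITE;
```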
It's a good plan, similar to a migration with GoldenGate. I need to talk to the customer and convince him not to use insert-as-select, and to gather information about the network in Italy from south to north....
The best way for your plan is to take one Solaris machine in the same datacenter as the Exadata.... In that case I would prefer to clone the physical database onto this machine, perform the manual upgrade, and use TTS for the migration.
TTS is a great solution, but I need one intermediate step.
I think you don't disable constraints to avoid constraint errors (why should there be constraint errors?), but to avoid constraint checking, and therefore save a lot of time during the import, if you enable the constraints with the NOVALIDATE option after the import. If network bandwidth is a problem, you can try to reduce the amount of data to transmit by compressing the data before copying. If you have a 2 TB schema, then in most cases you will have about 1 TB of data and 1 TB of indexes. If you compress the table data, it will be about 200-500 GB. I think this is the amount of data you would have to copy, but the actual numbers may be different.
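A minimal sketch of the disable/re-enable pattern (the table and constraint names here are made up):

```sql
-- Before the import: skip constraint checking entirely
ALTER TABLE orders DISABLE CONSTRAINT fk_orders_customer;

-- After the import: trust the imported rows, enforce only new DML.
-- NOVALIDATE skips the full-table check, so this returns immediately.
ALTER TABLE orders ENABLE NOVALIDATE CONSTRAINT fk_orders_customer;
```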
You can parallelize the export by exporting every table separately. You should not only enable the constraints manually (that means it should not be done by the import but by a script), but you should also create the indexes with scripts (and appropriate parallelism). But all these methods to accelerate the migration add complexity to the procedure and increase both the risk of failure and the time needed to prepare the migration, so you should only use them if it is really necessary.
Which intermediate step do you mean?
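With the original 9i exp/imp, the per-table export and scripted index creation could look roughly like this (a sketch only; the schema name, table names and file names are assumptions):

```shell
# One export job per large table, started in parallel
exp system/*** FILE=orders.dmp   TABLES=app.orders   DIRECT=Y &
exp system/*** FILE=invoices.dmp TABLES=app.invoices DIRECT=Y &
wait

# Import the data only: no index builds, no constraint checking
imp system/*** FILE=orders.dmp IGNORE=Y INDEXES=N CONSTRAINTS=N

# Write the index DDL to a script instead of executing it
imp system/*** FILE=orders.dmp INDEXFILE=orders_idx.sql

# Edit orders_idx.sql to add PARALLEL/NOLOGGING, then run it in sqlplus
```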
Yes I can reduce the time for constraint checking.
For parallelism I have another problem: one table alone is 1.2 TB of data (this table is partitioned). I think the best way is not to increase the complexity, and (correct me if I'm wrong) this plan is the safest:
verify the possibility of parallelism (I am thinking of performing a parallel schema export)
export the data using direct path to increase the speed of the export
compress the dump and transfer it to the target machine
do a simple import of the data into Exadata
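The export and compression steps can be combined so the dump is compressed on the fly through a named pipe, which also halves the staging space needed. A sketch of this classic trick, with placeholder paths, schema name and credentials:

```shell
# Source side: exp writes into a pipe, gzip reads from it
mknod /tmp/exp_pipe p
gzip -1 < /tmp/exp_pipe > /stage/app_schema.dmp.gz &
exp system/*** OWNER=app FILE=/tmp/exp_pipe DIRECT=Y RECORDLENGTH=65535

# Copy /stage/app_schema.dmp.gz to the target,
# then feed imp through a pipe as well
mknod /tmp/imp_pipe p
gunzip -c /stage/app_schema.dmp.gz > /tmp/imp_pipe &
imp system/*** FULL=Y FILE=/tmp/imp_pipe
```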
By intermediate step I mean first upgrading the source database to 11gR2 and then using the TTS strategy.
Yes, the intermediate upgrade step is necessary, but it does not depend on the size of the data, and you can prepare it carefully. The time needed for the upgrade will be no more than 2 or 3 hours. Before the upgrade you can set all tablespaces except SYSTEM and the UNDO tablespaces to read-only mode. If you want to fall back after the upgrade, you only have to restore the SYSTEM and UNDO tablespaces, because the other tablespaces were not modified. Once upgraded, you can use not only XTTS but also parallel export/import with Data Pump. I think parallel Data Pump export/import will work for partitioned tables; if you use the traditional export, I think you have to use a separate export job for every partition.
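The read-only fallback preparation might look like this (a sketch; it assumes the application tablespaces are named DATA and INDX):

```sql
-- Before starting the upgrade: freeze all application tablespaces
ALTER TABLESPACE data READ ONLY;
ALTER TABLESPACE indx READ ONLY;

-- A fallback after a failed upgrade then only needs a restore of
-- SYSTEM and UNDO from the pre-upgrade backup; DATA and INDX were
-- never written to and can be reused as they are.
```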
Yes, I think your plan is the safest. An advantage of exp/imp is also the table reorganization. Maybe you can use HCC on some older partitions. Also check whether you really need all indexes on Exadata: if you can avoid creating some indexes you will save time. There are always indexes in a database that are not used. It is also possible that some indexes are used in your 9i database but would not be needed on Exadata, because on Exadata full table scans (maybe in combination with storage indexes or something else) are more efficient.
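Unused indexes can be identified before the migration with index usage monitoring, which is available from 9i onward. A sketch, with a made-up index name (note that V$OBJECT_USAGE only shows indexes owned by the querying user):

```sql
-- Ask Oracle to record whether the optimizer ever uses this index
ALTER INDEX app.orders_status_idx MONITORING USAGE;

-- ... let a representative workload run, then check as the index owner:
SELECT index_name, used, monitoring
  FROM v$object_usage
 WHERE index_name = 'ORDERS_STATUS_IDX';

-- Stop monitoring when done
ALTER INDEX app.orders_status_idx NOMONITORING USAGE;
```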
If I take an intermediate upgrade, I would perform the database clone on another Solaris machine with a manual upgrade, so as not to compromise the original database... but this depends on my customer... and my customer wants to use insert-as-select for the transport of the 2 TB. The best alternative without the intermediate step is exp and imp....