You can use XTTS (if the platform endianness is different) or TTS to migrate tablespaces one by one. Once a tablespace is migrated, it can be compressed to make room for the remaining tablespaces.
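As a rough sketch of the TTS flow (the tablespace name, directory object, and file paths below are placeholders, not from the original post):

```sql
-- On the source: make the tablespace read only, then export its metadata
ALTER TABLESPACE users_ts READ ONLY;
-- From the OS shell:
--   expdp system DIRECTORY=dp_dir DUMPFILE=users_ts.dmp TRANSPORT_TABLESPACES=users_ts
-- Copy the datafiles to Exadata (for a different-endian platform, run them
-- through RMAN CONVERT first), then import the metadata on the target:
--   impdp system DIRECTORY=dp_dir DUMPFILE=users_ts.dmp \
--     TRANSPORT_DATAFILES='+DATA/mydb/datafile/users_ts.dbf'
-- Finally, reopen the tablespace for writes on the target:
ALTER TABLESPACE users_ts READ WRITE;
```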
Another, more laborious, option is to drop the REDO/DBFS disk groups (which you might have) and assign that space to the DATA disk group only, so you have enough room for RMAN or import/export to work. Once the migration is over you can compress the data and release the space back to the REDO/DBFS disk groups (of course you will need to recreate them).
BTW, how much compression you get depends on the data, so it is not always true that you can compress data to 1/10 (with either ARCHIVE HIGH or QUERY HIGH). Hope that helps.
Thanks for the reply.
I am using RHEL 5.5 64-bit on the source, and on Exadata it is OEL 5.5 64-bit. I am now in production, so I cannot drop the REDO and DBFS disk groups. Can we instead use db links, creating the table structures and importing the data into the Exadata machine? Are there any recommended steps for TTS from the source to Exadata?
Hi Bobs,
Assuming your data is already in an Oracle database, I'd definitely suggest looking at direct path inserts across db links. Tanel Poder has an excellent presentation on a large-scale migration of this type: http://www.slideshare.net/tanelp/tanel-poder-performance-stories-from-exadata-migrations
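A minimal sketch of a direct path insert over a database link (the link name, connection details, and table name below are assumptions for illustration):

```sql
-- Create a link pointing at the source database
CREATE DATABASE LINK srcdb
  CONNECT TO app_user IDENTIFIED BY app_pass
  USING 'srchost:1521/srcservice';

-- The APPEND hint requests a direct path load (writes above the
-- high-water mark, bypassing the buffer cache)
INSERT /*+ APPEND */ INTO sales
SELECT * FROM sales@srcdb;

-- A direct path insert must be committed before the session
-- can read the table again
COMMIT;
```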
Hope this helps!
Thanks for the link.
Is there any detailed documentation on how we can insert the massive data through dblinks from source server to Exadata.
I think Tanel's presentation is one of the better resources out there for designing this type of migration strategy. The book "Expert Oracle Exadata" also has a good section on data migration.
In general, though, if you're looking to move large volumes of data over database links I would suggest:
- Avoid staging data; instead, try to stream directly to the final destination
- Keep all those Exadata disks and CPUs busy, which means a high degree of parallelism
- Remember that a single database link is inherently serial; you will therefore want to parallelize over multiple links
- Make use of direct path loads, and defer index/constraint creation until after the data loads are complete
- Physically sort the data where you can; sorted data typically improves HCC compression ratios
- Make sure your network can keep up with your data load volumes. InfiniBand is your friend!
- Give some thought to how you organize your data: migration is an excellent time to make data model and partitioning changes that are very difficult afterwards
- Test your data load procedures with smaller data volumes before moving on to larger ones
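Several of the points above can be combined in one statement. A hedged sketch (table, column, and link names are made up; a single link still fetches serially, so in practice you would run one such statement per link/partition range):

```sql
-- Enable parallel DML in the loading session
ALTER SESSION ENABLE PARALLEL DML;

-- Create the target table compressed, in parallel, and without any
-- indexes or constraints yet; ORDER BY gives HCC sorted input
CREATE TABLE sales_hist
  COMPRESS FOR QUERY HIGH
  PARALLEL 16
AS SELECT /*+ PARALLEL(16) */ *
   FROM sales_hist@srcdb
   ORDER BY sale_date;

-- Build indexes and constraints only after the load completes
CREATE INDEX sales_hist_dt_ix ON sales_hist (sale_date) PARALLEL 16;
ALTER TABLE sales_hist ADD CONSTRAINT sales_hist_pk PRIMARY KEY (sale_id);
```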
If you can't afford the downtime, you could look into replication tools like GoldenGate or SharePlex.
This gives you time to build your database on Exadata and implement HCC compression for static data or OLTP compression for dynamic data.
The original database stays up and running, and changes are replicated to the Exadata database during the build phase.
When the build is completed you stop (or reverse) replication and repoint the application to the Exadata database.
I must admit it will involve additional license costs.
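For illustration, the two compression flavors mentioned for static versus dynamic data might be declared like this (table names and the date cutoff are placeholders):

```sql
-- Static/archival data: Hybrid Columnar Compression (Exadata storage only)
CREATE TABLE orders_archive
  COMPRESS FOR ARCHIVE HIGH
AS SELECT * FROM orders WHERE order_date < DATE '2010-01-01';

-- Actively updated data: OLTP compression, which keeps rows
-- compressed through conventional inserts and updates
CREATE TABLE orders_active
  COMPRESS FOR OLTP
AS SELECT * FROM orders WHERE order_date >= DATE '2010-01-01';
```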