Hi to all,
We need to migrate our 10gR2 single-instance database, which sits on a conventional file system, to a two-node 11gR2 RAC on ASM (on the same Windows Server platform…).
Could you please help me with this: how can I migrate my production database using Data Pump?
I have a full Data Pump export from the source, but I don't know how to import it. Should I go schema by schema or do a full import? Do I need to create the tablespaces manually on the destination first? Should I exclude indexes, constraints, statistics…??
You can do the full import. It might be easier for you to create the tablespaces first, but you don't have to. If your file structure is different (which it will be in your case), you can use the REMAP_DATAFILE parameter. You do not want to exclude the constraints and indexes. For stats, you should test this in your test environment.
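If you let the full import create the tablespaces, a parameter file along these lines could handle the path change. This is only a sketch: the directory object, dump file name, source datafile path, and the +DATA disk group are assumptions, not taken from your environment, and REMAP_DATAFILE must be repeated once per datafile.

```
# Hypothetical impdp parameter file -- all names and paths are placeholders
full=y
directory=DATA_PUMP_DIR
dumpfile=prod_full.dmp
logfile=prod_full_imp.log
# Map an old Windows datafile path to a file in the ASM disk group
remap_datafile='D:\ORACLE\ORADATA\PROD\USERS01.DBF':'+DATA'
```

You would then run something like impdp system parfile=prod_full_imp.par on the target.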
>> Should I go schema by schema or do a full import? Do I need to create the tablespaces manually on the destination first? Should I exclude indexes, constraints, statistics…??
1. You have both choices. To import schema by schema, add the parameter SCHEMAS=<list of schemas>.
2. If you want, you can create the tablespaces on the target with the same names and sizes (they should be greater than the used size on the source), or you can use the REMAP_TABLESPACE option.
3. If your database is large, you can exclude the indexes and create them manually with the PARALLEL and NOLOGGING options for a faster result.
4. You can also exclude statistics; it is better to gather fresh stats after importing the data.
Use CLUSTER=N during the import for a RAC database.
You can find the command-line help at the link provided by TSharma.
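Putting points 1 to 4 (and CLUSTER=N) together, a schema-level import might use a parameter file like this. The schema names, tablespace names, and file names are made-up examples, not from your system:

```
# Hypothetical parameter file; HR, SALES and the tablespace names are examples
schemas=HR,SALES
directory=DATA_PUMP_DIR
dumpfile=prod_full.dmp
logfile=schema_imp.log
remap_tablespace=USERS_OLD:USERS
exclude=index
exclude=statistics
cluster=n
```

Note that each object type to skip gets its own EXCLUDE line; if you exclude indexes you will need to recreate them yourself afterwards.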
Since you are changing to ASM, I would create the tablespaces before running the import. Trying to do that within an import job can be difficult, especially now that you are going into ASM.
Everything else can then be imported by running the import with full=y exclude=tablespace ...
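Pre-creating a tablespace in ASM is straightforward if you let Oracle Managed Files pick the datafile name; a sketch, assuming a disk group called +DATA and a tablespace name of my own invention (adjust names and sizes to your environment):

```
-- Point OMF at the ASM disk group (instance-level setting)
ALTER SYSTEM SET db_create_file_dest = '+DATA';

-- The datafile name is generated by OMF inside the disk group
CREATE TABLESPACE app_data DATAFILE SIZE 10G AUTOEXTEND ON;
```

Repeat the CREATE TABLESPACE for each tablespace in the source database, then run the full import with exclude=tablespace.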
The statistics can take a long time to import (depending on what you have in the dump file), so exclude=statistics is often something to think about. You can always regather them once the import is done.
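Regathering afterwards is a single DBMS_STATS call; for example (the degree value here is arbitrary):

```
-- Gather fresh optimizer statistics for the whole database after the import
BEGIN
  DBMS_STATS.GATHER_DATABASE_STATS(degree => 4);
END;
/
```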
Indexes - I'm not sure why people say to skip them. If you run your job with parallel=x, Data Pump will create the indexes serially, but it uses multiple PQ slaves to build each index in parallel, up to the value 'x' supplied on the import command.
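For example, assuming the export was written as multiple pieces with a %U substitution variable (the file names and parallel degree here are illustrative), the whole import can run in parallel:

```
impdp system full=y directory=DATA_PUMP_DIR dumpfile=prod_full_%U.dmp parallel=4 cluster=n logfile=imp.log
```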