Srini, thanks for the info. I have read the manual and also checked the database; it looks exactly like what I want.
Unfortunately we don't have the CSSCAN utility installed on the test/prod databases. I spoke with the developer and he's ready to update the records, but he needs the list. Since it's a production database, it will take months to get approval from management before we can install anything new.
I know I'm pushing my luck, but without the CSSCAN utility is it possible to get a list of the problem rows/columns?
You could try something like this to find the problem rows using the CONVERT function. It gives you the basics anyway; just wrap it in something that displays the ROWID for anything larger than size 'X'.
create table test (col1 varchar2(1));
insert into test values ('Ü');
1 row created.
-- length in the current database character set
select max(length(col1)) from test;
-- length after conversion to AL32UTF8; a larger value means the data expands
select max(length(convert(col1,'AL32UTF8'))) from test;
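Building on the demo above, the ROWID wrapper might be sketched like this (assuming the table and column names from the example, and AL32UTF8 as the target character set; substitute your own and repeat per column/table):

```sql
-- Flag rows whose data grows when converted to AL32UTF8 --
-- roughly the rows CSSCAN would have flagged for this column.
select rowid,
       col1,
       length(col1)                       as current_len,
       length(convert(col1, 'AL32UTF8'))  as converted_len
from   test
where  length(convert(col1, 'AL32UTF8')) > length(col1);
```

The predicate doubles as the "size 'X'" check: replace `> length(col1)` with `> X` if you only care about rows that would overflow the column after conversion.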
Srini and others,
You do not have to use a single export file. You can export with parallel 100 if you want and then import with parallel 1. You can have 1 file or 1,000 files; it does not matter to import. Run your export command as you normally would and then run your import command using only the features allowed in SE.
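As a sketch, the command pair might look like this (credentials, directory object, and schema are placeholders; parallel export requires Enterprise Edition on the source, while the import stays within SE's limits):

```shell
# Export in parallel on the source; %U expands into one
# file per worker: exp_01.dmp, exp_02.dmp, ...
expdp scott/tiger directory=DP_DIR dumpfile=exp_%U.dmp \
      logfile=exp.log schemas=scott parallel=4

# Import serially on the Standard Edition target; all dump
# files are still read, but only one worker runs at a time.
impdp scott/tiger directory=DP_DIR dumpfile=exp_%U.dmp \
      logfile=imp.log parallel=1
```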
Data Pump will only create objects serially, but it may read dump files in parallel. Let's say you exported scott.emp and the external tables access method was used. If the first PQ slave wrote to file 1 and the second PQ slave wrote to file 2, both files would be read during import while scott.emp was being loaded.
All that parallel=1 means is that one worker process will be working at a time.
Referential constraints are only created after the data is loaded, so there won't be a problem with them at any value of parallel.
Hope this helps.