Check the explain plan for the CREATE INDEX statement. That will tell you approximately how much time it would take.
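For example, you can run EXPLAIN PLAN against the DDL itself without building the index; a minimal sketch (table, column, and index names are placeholders):

```sql
-- Explain the index build without actually creating the index
EXPLAIN PLAN FOR
  CREATE INDEX my_idx ON my_table (my_col);

-- The TIME column of the displayed plan holds the optimizer's estimate
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```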
Rah, I find your post difficult to understand. You mention needing to export data, but then you mention "one column". Do you mean copying data? An export/import operation using exp or expdp is done on a row basis; the only way to export a single column would be if the target table has only one column defined.
If you can create a database link between the two databases, you could use a query to find the rows in your local version of the table that are not in the remote version, then use that query to insert the data via the database link. Alternatively, use the exp QUERY= parameter or the Data Pump equivalent to extract only those rows of interest.
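A sketch of the first approach, assuming both tables are named t with identical structure and the link is called remote_db (all names are placeholders):

```sql
-- Push only the rows that exist locally but not on the remote side
INSERT INTO t@remote_db
SELECT * FROM t
MINUS
SELECT * FROM t@remote_db;
```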
HTH -- Mark D Powell --
It sounds like you really want to do an upsert. Google for examples of the MERGE statement with DML error logging to capture the rows that would otherwise fail to add.
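A rough sketch of that pattern, assuming the error-logging table is created first with DBMS_ERRLOG (all object and column names here are placeholders):

```sql
-- One-time setup: create the err$_ logging table for the target
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET_T');

-- Upsert; rows that raise errors land in the log table instead of failing the statement
MERGE INTO target_t t
USING source_t s
  ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN
  INSERT (id, val) VALUES (s.id, s.val)
LOG ERRORS INTO err$_target_t REJECT LIMIT UNLIMITED;
```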
There may be some other options if you give us more detail, such as using original export with a pipe and compression, or transportable tablespaces. How much data is there? How far off is the space limit? What kind of data is it? What version and platform are you on? The way you put it about the indexes makes me wonder about how this data is used.
If you are running 11g, the table is not very large, and the source and target tables have the same structure, I'd recommend creating a database link and using the DBMS_COMPARISON package.
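Roughly like this; the schema, table, comparison, and link names below are placeholders:

```sql
-- Define a comparison between the local table and its remote copy
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'cmp_t',
    schema_name     => 'SCOTT',
    object_name     => 'T',
    dblink_name     => 'REMOTE_DB');
END;
/

-- Run the comparison; row differences are recorded for inspection
DECLARE
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
  consistent BOOLEAN;
BEGIN
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'cmp_t',
    scan_info       => scan_info,
    perform_row_dif => TRUE);
END;
/
```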
$ expdp -help
Predicate clause used to export a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".
You can't use the QUERY option to export just one column.
What about using SPOOL? Spool the column to a file and load it later using TOAD or another tool:

SPOOL name.csv
SELECT col_name FROM table_name;
SPOOL OFF
Is there a unique key on the destination table? Can you create a database link from source to destination and then just do a straight insert, making use of the IGNORE_ROW_ON_DUPKEY_INDEX hint to suppress the duplicate errors?
Would that work?
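Something along these lines, assuming an 11g+ destination with a unique index named t_uk and a link called source_db (all names are placeholders):

```sql
-- 11g+: silently skip rows that would violate the unique index t_uk
INSERT /*+ IGNORE_ROW_ON_DUPKEY_INDEX(t, t_uk) */ INTO t
SELECT * FROM t@source_db;
```

Note the hint takes the table and the unique index (or a column list) as arguments, and it only applies to single-table INSERT ... SELECT statements.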