There is no limit on the count of tables, but depending on the version there is a size limit on the string.
when you specify
expdp user/password tables=string_goes_here
The string was being stored in a character string of only 4000 characters, so the limit was the number of characters in the string. This changed (I think some time in 11g) to a CLOB, so that limitation went away.
Are you getting an error? If so, what error are you getting, and what version are you on?
Yes, I have a problem when I want to export 300 tables with Data Pump.
This is my file:
Schema.Mytable:partition2 -- average 50 characters
Each line averages 50 characters and I have 300 tables, so 300 lines. After the TABLES parameter I have about 15,000 characters.
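To see why that blows past the old limit, here is a quick sketch (run outside the database; the schema, table and partition names are made up for illustration) that builds a comparable TABLES value and prints its length:

```shell
# Hypothetical illustration: build a TABLES value from 300 entries of ~50 chars
tables=""
for i in $(seq 1 300); do
  tables="${tables},SCHEMA.MYTABLE${i}:PARTITION2"
done
tables="${tables#,}"   # strip the leading comma
echo "${#tables}"      # length is well above the old 4000-character limit
```

Any list of a few hundred schema.table:partition entries ends up several times larger than a VARCHAR2(4000) can hold.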
This is the error message:
ORA-39001: invalid argument value
ORA-39071: Value for TABLES is badly formed.
I'm on Oracle 10gR2.
I'm not sure if there is a patch available for 10.2.0.4 but you should check.
Here are some alternate approaches:
1. In 10.2.0.4, all of the tables need to be owned by the same schema, so if you can run the job as the schema that owns the tables, you can drop the "schema." prefix from all of your table names.
Let's say all of your tables are owned by scott and you are running the job as scott: if you say tables=foo, then Data Pump will export scott.foo. If you run the job as system, then Data Pump will look for system.foo. In 10.2.0.4, Data Pump did not support table-mode exports where tables were owned by different schemas.
10.2.0.4 - you can't say: expdp system/manager tables=scott.emp, hr.employees ...
In a later release - you can use: expdp system/manager tables=scott.emp, hr.employees ...
I don't recall exactly which release lifted that restriction.
2. If you are exporting all of the partitions in a table, you don't need to add the partitions. By default, Data Pump exports all partitions of a table.
So instead of saying expdp tables=user1.table1:part1,user1.table1:part2
you could say expdp tables=user1.table1
3. I have not used this approach, but from what I have been told, this should work.
Exporting using tables=xxx and exporting using schemas=sss include=table... will give you the same results in the dumpfile. Let's say you have table tab1 and an index on tab1 called ind1. If you export
expdp user/password tables=tab1
you will get table tab1 and index ind1
If you say
expdp user/password schemas=my_schema include=TABLE:"= 'TAB1'"
You will export tab1 and ind1.
So, if the two solutions above don't work, you could do something like this (note that this requires exporting all partitions in a table):
create table my_exp_tabs (a varchar2(30));
insert into my_exp_tabs values ('TAB1');
expdp user/password schemas=my_schema include=TABLE:"IN (SELECT A FROM MY_EXP_TABS)" ...
Hope this helps.
For those who may run into this problem, the solution is:
Create a table containing the list of all the tables to export:
CREATE TABLE expdp_tab (owner VARCHAR2(30), object_name VARCHAR2(128), object_type VARCHAR2(19));
INSERT INTO expdp_tab VALUES ('schema','table_name','TABLE');
and then replace TABLES=(…) in parfile with this:
INCLUDE=TABLE:"IN (SELECT OBJECT_NAME FROM EXPDP_TAB)"
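To put the whole thing together, here is a sketch of such a parfile. The file name, directory object, and dumpfile/logfile names are placeholders I've assumed, not from the thread; adjust them for your site:

```
# expdp_tabs.par -- hypothetical parfile name
schemas=my_schema
include=TABLE:"IN (SELECT OBJECT_NAME FROM EXPDP_TAB)"
directory=DATA_PUMP_DIR
dumpfile=expdp_tabs.dmp
logfile=expdp_tabs.log
```

Then run it with expdp user/password parfile=expdp_tabs.par. A side benefit of the parfile is that you don't have to escape the double quotes in the INCLUDE clause on the shell command line.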