6 Replies Latest reply: Apr 7, 2011 11:06 AM by Moonlight

    Limit on number of tables with Data Pump

    Moonlight
      Hi,

      When we use the TABLES parameter to export tables with Data Pump, is the number of tables limited or not? How many tables can I put?

      Thanks in advance
        • 1. Re: Limit on number of tables with Data Pump
          Dean Gagne-Oracle
          There is no limit on the number of tables, but depending on the version there is a size limit.

          when you specify

          expdp user/password tables=string_goes_here

          The string used to be stored in a character string of only 4000 characters, so the limit was the number of characters in the string. This changed (I think some time in 11g) to a CLOB, so that limitation went away.

          Are you getting an error? If so, what error are you getting, and what version are you on?
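
          If you are not sure of the exact release, one quick way to check it (from SQL*Plus, for example) is:

          select banner from v$version;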

          Thanks

          Dean
          • 2. Re: Limit on number of tables with Data Pump
            Moonlight
            Hi,

            Yes, I have a problem when I want to export 300 tables with Data Pump.

            This is my parameter file:

            DIRECTORY=DirPUMP
            DUMPFILE=MYDUMP.dmp
            LOGFILE=MYLOG.log
            TABLES=Schema.Mytable:partition1
            Schema.Mytable:partition2 -- average 50 characters
            Schema.Mytable:partition3
            ....

            Each line has on average 50 characters and I have 300 tables, so 300 lines. So, after the TABLES parameter I have about 15000 characters.

            This is the error message:
            ORA-39001: invalid argument value
            ORA-39071: Value for TABLES is badly formed.


            I'm on Oracle 10gR2
            • 3. Re: Limit on number of tables with Data Pump
              Moonlight
              Exactly, Oracle 10.2.0.4
              • 4. Re: Limit on number of tables with Data Pump
                Dean Gagne-Oracle
                I'm not sure if there is a patch available for 10.2.0.4 but you should check.

                Here are some alternate approaches:

                1. In 10.2.0.4, all of the tables need to be owned by the same schema, so if you can run the job from the schema that owns the tables, you could remove the schema prefix from all of your table names.
                Let's say all of your tables are owned by scott and you are running the job as scott: if you say tables=foo, then Data Pump will export scott.foo. If you run the job as system, then
                Data Pump will look for system.foo. In 10.2.0.4, Data Pump did not support table mode exports where tables were owned by different schemas.

                10.2.0.4 - you can't say ---- expdp system/manager tables=scott.emp, hr.employees ...

                11.1.0.6 - you can use --- expdp system/manager tables=scott.emp, hr.employees ...

                At least I think it was 11.1.0.6.

                2. If you are getting all of the partitions in the table, you don't need to add the partitions. By default, Data Pump will export all partitions in the table.

                So instead of saying expdp tables=user1.table1:part1,user1.table1:part2

                you could say expdp tables=user1.table1

                3. I have not used this approach, but from what I have been told, this should work.

                Exporting using tables=xxx and exporting using schemas=sss include=table... will give you the same results in the dumpfile. Let's say you have table tab1 and an index on tab1 called ind1. If you export

                expdp user/password tables=tab1

                you will get table tab1 and index ind1

                If you say

                expdp user/password schemas=my_schema include=TABLE:"= 'TAB1'"

                You will export tab1 and ind1.

                So, if the 2 above solutions don't work, you could do something like this (this would require you to export all partitions in a table):

                create table my_exp_tabs (a varchar2(30));

                insert into my_exp_tabs values ('TAB1');

                then use:

                expdp user/password schemas=my_schema include=TABLE:"IN (SELECT A FROM MY_EXP_TABS)" ...
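
                Putting approach 3 together, a sketch of a parameter file could look like this (the directory and file names are taken from your earlier post, my_schema and MY_EXP_TABS from the examples above, and the parfile name below is just a placeholder; one advantage of a parfile is that the double quotes do not need shell escaping):

                DIRECTORY=DirPUMP
                DUMPFILE=MYDUMP.dmp
                LOGFILE=MYLOG.log
                SCHEMAS=my_schema
                INCLUDE=TABLE:"IN (SELECT A FROM MY_EXP_TABS)"

                and then run something like expdp user/password parfile=my_exp.par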

                Hope this helps.

                Dean
                • 5. Re: Limit on number of tables with Data Pump
                  Moonlight
                  With Oracle 10gR2, when you are using the TABLES= parameter, its value cannot be longer than 4000 characters... that is the problem!!! :-(
                  I know the solution now, thank you for your help...
                  • 6. Re: Limit on number of tables with Data Pump
                    Moonlight
                    For those who may have this problem, the solution is:

                    You create a table containing the list of all the tables to export:

                    CREATE TABLE expdp_tab (owner VARCHAR2(30), object_name VARCHAR2(128), object_type VARCHAR2(19));

                    INSERT INTO expdp_tab VALUES ('schema','table_name','TABLE');
                    ....
                    commit;


                    and then replace TABLES=(…) in the parfile with this:
                    INCLUDE=TABLE:"IN (SELECT OBJECT_NAME FROM EXPDP_TAB)"
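
                    For example (just a sketch, reusing the directory and file names from my earlier parfile; replace schema with the owning schema and use your own parfile name), the complete parameter file could look like this:

                    DIRECTORY=DirPUMP
                    DUMPFILE=MYDUMP.dmp
                    LOGFILE=MYLOG.log
                    SCHEMAS=schema
                    INCLUDE=TABLE:"IN (SELECT OBJECT_NAME FROM EXPDP_TAB)"

                    and you run it with expdp user/password parfile=expdp_tab.par (assuming the EXPDP_TAB table is visible to the user running the job).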


                    :-)