I'm experiencing a strange issue with expdp. We have a production clustered db (11gR2 on RHEL 5.4), and from time to time we take exports from this db and import them on our test servers for testing purposes.
Today I hit a strange issue with expdp. While it was working through the empty tables (from what I've seen, Data Pump exports/imports tables in order of size) and was nearly finished, it suddenly stopped and dropped back to the shell prompt:
. . exported "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS"    0 KB  0 rows
. . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES"       0 KB  0 rows
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"        0 KB  0 rows
-bash-3.2$
without printing "Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed". The dump file itself seems usable, but when I checked the import on our test server, some of the tables appear to be missing (they have no data in them; the problem isn't partitioning, since both our production and test servers run 11gR2 Enterprise Edition), and that breaks our application on the test server. I've never had this issue before. Any ideas?
Actually, this issue is kinda critical for us, as part of our Disaster Recovery Plan relies on Data Pump dump files.
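For reference, here is a minimal sketch of the kind of schema export involved. All names (directory object, schema, file names) are placeholders, since the original command line wasn't posted; the point is that an explicit LOGFILE makes it easy to check afterwards whether the job really finished:

```shell
# Hypothetical export command; directory, schema and file names are assumptions.
expdp system/password directory=DATA_PUMP_DIR schemas=APP_SCHEMA \
      dumpfile=app_schema.dmp logfile=app_schema_exp.log

# A job that finished cleanly ends its log with a "successfully completed" line:
tail -2 app_schema_exp.log
```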
Did you do the export in parallel? (I didn't see it in your command line, but you never know.) If so, the log showing the small tables being exported doesn't mean all the large ones are done.
Did you get any DM/DW trace files produced from the export?
At the moment it looks like something went wrong that wasn't logged to the screen and caused the job to fail. The "successfully completed" message wasn't recorded (because it wasn't successful), so an import from that file could well be missing some tables, as the file isn't complete.
I suspect one of the DM/DW processes died right near the end, leaving you in this state. That would mean the master table wasn't complete and written to the dumpfile, leaving you with some missing objects when you try to import.
If the master table was never exported, then import would never get started. The master table is the first thing loaded during import and if it can't be found, then the job doesn't get started.
If the master is there, then it would seem that you had some messaging issue. If that is the case, I would guess that the job is complete. You could test this by trying an import like this:
impdp user/password directory=your_dir dumpfile=your_dumpfile master_only=y
This will import just the master table. If it loads, the job finished, because the master table is only exported at the end. Remember to drop the master table after that import command.
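Once that import succeeds, the master table lands in the importing user's schema under the job name. A hedged sketch of checking and then cleaning it up (the job name SYS_EXPORT_SCHEMA_01 is an assumption taken from the log above; your job may be named differently):

```sql
-- Confirm the master table arrived and holds rows.
SELECT COUNT(*) FROM sys_export_schema_01;

-- Then drop it, as advised above.
DROP TABLE sys_export_schema_01;
```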
I'd always assumed the master table was written to the dumpfile as the job went along, so that as each object completed it was written to the file (while the table also remained in the database). Thinking about it again, I think you must be correct: there is no point unloading any data from the master table to the file until everything else has finished. So perhaps everything is OK, as you suggest.
To take your answer a stage further, you could load the master table in as you already showed and then query it to see what it thinks it exported. I wrote an example some time ago for something similar, where I wanted to see how long each individual table took to export; it also displays how many rows each table had and 'reconstructs' part of the log file for you. That's detailed here. I think the same SQL (or something similar for import) could be used to list what the file thinks it contains.
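In that spirit, a rough query against the loaded master table can list the table entries it recorded. This is a sketch, not the linked example: the column names below match the 11g master table layout as I understand it, and the job/table name is again an assumption:

```sql
-- List the tables the dump believes it exported, with their row counts.
SELECT object_schema,
       object_name,
       completed_rows
  FROM sys_export_schema_01
 WHERE object_type = 'TABLE'
   AND process_order > 0
 ORDER BY object_schema, object_name;
```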
Another reason we don't export the master table as it is built is that some columns sometimes need to be updated. Let's say you are exporting on version 10.2 and you want to import into 10.1, so you have the version=10.1 parameter on your export command. The master table on 10.2 looks different from the master table on 10.1, so we need to 'massage' it so the import of the master table can load. We also want to export the table only once, for performance reasons. And we have no idea how large the master table will get, so trying to pre-allocate space for it would be impossible.
The issue turned out to affect some system tables. I only needed our application's data, and fortunately (thanks to the developer team!) it is distributed over specific, standalone schemas. The problem is resolved.
But during my check, I found out that the Data Pump log gets created and filled BEFORE the job reaches its success/failure point. Try running Data Pump and, in another session, tail the log in real time: you will see the "successfully completed" line appear much earlier than expected.
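For example, while expdp runs in one session, you can follow its log from another (the log path here is a placeholder; use whatever LOGFILE= and directory object your job was given):

```shell
# Follow the export log as the Data Pump job writes it.
tail -f /u01/app/oracle/dpdump/app_schema_exp.log
```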