Please post exact OS and database versions, along with the complete expdp command used.
OS: RedHat Enterprise Linux 5.4 64bit version
DB: Oracle 18.104.22.168.0 Enterprise 64bit version
expdp system@orcl full=y directory=export dumpfile=export_20130712.dmp nologfile=y
Can you consistently replicate this behavior? Please retry the operation without the NOLOGFILE parameter.
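For example, something along these lines (the dump file and log file names are just the ones from your original command - use a new dump file name or add reuse_dumpfiles=y if the old file is still in the directory):

expdp system@orcl full=y directory=export dumpfile=export_20130712.dmp logfile=export_20130712.log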
I tried it without the NOLOGFILE option. I did not get the "successful" message in the shell output, but the message does exist in the log file.
I'm going home right now so I'll check the replies tomorrow.
Did you do the export in parallel? (I didn't see it in your command line, but you never know.) If so, then even though the log shows the small tables being exported, it doesn't mean all the large ones are done.
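(For reference, a parallel run would have had something like this on the command line - the parallel degree and the %U substitution in the dump file name are only illustrative:)

expdp system@orcl full=y directory=export dumpfile=export_%U.dmp logfile=export.log parallel=4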
Did you get any DM/DW (Data Pump master/worker process) trace files produced by the export?
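Something like this should find them if they exist - the trace directory shown here is just a typical ADR location, so substitute whatever your background_dump_dest / diagnostic_dest points to:

ls -l /u01/app/oracle/diag/rdbms/orcl/orcl/trace/*_dm*.trc
ls -l /u01/app/oracle/diag/rdbms/orcl/orcl/trace/*_dw*.trc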
At the moment it looks like something went wrong that wasn't logged to the screen and caused the job to fail. The "completed successfully" message wasn't recorded (as the job wasn't successful), so importing that file could well leave you missing some tables, as it's not complete.
I suspect one of the DM/DW processes died right near the end, leaving you in this state. That would mean the master table wasn't completed and written to the dump file, leaving you with some missing objects when you try to import.
If the master table was never exported, then import would never get started. The master table is the first thing loaded during import and if it can't be found, then the job doesn't get started.
If the master is there, then it would seem that you had some messaging issue. If that is the case, I would guess that the job is complete. You could test this by trying an import like this:
impdp user/password directory=your_dir dumpfile=your_dumpfile master_only=y
This will import just the master. If it can load, the job finished because the master table was exported. Remember to drop the master table after that import command.
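If you are not sure what the leftover master table is called, something like this should find it in the importing user's schema (the SYS_IMPORT_FULL_01 name in the drop is only a guess based on the default Data Pump job naming - drop whatever the query actually returns):

select table_name from user_tables where table_name like 'SYS_IMPORT%' or table_name like 'SYS_EXPORT%';
drop table SYS_IMPORT_FULL_01;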
I'd always assumed the master table was written to the dump file as the job went along - an object is completed and its entry is written to the file (while also staying in the table in the database). Thinking about it again, I think you must be correct: there is no point in unloading any data from the master table to the file until it has finished everything. So perhaps everything is OK, as you suggest.
To take your answer a stage further, you could load the master table in (as you already showed how to do) and then query it to see what it thinks it exported. I wrote an example some time ago for something similar, where I wanted to see how long each individual table took to export - it also displays how many rows each table contained and 'reconstructs' part of the log file for you; that's detailed here. The same SQL (or something similar for import) could be used to list what it thinks is in the file.
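A rough sketch of that kind of query against the loaded master table would be something like this (the table name and column names are assumptions based on what the master table normally contains - adjust them to whatever actually got created in your schema):

select object_schema, object_name, object_type, completed_rows
from SYS_IMPORT_FULL_01
where process_order > 0
and object_name is not null
order by object_schema, object_name;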
There are some other reasons we don't export the master table as it is populated. One is that sometimes some columns need to be updated. Let's say you are exporting on a newer release and you want to import into 10.1, so you have the version=10.1 parameter on your export command. The master table on the newer release looks different from the master table on 10.1, so we need to 'massage' it so that the import of the master table can load. We also want to export the table only once, for performance reasons. And we have no idea how large the master table will get, so trying to allocate space for it up front would be impossible.
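(For anyone following along, that parameter just goes on the normal export command line - the file names here are only placeholders:)

expdp system@orcl full=y directory=export dumpfile=export_for_10_1.dmp logfile=export_for_10_1.log version=10.1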
The issue turned out to be with some system tables. I only needed our application's data, and fortunately (thanks to the developer team!) it is distributed over specific, standalone schemas. The problem is resolved.
But during my check, I found out that the Data Pump log gets created and filled BEFORE the job reaches a success/failure point. Try using Data Pump and, in another session, tail the log in real time. You will see that the "successful" message appears much earlier than expected.
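For example, something like this - look up where the directory object points first, then tail the log file from the OS (the log file name is just the one from the earlier command):

select directory_path from dba_directories where directory_name = 'EXPORT';
tail -f <directory_path>/export_20130712.log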