When you say "bit huge" how many terabytes in size is it?
You might be better off creating a standby in mount mode using Data Guard, letting it catch up to a point you are happy with, then stopping the Data Guard feed and opening the standby in read/write mode. Alternatively, take an RMAN backup of the database and restore it wherever you need it. Neither method requires downtime on the source.
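If you go the RMAN route, an active duplicate avoids staging a backup file at all. A minimal sketch, assuming a prepared auxiliary instance and TNS entries named prod and clone (both names are placeholders):

rman target sys@prod auxiliary sys@clone
DUPLICATE TARGET DATABASE TO clone FROM ACTIVE DATABASE;

The duplicate copies the datafiles over the network while the source stays open.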
The answer is no, you never really know in advance how large the dump file will be. Can you perform a target-side export instead? For example, if you have source A and target B, from the B database you can run:
expdp user/password network_link=db_link_defined_on_b_pointing_to_source_a directory=local_b_directory_object dumpfile=xxx ...
This way, when you export, the dump file is written directly on the target server. We call this a network export.
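A fuller sketch of the parameters, written as a parameter file; the link name, directory object, and file names are just placeholders for whatever you have defined:

expdp user/password parfile=net_export.par

where net_export.par contains something like:

network_link=db_link_defined_on_b_pointing_to_source_a
directory=local_b_directory_object
dumpfile=net_export_%U.dmp
logfile=net_export.log
full=y
parallel=4

With parallel set, the %U substitution in dumpfile lets Data Pump write multiple files at once.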
You may also be able to use a network import. It is a single job that reads from the source database and, instead of writing a dump file, immediately imports into the target database. It is done like this:
Again, you run this from the target database:
impdp user/password network_link=db_link_defined_on_b_pointing_to_source_a directory=local_b_directory_object ...
This one does not need a dump file, but it does need a directory object for the log file. It essentially runs an export job on the source database and an import job on the target database at the same time.
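As a sketch, the equivalent parameter file (the names are placeholders again):

impdp user/password parfile=net_import.par

where net_import.par contains something like:

network_link=db_link_defined_on_b_pointing_to_source_a
directory=local_b_directory_object
logfile=net_import.log
full=y

Note there is no dumpfile line; the directory object is only there for the log file.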
We have seen customers move 3 TB of data in an hour. Granted, they had the hardware to do that.
Another potential way to do this is using transportable tablespaces with Data Pump. It has some limitations and restrictions, but if it works for you, it could be a lot faster.
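Roughly, the transportable flow looks like this (the tablespace, directory, and file names here are hypothetical):

1. On the source: ALTER TABLESPACE app_data READ ONLY;
2. On the source: expdp user/password transport_tablespaces=app_data directory=dp_dir dumpfile=tts.dmp
3. Copy the tablespace's datafiles and tts.dmp to the target server.
4. On the target: impdp user/password transport_datafiles='/u01/oradata/app_data01.dbf' directory=dp_dir dumpfile=tts.dmp
5. On the source: ALTER TABLESPACE app_data READ WRITE;

The catch is that the tablespaces must stay read only while the datafiles are copied, so it is not entirely online.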
Hope some of this helps.