I'm wondering if the following is possible.
Run a daily Data Pump job that exports an entire tablespace to a dump file.
Then drop the older partitions, e.g. day 1's.
On the destination database a few days later (imagine day 4), if I need to import that dump file together with the current datafile for the tablespace concerned, will this import the dropped partition (i.e. day 1)?
A daily export can be set up using something similar to what I posted here: http://dbaharrison.blogspot.de/2013/04/dbmsdatapump-daily-full-export.html
Any extra logic for dropping partitions can then be added as just extra PL/SQL after the export itself.
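A minimal sketch of that idea using DBMS_DATAPUMP, with the partition drop appended after the job completes. The tablespace MY_TBS, directory object EXPORT_DIR, table MY_TABLE and partition P_DAY1 are all placeholder names you would substitute with your own:

```sql
DECLARE
  l_job   NUMBER;
  l_state VARCHAR2(30);
BEGIN
  -- Open a tablespace-mode export job and attach dump and log files
  l_job := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'TABLESPACE');
  DBMS_DATAPUMP.add_file(l_job, 'daily_tbs.dmp', 'EXPORT_DIR',
                         filetype => DBMS_DATAPUMP.ku$_file_type_dump_file);
  DBMS_DATAPUMP.add_file(l_job, 'daily_tbs.log', 'EXPORT_DIR',
                         filetype => DBMS_DATAPUMP.ku$_file_type_log_file);

  -- Restrict the job to the tablespace we care about
  DBMS_DATAPUMP.metadata_filter(l_job, 'TABLESPACE_EXPR', q'[IN ('MY_TBS')]');

  DBMS_DATAPUMP.start_job(l_job);
  DBMS_DATAPUMP.wait_for_job(l_job, l_state);

  -- Only drop the old partition once the export finished cleanly
  IF l_state = 'COMPLETED' THEN
    EXECUTE IMMEDIATE 'ALTER TABLE my_table DROP PARTITION p_day1';
  END IF;
END;
/
```

In practice you would also generate a dated dump file name each day, since Data Pump refuses to overwrite an existing file.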
I'm a little confused as to what you are asking, but since you are on 22.214.171.124 you have what is called transportable table mode. You can export a single partition of a table using transportable tablespace data movement. This helps a lot when you create a new tablespace for each partition. It also works if you don't, but there is a lot of extra data that you need to store. Let me give you an example:
table foo with:
partition 1 - tablespace 1 - day 1
partition 2 - tablespace 2 - day 2
partition 3 - tablespace 3 - day 3
partition 4 - tablespace 4 - day 4
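The layout above could come from DDL along these lines (the column names and date boundaries here are invented for illustration; only the one-tablespace-per-partition shape matters):

```sql
CREATE TABLE foo (
  created_day DATE,
  payload     VARCHAR2(100)
)
PARTITION BY RANGE (created_day) (
  -- one dedicated tablespace per daily partition
  PARTITION partition1 VALUES LESS THAN (DATE '2024-01-02') TABLESPACE ts1,
  PARTITION partition2 VALUES LESS THAN (DATE '2024-01-03') TABLESPACE ts2,
  PARTITION partition3 VALUES LESS THAN (DATE '2024-01-04') TABLESPACE ts3,
  PARTITION partition4 VALUES LESS THAN (DATE '2024-01-05') TABLESPACE ts4
);
```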
If you were to run this export:
expdp user/password tables=foo:partition1 transportable=always dumpfile=foo_part1.dmp ...
you would get a dump file containing just partition1's metadata (in transportable mode the rows themselves stay in the datafile). You would need to save both the dump file and the datafile if you ever needed to restore it. To restore it, you would run a command like:
impdp user/password transport_datafiles=<location of your tablespace datafile> dumpfile=foo_part1.dmp ...
This would import just the partition, and it would create a standalone table called foo_partition1. You could then use ALTER TABLE ... EXCHANGE PARTITION to put it back into the table where you need it restored.
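The exchange step could look like this, assuming an empty partition1 has been re-created in foo to swap with (names as in the example above):

```sql
-- Swap the restored standalone table back into foo's partition slot.
-- WITHOUT VALIDATION skips the row-boundary check, which is safe here
-- because the rows originally came from that partition.
ALTER TABLE foo
  EXCHANGE PARTITION partition1
  WITH TABLE foo_partition1
  INCLUDING INDEXES
  WITHOUT VALIDATION;
```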
If all of these partitions were in the same tablespace, you could do the same for export and import, but you would need to keep a copy of the dumpfile and a copy of the tablespace for each day. So
day 1 - would have a dumpfile plus a datafile with just day 1's data
day 2 - would have a dumpfile plus a datafile with day 1's and day 2's data
day 3 - would have a dumpfile plus a datafile with day 1's + day 2's + day 3's data
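A rough sketch of what each day's archive step might look like in that single-tablespace case (SQL*Plus with the export shown as shell comments; tablespace ts_daily, directory EXPORT_DIR and all paths are placeholders). Note a tablespace must be read only while it is transported:

```sql
ALTER TABLESPACE ts_daily READ ONLY;

-- From the OS shell: transportable export of the whole tablespace,
-- then keep that day's dump file and a copy of the datafile together:
--   $ expdp user/password transport_tablespaces=ts_daily \
--       dumpfile=ts_daily_day2.dmp directory=EXPORT_DIR
--   $ cp /u01/oradata/DB/ts_daily01.dbf /backup/day2/

ALTER TABLESPACE ts_daily READ WRITE;
```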
The reason is that the import verifies that the datafile it is plugging in has the same characteristics as when it was exported. It uses some checksum or something to verify this. So if you added data on day 2, the datafile would no longer match day 1's dump file and the import would fail.
Hope this helps.