A Replace should clear the data set in the .dat file prior to load and then load the .dat.
If you take the same .dat and manually load it to HFM via the web, do you see the same behavior?
Thanks for replying.
Let me explain further. A Replace will replace all entities that currently exist in the file being loaded. However, in my case, the first file had additional entities that were not in the new file, loaded into the same period (e.g. an incorrect December file with more entities was loaded into the January period). Therefore, even though entities were replaced when January was loaded correctly, the December entities that were not in the January file were still sitting in the January period in HFM.
I need the new load to clear out entities not used in the new file.
I believe the safest approach for your issue is to use the HFM "Clear Data" functionality to clear the data for all the entities that were affected by the original file, and then export the correct file from FDM.
You're right that this can be done, but unfortunately the HFM team is unwilling to accept this method.
I cannot understand why... in any case there is always another way....
1. Get the December file.
2. Replace all the values with zeros.
3. Load the file with zeros.
4. Load the correct file.
I do not usually suggest this method... but it will definitely fix the issue...
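To illustrate step 2 above, here's a minimal Python sketch that zeroes out the amounts in a load file. It assumes a simple ';'-delimited layout with the amount in the last field - that is my assumption for illustration, not your actual file spec, so adjust to your file's real format:

```python
# Sketch: zero out the amounts in an HFM-style .dat load file.
# ASSUMPTION: data lines are ';'-delimited and the amount is the
# last field on each line -- adjust to your actual file layout.

def zero_amounts(src_path: str, dst_path: str, delim: str = ";") -> None:
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            stripped = line.rstrip("\n")
            if delim in stripped:
                fields = stripped.split(delim)
                fields[-1] = "0"          # replace the value with zero
                dst.write(delim.join(fields) + "\n")
            else:
                dst.write(line)           # pass section/header lines through
```

Loading the resulting file on a Replace, then loading the correct file, achieves the two-pass clear described above.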
You could also explore a logic group: set one up to write a very small value to an intersection that is not generally used, to force entities loaded through that location to be cleared.
Actually, if you're going to go the route of loading a file to do a replace, just load a value of NODATA. HFM accepts that as if you typed NODATA into a data grid and essentially performs a clear. If you put one NODATA record into an entity and load on a Replace, it'll wipe the entity out.
Still not sure why your HFM team won't let you do a data clear using the app. That's by far the safest and easiest way... but this would work too.
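To show what that NODATA file might look like, here's a hedged Python sketch that builds one clearing record per entity. The `Entity;Account;Amount` layout and the `Sales` account are placeholders I've assumed for illustration - substitute your application's real dimensionality:

```python
# Sketch: build a small Replace file containing one NODATA record
# per entity, so a Replace load wipes those entities out.
# ASSUMPTION: a ';'-delimited Entity;Account;Amount record layout
# with a placeholder "Sales" account -- not a real HFM file spec.

def nodata_records(entities, account="Sales", delim=";"):
    lines = [delim.join([e, account, "NODATA"]) for e in entities]
    return "\n".join(lines) + "\n"
```

One record per entity is enough because, on a Replace, HFM clears the whole entity's data set before loading the record.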
As a starter, there are a couple of ways of doing this, but in each you need to access the previous data set to identify the entities that are not in the current file. This can be done either at the import stage, before the data is deleted, or when the export file is created. In the first instance you could read the database table tdataseg for the POV; for the second option, access the previous load file. (If you look through the API manual you should find where FDM stores the last file details.) Which option you choose will determine what you need to do with this data to merge it with the current file so that the records in the target get deleted. This may not be the complete solution, though, as you may also need to check which entities did not load.
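For the second option, the core of the comparison is just a set difference between the entities in the previous load file and those in the current one. Here's a minimal Python sketch, again assuming a ';'-delimited layout with the entity in the first field (an assumption, not the FDM file spec):

```python
# Sketch: find entities present in the previous load file but missing
# from the current one, so clearing records can be generated for them.
# ASSUMPTION: ';'-delimited data lines with the entity in the first
# field; lines starting with '!' are section headers and are skipped.

def entities_in_file(path, delim=";"):
    ents = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and delim in line and not line.startswith("!"):
                ents.add(line.split(delim)[0])
    return ents

def stale_entities(prev_path, curr_path):
    # entities in the old file that the new file no longer touches
    return entities_in_file(prev_path) - entities_in_file(curr_path)
```

In a real FDM event script this logic would run in VBScript against tdataseg or the stored previous file, but the set-difference idea is the same.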
There is a presentation on the internet - search for 'Presentation-ComplexDataSubmission' by FINIT Solutions - and also look at an old forum thread (thread 3419122), where the FINIT consultant responds to a similar query with some good points explaining why they perform the processing in the AftLoad event; with the use of the .ERR report from the previous load, it could also potentially be carried out in the AftValidate script. You will need to assess the risk of each possible solution depending on the system you are working on.