Can someone explain to me how incremental loading works in FDI?
In my experience, every time I import a CSV file I can only use the data contained in that file, not the data uploaded in previous flows.
Hi @Stefano_Mazzocca,
if tomorrow’s file contains only the 3 changed employees, only those 3 rows will be processed by the incremental load. The previous employee records are not automatically retained from earlier CSV uploads.
All employees will still be visible only if the target is designed to preserve the full dataset, for example by using append or merge/upsert logic. If the target is overwritten or truncated on each run, then only the 3 rows from the new file will remain.
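To make the difference concrete, here is a minimal Python sketch of merge/upsert logic keyed on an employee ID. The function and field names are illustrative assumptions, not FDI internals; it only shows why a 3-row delta file preserves the full dataset when merged instead of replacing it.

```python
# Hypothetical sketch: merging a daily delta file into a target table,
# keyed on employee_id. Names and fields are illustrative, not FDI internals.

def merge_upsert(target, delta, key="employee_id"):
    """Apply delta rows to target: update existing keys, insert new ones."""
    merged = {row[key]: row for row in target}  # index existing records by key
    for row in delta:
        merged[row[key]] = row                  # upsert: update or insert
    return list(merged.values())

# Day 1: full load of all employees.
target = [
    {"employee_id": 1, "name": "Ann", "status": "active"},
    {"employee_id": 2, "name": "Bob", "status": "active"},
]

# Day 2: delta file with 1 termination and 2 hires (3 rows total).
delta = [
    {"employee_id": 2, "name": "Bob", "status": "terminated"},
    {"employee_id": 3, "name": "Cam", "status": "active"},
    {"employee_id": 4, "name": "Dee", "status": "active"},
]

target = merge_upsert(target, delta)
print(len(target))  # all 4 employees remain, not just the 3 delta rows
```

With truncate-and-load semantics, by contrast, `target` would simply be replaced by `delta` and only the 3 rows from the new file would survive.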
Regards,
Arindam
Hi Stefano,
For CSV loads in FDI, incremental behavior is file-based, not history-based. It typically picks up new or modified files since the last successful run, using the file/object timestamp. It does not reuse data from earlier CSV uploads unless you store it in a target table and manage history there.
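A rough illustration of what "file-based" pickup means, assuming selection by modification timestamp. This is a generic sketch with made-up file names, not FDI's actual implementation: only files changed since the last successful run are selected.

```python
# Hypothetical sketch: select only CSV files modified since the last
# successful run, mimicking file-timestamp-based incremental pickup.
import os
import tempfile
import time

def files_since(directory, last_run_ts):
    """Return CSV file names whose modification time is after last_run_ts."""
    return [
        name for name in sorted(os.listdir(directory))
        if name.endswith(".csv")
        and os.path.getmtime(os.path.join(directory, name)) > last_run_ts
    ]

with tempfile.TemporaryDirectory() as d:
    old = os.path.join(d, "employees_day1.csv")
    new = os.path.join(d, "employees_day2.csv")
    for path in (old, new):
        with open(path, "w") as f:
            f.write("employee_id,name,status\n")
    now = time.time()
    os.utime(old, (now - 86400, now - 86400))  # day-1 file: modified yesterday
    last_run_ts = now - 3600                   # last successful run: 1 hour ago
    picked = files_since(d, last_run_ts)

print(picked)  # only the day-2 file is picked up; day-1 data is untouched
```

The point being: the earlier file is skipped, not re-read, so its rows only exist downstream if a target table kept them.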
Regards,
Arindam.
Hi @Arindam Sadhukhan-Oracle, I'm a little confused. Let's say today I import all employees from a long file. Tomorrow one gets terminated and 2 get hired, so tomorrow's file will have just 3 rows (1 update for the termination and 2 new people).
Does this mean that tomorrow, when running my pipeline, I'll be able to see only 3 employees?