Using BICC PVOs and need a method for identifying records that are deleted in Cloud Fusion
We currently use BICC PVOs to extract Fusion Cloud data from different workstream areas (FSCM and HCM) to build a data store used for reporting and analysis. We have noticed that the BICC PVOs identify updated and new records, but we are unable to capture records deleted in the Fusion application without a full table load. Full table loads are not a sustainable solution as the data continues to grow. What methods and tools are available to help us keep our data store synchronized with Oracle Fusion (FSCM and HCM)?
Comments
If you pull a PECSV data set (which is just the primary keys identified for your PVO), that is the current set of active records for that PVO.
We handle it this way: we pull data rows (delta or full depending on the PVO/day/situation, usually just delta changes). Those rows are bulk inserted into staging tables and then merged into our internal warehouse tables. These rows are automatically considered "active" records.
We then pull the PECSV keys. Any row in the warehouse table that is not in the PECSV data is considered to have been deleted (we soft delete for 30 days, then permanently delete). Any row that was previously soft deleted (i.e., we got a delta record, but it wasn't in the previous PECSV data set) is marked active again if it appears in the current PECSV data set. This is primarily due to scheduling differences between the data pulls and the PECSV pulls on Oracle's side; they do not run at the same time despite being scheduled that way.
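For illustration, here is a minimal T-SQL sketch of that soft-delete / reactivation / purge cycle. The table and column names (wh.Invoices, stg.Invoices_PECSV, InvoiceId, IsDeleted, DeletedDate) are hypothetical placeholders, not the poster's actual schema:

```sql
-- Sketch only: assumes the latest PECSV key files have already been
-- bulk-loaded into stg.Invoices_PECSV.

-- 1) Soft-delete warehouse rows whose key is absent from the latest PECSV key set.
UPDATE w
SET    w.IsDeleted   = 1,
       w.DeletedDate = SYSDATETIME()
FROM   wh.Invoices AS w
WHERE  w.IsDeleted = 0
  AND  NOT EXISTS (SELECT 1
                   FROM stg.Invoices_PECSV AS k
                   WHERE k.InvoiceId = w.InvoiceId);

-- 2) Reactivate rows that were soft-deleted earlier but reappear in the PECSV keys
--    (covers the scheduling gap between the data pulls and the PECSV pulls).
UPDATE w
SET    w.IsDeleted   = 0,
       w.DeletedDate = NULL
FROM   wh.Invoices AS w
WHERE  w.IsDeleted = 1
  AND  EXISTS (SELECT 1
               FROM stg.Invoices_PECSV AS k
               WHERE k.InvoiceId = w.InvoiceId);

-- 3) Permanently remove rows soft-deleted more than 30 days ago.
DELETE FROM wh.Invoices
WHERE  IsDeleted = 1
  AND  DeletedDate < DATEADD(DAY, -30, SYSDATETIME());
```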
Hi Dave Kekish, how do you manage the fact that a primary-key extraction can generate several BICC files when a large number of primary keys are involved?
Regards, Laurent
Hello @LaurentDup!
I can certainly tell you how we handle it.
We pull a large number of PVOs into SQL Server (eventually Snowflake, but for the past 5+ years it's been SQL Server). As an example, we'll pick on the biggest one we currently pull: the PVO for SubLedgerJournalDistributionExtract. This thing is a monster of a PVO. I believe we are sitting at just under 900 million rows currently, so any given pull can have one to many CSV files and one to many PECSV files.
The data CSVs are easy. It doesn't matter how many files there are; they get processed one file at a time. We load each file into staging and run a SQL MERGE process into the actual warehouse. Pretty simple, really.
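As a rough sketch of that staging-to-warehouse step, assuming hypothetical table and column names (stg.SLJournalDist, wh.SLJournalDist, DistributionId); a real PVO extract would have many more columns:

```sql
-- Sketch only: upsert one staged CSV file into the warehouse copy of the PVO.
MERGE wh.SLJournalDist AS tgt
USING stg.SLJournalDist AS src
      ON tgt.DistributionId = src.DistributionId       -- PVO primary key
WHEN MATCHED THEN
    UPDATE SET tgt.Amount         = src.Amount,
               tgt.LastUpdateDate = src.LastUpdateDate,
               tgt.IsDeleted      = 0                   -- a delta row means the record is active
WHEN NOT MATCHED BY TARGET THEN
    INSERT (DistributionId, Amount, LastUpdateDate, IsDeleted)
    VALUES (src.DistributionId, src.Amount, src.LastUpdateDate, 0);
```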
The PECSVs, however, need to be treated differently. We have a table in the staging system that matches the PECSV file format, and we load 100% of the files into it. We then run one large merge from that table (it can take a while) against the warehouse, simply checking whether the keys exist or not. If they don't exist, we set a flag to "soft" delete the record. If they do exist, then it's all good and we move on.
We have recently started applying filtering to PVOs like this one, where we only want the records from the last X days (we chose 90 days). This dramatically cuts down the number of files we have to load (from 35+ to 1 or 2 total). We modified the merge logic to only look at records from the last 90 days, so we are not flagging records we shouldn't, etc. We also have a custom BIP report that gives us the record counts of the keys from the base tables. For the extract versions of the PVOs this is much simpler than for the OTBI-based PVOs, but still doable.
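A sketch of that windowed variant, using the same hypothetical names as above and assuming a LastUpdateDate column; only rows inside the 90-day window are eligible for the soft-delete flag, which keeps the filtered PECSV extract from incorrectly flagging older records:

```sql
-- Sketch only: restrict the "missing from PECSV" check to the same 90-day
-- window the filtered extract covers, so rows outside it are left untouched.
UPDATE w
SET    w.IsDeleted   = 1,
       w.DeletedDate = SYSDATETIME()
FROM   wh.SLJournalDist AS w
WHERE  w.IsDeleted = 0
  AND  w.LastUpdateDate >= DATEADD(DAY, -90, SYSDATETIME())
  AND  NOT EXISTS (SELECT 1
                   FROM stg.SLJournalDist_PECSV AS k
                   WHERE k.DistributionId = w.DistributionId);
```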
I know that some middleware processes will also compare the previous PECSV load to the current one, which we thought about implementing but never did.
Hope that helps
-Dave

