I build reporting pulls in SQL or PL/SQL to get data from 27 separate stores and import it into a corporate Oracle DB. In the corp DB I have the 27 stores as DB links, so my normal process, in pseudocode, is:
insert into <corp.table>
select * from <source.table>@<store.dblink>;
But I have to build 27 separate SQL groups for each query I want to run.
You haven't provided enough info to make any specific recommendations. We would need to know:
1. How often is this 'report' needed?
2. How many users run the query?
3. How often is the query executed?
4. How up to date does the data need to be?
5. How often does the real data change, and how much of the data changes?
My first approach for prototyping a solution would be to create a report-ready table and a process to 'truncate and load' that table with data.
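As a concrete sketch (the table and column names here are hypothetical placeholders for the real report columns), the report-ready table might look like:

CREATE TABLE report_ready (
    store_id    NUMBER NOT NULL,      -- which of the 27 stores the row came from
    sale_date   DATE,
    product_id  NUMBER,
    qty_sold    NUMBER,
    load_date   DATE DEFAULT SYSDATE  -- when the row was loaded
);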
I would use at least three procedures/functions:
1. A control procedure - this procedure accesses new tables that contain the info about the DB links, plus process info such as the last run date, number of rows loaded, current status of the process, etc.
2. A 'source data' procedure - this procedure creates a CURSOR on an appropriate data source and returns it
3. A 'workhorse' procedure - this procedure FETCHes the cursor and stores the data in the report-ready table
The control procedure would 'loop' through the 27 data sources (the DB links) one at a time (or could submit jobs to execute in parallel) to populate the report-ready table.
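Here is a minimal PL/SQL sketch of those three pieces, using the hypothetical report_ready table above. The control table layout, object names, and selected columns are all illustrative, not a finished design:

-- Control table: one row per store / DB link (hypothetical layout).
CREATE TABLE load_control (
    store_id        NUMBER PRIMARY KEY,
    db_link_name    VARCHAR2(128) NOT NULL,
    active_flag     CHAR(1) DEFAULT 'Y',  -- 'Y' = include this store in the run
    last_run_date   DATE,
    last_row_count  NUMBER,
    status          VARCHAR2(20)
);

-- 2. The 'source data' function: opens a cursor against one store and returns it.
-- A DB link name cannot be a bind variable, so the query is built dynamically.
CREATE OR REPLACE FUNCTION get_store_cursor (p_db_link IN VARCHAR2)
    RETURN SYS_REFCURSOR
IS
    v_cur SYS_REFCURSOR;
BEGIN
    OPEN v_cur FOR
        'SELECT sale_date, product_id, qty_sold FROM sales@' || p_db_link;
    RETURN v_cur;
END;
/

-- 3. The 'workhorse' procedure: FETCHes the cursor and stores the rows.
-- (Row-by-row for clarity; BULK COLLECT with a LIMIT would be the production choice.)
CREATE OR REPLACE PROCEDURE load_store (p_store_id IN NUMBER,
                                        p_cur      IN OUT SYS_REFCURSOR)
IS
    v_sale_date  DATE;
    v_product_id NUMBER;
    v_qty_sold   NUMBER;
    v_rows       NUMBER := 0;
BEGIN
    LOOP
        FETCH p_cur INTO v_sale_date, v_product_id, v_qty_sold;
        EXIT WHEN p_cur%NOTFOUND;
        INSERT INTO report_ready (store_id, sale_date, product_id, qty_sold)
        VALUES (p_store_id, v_sale_date, v_product_id, v_qty_sold);
        v_rows := v_rows + 1;
    END LOOP;
    CLOSE p_cur;
    UPDATE load_control
       SET last_run_date = SYSDATE, last_row_count = v_rows, status = 'DONE'
     WHERE store_id = p_store_id;
    COMMIT;
END;
/

-- 1. The control procedure: loops through the active stores one at a time.
CREATE OR REPLACE PROCEDURE run_report_load
IS
    v_cur SYS_REFCURSOR;
BEGIN
    FOR rec IN (SELECT store_id, db_link_name
                  FROM load_control
                 WHERE active_flag = 'Y'
                 ORDER BY store_id)
    LOOP
        v_cur := get_store_cursor(rec.db_link_name);
        load_store(rec.store_id, v_cur);
    END LOOP;
END;
/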
A modular approach provides maximum control and flexibility. When you are testing, you set up the new control table so that only 1 or 2 data sources are used. Then, for full testing, you simply alter a FLAG in the control table to include the other 25 data sources.
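With the hypothetical control table above, that flag flip is a one-statement change:

-- Test with just stores 1 and 2; set the flags back to 'Y' for the full run.
UPDATE load_control SET active_flag = 'N' WHERE store_id NOT IN (1, 2);
COMMIT;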
You would also have a LOG table to log the results. For a process like this you need to know which of the 27 succeeded, which failed, which ones need to be rerun, etc.
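A minimal log table for that might be (again, names and columns are illustrative):

CREATE TABLE load_log (
    store_id   NUMBER,
    run_date   DATE DEFAULT SYSDATE,
    status     VARCHAR2(20),       -- e.g. 'SUCCESS', 'FAILED', 'NEEDS_RERUN'
    row_count  NUMBER,
    error_msg  VARCHAR2(4000)      -- SQLERRM captured when a store fails
);

-- In the workhorse procedure, an exception handler would record failures:
-- EXCEPTION
--     WHEN OTHERS THEN
--         INSERT INTO load_log (store_id, status, error_msg)
--         VALUES (p_store_id, 'FAILED', SQLERRM);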
By making the processing somewhat table-driven, you can easily run the process for one store (delete that store's data and refresh it) or for any combination of stores, whenever the data for a store is available or needs to be refreshed.
By using a PARTITIONED report-ready table, with one partition for each of the 27 stores, each process can work with one store totally independently of the other stores: truncate the store's partition and reload it. It also makes it easy to archive/drop/reload data for any one store independently of the other stores.
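For example, the report_ready table sketched earlier could be recreated with LIST partitioning on store_id (partition names are illustrative):

CREATE TABLE report_ready (
    store_id    NUMBER NOT NULL,
    sale_date   DATE,
    product_id  NUMBER,
    qty_sold    NUMBER,
    load_date   DATE DEFAULT SYSDATE
)
PARTITION BY LIST (store_id) (
    PARTITION p_store01 VALUES (1),
    PARTITION p_store02 VALUES (2)
    -- ... one partition per store, up through p_store27
);

-- Refresh one store without touching the other 26:
ALTER TABLE report_ready TRUNCATE PARTITION p_store01;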
That is the simple, modular approach I would use. Even if the final use case required an 'all at once' solution, it would be trivial to convert this modular approach.