user13328581 wrote:
This is just for learning purposes. What if I would like to create my own materialized view utilizing only tables and a job... is this possible? A simple example would help. Thank you.

I have my own custom version of materialised views. The reasons are basically performance and management of the materialised object.
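For the learning exercise asked about here, a do-it-yourself materialised view can be approximated with an ordinary summary table plus a scheduled refresh job. A minimal sketch, assuming Oracle's standard EMP demo table; the summary table, procedure, and job names are hypothetical:

```sql
-- Plain table holding the summarised data (a hand-rolled MV).
create table emp_dept_summary as
select deptno, avg(sal) as avg_sal, count(*) as employees
from emp
group by deptno;

-- Refresh procedure: rebuild the summary inside one transaction.
create or replace procedure refresh_emp_dept_summary as
begin
  delete from emp_dept_summary;
  insert into emp_dept_summary
    select deptno, avg(sal), count(*)
    from emp
    group by deptno;
  commit;
end;
/

-- The "job" part: run the refresh every hour via DBMS_SCHEDULER.
begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_EMP_SUMMARY_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'REFRESH_EMP_DEPT_SUMMARY',
    repeat_interval => 'FREQ=HOURLY',
    enabled         => true
  );
end;
/
```

Unlike a real MV, nothing here tracks staleness or supports query rewrite; the job simply rebuilds the summary on a timer.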
rp0428 wrote:
Not really. A MV is often used in data warehousing to maintain summarised/aggregated data sets.

And that is the functionality we need. We used MVs originally for this system. However, we ran into a bunch of issues in terms of usability and performance.
rp0428 wrote:
"I have my own custom version of materialised views. The reasons are basically performance and management of the materialised object."
Nothing in what you describe is similar to an MV. You are merely using staging tables the way they have been used for years. Why are you using CTAS? From what you described I don't see where it is necessary.

A MV is defined by a SQL statement. E.g.

select deptno, avg(salary) as AVG_SAL, count(*) as EMPLOYEES, ...etc from emp group by deptno

And this is the SQL statement that needs to be supplied to our customised interface. A MV's DDL also defines the name of the MV - which is the target table in our interface call. The only additional parameter is the name of the target partition, as the SQL defines the summarised data set for a partition in the table, and not the summarised data set for the entire table, as a MV's SQL would.
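The interface described here (summary SQL, target table name, target partition name) can be sketched as a direct-path CTAS into a staging table followed by a partition exchange. All table, column, and partition names below are hypothetical:

```sql
-- Build one partition's summarised data set with a direct-path CTAS.
create table stage_summary_2024q1 nologging as
select deptno, avg(salary) as avg_sal, count(*) as employees
from fact_table partition (p_2024q1)
group by deptno;

-- Swap the staging table into the target partition. This is a data
-- dictionary operation - no truncate or delete on the target table.
alter table target_summary
  exchange partition p_2024q1 with table stage_summary_2024q1
  including indexes without validation;

-- The staging table now holds the old partition data; discard it.
drop table stage_summary_2024q1 purge;
```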
rp0428 wrote:
You describe only using the NAME of the target table which means you must already know the structure. So why not just create a permanent table with the correct structure and do a direct-path insert into it?

Because then you need to first truncate or delete the data in that table (or table partition). And this is the problem with using a MV in our case.
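For contrast, the permanent-table alternative suggested here would look roughly as follows (names hypothetical); the truncate step is exactly what the dynamic CTAS approach avoids:

```sql
-- One fixed staging table, emptied and reloaded for every refresh.
truncate table emp_summary_stage;

-- Direct-path (append) insert of the new summary data.
insert /*+ append */ into emp_summary_stage
select deptno, avg(sal) as avg_sal, count(*) as employees
from emp
group by deptno;
commit;
```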
rp0428 wrote:
Yes - MVs would be slower for the use you describe. But that use isn't what MVs were designed for. So using staging tables and NOT using MVs isn't the same thing as creating an MV replacement. MVs are designed to be used over and over again for the problem of slowly changing data.

Beg to differ. We have a classic data warehouse issue of a massive fact table being updated with data, and requiring summarised and aggregated tables to be refreshed to reflect the changes to the fact table.
user13328581 wrote:
Hi Billy, what exactly do you mean by aggregated? And I really don't understand why you still need a CTAS. You can create several permanent staging tables ahead of time, with the indexes and partitions matching their associated target table, then just dump the aggregate data into them, transfer the data to the target table, and finally delete the contents of the permanent staging table. Also, if any of the indexes or partitions change in the target table, you can always check and update the permanent staging table with the new change.

Unworkable and inefficient approach. If there is a single staging table as a permanent table, then only a single partition can be refreshed at a time. If this is done dynamically, then 30 refreshes can be done at the same time on the same table with different partitions.
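The concurrency point can be sketched as follows: because each refresh creates its own uniquely named staging table via dynamic SQL, many partitions of the same target table can be refreshed in parallel. All object names are hypothetical:

```sql
create or replace procedure refresh_partition( p_partition in varchar2 ) as
  v_stage constant varchar2(30) := 'STAGE_'||p_partition;
begin
  -- Summarise just this partition of the fact table into its own staging table.
  execute immediate
    'create table '||v_stage||' nologging as
     select deptno, avg(sal) as avg_sal, count(*) as employees
     from fact_table partition ('||p_partition||')
     group by deptno';

  -- Swap the result into the matching target partition.
  execute immediate
    'alter table target_summary exchange partition '||p_partition||
    ' with table '||v_stage||' including indexes without validation';

  execute immediate 'drop table '||v_stage||' purge';
end;
/
```

Thirty scheduler jobs can each call refresh_partition for a different partition at the same time, since no two refreshes share a staging table.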
user13328581 wrote:
What if the refresh takes a while? Wouldn't it cause a table deadlock, or are you preventing deadlocks from occurring? This is why I was suggesting a single permanent staging table instead.

Deadlocks are self (application) induced. So I'm not sure what exactly you have in mind as a potential problem.