I have a large database table that grows daily. My application has one page for the past day's data and another for a user-chosen period of time. Since each page looks at a very large amount of data (~100k rows) and renders time-based charts, I have performance issues. I tried collections for each of these pages and found that they make everything slower, I think because the collection is large and not indexed.
Since I don't need the data maintained across the session, and in fact each time I submit a page I need freshly updated data (at least for the past-day page), I wonder whether a Global Temporary Table is a good solution for me.
The only reason I want to store the data in a table is to avoid running similar queries for different charts and reports. Is this a valid reason at all?
If this is a good solution, can someone give me a hint on how to do this?
Any help is appreciated.
Temporary tables do not work with APEX, because they hold data only for the duration of the database session (or until commit), and APEX uses a connection pool, so consecutive page requests may run in different database sessions.
Maybe you could pre-calculate the data the charts need and use, for example, materialized views.
You can find a lot of information by searching the OTN forums or Google.
Start by checking the documentation.
If your table is large, you can go for partitioning. The best solution for this kind of problem is to create a partitioned table and put an index on each partition (a local index).
There are three main types of partitioned tables in Oracle: range-based, hash-based, and list-based. But this approach requires analyzing your data to decide which type of partitioning you need.
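As a minimal sketch of the range-based approach, a table partitioned by day on its time column could look like this (table and column names are invented for illustration; INTERVAL partitioning requires 11g or later):

```sql
-- Hypothetical example: range partitioning by day on a date column.
-- Adjust names and the initial boundary to your schema.
CREATE TABLE readings (
  reading_id   NUMBER,
  reading_time DATE NOT NULL,
  device_id    NUMBER,
  value        NUMBER
)
PARTITION BY RANGE (reading_time)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))  -- 11g+: auto-creates one partition per day
(
  PARTITION p_initial VALUES LESS THAN (DATE '2012-01-01')
);

-- A LOCAL index is partitioned the same way as the table, so a query
-- filtered on one day only touches that day's index partition.
CREATE INDEX readings_time_ix ON readings (reading_time) LOCAL;
```

Queries with a predicate on the partition key (here `reading_time`) then benefit from partition pruning: Oracle scans only the partitions that can contain matching rows.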
A materialized view is another option. If you want the materialized view to improve performance transparently, use the ENABLE QUERY REWRITE option when creating it. Remember to create a materialized view log on the base table so you can go for fast (incremental) refresh, and set up a proper refresh cycle.
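A rough sketch of that setup, with invented names, might be the following. Note that whether fast refresh is actually possible depends on the MV query; DBMS_MVIEW.EXPLAIN_MVIEW will tell you what a given definition supports:

```sql
-- Hypothetical: MV log + fast-refreshable daily-aggregate MV.
-- COUNT(*) / COUNT(value) are required for fast refresh of aggregate MVs.
CREATE MATERIALIZED VIEW LOG ON readings
  WITH ROWID (reading_time, value)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW readings_daily_mv
  REFRESH FAST ON COMMIT
  ENABLE QUERY REWRITE
AS
SELECT TRUNC(reading_time) AS reading_day,
       COUNT(*)            AS cnt,
       COUNT(value)        AS cnt_value,
       SUM(value)          AS total_value
FROM   readings
GROUP  BY TRUNC(reading_time);
```

With QUERY REWRITE enabled (and the rewrite privileges/parameters in place), a chart query that aggregates `readings` per day can be answered from the MV automatically, without changing the application's SQL. If ON COMMIT refresh is too restrictive, switch to ON DEMAND and refresh on a schedule.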
Thanks a lot, Jari and mini. I guess partitioning is what I need to do. I thought of it once, but read somewhere that it's a complex subject and figured it wasn't for a database dummy like me, but I guess you gotta do what you gotta do!
Partitioned tables aren't intended for tiny tables of ~100k rows (unless your rows are really, really long).
In my environment, a full table scan of an unindexed, unpartitioned 2-million-row table (150-byte rows, uncached at the OS or Oracle buffer level) takes 6 seconds. I think you need to trace the source of your bottleneck before proceeding with partitioned tables, collections, MVs, or whatever. Dumb the problem down by running a simple report on the data rather than a chart. If the report is fast, then you can probably focus on the chart as the bottleneck; there are many threads on slow charts. Also monitor the load on your browser machine to make sure it's not just a rendering issue on your side.
Well, my table has 11 million rows and is growing fast, and I believe I have traced the bottleneck to needing to return lots of rows (~100k) in reports and charts.
Also, I have historical data that needs to be accessed less often. That's why I thought partitioning was a good candidate. Am I right?
It all depends on how efficient your query is. You can have a billion-row table and still get a fraction-of-a-second response if the data is indexed and the number of data blocks to be visited to retrieve it is small. It's all about reducing the number of I/Os needed to find and retrieve your data. Many aspects of the data, statistics, and table/index structure can influence the efficiency of your query. The SQL forum would be a better place to get into query tuning, but if the test below is fast, you can probably focus elsewhere for now. It resolves your full result set and then just counts the rows (to avoid sending 100k rows back to the client); the goal is to get an idea of how long it takes to resolve your result set. Using literals rather than item names in your SQL is fine for this test. Avoid using V() around item names in your SQL.
select count(*) from ( <your-query-goes-here> );
Since you mentioned charts, I think that a materialized view might be the better option.
Or a combination of partitioning (an extra-cost database option) and a materialized view.
Load the data into table partitions. Create a materialized view on top of that which aggregates most of the data, e.g. per-day aggregates.
Build a report on top of that materialized view.
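To sketch the last two steps (all names invented): assuming a daily-aggregate MV such as a hypothetical `readings_daily_mv` already exists over the partitioned base table, a scheduler job keeps it fresh and the APEX report/chart queries the MV instead of the 11M-row table:

```sql
-- Hypothetical: nightly complete refresh ('C') of the aggregate MV.
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'REFRESH_READINGS_DAILY_MV',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''READINGS_DAILY_MV'', ''C''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',
    enabled         => TRUE);
END;
/

-- Chart/report source: roughly one row per day instead of ~100k raw rows.
SELECT reading_day, total_value
FROM   readings_daily_mv
WHERE  reading_day >= TRUNC(SYSDATE) - 30
ORDER  BY reading_day;
```

If the MV is fast-refreshable, replace the `'C'` (complete) refresh with `'F'` (fast) so only the changes logged since the last refresh are applied.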