Here is my scenario:
Windows 2008 R2 Sp1
64 GB RAM
Table DATA_BI, 4 GB >> It's a BI table, updated at night and read heavily during the day.
My initial steps to get faster and, if possible, instantaneous access to this table were:
1 - Created a tablespace with a 16K blocksize to fit this table in larger blocks >>> failed; response time did not improve.
2 - Allocated the table to the KEEP pool, imagining access would benefit and be really fast, since we have enough memory in the KEEP pool to accommodate the whole table.
But I want to know why access did not improve even with the table in the KEEP pool.
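For reference, step 2 above can be sketched roughly like this (the 6G sizing and the warming query are illustrative, not exactly what I ran):

```sql
-- Size the KEEP pool large enough for the whole table (6G is illustrative)
ALTER SYSTEM SET db_keep_cache_size = 6G SCOPE = BOTH;

-- Assign the table to the KEEP buffer pool
ALTER TABLE data_bi STORAGE (BUFFER_POOL KEEP);

-- Warm the pool with a conventional full scan; note that a serial direct
-- path read would bypass the buffer cache entirely
SELECT /*+ FULL(t) */ COUNT(*) FROM data_bi t;
```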
Justin gave me nice tips to test:
"If these are BI-type queries, I'll wager that you're doing some sort of aggregation of the data, right? You're not giving people a BI dashboard that has millions of rows. If that's the case, have you looked at using materialized views, dimension objects, and query rewrite to pre-aggregate the data? That's generally going to be much more productive than worrying about trying to outsmart Oracle's caching strategy. Are you using parallel query in your BI-type queries? If you're not, that's another thing to investigate. If you are, that's going to vastly complicate your ability to read data from memory rather than from disk."
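A hedged sketch of the materialized-view idea Justin mentions (the columns sale_date, region, and amount are invented for illustration; the real DATA_BI columns will differ):

```sql
-- Pre-aggregate once after the nightly load instead of aggregating per query.
-- ENABLE QUERY REWRITE lets the optimizer answer matching GROUP BY queries
-- against DATA_BI from this MV transparently.
CREATE MATERIALIZED VIEW mv_data_bi_daily
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT sale_date,
       region,
       SUM(amount) AS total_amount,
       COUNT(*)    AS row_cnt
FROM   data_bi
GROUP  BY sale_date, region;

-- Refresh as part of the nightly batch ('C' = complete refresh)
EXEC DBMS_MVIEW.REFRESH('MV_DATA_BI_DAILY', 'C');
```

Query rewrite also requires query_rewrite_enabled = TRUE (the default in recent releases) for the session or system.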
I want to hear from you guys, who are much more experienced than me: what steps should I take to get instantaneous access to this table in memory, and how do I ensure I'm reading from that pool?
Thanks a lot guys, really appreciated.
Have you actually looked at the queries that are slow to determine what they're spending time on? Do you know, for example, that the queries are actually spending all their time reading data from disk?
If you don't know what the problem is, it is exceedingly unlikely that any solution you come up with is going to happen to fix whatever problem you actually have. Guessing that it will be faster if the table is in the keep pool is unlikely to help unless you have some evidence that your bottleneck is reading from disk, and some reason to believe that reading from cache is going to be faster (for example, forcing all access to go through the buffer cache means giving up the more efficient direct-path reads, since those bypass the cache).
Pick one query that is slow and post that query (along with the information we'd need to help you which is laid out in the FAQ on posting a tuning request). My wager is that there are better ways to approach the problem than trying to go down the path of forcing the table into a different memory segment.
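One standard way to find out where a slow query actually spends its time is a 10046-style SQL trace; a minimal sketch for the current session (the tracefile identifier is arbitrary):

```sql
-- Tag the trace file so it is easy to find in the diag directory
ALTER SESSION SET tracefile_identifier = 'bi_test';

-- Enable extended SQL trace with wait events for this session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);

-- ... run the slow query here ...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
```

Formatting the resulting trace file with tkprof shows whether the time went to disk reads, CPU, or something else entirely.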
So you created a new thread for this question, but as I mentioned in your other thread (non standard block size tablespaces???), you haven't provided any information at all about what you are even trying to do: the table or tables you are using, the query or queries you are having trouble with, etc., etc., etc.
In the new question please provide the information needed for tuning requests as outlined by the FAQ.
We can't help you if you won't tell us what problem, if any, you are really trying to address.
The essential way to have an instantaneous answer to a DSS query is to have it already calculated at query time. That isn't going to happen by placing the data that needs to be queried anywhere. That happens by calculating it ahead of time. Justin has mentioned several ways, but the root question is whether you know what is wanted and when.
For example, years ago I was asked to fix a particular new custom report that would run for 60 hours and still not finish. I looked at it, and immediately saw someone had made a basic mistake - if memory is an order of magnitude faster than disk, then putting all the data in memory should be faster than reading it from disk, right? Well, no, it depends on how you access the data, and the in-memory procedures all did full scans (actually, stopkey scans for each row!). In this case, it was a report that sliced up two years of monthly data off a denormalized table with way too many indices, and presented it in different ways. So it made much better sense to extract the two years of data into another table, then read the relevant subset of that into memory and play reporting games. This eventually evolved into a batch procedure that would generate hundreds of reports in an hour or so, after a 2-hour extract (including a couple of ETLs to reconcile changes in business definitions over the years). On modern hardware, it is ten times faster. If I were doing it today, I would use analytics and materialized views. But it's still easier to maintain simple scripts than to rewrite the whole thing that runs perfectly well, and it doesn't run any slower with many more years of data.
So, not only do you need to solve the right problem, you need to ask the right questions. The denormalized table was perfectly fine for the original reports it was created for, but not for anything else; it certainly would not have benefited from being kept in memory, and it would have had scaling issues as years more data accumulated.
Check the segment I/O statistics for your DATA_BI table.
If you are performing a lot of direct reads, you will not benefit from a bigger buffer cache or the KEEP pool.
If you are performing a lot of scattered reads, additionally set the table to CACHE so the blocks read by full scans are retained.
If sequential reads really are/were the problem, also pin the involved indexes in the KEEP pool.
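A minimal sketch of checking those segment-level statistics through the standard v$ view (adjust the owner as needed):

```sql
-- Compare direct reads against total physical reads for the segment.
-- A high 'physical reads direct' share means the buffer cache (KEEP pool
-- included) is being bypassed anyway.
SELECT statistic_name, value
FROM   v$segment_statistics
WHERE  owner          = 'YOUR_SCHEMA'   -- assumption: replace with real owner
AND    object_name    = 'DATA_BI'
AND    statistic_name IN ('physical reads',
                          'physical reads direct',
                          'logical reads');
```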
I think you misunderstand the use of the CACHE setting. Per the SQL Language Reference (http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_7002.htm#sthref5306), it indicates that the blocks retrieved for this table are placed at the most recently used end of the least recently used (LRU) list in the buffer cache when a full table scan is performed. This attribute is useful for small lookup tables. I would even question the docs as to whether that is still true; I seem to recall someone saying this placement has changed to the middle, though I wouldn't trust my memory as to whether that was regarding this feature.
You probably only want to put things in the keep pool that are being flushed from the default pool. It might be an eye-opener to look at the bh view and see what is actually where. Beware, there may be misinformation on the tubes about this stuff.
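To see what is actually cached, the supported V$BH view can be counted per object like this (a sketch; note V$BH does not show which buffer pool holds a block, so a per-pool breakdown needs X$BH as SYS):

```sql
-- Count buffered blocks for the table across the whole buffer cache
SELECT o.object_name,
       COUNT(*) AS cached_blocks
FROM   v$bh b
JOIN   dba_objects o
ON     o.data_object_id = b.objd
WHERE  o.object_name = 'DATA_BI'
AND    b.status     != 'free'
GROUP  BY o.object_name;
```

Comparing cached_blocks against the table's total blocks in DBA_SEGMENTS shows how much of it really sits in memory.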
There are a lot of options to move forward with now. I'll try the materialized view first; it seems the best idea to test.
And I'll test every tip you guys advised, thanks a lot for that.
Once I've tested them all, I'll be sure to post the results here....
The materialized view results will be posted ASAP.