Let me try to answer that one:
I’ve also read that Exadata tags each DB I/O with metadata indicating the I/O type, which influences caching. Can somebody shed light on what the various I/O types are and which ones are cached?
At least as important as knowing what to cache is knowing what doesn't make sense to cache. This is where iDB kicks in.
Because the storage is aware of the kind of I/O (and of its origin) through iDB, it will always cache e.g.
Headers of Datafiles
because those are needed all the time.
It will not cache e.g.
Secondary ASM Extents
because these are unlikely to be read again. A dumb storage would cache them regardless :-)
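The idea above can be sketched as a simple admission check keyed on the I/O type tag. This is purely illustrative: the real admission logic lives inside the Exadata cell software, and the type names below are hypothetical labels for the examples mentioned in this thread, not actual iDB identifiers.

```python
# Hypothetical sketch of type-aware cache admission (not Exadata code).
# The I/O types and their cache-worthiness reflect the examples above:
# datafile headers are read constantly, secondary ASM extents are mirror
# copies that are unlikely to be read again.
CACHE_WORTHY = {
    "datafile_header": True,       # needed all the time -> always cache
    "controlfile_io": True,        # frequently re-read
    "single_block_read": True,     # typical index/table access
    "secondary_asm_extent": False, # mirror copy, rarely read back
}

def should_cache(io_type: str) -> bool:
    """Return True if the I/O's metadata tag marks it as worth caching."""
    # A "dumb" storage array, lacking the tag, would cache everything.
    return CACHE_WORTHY.get(io_type, False)
```

A dumb storage is the degenerate case where `should_cache` returns True unconditionally.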
"Don't believe it, test it!"
user12254641 wrote:There are really two parts to caching: 1) what to cache and 2) the management of cached data.
Thanks for your input.
I see that it knows what not to cache.
However, I was wondering whether it has any special algorithm for identifying frequently accessed data.
It seems it will cache all random reads/writes and, of course, file headers and control files.
Assuming cell_flash_cache = default for these segments (index/table), any single-block operation is cached. Also, IIRC, multi-block reads (but not smart scan multi-block reads) are cached as well, up to a certain size (64K or 128K are numbers that come to mind, but I don't have the code open to check at this moment).
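The admission rule described above can be sketched as follows. Note the 128K threshold is the poster's recollection rather than a documented constant, and the function and parameter names are made up for illustration.

```python
# Hypothetical sketch of the flash cache admission rule described above.
# The threshold value is the poster's "IIRC" figure, not a verified constant.
MULTIBLOCK_CACHE_LIMIT = 128 * 1024  # bytes

def admit(io_kind: str, size_bytes: int, cell_flash_cache: str = "default") -> bool:
    """Decide whether a read should be written into the flash cache."""
    if cell_flash_cache == "none":
        return False                 # segment explicitly excluded
    if io_kind == "single_block":
        return True                  # single-block I/O is always cached
    if io_kind == "multiblock":
        return size_bytes <= MULTIBLOCK_CACHE_LIMIT
    return False                     # smart scan reads bypass the cache
```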
The cache management is basically LRU based, like the db buffer cache -- frequently accessed data is kept on the hot end, and data is aged out on the cold end and replaced.
While I'm sure "algorithms" are interesting for the geek in us, I'm not sure how this info aids in any management or decision making. Questions are generally better formulated by discussing what you would like to do so that appropriate advice can be given. Your question is context-free.
Greg Rahn | blog | twitter | linkedin