So here I have this very simple table (not designed in the best way, in my opinion, but that's a different topic).
SQL> describe batch_log
 Name         Null?     Type
 ------------ --------- ---------------
 REQUEST_ID   NOT NULL  NUMBER(7)
 LINE_NO      NOT NULL  NUMBER
 TEXT                   VARCHAR2(4000)
 LOG_DATE               DATE
This table has a composite index on the (REQUEST_ID, LINE_NO) columns.
When I run Q1, the result comes back almost instantly. When I run the same query with a different REQUEST_ID (Q2), it takes 5+ minutes.
Q1: SELECT REQUEST_ID AS "REQUEST_ID", LINE_NO AS "LINE_NO", TEXT AS "TEXT", LOG_DATE AS "LOG_DATE"
      FROM BATCH_LOG
     WHERE REQUEST_ID = 5061
     ORDER BY line_no;

Q2: SELECT REQUEST_ID AS "REQUEST_ID", LINE_NO AS "LINE_NO", TEXT AS "TEXT", LOG_DATE AS "LOG_DATE"
      FROM BATCH_LOG
     WHERE REQUEST_ID = 5060
     ORDER BY line_no;
The execution plans look exactly the same for both queries, with the same estimated number of rows and CPU cost. I ran both with AUTOTRACE and also gathered fresh statistics on that particular table, but I still don't understand why the query for REQUEST_ID=5060 does millions of physical reads from disk.
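For reference, the stats-gathering call was along these lines (schema name is a placeholder, and the exact parameters may differ from what your environment needs):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MY_SCHEMA',                  -- placeholder schema
    tabname          => 'BATCH_LOG',
    cascade          => TRUE,                         -- also refresh index stats
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',  -- let Oracle decide histograms
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
  );
END;
/
```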
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("REQUEST_ID"=5060)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
2465841 consistent gets
2465833 physical reads
0 redo size
980 bytes sent via SQL*Net to client
516 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
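To rule out a simple data-skew difference between the two IDs, a quick comparison like this might help (I'm not certain it explains the physical reads, since AUTOTRACE shows only 2 rows processed for 5060):

```sql
-- Compare how many rows each REQUEST_ID actually has
SELECT request_id, COUNT(*) AS row_count
  FROM batch_log
 WHERE request_id IN (5060, 5061)
 GROUP BY request_id;
```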
Could it be that DB_CACHE_SIZE is undersized (400M)? I do think the SGA (2G) and PGA (2G) are tremendously undersized in general, but I'm new to the company and I'm trying to push changes as quickly (and "politely") as I can.
Thoughts on an undersized DB_CACHE_SIZE causing this problem?
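One way to sanity-check the buffer cache question before pushing for a change would be Oracle's own cache advisory view, something like the following (requires DB_CACHE_ADVICE to be ON and access to V$ views; block-size filter is an assumption that only the default block size is in use):

```sql
-- Estimated physical reads at various simulated buffer cache sizes
SELECT size_for_estimate,    -- simulated cache size in MB
       size_factor,          -- relative to the current size (1.0 = current)
       estd_physical_reads   -- estimated physical reads at that size
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = (SELECT value
                       FROM v$parameter
                      WHERE name = 'db_block_size')
 ORDER BY size_for_estimate;
```

If ESTD_PHYSICAL_READS drops sharply at larger sizes, that would support the undersized-cache theory; if it stays flat, the problem is probably elsewhere.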