Can you share a bit about your workload? What is the impact of having slow reads? Are you maxing out the I/O capacity of the hard drives?
And have you considered a min_latency target? It will trade some throughput but it might be acceptable if you need your small reads to be consistently fast. Another option to consider would be more active flash cache management to help get your latency-sensitive (and hopefully repeatable) reads into cache.
Did you try to check "cell single block physical read" in the AWR wait event histogram section? If I remember correctly, good baselines for this wait event are from 0.01 to 0.1 ms. Please check AWR and post it here.
At the other site we tracked down an instance of a query exceeding its expected SLA. It basically did an index lookup for a goodly number of rows, and while looking up one of the rows it waited over a second for a single block read to complete. That pushed the whole operation over the SLA. What we are trying to understand is whether the variances we are seeing are expected and typical, or atypical. As I remarked, at two sites with X2-2 Exadata racks using high performance disks we have observed similar behavior. Oracle support was of the opinion that the disks were not at 100% utilization when the issue occurred. Though we have a good flash cache hit ratio we can never guarantee a 100% hit rate, so our issue boils down to predictability. If Exadata single block reads truly are that variable we could go back to the business and renegotiate for a different SLA, but no one has given us a definitive answer as to the expected wait histogram of cell single block physical reads.
That's why I was asking if this forum could gather additional data points.
I really hope you and everyone else here could give it a shot... run the cellcli query and check the output over your existing metrichistory.....
Hope you can do that for me .... pretty please :)
If others experience the same as we are, then it would at least give everyone a heads-up on what to expect.
Hi there, the problem here is that it tends to get overwhelmed since the number of occurrences is low; the problem is that each occurrence will cause an SLA miss. So we actually found the issue by looking at dba_hist views rather than AWR, trying to find out why a particular query from a particular session ran slow.
We then correlated via cellcli to find the same.
select WAIT_TIME_MILLI, WAIT_COUNT from v$event_histogram where event='cell single block physical read'
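For the historical side, this is a sketch of the kind of dba_hist query that can surface those outliers after the fact (a sketch only: the 1024 ms cutoff is just an example, and DBA_HIST_EVENT_HISTOGRAM wait counts are cumulative since instance startup, so you have to diff consecutive snapshots rather than read them raw):

```sql
-- Sketch: how many waits landed in the >= ~1s histogram buckets
-- during each AWR snapshot interval. WAIT_COUNT is cumulative, so
-- take the delta between consecutive snapshots with LAG.
select snap_id,
       wait_time_milli,
       wait_count - lag(wait_count) over
         (partition by wait_time_milli order by snap_id) as waits_in_interval
  from dba_hist_event_histogram
 where event_name = 'cell single block physical read'
   and wait_time_milli >= 1024
 order by snap_id, wait_time_milli;
```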
It looks like you only had two moments where some grid disk had a high response time.
2012-04-07T12:36
Both on the same host "exass01" (seems like a non-default cell name).
Did you check the node to discard some hardware failure?
The first one was on exass03 ... and this is the default naming as per what ACS generated for us ... seems "ss" stands for storage server instead of "cel", which they used to use ...
Are there overlapping Smart Scans in flight?
Hi robinsc,
Apologies for the late reply.
Here are some numbers from yesterday on a production system that tends to saturate I/O frequently. This system doesn't have IORM in use at all, and we can see that >1s response times do exist.
$ dcli -g cell_group -l cellmonitor cellcli -e "list metrichistory where metricvalue '>500000' and Name like 'CD_IO_TM_R_SM_RQ'" > cellcli.out
$ cat cellcli.out | grep 2012-05-29T | tr -d , | sort -nk4 | tail
dm1c06: CD_IO_TM_R_SM_RQ CD_00_dm1c06 2848684 us/request 2012-05-29T21:56:25+00:00
dm1c08: CD_IO_TM_R_SM_RQ CD_00_dm1c08 3032456 us/request 2012-05-29T19:17:41+00:00
dm1c09: CD_IO_TM_R_SM_RQ CD_01_dm1c09 3070021 us/request 2012-05-29T04:23:11+00:00
dm1c07: CD_IO_TM_R_SM_RQ CD_00_dm1c07 3166033 us/request 2012-05-29T18:56:32+00:00
dm1c11: CD_IO_TM_R_SM_RQ CD_00_dm1c11 3213968 us/request 2012-05-29T02:26:13+00:00
dm1c07: CD_IO_TM_R_SM_RQ CD_00_dm1c07 3407311 us/request 2012-05-29T04:31:20+00:00
dm1c05: CD_IO_TM_R_SM_RQ CD_00_dm1c05 3642966 us/request 2012-05-29T16:30:08+00:00
dm1c10: CD_IO_TM_R_SM_RQ CD_01_dm1c10 3701307 us/request 2012-05-29T03:32:23+00:00
dm1c07: CD_IO_TM_R_SM_RQ CD_01_dm1c07 3843500 us/request 2012-05-29T23:01:37+00:00
dm1c10: CD_IO_TM_R_SM_RQ CD_01_dm1c10 4809542 us/request 2012-05-29T02:31:22+00:00
SQL> select WAIT_TIME_MILLI, WAIT_COUNT from v$event_histogram where event='cell single block physical read';
So while there aren't many of them, slow single-block read requests do exist on this system.
If you have a strict per-query response time SLA though, I'd highly recommend trying out a MIN_LATENCY objective and seeing if you can tolerate the throughput impact.
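For reference, setting the objective is a one-liner on each cell. A sketch only: on the cell software versions I've seen, the keyword is low_latency rather than MIN_LATENCY, so check the CellCLI syntax for your version before running it.

```
CellCLI> ALTER IORMPLAN objective = low_latency
CellCLI> LIST IORMPLAN detail
```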
I got an answer back from Oracle support. Even their test system has slow response times, so it seems that setting an alert on this metric is a recipe for late-night emergency panic calls.
We do seem to be getting more even response times with the IORM objective set to auto.
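In case anyone wants to compare across a whole rack, something like this should confirm what each cell is currently running (a sketch; the celladmin user and the cell_group file are assumptions from our environment):

```
$ dcli -g cell_group -l celladmin cellcli -e "list iormplan attributes name, objective"
```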