  • 15. Re: Oracle direct path read IO size
    Dude! Guru
    As far as I know, iostat only measures physical block I/O operations; it does not account for I/O served from the page cache, the storage controller cache, or the Oracle database buffer cache. I don't think it can be used to measure ASM I/O performance and request sizes.
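    For reference, what iostat does show is the physical requests hitting the block devices, not what ASM or the database thinks it issued. A minimal sketch of reading that (device names are only examples from this thread; on this sysstat version the column is avgrq-sz, reported in 512-byte sectors):

    # Extended statistics for the ASM devices, sampled every 5 seconds
    iostat -x xvdb xvde xvdf 5

    # Convert an avgrq-sz of e.g. 64 sectors to bytes:
    echo $((64 * 512))     # 32768, i.e. 32K per request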
  • 16. Re: Oracle direct path read IO size
    Kirill.Boyko Newbie
    Anyway :) iostat agrees very well with the storage controller statistics; my storage shows the same results as iostat. That means "someone" is issuing these 32K I/Os to the storage. My question is: who is splitting Oracle's 1M I/Os into 32K chunks? The question is serious, as this is a significant penalty on I/O throughput. We could increase backup speed fourfold if the Oracle process were able to read in 512K requests instead of 32K.

    I would proceed with an SR on that.
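    One thing I can check before the SR (just a sketch, using the device names from this thread): as far as I understand, the Linux block layer will not submit a request larger than the per-device queue limits, so comparing those limits against the 1M Oracle reads should show whether the split happens there.

    # Per-device request size ceilings enforced by the block layer (values in KB);
    # max_hw_sectors_kb is the driver/hardware limit, max_sectors_kb the soft limit.
    for d in xvdb xvde xvdf xvdg xvdj xvdk xvdl; do
        echo -n "$d: "
        cat /sys/block/$d/queue/max_hw_sectors_kb /sys/block/$d/queue/max_sectors_kb | xargs
    done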
  • 17. Re: Oracle direct path read IO size
    Dude! Guru
    Thinking about this again, I agree. I wonder if your storage driver is limiting the I/O size. What are you using? You could try the lspci -v command to see the driver name for your storage.
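    For example (the output will differ on a paravirtualized guest, where the disks are not PCI devices; on the physical host you should see the FC HBA and its driver):

    # Show PCI devices verbosely; look for the storage / Fibre Channel controller
    lspci -v
    # Newer pciutils can also print the driver in use per device:
    lspci -k | grep -A 2 -i 'fibre\|scsi\|raid'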
  • 18. Re: Oracle direct path read IO size
    jgarry Guru
    Just speculating here, but are you using the large pool with RMAN? On an older version on a different OS, I think I noticed that using the large pool made for much smoother and larger I/O (though I could be misremembering, as the problem I was solving at the time was OS-specific I/O buffer fragmentation).

    Anyway, it sounds like you have a stripe width vs. AU conflict. Does the ASMCMD volinfo command tell us anything?
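    A quick sketch of what I mean (volinfo only reports ADVM volumes, so lsdg or the v$ views are probably more useful for the AU/stripe question):

    # As the ASM/grid owner with the ASM environment set:
    asmcmd lsdg            # the AU column shows the allocation unit size per disk group
    asmcmd volinfo -a      # only returns data if ADVM volumes exist

    # Or from SQL*Plus against the ASM instance:
    # select name, allocation_unit_size, type from v$asm_diskgroup;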
  • 19. Re: Oracle direct path read IO size
    Kirill.Boyko Newbie
    Well, it could be a conflict, but look:

    [root@lnxru0240vg081 bin]# blockdev --report
    RO RA SSZ BSZ StartSec Size Device
    rw 256 512 4096 0 2396860186624 /dev/xvdb
    rw 256 512 512 32768 1198396539392 /dev/xvdb1
    rw 256 512 512 2340683776 1198396539392 /dev/xvdb2
    rw 256 512 4096 0 3998614552576 /dev/xvdc
    rw 256 512 512 32768 1999273722368 /dev/xvdc1
    rw 256 512 512 3904897024 1999273722368 /dev/xvdc2
    rw 256 512 4096 0 2094870298624 /dev/xvdd
    rw 256 512 512 32768 1047401595392 /dev/xvdd1
    rw 256 512 512 2045771776 1047401595392 /dev/xvdd2
    rw 256 512 4096 0 2396860186624 /dev/xvde
    rw 256 512 512 32768 1198396539392 /dev/xvde1
    rw 256 512 512 2340683776 1198396539392 /dev/xvde2
    rw 256 512 4096 0 2396860186624 /dev/xvdf
    rw 256 512 512 32768 1198396539392 /dev/xvdf1
    rw 256 512 512 2340683776 1198396539392 /dev/xvdf2
    rw 256 512 4096 0 2396860186624 /dev/xvdg
    rw 256 512 512 32768 1198396539392 /dev/xvdg1
    rw 256 512 512 2340683776 1198396539392 /dev/xvdg2
    rw 256 512 4096 0 2396860186624 /dev/xvdj
    rw 256 512 512 32768 1198396539392 /dev/xvdj1
    rw 256 512 512 2340683776 1198396539392 /dev/xvdj2
    rw 256 512 4096 0 2396860186624 /dev/xvdk
    rw 256 512 512 32768 1198396539392 /dev/xvdk1
    rw 256 512 512 2340683776 1198396539392 /dev/xvdk2
    rw 256 512 4096 0 2396860186624 /dev/xvdl
    rw 256 512 512 32768 1198396539392 /dev/xvdl1
    rw 256 512 512 2340683776 1198396539392 /dev/xvdl2
    rw 256 512 4096 0 3998614552576 /dev/xvdm
    rw 256 512 512 32768 1999273722368 /dev/xvdm1
    rw 256 512 512 3904897024 1999273722368 /dev/xvdm2

    The ASM disk group storing the tablespace used in the test query is striped across partitions on /dev/xvdb /dev/xvde /dev/xvdf /dev/xvdg /dev/xvdj /dev/xvdk /dev/xvdl with external redundancy.
    As you can see, the partitions on these LUNs are well aligned on a 16 MB boundary. Our AU size is 1 MB and our storage stripe size is 512 KB.
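    Just to spell out the alignment arithmetic from the blockdev output (my own calculation): the data partitions start at sector 32768 with a 512-byte sector size, which is exactly a 16 MB offset, a clean multiple of both the 1 MB AU and the 512 KB storage stripe:

    echo $((32768 * 512))            # 16777216 bytes = 16 MiB partition offset
    echo $((16777216 / 1048576))     # 16 -> whole multiple of the 1 MiB AU
    echo $((16777216 / 524288))      # 32 -> whole multiple of the 512 KiB stripe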
  • 20. Re: Oracle direct path read IO size
    Kirill.Boyko Newbie
    Dude, we are on a Xen HVM guest with PV drivers (HVM+PV). The physical storage is an IBM V7000 connected through four 8 Gb FC ports. I cannot post graphs here, but here is what is interesting:
    When we do storage calibration with the Orion tool, it shows performance close to what we expect: 1.8 GB/sec.
    We use the following Orion command line and LUN file:
    ./orion_linux_x86-64 -run advanced -testname sas_20121206 -matrix point -num_small 0 -num_large 1 -size_large 512 -num_disks 35 -type seq -num_streamIO 10 -simulate raid0 -cache_size 0 -duration 120 -verbose
    ORION: ORacle IO Numbers -- Version 11.1.0.7.0
    sas_20121206_20130202_1258
    Test will take approximately 3 minutes
    Larger caches may take longer

    Name: /dev/xvdb1     Size: 1198396539392
    Name: /dev/xvde1     Size: 1198396539392
    Name: /dev/xvdf1     Size: 1198396539392
    Name: /dev/xvdg1     Size: 1198396539392
    Name: /dev/xvdj1     Size: 1198396539392
    Name: /dev/xvdk1     Size: 1198396539392
    Name: /dev/xvdl1     Size: 1198396539392
    Name: /dev/xvdb2     Size: 1198396539392
    Name: /dev/xvde2     Size: 1198396539392
    Name: /dev/xvdf2     Size: 1198396539392
    Name: /dev/xvdg2     Size: 1198396539392
    Name: /dev/xvdj2     Size: 1198396539392
    Name: /dev/xvdk2     Size: 1198396539392
    Name: /dev/xvdl2     Size: 1198396539392
    14 FILEs found.

    In fact, when we look at the storage statistics or iostat during these tests (they match perfectly), we see the storage receiving ~40000 requests per second over FC, around 6000 per LUN, which works out to ~40K per request. That confuses me, because we are supposed to be issuing 512K reads.
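    The back-of-the-envelope arithmetic (my own check) agrees with that: 1.8 GB/s spread over ~40000 requests per second is about 45 KB per request, nowhere near the 512K we expect to issue.

    echo $((1800000000 / 40000))          # 45000 bytes per request
    echo $((1800000000 / 40000 / 1024))   # ~43 KiB per request (integer division)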

    With the DB processes the picture is a bit different, but still confusing.

    Regards,
    Kirill
  • 21. Re: Oracle direct path read IO size
    Kirill.Boyko Newbie
    While searching the web, I found an interesting report (I am not sure I can post links here, but anyway):

    http://www.google.com/url?sa=t&rct=j&q=scaleabilities%20emulex&source=web&cd=1&ved=0CDQQFjAA&url=http%3A%2F%2Fwww.implementerslab.com%2Fartifacts%2Ff417a9ef-bb65-41c3-9b0c-26f78a8470a5%2Felx_sis_all_sg_8gfc_oracle.pdf&ei=cd0MUaygPOrd4QT5s4CIBw&usg=AFQjCNGAnhgfyD_BDYpaRxXIJbPLBhYCRQ&sig2=2mRCWKxk6Ibzfsnsg1tmAw&bvm=bv.41867550,d.bGE

    If you look through pages 8-9 you will find:
    This iostat output shows that, although Oracle was making 1MB read calls, the Linux kernel was breaking these calls into multiple smaller calls – in this case, around 350KB in size. This is a function of the Linux block layer, which was operating correctly based on the default configuration of the system. No tuning of the kernel-block layer took place for this testing because, although more throughput could be obtained by eliminating this request fracturing, as long as this phenomenon was the same for both HBA types then the comparison remained valid. Indeed, it reflected the reality that many end-user environments do not tune the block layer at all.

    Seems that James has some information to share :)
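    If we want to see exactly where the 1M calls get fractured on our system, blktrace should show the request sizes as they pass through the block layer. A rough sketch (the device name is just one of the ASM LUNs above):

    # Trace one ASM device while the direct path read / backup is running;
    # blkparse prints queue (Q) and dispatch (D) events with their size in sectors.
    blktrace -d /dev/xvdb -o - | blkparse -i - | head -50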
  • 22. Re: Oracle direct path read IO size
    Dude! Guru
    Can you show the output of /etc/modprobe.conf?
  • 23. Re: Oracle direct path read IO size
    Kirill.Boyko Newbie
    You mean on the VM or on the host?
    On the VM we don't have such a file at all:
    [root@lnxru0240vg081 ~]# cat /etc/modprobe.conf
    cat: /etc/modprobe.conf: No such file or directory

    On the VM host there is nothing special:
    [root@ovsru0240vh021 ~]# cat /etc/modprobe.conf
    alias scsi_hostadapter shpchp
    #alias eth0 bnx2
    #alias eth1 bnx2
    alias eth2 be2net
    alias eth3 be2net
    alias scsi_hostadapter1 megaraid_sas
    alias scsi_hostadapter2 ata_piix
    alias scsi_hostadapter3 lpfc
    alias scsi_hostadapter4 usb-storage
    alias eth0 bnx2
    alias eth1 bnx2
    install lpfc /sbin/modprobe bnx2 ; /sbin/modprobe -i lpfc ; true
  • 24. Re: Oracle direct path read IO size
    Dude! Guru
    Can you provide more information, e.g. the kernel version (uname -r) of the Dom0 and the DomU machine?

    If the virtual machine is on Oracle Linux 6, you should have a directory named /etc/modprobe.d/ instead. The reason I'm asking is that storage drivers such as lpfc have settings that can be customized to affect channel count and queue sizes, and thereby the I/O size.
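    For example, something like this on the Dom0 would list what the loaded lpfc driver exposes (just a sketch; the exact parameter names depend on the driver version):

    # Tunable module parameters the lpfc driver supports:
    modinfo -p lpfc

    # Current values on the running system:
    grep -r . /sys/module/lpfc/parameters/ 2>/dev/null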

    Also I think this post should probably be moved to the Oracle VM forum for closer topic alignment.