This discussion is archived.
26 Replies. Latest reply: Jun 27, 2012 11:17 AM by user157995

NFS disk performance after upgrade to 3.1.1

user796835 Newbie
Hello,
after upgrading OVS from 3.0.3 to 3.1.1, I noticed performance problems on virtual disks placed on an NFS repository. Before the upgrade, on 3.0.3, I could read from the xvda disk at around 60 MB/s; after the upgrade to 3.1.1 it falls to around 1.5 MB/s with a 1 MB block size:

dd if=/dev/xvda of=/dev/null bs=1024k count=1000
^C106+0 records in
105+0 records out
110100480 bytes (110 MB) copied, 79.0509 seconds, 1.4 MB/s

The repository is on an NFS share attached through a dedicated 1 Gbit/s Ethernet network with MTU=8900. The same configuration was used before the upgrade; the only change was the upgrade from 3.0.3 to 3.1.1.
Test machines are OEL5 with the latest UEK kernels, running in PVM mode.
The repository on 3.1.1 is mounted without additional NFS options such as rsize, wsize, tcp or proto=3:
192.168.100.10:/mnt/nfs_storage_pool on /OVS/Repositories/0004fb0000030000a75ccd9ef5a238c3 type nfs (rw,addr=192.168.100.10)
I haven't found a way to change that, but I don't know whether it could cause the performance issues.

Any idea why there is such a large performance decrease with 3.1.1, from 60 to 1.5 MB/s?
Thanks.
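For anyone reproducing this, the measurement above can be wrapped in a tiny helper so the same test runs against xvda, hda or sda. This is a minimal sketch; the device path and repository path in the comments are the ones from this thread and will differ on your system:

```shell
# Minimal sketch of the sequential-read test used in this thread.
# Takes a device or file path; prints dd's throughput summary line.
read_speed() {
    dd if="$1" of=/dev/null bs=1024k count=1000 2>&1 | tail -n 1
}

# Inside the guest, for example:
#   read_speed /dev/xvda
# And to see the effective NFS mount options on the OVS server:
#   grep /OVS/Repositories /proc/mounts
```

Note that repeat runs on the same device can be served from the guest page cache, so the first run is the meaningful one.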
  • 1. Re: NFS disk performance after upgrade to 3.1.1
    jurajl Newbie
    Well, my tests also show a huge slowdown in disk performance when using NFS repositories.
    I tried hdparm -tT /dev/xvda from within the virtual machines (they have repositories on the same NAS; however, separate NFS shares are used as repositories for 3.1.1 and 3.0.3).
    This is what I ended up with:
    OVM3.1.1 : Timing buffered disk reads: 6 MB in 4.14 seconds = 1.45 MB/sec
    OVM3.0.3 : Timing buffered disk reads: 140 MB in 3.02 seconds = 46.32 MB/sec

    Could this be caused by the NFS late-locking mechanism introduced in 3.1?
    Any known way to fix this?
  • 2. Re: NFS disk performance after upgrade to 3.1.1
    user157995 Explorer
    I too am witnessing this HUGE slowdown... we have a pretty loaded Oracle 10g server that resides on an NFS repo, and it's crawling. I am opening an SR1 with Oracle now, because that's a huge issue. In our case we also notice the dom0's load averages running very high because of this, along with my NFS guest running at 60-90% iowait.
  • 3. Re: NFS disk performance after upgrade to 3.1.1
    user157995 Explorer
    No dice yet, Oracle has not responded... Has anyone else had any luck figuring anything out?

    My NFS mounts are identical between 3.0.3 and 3.1.1, so I don't think that matters:

    3.0.3: 10.10.1.4:/coraid/nfs01 on /OVS/Repositories/0004fb0000030000113d5ca8bb4766e7 type nfs (rw,addr=10.10.1.4)
    3.1.1: 10.10.1.4:/coraid/nfs01 on /OVS/Repositories/0004fb0000030000113d5ca8bb4766e7 type nfs (rw,addr=10.10.1.4)
  • 4. Re: NFS disk performance after upgrade to 3.1.1
    budachst Journeyer
    Oracle Support didn't respond to an SR1? Phew… I have noticed very slow feedback from Oracle's OVM support on SRs, but I'd imagine that they respond to an SR1 in time.

    But to post something useful: I have just checked the NFS speed from one of my VM servers to an NFS share hosted on one of my Solaris boxes, and the throughput seems pretty reasonable:

    [root@oraclevms01 OrcleVM]# dd if=OVM_EL5U5_X86_PVM_10GB.tar of=/dev/null
    3715540+0 records in
    3715540+0 records out
    1902356480 bytes (1.9 GB) copied, 22.7916 seconds, 83.5 MB/s

    I haven't done anything special to either the NFS mount or the NFS export on my Solaris box. All I can think of is some driver issue, maybe?
  • 5. Re: NFS disk performance after upgrade to 3.1.1
    user796835 Newbie
    The speed of an NFS share mounted inside the VM is not the problem; I also get around 80 MB/s when reading from an NFS share mounted inside a virtual machine.
    The problem is when the virtual disk /dev/xvda is on an NFS-based storage repository.
    Reading the disk image directly from the OVS server is also fast:

    [root@acs-ovm3 ~]# dd if=/OVS/Repositories/0004fb0000030000a75ccd9ef5a238c3/VirtualDisks/0004fb00001200009da8bd2fd1dcef22.img of=/dev/null bs=1024k count=1000
    1000+0 records in
    1000+0 records out
    1048576000 bytes (1.0 GB) copied, 13.4906 seconds, 77.7 MB/s

    But when this image is attached to a virtual machine as virtual disk /dev/xvd....., reading gets slow, 1.5 MB/s. And only on OVS 3.1.1.
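One way to narrow this down further (a suggested diagnostic, not something from the thread) is to sweep block sizes inside the guest and see whether the slowdown scales with request size, which would point at per-request overhead in the PV block path:

```shell
# Sketch: read from a device/file at several block sizes and print
# dd's throughput summary line for each.
bs_sweep() {
    for bs in 4k 64k 1024k; do
        printf '%-6s ' "$bs"
        dd if="$1" of=/dev/null bs="$bs" count=100 2>&1 | tail -n 1
    done
}

# e.g. inside the guest:  bs_sweep /dev/xvda
```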
  • 6. Re: NFS disk performance after upgrade to 3.1.1
    budachst Journeyer
    I see, sorry, I missed that. I am using FC storage for my vdisks, so to test that I'd have to set up an NFS SR, and I'd rather not do that.
  • 7. Re: NFS disk performance after upgrade to 3.1.1
    user157995 Explorer
    I actually had a change of heart and opened an SR2, which has a history of either getting traction within a few hours or sitting for a week. I hope it's the former.
  • 8. Re: NFS disk performance after upgrade to 3.1.1
    user157995 Explorer
    Gave in; it's now an SR1 and was promptly picked up. However, it seems this might be an unknown issue, because it is being kicked over to development. I will post any relevant info once (if) Oracle resolves it for me.
  • 9. Re: NFS disk performance after upgrade to 3.1.1
    user796835 Newbie
    Can you give me the bug#, so I can also check progress?
    I did more tests. When I boot the Red Hat compatible 2.6.18 kernel on my OEL5 in HVM mode, I get the same virtual disk under two devices, hda and xvda. Reading xvda is still slow, but reading hda is fast, over 60 MB/s as it was before the upgrade. Checked with hdparm -tT /dev/.... Don't know why.
  • 10. Re: NFS disk performance after upgrade to 3.1.1
    user796835 Newbie
    Dave, I see that you still have no bug logged for that problem.
    Maybe I have found a workaround. Only disks attached through the PV drivers have the problem; emulated sdXX disks perform fine. The problem was how to enable only emulated disks and not PV disks, because even in HVM mode the PV drivers are used, not the emulated ones.
    But after setting HVM mode for the virtual machine and adding xen_platform_pci=0 to the vm.cfg config file, I now see only emulated sdXX disks with the UEK kernel (and of course also an emulated NIC). Reading sda is fast in my case.
    I don't know if this is supported (only Oracle support can tell that), but you may try it on a test system.
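For reference, a minimal sketch of the vm.cfg change described above. Only the xen_platform_pci line is the workaround itself; everything else in a real vm.cfg should stay as generated by OVM Manager:

```python
# vm.cfg fragment (vm.cfg uses Python-style assignments)
builder = 'hvm'        # guest must run in HVM mode for this to apply
xen_platform_pci = 0   # hide the Xen platform PCI device, so the guest
                       # cannot attach PV drivers and falls back to
                       # emulated (sdXX) disks and an emulated NIC
```

As the post notes, this also disables the PV network interface, so network throughput drops to emulated-device levels as well.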
  • 11. Re: NFS disk performance after upgrade to 3.1.1
    user157995 Explorer
    Yes, this is what Oracle support currently suspects: an issue with PVM guests. I have a lengthy test script to run for them to gather data, and we will be filing a bug.
  • 12. Re: NFS disk performance after upgrade to 3.1.1
    JimRussell Newbie
    I've got a SuSE VM which has both HVM and PVM disks (HVM with PV drivers). When I run dd using the HVM device, it's normal speed. With the PVM device it's very slow.

    dd if=/dev/sda of=/dev/null bs=1024k count=1000
    1000+0 records in
    1000+0 records out
    1048576000 bytes (1.0 GB) copied, 10.7715 s, 97.3 MB/s

    dd if=/dev/xvda of=/dev/null bs=1024k count=1000
    1000+0 records in
    1000+0 records out
    1048576000 bytes (1.0 GB) copied, 224.24 s, 4.7 MB/s
  • 13. Re: NFS disk performance after upgrade to 3.1.1
    user157995 Explorer
    Yes, same outcome here. Oracle is aware of the issue and stated they have reproduced it internally, but it is not resolved yet.

    I imagine this is going to become a much larger problem once more of the 3.0 customers start upgrading... I really hope they find a solution, because even on our rather small NFS SR deployment it's rendering Oracle DB servers useless.
  • 14. Re: NFS disk performance after upgrade to 3.1.1
    884343 Newbie
    Hi,

    Please check the kernel in the virtual machine!
    The grub configuration must boot the el5xen kernel.
    Also verify that modprobe.conf contains:
    alias scsi_hostadapter xenblk
    alias eth0 xennet
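    A small check for the items this post lists, assuming the stock OEL5 file locations:

    ```shell
    # Verify the PV aliases from the post are present in a modprobe.conf.
    check_pv_aliases() {
        grep -q 'alias scsi_hostadapter xenblk' "$1" &&
        grep -q 'alias eth0 xennet' "$1"
    }

    # Inside the guest, for example:
    #   uname -r | grep el5xen       # running kernel should be the xen flavour
    #   check_pv_aliases /etc/modprobe.conf && echo "PV aliases present"
    ```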