This discussion is archived
6 Replies. Latest reply: May 13, 2013 5:53 AM by user11391721

Poor disk I/O throughput on OVM and guest VM

1006550 Newbie
We are using a Dell R720 with an H710 RAID controller.
A weird problem with disk I/O showed up during testing:
the disk throughput on the VM server (OVS repo) and in the VM guests is only about 1/3 of the physical server, at roughly 50 MB/s.


We tried to compare disk write throughput with a simple dd command in the following environments on the same hardware
(H710, 8 x 2 TB 7200 rpm drives configured as RAID 10 with two virtual disks, the first for OVM itself and the second for the OVS repo).
The test cases are:

1. CentOS 6 installed directly on the bare metal
2. OVM 3.2.1, /dev/sda
3. OVM 3.2.1, /dev/mapper/36848f690ec834b0018df77d30704a452 (used by the OVS repo)
4. Guest VM, Oracle Linux 6.0 (PVM) on OVM 3.2.1 (using the local physical disk as the OVS repository)

The results are:

1. CentOS 6 installed directly on the bare metal
dd if=/dev/zero of=~/a.out bs=16k count=20000 oflag=direct
20000+0 records in
20000+0 records out
327680000 bytes (328 MB) copied, 1.85022 s, 177 MB/s

2. OVM 3.2.1, /dev/sda
dd if=/dev/zero of=~/a.out bs=16k count=20000 oflag=direct
20000+0 records in
20000+0 records out
327680000 bytes (328 MB) copied, 2.5659 seconds, 128 MB/s

3. OVM 3.2.1, /dev/mapper/36848f690ec834b0018df77d30704a452
dd if=/dev/zero of=/OVS/Repositories/0004fb00000300000102bf00d2adeb8a/a.out bs=16k count=10000 oflag=direct
10000+0 records in
10000+0 records out
163840000 bytes (164 MB) copied, 3.00933 seconds, 54.4 MB/s

4. Guest VM, Oracle Linux 6.0 (PVM) on OVM 3.2.1
dd if=/dev/zero of=~/a.out bs=16k count=20000 oflag=direct
20000+0 records in
20000+0 records out
327680000 bytes (328 MB) copied, 7.60928 s, 43.1 MB/s

We can see that on bare-metal CentOS 6 we got reasonable disk I/O, and on /dev/sda on the OVM server the figure is lower but still acceptable.
But on the OVS repo device and inside the guest VM, we see a big drop.

An interesting observation: I have 3 guest VMs on this VM server, and when I run the same dd command simultaneously on all 3 guests, each one still
gets about the same figure. Added together, that is 120+ MB/s of throughput.
My guess is that OVS puts an I/O cap on the repository and on each VM so that no single VM can drain all of the disk I/O,
in order to preserve some for the other VMs. Is that the case?
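
As a further check (the block size and count here are just an example), a larger-block direct write can help separate per-request overhead from a hard bandwidth cap:

dd if=/dev/zero of=~/a.out bs=1M count=320 oflag=direct

If the large-block figure climbs well above 50 MB/s, the limit is more likely per-I/O overhead in the virtualization/OCFS2 path than a fixed throughput ceiling.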

I plan to run MongoDB on a guest VM if its I/O can get close to the physical machine. Could anyone please help me improve the disk I/O?

thanks a lot!

Edited by: user12945979 on 2013-5-1 9:21 AM

Edited by: user12945979 on 2013-5-3 5:10 AM

Edited by: user12945979 on 2013-5-3 5:14 AM
  • 1. Re: Poor disk I/O on OVM and guest VM
    budachst Pro
    As soon as it comes to database usage, raw throughput will not be your problem; latency will. If you want to benchmark your storage in that regard, I'd suggest fio.
    You will already give up some latency due to the OCFS2 filesystem that your pools run on, so you might have to pony up for some more capable drives than your 2 TB SATA disks.

    Of course, it also depends on the load you are expecting from your applications.
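
    For example, a minimal latency-oriented fio run could look like the following (the file path and sizes are only placeholders; adjust them to your repo):

    fio -name=lat-test -filename=/srv/fio.test -direct=1 -ioengine=libaio -rw=randread -bs=4k -iodepth=1 -size=1g -runtime=60 -time_based -group_reporting

    With iodepth=1, the completion latency (clat) figures fio reports reflect single-request service time, which is what a database will mostly feel.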
  • 2. Re: Poor disk I/O on OVM and guest VM
    1006550 Newbie
    So the throughput dropping to 1/3 on the OVS repo and in the VMs is common?
    I tried it on the VM with MongoDB installed and performed an insert test.
    Performance also dropped to around 1/3, so is it not OK to run disk-I/O-intensive apps on a VM server?
  • 3. Re: Poor disk I/O on OVM and guest VM
    budachst Pro
    If the virtual disk hosted on the storage repo is too slow, you still have the option of using something like iSCSI for it. The performance penalty you get when using the OCFS2 storage repos really is significant.

    I have achieved throughput of 90 MB/s on my SR inside my guests, but I have dedicated storage for that (FC and iSCSI), so I can't say anything about the H710's performance. Note also that I have benchmarked my OCFS2 SRs with fio, which you really should do first, since you won't get good performance if you've got high-latency storage.
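
    If you go the iSCSI route, the rough idea inside the guest would be something like the sketch below (assuming the open-iscsi initiator tools are installed and a LUN has already been exported for the guest; the portal address and IQN are placeholders):

    iscsiadm -m discovery -t sendtargets -p 192.168.1.100
    iscsiadm -m node -T iqn.2013-05.example:guest-lun -p 192.168.1.100 --login

    The LUN then appears as a plain block device in the guest and bypasses the OCFS2 repository entirely.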
  • 4. Re: Poor disk I/O on OVM and guest VM
    1006550 Newbie
    Thanks for your advice.
    I ran the tests below against the physical server (CentOS 6) and the Oracle VM guest (OL6, PVM).

    fio -filename=/srv/test.out -direct=1 -rw=randwrite -bs=4k -size=2g -numjobs=8 -runtime=60 -group_reporting -name=test
    fio -filename=/srv/test.out -direct=1 -rw=randread -bs=4k -size=2g -numjobs=8 -runtime=60 -group_reporting -name=test
    fio -filename=/srv/test.out -direct=1 -rw=randrw -bs=4k -size=2g -numjobs=8 -runtime=60 -group_reporting -name=test
    fio -filename=/srv/test.out -direct=1 -rw=read -bs=4k -size=2g -numjobs=8 -runtime=60 -group_reporting -name=test
    fio -filename=/srv/test.out -direct=1 -rw=write -bs=4k -size=2g -numjobs=8 -runtime=60 -group_reporting -name=test

    fio version is 2.0.13

    The results show that random read on the VM guest reaches about 60% of the physical server, and random write about 90%.
    But sequential read reaches only about 42%, and sequential write about 35%.

    I am not sure whether such a big drop in sequential read/write performance is expected. Maybe I did something wrong?

    Test (4k, 60s)   Metric               Physical      VM guest
    randwrite        write IOPS           2385          2114
                     bandwidth (KB/s)     9542          8456
    randread         read IOPS            1455          838
                     bandwidth (KB/s)     5822          3352
    randrw           read/write IOPS      791/794       569/565
                     bandwidth (KB/s)     3167/3176     2278/2260
    write            write IOPS           14465         5114
                     bandwidth (KB/s)     57864         20457
    read             read IOPS            48870         20985
                     bandwidth (KB/s)     195483        83940

    Edited by: user12945979 on 2013-5-3 5:16 AM
  • 5. Re: Poor disk I/O on OVM and guest VM
    budachst Pro
    Can you redo these tests with -ioengine=libaio set? That should give significantly better results. I just checked on my "self-made" hybrid storage, which is connected via 1 GbE, and I get between 5k and 7k IOPS, depending on the type of operation.

    If I do some raw testing using dd with oflag=direct, I am getting approx. 75 MB/s, once the hybrid storage has shuffled the new hot blocks around a bit.
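
    For instance, the random write test from above rerun with libaio and a deeper queue could look like this (the iodepth value is just an example):

    fio -filename=/srv/test.out -direct=1 -ioengine=libaio -iodepth=16 -rw=randwrite -bs=4k -size=2g -numjobs=8 -runtime=60 -group_reporting -name=test

    Compared with the default synchronous engine, libaio with a higher iodepth keeps more requests in flight, which usually lifts IOPS noticeably on direct I/O.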
  • 6. Re: Poor disk I/O on OVM and guest VM
    user11391721 Newbie
    budachst wrote:
    If the virtual disk hosted on the storage repo is too slow, you still have the option of using something like iSCSI for it. The performance penalty you get when using the OCFS2 storage repos really is significant.

    I have achieved throughput of 90 MB/s on my SR inside my guests, but I have dedicated storage for that (FC and iSCSI), so I can't say anything about the H710's performance. Note also that I have benchmarked my OCFS2 SRs with fio, which you really should do first, since you won't get good performance if you've got high-latency storage.
    Our environment is Fibre Channel, and currently all of our repos are OCFS2.

    How are your FC repositories set up to give the best performance?
