I know I must have something configured wrong but I can't figure it out.
I have tested disk write access on my VM servers in V2.2 and V3.1, using these commands:
hdparm -tT /dev/mapper/<lun>
dd if=/dev/zero of=output.img bs=8k count=256k
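One caveat I'm not sure about (so treat this as a guess, not a diagnosis): the dd command above doesn't force a flush, so the guest page cache could be inflating or distorting the numbers. A variant that syncs before dd reports throughput, scaled down here to an 8 MB write just for illustration, would be:

```shell
# Write test that flushes data to disk before dd reports throughput;
# without conv=fdatasync the figure can mostly reflect the page cache
# rather than the actual storage path.
# Sizes are scaled down here (8k * 1k = 8 MB) purely for illustration.
dd if=/dev/zero of=output.img bs=8k count=1k conv=fdatasync

# Record the size of what was written, then clean up the test file.
size=$(stat -c %s output.img)
rm -f output.img
```

An oflag=direct variant (bypassing the page cache entirely) would be another way to cross-check, though direct I/O needs aligned block sizes and isn't supported on every filesystem.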
The output is very close between the V2.2 and V3.1 servers, so that's good.
But when I execute those commands on the actual VMs themselves within the 2.2 and 3.1 environments, I get different results.
The hdparm command is still equivalent between the two.
But the dd command shows my old V2.2 VMs getting around 650 MB/s while my new V3.1 VMs are getting between 8 and 30 MB/s. Does anyone have an idea off the top of their head why this would be?
Oh, I just tried one more thing... most of my V3.1 VMs were converted from V2.2 via template import. I just tried the same commands on a couple of new VMs I made in V3.1 (from OL6-64bit templates) and they returned 360 MB/s and 750 MB/s. So I wonder what's wrong with the VMs that I converted from V2.2?
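One thing I'm going to check next, purely as a guess: whether the converted VMs ended up with emulated disks instead of paravirtualized ones, since Oracle VM is Xen-based. A quick look inside each guest would be something like:

```shell
# Inside a guest: list the block devices. On a Xen-based host,
# paravirtualized disks typically show up as xvda/xvdb..., while
# emulated disks appear as hda/sda. (This is just a hunch about
# the converted VMs, not something I've confirmed yet.)
ls /sys/block

# The I/O scheduler on each disk can also differ between old and
# new VMs, which might be worth comparing:
cat /sys/block/*/queue/scheduler 2>/dev/null || true
```

If the converted guests show hd*/sd* and the fresh OL6 guests show xvd*, that would at least explain why only the imports are slow.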
We're having exactly the same problem as you appear to have experienced. Very slow write I/O in guest VMs (domU) but fast I/O on the host on the same iSCSI file system.
Write I/O inside the guest is between 3 and 20 MB/sec (dd bs=2048k count=512), whereas on the host it's 95 MB/sec, which is hitting the practical limits of our GigEth iSCSI SAN.
I've checked inside the guests and can see no sign in dmesg of write caches being disabled. Curiously, we hit this same wall on both a RHEL6.3 and a Solaris10U10 guest.
I've tried both sparse and non-sparse files and the performance is the same. Read performance is fine, but this write bottleneck is a showstopper. Would appreciate any assistance you guys might have while I await a response from Oracle...
- Oracle VM 3.1.1 update 485
- Sun Fire X4150 Server
- Sun Storagetek 2510 iSCSI SAN
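For reference, the guest-side test behind the numbers above is along these lines. The conv=fdatasync flag is my addition here (I haven't re-run with it yet) so that guest caching can't skew the comparison with the host either way; sizes are scaled down from the real 1 GB run (count=512) just for this snippet:

```shell
# Guest-side write test matching the figures above (real run used
# bs=2048k count=512, i.e. about 1 GB); scaled down to count=4 (8 MB)
# here. conv=fdatasync forces a flush before dd reports, so the guest
# page cache can't inflate or hide the write rate.
dd if=/dev/zero of=ddtest.img bs=2048k count=4 conv=fdatasync

# Capture the written size, then remove the test file.
size=$(stat -c %s ddtest.img)
rm -f ddtest.img
```

Running the identical command on the host (dom0) against the same iSCSI filesystem is what gives the 95 MB/sec figure for comparison.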