Well, after talking with support, modifying the grub.conf file is what is needed. They did not have any guidelines on increasing the memory based on the number of iSCSI LUNs, other than what is recommended for the physical RAM: just keep increasing the memory until the problems go away.
I also had to change this line in grub.conf (this was not stated in the document):
title Oracle VM Server-ovs (xen-4.1.3 2.6.39-300.22.2.el5uek)
kernel /boot/xen.gz dom0_mem=4000M
module /boot/vmlinuz-2.6.39-300.22.2.el5uek ro root=UUID=0135bdc9-28f2-4ee3-bfb2-56afb5f092a3
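For what it's worth, a quick way to confirm the setting actually took hold after the reboot (assuming the xm toolstack that ships with OVM; newer Xen uses xl equivalents) is to check the Domain-0 memory:
xm list Domain-0
xm info | grep -E 'total_memory|free_memory'
The memory column for Domain-0 should match the dom0_mem value from grub.conf.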
Has that resolved all your issues?
Do you know if the added memory will stick around at the next kernel upgrade?
Looney128 wrote: I recommend reading the Xen Project wiki page, Best Practices (http://wiki.xen.org/wiki/XenBestPractices#Xen_dom0_dedicated_memory_and_preventing_dom0_memory_ballooning). It discusses topics such as dedicating a fixed amount of RAM to Dom0 and dedicating a CPU to Dom0, so that the DomUs cannot consume all of the system resources and keep Dom0 from servicing them.
I have a question about increasing the RAM in 3.2.1 servers. I currently have 64GB on two nodes running 18 VMs (mostly Windows 2008). I have added enough servers to consume all the RAM and have noticed some weird anomalies, such as the network dropping periodically even though CPU and memory utilization is low; if I migrate the troubled VM to the other node that has fewer VMs on it, the problem goes away.
So I ordered another 64GB for each node, which will bring me to 128GB on each node.
Does the system adjust for the new RAM?
I also have many iSCSI LUNs (29 as of now, more to come), but have read that I may need to adjust the Dom0 RAM if I have performance issues (which it sounds like I am having).
With the new RAM, the new setting should be this...
kernel /xen.gz console=com1,vga com1=38400,8n1 dom0_mem=4000M
With all the iSCSI LUNs, should the dom0_mem be set higher? If so, by how much?
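For what it's worth, the Oracle VM docs give a sizing rule of thumb for dom0 memory; I'm quoting it from memory, so double-check it against the release notes:
dom0_mem (MB) = 502 + int(physical RAM in MB * 0.0205)
502 + int(131072 * 0.0205) = 502 + 2686 = 3188 MB for 128GB
So 4000M already leaves some headroom over the formula, but the formula does not account for the number of iSCSI LUNs, so treat it as a floor rather than a ceiling.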
If you're running I/O intensive guests or workloads in the VMs it might be a good idea to dedicate (pin) a CPU core only for Dom0 use.
From Best Practices for Xen, on the Xen Project Wiki: http://wiki.xen.org/wiki/XenBestPractices
[Dedicating a CPU to the Dom0] might be a good idea, especially for systems running IO-intensive guests. Dedicating a CPU core only for dom0 makes sure Dom0 always has free CPU time to process the I/O requests for the DomUs. Also, when Dom0 has a dedicated core there are fewer CPU context switches to do, giving better performance.
From Xen Common Problems, on the Xen Project Wiki: http://wiki.xen.org/wiki/XenCommonProblems
It’s essential to make sure that the Dom0 has sufficient CPU to service I/O requests. You can handle this by dedicating a CPU to the Dom0 or by giving the Dom0 a very high weight—high enough to ensure that it never runs out of credits.
From Chapter 7, "Hosting Untrusted Users Under Xen: Lessons from the Trenches", of The Book of Xen: http://wiki.prgmr.com/mediawiki/index.php/Chapter_7:_Hosting_Untrusted_Users_Under_Xen:_Lessons_from_the_Trenches
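If it helps, on a plain Xen setup the pinning described above usually comes down to two hypervisor boot options on the xen.gz line, plus keeping the guests off that core; this is from memory, so check the option names against the Xen 4.1 docs:
kernel /boot/xen.gz dom0_mem=4000M dom0_max_vcpus=1 dom0_vcpus_pin
dom0_max_vcpus=1 caps Dom0 at a single vcpu and dom0_vcpus_pin keeps that vcpu on one physical core; each guest then also needs a cpus= entry in its config so the DomUs never get scheduled on that core.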
BTW: Is the Dom0 also the iSCSI target? That's probably not a good idea.
Adding the memory has helped out. I don't know what will happen after an upgrade; I will have to double-check when it happens.
So...help me out here.
If Dom0 is now using 24 cores by default (but sharing them with the DomUs) and I change this to using 1 physical core, will it really give better performance? I understand that if all the DomUs start consuming all the CPU it will affect Dom0, but if that is not the case, wouldn't having multiple vcores outweigh 1 physical one?
24 CPUs in contention or 1 CPU with no contention... it depends on your load.
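One way to check whether Dom0 is actually being starved before changing anything (again assuming the xm tools; xl has equivalents) is to watch vcpu placement and CPU time:
xm vcpu-list Domain-0
xentop
If Domain-0's vcpus are regularly waiting for a physical core while the DomUs are busy, the dedicated core is likely to help; if not, the 24 shared CPUs are probably fine.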
That said, configuring dom0 to have a single dedicated CPU requires guest configuration changes for every guest. I'm not sure if these guest configuration changes survive migrations.
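For reference, the per-guest change I have seen described is a cpus line in the guest's vm.cfg that excludes the core reserved for Dom0, e.g. on a 24-core box (hypothetical values, adjust to your core count):
cpus = "1-23"
Whether Oracle VM Manager preserves a hand-edited vm.cfg across migrations is exactly the open question here.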
It would be nice if it were a feature OVM supported rather than a Xen tweak....