We are installing Oracle VM Server 3.1.1 on HP DL580 G7 servers with HP NC523SFP 10 Gb cards. The installer does not see any network cards in the system, and eventually the install errors out. It appears Anaconda cannot handle an install on a server that seems to have no network cards in it. So my conclusion is that OVM Server does not ship a compatible 10 Gb driver that can see these cards.
You can try 3.0.3
or you can try the manufacturer's driver for OEL/Red Hat 5
Don't you have any 1 Gb NICs? Does the installer not see those either?
Edited by: user12273962 on Nov 9, 2012 8:43 AM
It can see the internal 1 Gb cards. However, we always disable the internal ones and go with the added 10 Gb ones. I'm surprised Oracle VM Server out of the box cannot recognize a major manufacturer's 10 Gb card. I will download the driver and see if I can make it available through the iLO.
It depends on whether the kernel has support for the NIC or not. I have never enjoyed trying to load kernel modules to add support for hardware... but sometimes it's necessary. Kernels from the various Linux distributions vary greatly in hardware support. Nothing new in the Linux world. Seeing that HP offers a driver for OEL/Red Hat 5 tells me there is an issue with support in that distribution.
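For anyone hitting this, a quick way to check whether the running kernel even knows about the card. The 1077:8020 PCI ID below is taken from the modules.pcimap output posted later in this thread; nothing here loads anything, so it is safe to run from a rescue shell:

```shell
# Is the NC523SFP visible on the PCI bus? (0x1077 is QLogic's vendor ID)
lspci -nn | grep -i 1077

# Which kernel driver, if any, is bound to the device?
lspci -k -d 1077:8020

# Does the installed module tree claim the device ID at all?
grep -i 8020 /lib/modules/$(uname -r)/modules.pcimap

# Dry run: show where modprobe would pull qlcnic from, without loading it
modprobe -n -v qlcnic
```

If the grep against modules.pcimap comes back empty, the kernel simply has no driver claiming the card, and no amount of installer retries will help.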
May be late to the party... but I had similar issues. The QLogic-based driver in the OVM kernel has problems with LLDP (Link Layer Discovery Protocol) communication. Oracle support wasn't very tuned in. They wanted me to change init images and initially claimed it was not a driver issue. Funny, as I was able to prove it was the driver via testing.
"DCX-No ACK in 100 PDU" started appearing in the switch logs. This appears to be a Cisco-side error logged when the switch (a Nexus) sent LLDP packets and the QLogic driver did not handle the acknowledgment correctly. The connecting switch would see the errors and administratively shut the port down.
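If you want to see the exchange from the host side before the switch err-disables the port, LLDP frames are easy to isolate. A minimal capture sketch (the interface name eth2 is just a placeholder):

```shell
# LLDP frames use EtherType 0x88cc; this shows what the Nexus actually
# sends and what, if anything, the host answers with.
tcpdump -i eth2 -e -vv ether proto 0x88cc
```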
I stripped OVM off the server in question and ran regular OL 6.3 and RHEL 6.3 builds on it for testing. The "baked-in" QLogic kernel driver below had issues across OVM, OL, and RHEL:

grep 8020 /lib/modules/$(uname -r)/modules.pcimap
qlcnic 0x00001077 0x00008020 0xffffffff 0xffffffff 0x00020000 0xffffffff 0x0

modinfo qlcnic
filename:       /lib/modules/2.6.39-200.1.4.el5uek/kernel/drivers/net/qlcnic/qlcnic.ko
firmware:       phanfw.bin
version:        18.104.22.168
license:        GPL
description:    QLogic 1/10 GbE Converged/Intelligent Ethernet Driver
srcversion:     BCECA1F223B07CA5B10345C
alias:          pci:v00001077d00008020sv*sd*bc02sc00i00*
depends:
vermagic:       2.6.39-200.1.4.el5uek SMP mod_unload modversions
The latest HP-provided QLogic driver also did not work correctly. My only recourse was to download the driver directly from QLogic and bypass anything branded "NC523" or "HP". That worked... however, you need to be able to compile the driver for the running kernel, and since I did not have access to kernel headers on OVM, that would have been frustrating extra work to resolve for every server we had running NC523 10Gb cards. Not to mention that as soon as you update the kernel, you most likely have to re-compile the driver. Not a great choice for production systems.
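For reference, building a vendor source driver against the running kernel looks roughly like this. The tarball and directory names are illustrative, not QLogic's actual ones, and the make step is exactly what fails on OVM without the matching kernel-devel/headers package for the UEK kernel:

```shell
# Unpack the vendor source driver (name illustrative)
tar xzf qlcnic-src.tar.gz && cd qlcnic-src

# Build the module against the running kernel's build tree --
# this is the step that dies without kernel headers installed
make -C /lib/modules/$(uname -r)/build M=$PWD modules

# Install where depmod will prefer it over the in-kernel copy
mkdir -p /lib/modules/$(uname -r)/updates
install -m 644 qlcnic.ko /lib/modules/$(uname -r)/updates/
depmod -a

# Swap the running module
modprobe -r qlcnic && modprobe qlcnic
```

And because the module is built against one specific kernel, every kernel update means repeating the build, which is the maintenance burden described above.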
Ultimately, after testing and looking at our architecture, we went with the NC550 Emulex-based cards. We have had zero issues since. They use the native Emulex drivers in the kernel... no special download from HP or Emulex. Something interesting: at previous employers using DL580s and QLogic cards we had other issues, which came down to faulty hardware. QLogic used to be the brand I would start out with, but over the past few years I've seen more issues with rebranded QLogic cards and have since begun to shy away from them. Not worth the hassle and the time needed to troubleshoot everything.
I hope you have the latest hardware rev of that card; we had the original rev and had to RMA them all because the firmware locked up (thermal issues), and it was taking our OVM environment down hard monthly.
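Quick way to check what a given card is actually running before assuming you have the fixed rev (interface name is a placeholder; the lspci device filter reuses the 1077:8020 ID from earlier in the thread):

```shell
# Driver and firmware versions for the interface backed by the card
ethtool -i eth2

# Hardware revision as reported in PCI config space
lspci -d 1077:8020
```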