You can try 3.0.3, or you can try the manufacturer's driver for OEL/Red Hat 5.
Don't you have any 1 Gb NICs? Does the system not see those either?
Edited by: user12273962 on Nov 9, 2012 8:43 AM
It depends on whether the kernel has support for the NIC or not. I have never enjoyed having to load kernel modules to add support for hardware... but sometimes it's necessary. Kernels shipped by the various Linux distributions vary greatly in hardware support. Nothing new in the Linux world. Seeing that HP is offering a driver for OEL/Red Hat 5 tells me there is an issue with support in that distribution.
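To put "the kernel has support for the NIC" in concrete terms: it comes down to whether some module's alias pattern matches the card's PCI modalias string. A minimal sketch of that matching follows; the subsystem (sv/sd) values are hypothetical, standing in for an HP rebrand of the QLogic part, and on a real box you would read the string from /sys/bus/pci/devices/*/modalias.

```shell
# Sketch: how modprobe decides whether a driver exists for a PCI device.
# The kernel exposes one "modalias" string per device under
# /sys/bus/pci/devices/*/modalias; modules.alias maps glob patterns to
# module names, and a matching pattern means an in-kernel driver exists.
# Subvendor/subdevice values here are hypothetical (an HP rebrand).
modalias='pci:v00001077d00008020sv0000103Csd00003733bc02sc00i00'
pattern='pci:v00001077d00008020sv*sd*bc02sc00i00*'

# Leaving $pattern unquoted makes the shell treat it as a glob pattern.
case "$modalias" in
  $pattern) result="match" ;;
  *)        result="no match" ;;
esac
echo "$result for $modalias"
```

If the pattern matches, modprobe will load the named module automatically at boot; if nothing in modules.alias matches, you are in vendor-driver territory, which is exactly the situation described above.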
May be late to the party... but I had similar issues. The QLogic-based driver in the OVM kernel has problems with LLDP (Link Layer Discovery Protocol) communication. Oracle support wasn't very tuned in: they wanted me to change init images and initially claimed it was not a driver issue. Funny, as I was able to prove it was the driver via testing.
"*DCX-No ACK in 100 PDU*" started appearing in the switch logs. This appears to be a Cisco-side error raised when the switch (a Nexus) sent LLDP packets and the QLogic driver did not acknowledge them correctly. The connecting switch would see the errors and administratively shut the port down.
I stripped OVM off the server in question and ran regular OL 6.3 and RHEL 6.3 builds on it for testing. The following "baked-in" QLogic kernel driver had issues across OVM, OL, and RHEL:
grep 8020 /lib/modules/$(uname -r)/modules.pcimap
qlcnic 0x00001077 0x00008020 0xffffffff 0xffffffff 0x00020000 0xffffffff 0x0
description: QLogic 1/10 GbE Converged/Intelligent Ethernet Driver
alias: pci:v00001077d00008020sv*sd*bc02sc00i00*
vermagic: 2.6.39-200.1.4.el5uek SMP mod_unload modversions
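For anyone decoding that grep output: modules.pcimap is plain whitespace-separated columns, so the line can be read field by field. A small sketch using the exact qlcnic line quoted above, with the column names from the module-init-tools pcimap format:

```shell
# Decode the qlcnic entry from modules.pcimap quoted above.
# Columns: module vendor device subvendor subdevice class class_mask driver_data
line='qlcnic 0x00001077 0x00008020 0xffffffff 0xffffffff 0x00020000 0xffffffff 0x0'
set -- $line    # unquoted on purpose: split the line into positional fields
module=$1 vendor=$2 device=$3 subvendor=$4 subdevice=$5
echo "module=$module vendor=$vendor device=$device"
# 0xffffffff in the subvendor/subdevice columns is a wildcard: the driver
# claims every rebrand of this QLogic silicon, HP's NC523 included.
```

So the kernel does claim to support the card (vendor 1077 = QLogic, device 8020); the problem described in this thread is a bug in that bundled driver, not a missing mapping.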
The latest HP-provided QLogic driver also did not work correctly. My only recourse was to download the driver directly from QLogic and bypass anything "NC523" or "HP". That worked... however, you need to be able to compile the driver against the running kernel, and since I did not have access to kernel headers on OVM, that would have been frustrating extra work to resolve on every server we had running NC523 10Gb cards. Not to mention that as soon as you update the kernel you most likely have to recompile the driver. Not a great choice for production systems.
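The recompile-after-every-update problem comes from vermagic: a module only loads cleanly if its vermagic string matches the running kernel. A simulated check follows; the "updated" kernel version is hypothetical, and on a real box you would compare `modinfo -F vermagic qlcnic.ko` against `uname -r`.

```shell
# Why an out-of-tree driver must be rebuilt after every kernel update:
# modprobe refuses a module whose vermagic disagrees with the running kernel.
vermagic='2.6.39-200.1.4.el5uek SMP mod_unload modversions'  # from the modinfo output above
running='2.6.39-300.17.1.el5uek'                             # hypothetical post-update kernel
mod_kver=${vermagic%% *}   # first token = kernel the module was built for
if [ "$mod_kver" = "$running" ]; then
  status="ok to load"
else
  status="vermagic mismatch: built for $mod_kver, running $running"
fi
echo "$status"
```

Rebuilding typically means something like `make -C /lib/modules/$(uname -r)/build M=$PWD modules`, which requires the matching kernel-devel/headers package installed, and that is precisely the package that was not available on OVM.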
Ultimately, after testing and looking at our architecture, we went with the NC550 Emulex-based cards. We have had zero issues since. They use the native Emulex drivers in the kernel... no special download from HP or Emulex. Something interesting, I think: at previous employers using DL580s and QLogic cards we had other issues, which came down to faulty hardware. QLogic used to be the brand I would start with, but over the past few years I've seen more issues with rebranded QLogic cards and have since begun to shy away from them. Not worth the hassle and time needed to troubleshoot everything.