Which storage vendor makes this recommendation, and what value do you want to set?
According to https://en.wikipedia.org/wiki/TCP_delayed_acknowledgment, most of the specific problems can be resolved at the application layer.
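On a Linux initiator there is also a system-level knob that needs no application change: since kernel 3.11, iproute2 can request immediate ACKs on a per-route basis. A minimal sketch, assuming a storage subnet of 192.168.100.0/24 reached via eth1 (both are placeholders; substitute your own):

```shell
# Ask the kernel to ACK immediately (i.e. no delayed ACK) for traffic to
# the storage subnet. The subnet and interface below are placeholders.
# Requires root and Linux 3.11+ with a reasonably recent iproute2.
ip route change 192.168.100.0/24 dev eth1 quickack 1

# Verify the flag is now set on the route:
ip route show 192.168.100.0/24
```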
Some of the recommendations for Windows look platform-specific...
As far as I know, you need a real-time patched kernel to be able to maintain certain latency settings. For example:
[root@localhost ~]# echo 4 > /proc/sys/net/ipv4/tcp_delack_min
-bash: /proc/sys/net/ipv4/tcp_delack_min: No such file or directory
This won't work unless you have a RT patched kernel.
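Since the tunable only exists on some kernels, a small defensive sketch can probe for it before writing, so the same script runs cleanly on stock and RT-patched kernels alike (the helper name `set_if_present` is mine):

```shell
#!/bin/sh
# Write VALUE to a /proc tunable only if this kernel actually exposes it.
set_if_present() {
    path=$1; value=$2
    if [ -w "$path" ]; then
        echo "$value" > "$path"
        echo "set $path=$value"
    else
        echo "skip: $path not available on this kernel"
    fi
}

# Takes effect only on kernels (e.g. RT-patched) that provide the tunable;
# everywhere else it prints a skip message instead of failing like the raw echo.
set_if_present /proc/sys/net/ipv4/tcp_delack_min 4
```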
# yum groupinstall RT
I'm afraid you will have to ask Oracle Support about access to or availability of the appropriate software repository. Alternatively, you could compile your own kernel, which, however, may not be supported under your technical support contract. You can find some info at:
On second thought, it seems like an unusual demand for an unusual situation and is usually not the best option. If you are dealing with network congestion, you will need to troubleshoot your network. Perhaps you need to segregate network traffic. Btw, did you check your cabling, or whether you have any damaged hardware?
Do you have a dedicated network for your storage system? TCP/IP is not a protocol that was designed for low latency and hence is not the best choice for demanding storage systems. Perhaps you need a different networking option, such as InfiniBand, or use Fibre Channel and a SAN.
Hi Nik, Dude,
1- On second thought, it seems like an unusual demand for an unusual situation and is usually not the best option.
If you are dealing with network congestion, you will need to troubleshoot your network.
The congestion occurs when multiple sessions are created from the same node or from multiple nodes.
The bandwidth gets capped at a value and is then shared among all consumers.
2-Perhaps you need to segregate network traffic.
Traffic runs between initiators (Linux, 2x10G bonded, mode 4) using multipathed iSCSI and targets (2x10G) on dedicated non-routed VLANs.
I went to the extent of dedicating the same layer 2 equipment to isolate any upstream delays (using Cisco Nexus switches with FEX).
When switching to NFS access to the same storage, we see a doubling of available IOPS and read/write capability.
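For a setup like this, two standard open-iscsi/device-mapper checks help confirm that both 10G paths are actually logged in and being load-balanced (adapter and device names will of course differ per host):

```shell
# Show all iSCSI sessions with portal, interface and state details:
iscsiadm -m session -P 3 | grep -E 'Target:|Current Portal|Iface Name|State'

# Confirm the multipath topology: both paths should show as active/ready.
multipath -ll
```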
3- Btw, did you check your cabling, or whether you have any damaged hardware?
4- Do you have a dedicated network for your storage system?
The response from Oracle Support was nothing more than a misleading stream of information.
I appreciate your input, which coincides with my research, specifically Dude's comments.
Just to validate the need for and the impact of the config changes:
1- On ESXi 6.7, applying the config made a huge impact on the available read/write bandwidth, with an almost 150% increase.
2- Same on a SUSE 12 Enterprise node... which had the kernel.
The Vendor is EMC.
I have to say that putting the question to the community proved to be more useful than opening an SR...
Thanks to all who took the time to chime in
Please share the required steps for SUSE 12.
Most Linux versions do not allow changing these parameters.
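On SUSE 12 the generic procedure would be the usual sysctl route, but only if the running kernel actually exposes the tunable; as noted earlier in the thread, stock kernels generally do not. A hedged sketch (the drop-in file name is my choice):

```shell
# Apply at runtime; prints a note instead of an error if the kernel
# does not expose the tunable:
sysctl -w net.ipv4.tcp_delack_min=4 2>/dev/null \
    || echo "tunable not exposed by this kernel"

# Persist across reboots (only meaningful on a kernel that has it):
echo 'net.ipv4.tcp_delack_min = 4' > /etc/sysctl.d/90-delack.conf
```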
Well, I'm sure it won't help your case and, frankly, I'm not keen to be part of any standoff you may have with Oracle Technical Support. I know nothing about your SR and history. Anyhow, let's get back to the technical subject.
The advantages of VLANs are arguably easier management and a reduced need for network hardware and cabling. Virtual segregation as such, however, cannot overcome physical limitations and hence can result in network congestion and a central point of failure. What exactly is your setup?
I find it a bit strange that the configuration you mention appears to be necessary on multiple systems. If NFS gives you much better results, perhaps the problem is with the iSCSI implementation on the host or the EMC side. But it's all guesswork, since I do not know your installation.
Do you have a problem in combination with ESXi and a virtual host, or is it a standalone Oracle Linux system running on bare metal? I found the following old material that probably explains your problem. You may have seen it already, but perhaps you can verify:
Building and customizing the Linux kernel was a common practice in the early days of Linux. I remember it was often a frustrating experience, but it also made you feel great when it worked. Needless to say, this process is no longer supported. Wondering what it's like today, I tried to build my own UEK kernel this afternoon and got as far as running menuconfig. You can download and install the UEK source, but I could not find any instructions or a howto.
There's no RT patch for UEKR5 (4.14.35) or any other UEK at https://mirrors.edge.kernel.org/pub/linux/kernel/projects/rt/4.14/older/ which I think means you are out of luck building your own real-time patched UEK kernel.
At this point, I think you have to rely on Oracle Support to get a patched UEK kernel. We can try to ping Avi Miller-Oracle and Sergio-Oracle, who frequently monitor these forums, but continuing with the existing SR is probably your best option.
If you can message me the SR number, I can take a look.
The environment is Oracle OVM. The OVS servers are running 4.1.12-124.36.1.el6uek.x86_64.
This is running on bare metal
Edited to remove the SR#.
This is an Oracle VM SR and I can see that Oracle Support are actively working on this issue with you. As such, I don't believe there is any further assistance I can provide via the Groundbreakers Community. If you feel that your SR is not getting the appropriate attention, please contact your Oracle sales person or key account director for advice on how to escalate the request.
The following will give some more explanation:
There's apparently a config/setting in ESXi to disable delayed ACK. Something similar might exist in Oracle VM. But from what I understand, it's not a kernel issue. The source of the problem is an oversold/congested network, and the proper fix would be to improve the network or the strategy implemented by the iSCSI storage vendor.
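For reference, the ESXi setting mentioned here is the DelayedAck parameter of the iSCSI adapter. A hedged sketch of the esxcli form, assuming a software iSCSI adapter named vmhba64 (substitute your own); the exact key/value syntax varies by ESXi release, so verify against VMware's documentation:

```shell
# List the adapter's parameters, which include DelayedAck:
esxcli iscsi adapter param get --adapter=vmhba64

# Disable delayed ACK on that adapter (may need a re-login/reboot to apply):
esxcli iscsi adapter param set --adapter=vmhba64 --key=DelayedAck --value=false
```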
Thanks Avi... the OVM team is still working on it.
Dude, you are absolutely right. The ESXi TCP stack has the option exposed for modification. Testing with and without the configuration shows a major impact on performance.