
How to make elevator/schedule type persistent across boots (multipath)

358556 Newbie
Hello All,

Running OEL 5.5 x64. I have set up multipath with no problems, but I notice that I get far better performance when using NOOP instead of CFQ, which is the default. Rather than having to change this every time I boot, I'm wondering if there is a way to make it persistent across boots?

The issue is that it seems it must be done at the block-device level, for example: echo noop > /sys/block/sda/queue/scheduler. The problem is that these block device paths change all the time. I saw something about doing it within grub.conf, but that did not work for me (http://lonesysadmin.net/2008/02/21/elevatornoop/).

Thanks in advance!
  • 1. Re: How to make elevator/schedule type persistent across boots (multipath)
    BillyVerreynne Oracle ACE
    vcovco wrote:

    Running OEL 5.5 x64. I have set up multipath with no problems, but I notice that I get far better performance when using NOOP instead of CFQ, which is the default. Rather than having to change this every time I boot, I'm wondering if there is a way to make it persistent across boots?
    Set it as a kernel boot parameter in the grub config file?

    The issue is that it seems it must be done at the block-device level, for example: echo noop > /sys/block/sda/queue/scheduler. The problem is that these block device paths change all the time. I saw something about doing it within grub.conf, but that did not work for me (http://lonesysadmin.net/2008/02/21/elevatornoop/).
    Hmm... perhaps in udev rules? These execute when the mpath devices are created. I use them to set permissions for mpath devices (at boot time, or dynamically when refreshed). That seems like the most appropriate place to me; a rough sketch of such a rule is below.
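
    Something along these lines might work (the file name and the sd[a-z] match are illustrative only, and I have not tested this):

    # /etc/udev/rules.d/99-noop-elevator.rules  (illustrative name)
    # When a whole sd disk appears (sda..sdz), switch its queue to the noop elevator.
    # %k expands to the kernel device name, e.g. sdk.
    ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/bin/sh -c 'echo noop > /sys/block/%k/queue/scheduler'"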
  • 2. Re: How to make elevator/schedule type persistent across boots (multipath)
    Dude! Guru
    From what I can gather, the NOOP scheduler is best used with non-mechanical devices that do not require re-ordering of multiple I/O requests.


    Perhaps an interesting link:
    Choosing an I/O Scheduler for Red Hat® Enterprise Linux® 4 and the 2.6 Kernel
    http://www.redhat.com/magazine/008jun05/features/schedulers/

    "The NOOP scheduler indeed freed up CPU cycles but performed 23% fewer transactions per minute when using the same number of clients driving the Oracle 10G database. The reduction in CPU cycles was proportional to the drop in performance, so perhaps this scheduler may work well for systems which drive their databases into CPU saturation. But CFQ or Deadline yield better throughput for the same client load than the NOOP scheduler."
  • 3. Re: How to make elevator/schedule type persistent across boots (multipath)
    BillyVerreynne Oracle ACE
    The NOOP scheduler sounded interesting and I had a look at how the multipath shared storage devices are configured on one of our clusters.

    Here's device /dev/sdk info:
    root@nvs-dev1 /root> multipath -l | grep sdk
     \_ 2:0:0:8  sdk  8:160  [active][undef]
    
    root@nvs-dev1 /root> cat /sys/block/sdk/queue/scheduler 
    noop anticipatory deadline [cfq] 
    All schedulers are available it seems, with noop the default? How does one read this output?
  • 4. Re: How to make elevator/schedule type persistent across boots (multipath)
    Dude! Guru
    noop anticipatory deadline [cfq]
    The one in brackets is currently being used.

    http://tombuntu.com/index.php/2008/09/04/four-tweaks-for-using-linux-with-solid-state-drives/
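
    For example, using the sdk device from the previous reply (the output shown is illustrative):

    # cat /sys/block/sdk/queue/scheduler
    noop anticipatory deadline [cfq]
    # echo noop > /sys/block/sdk/queue/scheduler
    # cat /sys/block/sdk/queue/scheduler
    [noop] anticipatory deadline cfq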
  • 5. Re: How to make elevator/schedule type persistent across boots (multipath)
    358556 Newbie
    Thanks for the replies, everyone. As I mentioned above, I tried putting elevator=noop in as a kernel parameter in grub.conf, but cfq still shows up as the default. I have found that noop works better with our SAN devices, where the SAN handles the queuing itself.

    At this point I am thinking of just doing a script in rc.local for all /dev/sd* devices... I have opened an SR; now if Oracle support were even one tenth as fast as the replies on this forum, I'd be set... ;)
  • 6. Re: How to make elevator/schedule type persistent across boots (multipath)
    Dude! Guru
    The I/O scheduler can be changed on a per-drive basis without rebooting. How to do this from rc.local is covered in the previous link; a multipath-aware variant is sketched below.
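
    Since you are on multipath, one untested way to limit the change to the paths behind each dm device, rather than every sd* disk, is to walk the slaves directories in sysfs (this assumes the /sys/block/dm-*/slaves layout):

    # Untested sketch: set noop on every sd* path underneath each dm-multipath device.
    for dm in /sys/block/dm-*; do
        for slave in "$dm"/slaves/*; do
            [ -e "$slave" ] || continue
            echo noop > "/sys/block/$(basename "$slave")/queue/scheduler"
        done
    done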

    Here's another one:
    http://blog.carlosgomez.net/2009/10/io-scheduler-and-queue-on-linux.html
    I have found that noop works better in our SAN devices where the SAN handles the queuing instead.
    I'm not sure whether this is relevant. SCSI also handles queuing, for example. Perhaps it depends on whether you are accessing your disk controller's memory cache or performing physical disk reads.

    http://en.wikipedia.org/wiki/CFQ

    Edited by: waldorfm on Nov 1, 2010 6:38 AM
  • 7. Re: How to make elevator/schedule type persistent across boots (multipath)
    358556 Newbie
    Thanks Waldorfm.

    Oracle support also came through. I had placed elevator=noop under the wrong kernel entry in my grub.conf file. Once I placed it on the correct kernel line, it stuck. I also tried the simple line below in /etc/rc.local, and that worked as well.
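
    For reference, the working kernel entry in grub.conf looks something like the following (the kernel version, root device and paths here are illustrative, not copied from my system):

    title Oracle Enterprise Linux Server (2.6.18-194.el5)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 elevator=noop
            initrd /initrd-2.6.18-194.el5.img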


    for i in /sys/block/sd*; do echo noop > $i/queue/scheduler; done

    Thanks again for the help!
  • 8. Re: How to make elevator/schedule type persistent across boots (multipath)
    Dude! Guru
    Thanks for the feedback. Btw, here is another interesting piece of information I found:

    Tweak #2: Use the “noop” I/O scheduler. By default, Linux uses an “elevator” so that platter reads and writes are done in an orderly and sequential manner. Since an SSD is not a conventional disk, the “elevator” scheduler actually gets in the way. By adding elevator=noop to your kernel boot parameters in your /boot/grub/menu.lst file, you will greatly improve read and write performance on your SSD. For those of you using Linux in virtual machines on conventional drives such as JBOD and SAN-based arrays, this is a good practice as well, since most VMs are implemented in image files (such as .vmdk on VMware and .vhd on Hyper-V) and there is no need to treat I/O to a virtual disk the same as I/O to a physical one.
