As of OL5, raw devices are unnecessary. Where you would normally use a raw name, just use the name of the block device. Sometimes applications are overly clever and parse the filenames to see if you are using a raw device; these should be corrected.
Raw devices were supported.
Then raw devices were deprecated.
Then raw devices were removed.
Then raw devices were supported again.
Too much thrashing for me; who needs 'em?
There is an obsolete kernel configuration flag named CONFIG_MAX_RAW_DEVS that can be used to set the number of raw devices supported when compiling the kernel. It accepts values from 1-65535, default is 256. If raw devices are supported, I don't think there is any practical limit.
As far as I know, a raw device bypasses the kernel buffer cache and scheduling features and does not have a "mountable" filesystem. It is up to the application or driver accessing the device how the data is organized. Certain technologies require or required raw device support to bypass an OS limitation, in particular clustering and data streaming solutions. For instance, data may need to be shared between nodes without a cluster-aware filesystem, e.g. Oracle RAC without OCFS. Or raw access may be used to achieve better performance, e.g. for tape and video streaming. The performance gain, however, is not guaranteed.
From what I read, raw device support was brought back to address backward compatibility and because of demand from users. However, since raw device support is being phased out, I think there is a good chance that it will be removed from the next 3.0 kernel for good.
The more important question might be why you actually need raw device support. Apparently, applications should simply open a device with the O_DIRECT flag if necessary.
My advice is not to rely on raw devices support in the future. But perhaps there will be a better answer once we know the reasons or business purpose behind your question.
Perhaps interesting reading: http://kerneltrap.org/node/7563
Edited by: Dude on Oct 27, 2011 3:01 PM
893012 wrote: Do not confuse raw devices with.. well, raw devices.
I know that raw devices were deprecated in this version 3 update, even though Oracle had previously released their use for databases in RAC environments.
Problem: could not open a block device for direct (raw) I/O.
Solution: use a raw device hack for making the block device available via a character device interface that supports direct (raw) I/O.
No problem. Block devices can be opened for direct (raw) I/O.
So raw devices are pretty much still supported. What is deprecated is the old style hack that required you to explicitly configure (via the raw device feature) character devices for block devices.
Raw device means (as Dude already stated) that access to the device is done directly without any "interference" by the kernel (like writing and reading through a kernel buffer cache for the device). This is a fundamental feature - and will not change.
The problem seems to be confusion with an older (and now deprecated) "block-device-as-char-device" feature that was called "raw devices".
Please correct me if I'm wrong, but I think raw device support under Linux and the "block-device-as-char-device" hack you mention are one and the same thing. The Linux kernel, unlike Unix, does not have /dev/rdisk or /dev/rdsk devices. Linux uses a raw device controller /dev/rawctl instead, which can be used to bind a Linux raw character device to a block device. From what I understand, this feature is scheduled for removal and the O_DIRECT flag should be used instead.
From the raw man page: Raw I/O devices do not maintain cache coherency with the Linux block device buffer cache. If you use raw I/O to overwrite data already in the buffer cache, the buffer cache will no longer correspond to the contents of the actual storage device underneath. This is deliberate, but is regarded either a bug or a feature depending on who you ask!
Dude wrote: It is a terminology thing IMO. Raw devices on Unix (back in the 90's when we built Oracle Parallel Server clusters) were simply just that - a device that is used via direct I/O (aka raw I/O) and not via a "cooked" file system.
Please correct me if I'm wrong, but I think raw device support under Linux and the "block-device-as-char-device" hack as you mention are one and the same thing.
So a device was used as either cooked or raw. On Linux, this simple definition became somewhat convoluted with the introduction of Linux block-as-character raw devices.
The Linux kernel has traditionally not provided a raw interface, for a number of reasons. According to my research, before version 2.2 the Linux kernel supported only buffered I/O, meaning the kernel intercepts the calls and transfers the data to its own buffer before passing it on to the physical device or process. There are obvious advantages to this, because the kernel (buffer cache, scheduler) can control the I/O and reduce physical disk I/O. When the system crashes, though, you lose any data still sitting in the buffer.
There were several attempts in the past to introduce raw I/O to Unix and most variants of Unix support it today. Most existing implementations require literally doubling the number of device nodes. Linux creators rejected this approach and instead used a pool of device nodes that can be associated with any arbitrary block device, hence in kernel 2.3 a new object called "kiobuf" was introduced.
More information about "kiobuf" can be found in "Linux Device Drivers, 2nd Edition". In particular, the section "Mapping User-Space Buffers and Raw I/O" is very interesting: http://www.xml.com/ldd/chapter/book/ch13.html#t3
"Raw I/O is not always the great performance boost that some people think it should be, and driver writers should not rush out to add the capability just because they can. The overhead of setting up a raw transfer can be significant, and the advantages of buffering data in the kernel are lost. For example, note that raw I/O operations almost always must be synchronous -- the write system call cannot return until the operation is complete. Linux currently lacks the mechanisms that user programs need to be able to safely perform asynchronous raw I/O on a user buffer."