RAID uses a fixed stripe size and mirrors or stripes complete disks. ASM does not duplicate disks; it provides data redundancy on a per-file basis between failure groups. ASM knows about Oracle database files and uses optimal stripe sizes based on file templates. You can also resize ASM disk groups online without having to reinitialize your storage, which you cannot do with RAID alone.
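For example, growing a disk group online is a single statement; the disk group name and device path below are made up for illustration:

```sql
-- Hypothetical names: disk group DATA, a newly presented LUN.
-- ASM redistributes existing extents across the new disk online.
ALTER DISKGROUP data ADD DISK '/dev/oracleasm/disks/DISK05'
  REBALANCE POWER 4;

-- Watch the rebalance progress while the database stays up:
SELECT group_number, operation, state, est_minutes
FROM   v$asm_operation;
```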
Thanks for the clarity, always welcome here. Buried in the thread I pointed at, Mladen pointed out a dependency on RMAN. Now, that's not a SPOF, and even if it were, not much of one, but I can understand some people wanting to have some quicker access to archives outside of having to deal with ASM or RMAN when dealing with a recalcitrant standby or its various transports. Some snapshot transport scenarios might benefit too.
Of course, more likely is developers a-scared of dependency on Oracle tools.
What business do developers have with your Exadata machine? The only thing that you, the DBA, should be deciding for the redo log files on Exadata is whether to create a Flash Log from the available Flash Cache. That's about it! If you are taking advice from developers on how to manage the redo log files, worse, on an engineered system like Exadata, it's a really, really bad thing.
Even though the thread has drifted a bit into discussions of ASM, I need to clear up a bad example in my previous demo of mixed OMF vs. non-OMF files. In that demo, I really had no OMF files to start with. This time, on a non-ASM database, I took care to create the database using OMF:
SQL> select file_name from dba_data_files;
SQL> CREATE SMALLFILE TABLESPACE "FUBAR"
  2  DATAFILE '/u01/app/oracle/oradata/FUBAR/fubar_01.dbf'
  3  SIZE 100M
  4  EXTENT MANAGEMENT LOCAL
  5  SEGMENT SPACE MANAGEMENT AUTO
  6  ;
SQL> select file_name from dba_data_files;
Yeah, so ASM <> RAID. We kind of all knew that!
The question was more whether it's "not RAID at all", and I was taking issue with the words "at all" there. There are some things one deploys a RAID array for which ASM will also provide, functionally (redundant storage of database data on a storage array, basically).
I've worked on ASM since 2003 (when it was first released). I'm well aware that it's not RAID. But I think it's a bit daft not to draw attention to its functional similarities to RAID, to someone who maybe isn't as familiar with it as I am. It's the start of a conversation, not the end, but it's a reasonable start, IMO.
Just because two technologies provide data redundancy and striping does not mean they are similar. RAID works at the disk block level, regardless of the data, whereas ASM works at the file allocation unit level, in a round-robin fashion among disk failure groups. I do not see the functional similarity. I think the idea that ASM is similar to RAID levels causes more misunderstanding than good. For instance, it makes people question whether they need ASM if they already have a RAID solution, apparently not realizing that these are complementary solutions.
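To illustrate the "allocation unit level" point: the AU size is a per-disk-group property you can simply look up in the dictionary (a sketch, assuming you're connected to the ASM instance):

```sql
-- Each disk group has its own allocation unit size (bytes),
-- which is the granularity ASM stripes files at.
SELECT name, allocation_unit_size
FROM   v$asm_diskgroup;
```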
Phwwooosh. (The sound of a point flying overhead)
I see a lot of functional similarity in the respect of striping and redundancy. As I've explained, ad infinitum.
No-one was suggesting you tell people "ASM=Software RAID" and leave it at that. I specifically said that saying "ASM is a bit like Software RAID" is the start of a conversation, not its end. So when people "question if they need ASM if they already have a RAID", that's when you can explain about all the things which make ASM useful in a RAID environment.
But when someone asks me "what the hell is ASM", and I reply "well, it's a database file system that provides data striping and (optional) redundancy, a bit like doing RAID would", I think that usefully establishes common ground on which to build deeper knowledge and understanding. People have to start somewhere, and saying it's a bit like RAID in some respects seems perfectly reasonable to me.
>if they need ASM if they already have a RAID solution, apparently not realizing that these are complementary solutions.
Just for my own understanding, can you please explain this statement a little more? IMHO, if you have ASM already, it gives you the benefit of striping out of the box, and you can't disable it either. Since the striping is file based and done at a fixed size (the AU size), this is probably better than working out the right stripe size, which you'd have to do with h/w RAID. So why not just use ASM striping alone rather than implementing both, from h/w and from ASM, if that's what you meant by complementary solutions? I guess using ASM-based striping and, probably, h/w-based mirroring is a better idea?
Whether your RAID setup provides redundancy depends on the RAID level. RAID is also about performance. RAID 1+0 gives you the best performance and disk-level redundancy, but it is the least efficient in terms of space-to-disk ratio. RAID 5 will give you good read performance and more usable space than RAID 1+0, but slow write performance, unless you have a special controller dealing with the parity overhead. RAID works with a fixed stripe size, typically somewhere between 64 KB and 1 MB, or whatever the controller lets you define; note that the stripe size is distinct from the disk sector size, and most modern disks use 4K physical sectors anyway.
ASM provides data redundancy and striping, but using a different concept. Striping is coarse- or fine-grained according to file templates, which increases performance of online log files vs. datafiles. Performance depends on the number of disks in a failure group. If your disks are already in a hardware RAID, that can give you additional performance, including a read and write cache. Depending on your controller, ASM data redundancy might be more reliable than a hardware RAID, which usually relies on a battery-backed cache if the server crashes.
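You can see the per-file-type striping ASM applies by querying the template view; the exact defaults vary by version (e.g. online logs were fine-striped in older releases):

```sql
-- STRIPE shows COARSE (AU-sized) vs. FINE (128 KB) striping
-- per file type, as driven by the disk group's templates.
SELECT name, stripe, redundancy
FROM   v$asm_template
WHERE  name IN ('DATAFILE', 'ONLINELOG', 'CONTROLFILE');
```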
What gives you the best data redundancy and performance depends on what setup you can afford. I think it is generally a good idea to combine both ASM and RAID for maximum data redundancy and performance. That's what I meant by saying both technologies are complementary. Some people may think that if they already use a hardware RAID that provides data redundancy and striping, then they do not need ASM anymore, which is true except for the additional performance, redundancy and storage management advantages.
The real problem with RAID 5 has nothing to do with write performance: fast CPUs (and dedicated controllers, as you say) saw the write penalty due to parity computation and storage vanish, for all practical purposes, years ago. The major drama with RAID 5 comes when it is in a state of failure, for then a read involves reading from all surviving disks in the array. So the problem actually becomes one of slow reads during failure.
As for mixing ASM and RAID... in my experience you get given a chunk of a SAN and told to get on with it. If the storage guys ever tell you it's using RAID-x, that's often a bonus! In any event: SANs are doing so much under the hood anyway, I would not myself choose to throw in ASM redundancy on top. So I'd usually be creating my disk groups there as 'external' redundancy and let the SAN work its magic instead.
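Concretely, that "let the SAN do it" approach looks something like this (the disk group name and multipath device names are invented for the example):

```sql
-- The SAN already mirrors the LUNs, so ASM only stripes:
-- EXTERNAL REDUNDANCY means no ASM-level mirroring at all.
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/mapper/san_lun1',
       '/dev/mapper/san_lun2';
```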
ASM does more than just striping and mirroring of data, or using different concepts than RAID. Whether you prefer to rely on the hardware mirroring used by certain RAID levels and choose ASM with external redundancy is up to you. Whether data redundancy provided by ASM in such cases would be overhead or overkill depends on how much you know about, and can control of, the underlying storage. Those are configuration issues, which depend on your environment and budget. My point was that just because a storage subsystem provides RAID does not justify dismissing ASM. Data striping and mirroring by a hardware RAID is not necessarily more efficient or faster than software.
RAID 5 is striping with parity and can even have better read performance than RAID 10, depending on the number of disks involved. RAID 5 may solve the issue of available disk space, but it provides the slowest RAID rebuild performance and hence a higher risk of losing data. Write performance of RAID 5 is always slower than RAID 10, regardless of the storage controller hardware.
You don't need to explain to me what RAID 5 is: I've been building them for about 20 years.
For a fixed number of disks, I would expect RAID5 to read better than RAID1+0, of course.
I would also expect write performance to be poorer than for the same number of disks in RAID 1+0, BUT I wouldn't expect the write performance to be a noticeable factor for a database user, because Oracle does datafile writes as a background event anyway. (I wouldn't put online redo logs on a RAID 5, though, because redo writes are foreground events.) So even though there may be a write penalty for a datafile write, the user shouldn't notice it. Hardware improvements also mean the computation of the parity information is not the bottleneck it once was.
However, a database user will always notice a RAID5 failure, because now what's supposed to be a single physical read becomes multiple I/O operations against all surviving disks in the array. And the entire I/O subsystem is likely to be swamped for the duration of the RAID rebuild time, once it starts.
Anyway, none of this has anything to do with the question asked by the original poster. So over and out.