Hello,
The firmware limits a vdisk to 30 drives, regardless of the RAID level. This applies to the 2500 series.
Initialization progressing at 3% a day is definitely not normal, but I do not know what the issue is. Investigating it would require debug data.
Thanks for the reply!
Well that def answers my first question.
Noob here, how do I get this debug data?
See the document "Collecting Sun Storage Common Array Manager Array Support Data (Doc ID 1002514.1)" on My Oracle Support. It explains how to collect a supportdata which will need to be analyzed by an Oracle support engineer.
OK, a bit off-topic, but a 36 drive RAID volume?!?!!?
Your performance is going to stink. Google "RAID read-modify-write" for just one reason why that's an utterly horrible idea.
And as you're seeing, RAID initialization and therefore rebuild times are huge. So you have a huge window for multiple drive failures happening before a rebuild is finished, thereby rendering your 30-drive array permanently lost. What kind of backups do you have to restore 60+ TB of data from?
We need to archive many TBs' worth of .wav files, and the volume is increasing daily. We chose RAID 6 for its two-drive-failure redundancy. I understand what you're saying; what direction would you go in to store the data on the array?
On a file system that big?
Use a volume manager (LVM, ZFS, etc.) to spread the file system across multiple RAID arrays. 10-drive RAID-6 arrays are good - 8 data, 2 parity. Pick a segment size for each array such that the RAID stripe size (segment size times number of data drives in the array) is no larger than the underlying block size of the file or volume management system you plan to put on it. Multiple underlying RAID arrays will also allow both controllers of the 2530 (assuming it has two) to be used for processing IO. That doubles both the effective processing power of the 2530 and the bandwidth to the device, since your server will be able to actively use SAN paths to both controllers.
For example, if you use ZFS with its 128k block size, on 8+2 RAID-6 arrays, pick a 16k segment size so you get a 128k RAID stripe width. If you're using ext3 on top of LVM, select a segment size, calculate your stripe width, then use the --dataalignment option of "pvcreate" to match your stripe width. Build an even number of RAID arrays, with an equal number on each controller.
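The segment-size arithmetic above can be sketched as a few lines of Python. The function name and the 8+2 layout here are illustrative only; this just shows the "stripe = segment size × data drives" relationship, not any vendor tool:

```python
# Sketch: pick a per-drive segment size so one full RAID stripe
# (segment size x number of data drives) matches the filesystem
# block size. All sizes are in KiB.

def segment_size_kib(fs_block_kib: int, data_drives: int) -> int:
    """Segment size (KiB) that makes one full stripe equal the
    filesystem block size; raise if it doesn't divide evenly."""
    if fs_block_kib % data_drives != 0:
        raise ValueError("block size must divide evenly by data drives")
    return fs_block_kib // data_drives

# ZFS's default 128k block on an 8+2 RAID-6 array:
print(segment_size_kib(128, 8))  # -> 16 (a 16k segment per drive)
```

Running the same check for other filesystem block sizes tells you quickly whether a given array width can be matched at all.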
If you have to manually partition/label each LUN, make sure you start the disk partition you're going to be writing data to on a RAID stripe boundary. For example, note that ZFS, when given a whole disk/LUN to use, puts an EFI label on the disk and starts its data partition at sector 256, which is at the first 128k boundary, assuming a 512-byte sector size.
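A quick way to sanity-check alignment is to convert the partition's starting sector to bytes and test it against the stripe size. This is a small sketch assuming 512-byte sectors, using the EFI-label numbers from the example above:

```python
# Sketch: does a partition's starting sector land on a RAID stripe
# boundary? Assumes 512-byte sectors.

SECTOR_BYTES = 512

def starts_on_stripe(start_sector: int, stripe_kib: int) -> bool:
    """True if start_sector (in 512-byte sectors) is a multiple of
    the RAID stripe size (in KiB)."""
    return (start_sector * SECTOR_BYTES) % (stripe_kib * 1024) == 0

print(starts_on_stripe(256, 128))  # sector 256 = 128k offset -> True
print(starts_on_stripe(63, 128))   # old DOS-style offset -> False
```

The second case shows why legacy partition tools that start partitions at sector 63 were so bad for hardware RAID: every full-stripe write straddles two stripes.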
On a file system this big, if data availability is critical, I'd also be tempted to use software RAID on top of the multiple RAID arrays. You'll lose a small bit of performance, but it might be worth it. In my experience, block size, segment size, and RAID stripe calculations are nowhere near as important for software RAID as they are for hardware RAID.
It's all trade-offs. The more drives you wind up using for parity, the more available your data will be.
Don't mess around with multiple LUNs off any physical RAID array in your 2530. Just fill the entire RAID array with a single LUN.
To get the best possible availability, if you're using Oracle's CAM to set up the RAID array, don't let it magically select which physical drives to put in each RAID array you build. If you have enough expansion trays, you want to minimize the number of drives you select from each tray. If you have a total of 5 trays and are building 10-drive RAID-6 arrays, for example, only pick two drives from each tray for each RAID array you build. That way, if an entire tray fails (someone goes into the server room and unplugs the wrong device...), you'll minimize the impact and won't lose access to your file system. If you don't have enough expansion trays to build your RAID arrays that way, try to build each RAID array in as few trays as possible. That way a tray failure will have a minimum impact on your file system (hopefully).
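The tray-spreading idea above is just a round-robin selection, which can be sketched in a few lines. The (tray, slot) naming here is made up for illustration; CAM uses its own drive identifiers:

```python
# Sketch: spread the drives of one RAID array across trays
# round-robin, so each tray contributes as few drives as possible.

from itertools import cycle

def pick_drives(trays: int, drives_needed: int) -> list:
    """Return (tray, slot) pairs, cycling through trays so the
    drives are spread as evenly as possible."""
    next_slot = [0] * trays  # next free slot index per tray
    picks = []
    for tray in cycle(range(trays)):
        if len(picks) == drives_needed:
            break
        picks.append((tray, next_slot[tray]))
        next_slot[tray] += 1
    return picks

# 5 trays, one 10-drive RAID-6 array -> exactly two drives per tray:
print(pick_drives(5, 10))
```

With the 5-tray example from the text, losing any single tray costs the array only two drives, which a RAID-6 array survives.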
And don't forget - RAID is for availability. Backups are for integrity. RAID can't protect you from someone making a mistake and running "rm -f -r ." in the wrong place.
I cannot send you this document; you need to go to https://support.oracle.com and log in with your MOS account, if you have one.
If you do not have an account, follow the steps below to collect the support data. Note that you will need to have the 2530 under contract in order to log an SR with Oracle:
1. Go to CAM.
2. Select the array.
3. Click on "Service Advisor" on the top right in CAM.
4. Go to "Collecting Support Data" and follow the steps.