I have a quick question regarding Oracle on AIX.
We are using filesystems to store the datafiles. Which is better: creating one huge filesystem to hold all the datafiles, or several small filesystems?
The disks come from RAID-5 storage.
If one or more filesystems reside on the same single RAID-5 volume, which gives better performance?
user10050301 wrote: You have two pizzas that are the same size with the same ingredients.
One pizza has 4 slices and the other has 12 slices.
Which pizza can you eat faster?
One file system should work, provided you set it up to use a logical disk defined across multiple physical disks. That probably means spanning multiple RAID-5 stripes within the storage unit, depending on how the storage unit builds its RAID-5 stripes and how many disks' worth of space the database will take. Multiple paths to the disk from the OS (I/O controllers) should also exist.
It is not a question of small versus large file systems, but rather of how you define the file systems to the OS and on the disk storage.
HTH -- Mark D Powell --
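The advice above about defining a logical volume across multiple physical disks can be pictured with a toy striping model. This is purely illustrative Python, not anything AIX-specific; the stripe width of 4 is an assumed example, not a recommendation:

```python
# Toy sketch of why spreading a logical volume across physical disks helps:
# with striping, consecutive logical chunks land on different spindles, so
# concurrent random I/Os tend to hit different disks instead of queuing on one.
# stripe_width=4 is an illustrative assumption.
def disk_for_chunk(chunk_index, stripe_width=4):
    """Which physical disk a given logical chunk lands on under simple striping."""
    return chunk_index % stripe_width

# Eight consecutive chunks rotate across all four spindles:
print([disk_for_chunk(i) for i in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```

With one big un-striped filesystem on a single disk, all eight chunks would sit on the same spindle and every concurrent I/O would contend for it.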
The question may be quick, but the answer, perhaps not so quick.
You have a complex system. The database is going to be writing things and reading things. The data may or may not go through a file system buffer. Your controller(s) may have some buffer. Your RAID array probably has some buffer.
The database writes things with different qualitative characteristics. Redo writing and archiving basically spit out large volumes of sequential data. Data files are read and written randomly. The reads may be for single Oracle blocks, which often translate to 8 or 16 OS blocks, or multiblock reads, which cover many more OS blocks. More modern versions of Oracle may lean more towards direct reads into the user's PGA (which version, including patch level, do you have?). Some modern arrays do predictive reading, so they read more than the database has actually asked for into their own buffer. That buffer also has to handle writes going the other way. Different versions of AIX have different capabilities for interacting with Oracle to bypass system file buffers (like CIO - so which version of AIX?).
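The "8 or 16 OS blocks" translation above is just block-size arithmetic. A sketch, assuming 512-byte OS blocks and the common 4 KB or 8 KB Oracle block sizes (the multiblock read count of 16 is an illustrative value, not a tuning recommendation):

```python
# Hedged arithmetic: how Oracle block reads translate into OS-level I/O.
# Assumptions: 512-byte OS/disk blocks; db_block_size of 8 KB (a 4 KB
# Oracle block would give 8 OS blocks instead of 16).
ORACLE_BLOCK = 8 * 1024          # db_block_size in bytes
OS_BLOCK = 512                   # classic 512-byte disk sector

# A single-block read fetches one Oracle block:
os_blocks_per_read = ORACLE_BLOCK // OS_BLOCK
print(os_blocks_per_read)        # 16 OS blocks per 8 KB Oracle block

# A multiblock (full-scan) read fetches many Oracle blocks in one I/O:
mbrc = 16                        # illustrative multiblock read count
scan_io_bytes = mbrc * ORACLE_BLOCK
print(scan_io_bytes // 1024)     # 128 KB per multiblock read
```

The point is that a "random read" from the database's perspective may still be a sizeable sequential burst at the OS and array level, which is part of why the layers buffer and predict as described above.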
What this all means is that there are many places that may bottleneck, and exactly how and when is determined by how your app uses the system. In general, randomized access translates to "the more disk spindles, the better the performance," but that has to be tempered by whatever bottlenecks you run across. If you can separate out the sequential access, that can shift the bottlenecks elsewhere. So if you have 4 controllers, giving two to redo and archiving (with just a few RAID-10 disks for that) and keeping the regular data access from choking the disk buffers might give better performance than handing everything over to one big RAID-5 (how many disks do you have?). Or not - it depends. You also want some spares available: when RAID-5 goes into degraded mode, performance goes in the toilet. And the funny thing about disks is that they tend to be manufactured at the same time and then fail at the same time, sometimes much, much sooner than their rated MTBF - that is, while still new.
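The "more spindles" and RAID-10-for-redo points can be put into rough numbers. A back-of-the-envelope sketch, assuming ~150 random IOPS per spindle (a typical 10k-rpm figure) and the standard write penalties: a small random write costs 4 physical I/Os on RAID-5 (read data, read parity, write data, write parity) but only 2 on RAID-10 (one write per mirror side). All numbers here are illustrative assumptions:

```python
# Back-of-the-envelope random-write capacity for an array.
# Assumptions: ~150 random IOPS per spindle; RAID-5 small-write penalty of 4
# physical I/Os, RAID-10 penalty of 2. Illustrative figures only.
def usable_write_iops(spindles, iops_per_disk=150, write_penalty=4):
    """Random-write IOPS an array can absorb, given its write penalty."""
    return spindles * iops_per_disk // write_penalty

raid5 = usable_write_iops(8, write_penalty=4)   # 8-disk RAID-5
raid10 = usable_write_iops(8, write_penalty=2)  # 8-disk RAID-10
print(raid5, raid10)  # same spindles, RAID-10 absorbs twice the random writes
```

This is why dedicating a few RAID-10 disks to write-heavy sequential streams like redo, as suggested above, can pay off - and why adding spindles helps random-access data files more or less linearly until some other layer bottlenecks.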
Remember about "best practices:" They are best for someone else.