The Exadata half rack is advertised at 25,000 disk IOPS.
Is the advertised 25,000 IOPS figure for writes?
Does this mean that I can do 25,000 writes per second?
For a high-end, write-intensive system, what are the typical write rates you folks have encountered? So far I have seen less than 7,000 IOPS in my whole career, and I am interested to know what the write IOPS is on your systems.
If the write IOPS is very high, say 22,000, should I provision the flash disks as grid disks?
While traditional HDDs deliver about the same IOPS for reads and writes, most NAND flash-based SSDs are much slower at writing than reading, due to the inability to rewrite directly into a previously written location.
Some ballpark numbers:
15,000 rpm SAS drives: ~175-210 IOPS
Given that 175 IOPS/disk * 12 disks/storage server * 14 storage servers/full rack = 29,400, the stated 25k figure is even a tad conservative for the high-performance disks.
Naturally, if you're using ASM redundancy you'll be doing each of your writes two or three times, with a consequent reduction in effective IOPS from a database perspective.
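To make the arithmetic concrete, here is a quick sketch in Python. The per-disk figure of 175 IOPS, the 12-disks-per-cell count, and the redundancy multipliers are the same assumptions quoted above, not official datasheet numbers:

```python
# Back-of-envelope Exadata disk IOPS estimate.
# Assumptions (from the discussion above, not official figures):
#   ~175 small-IO IOPS per 15,000 rpm SAS drive, 12 disks per storage server.
IOPS_PER_DISK = 175
DISKS_PER_CELL = 12

def raw_disk_iops(cells):
    """Aggregate small-IO capacity across all cell disks."""
    return IOPS_PER_DISK * DISKS_PER_CELL * cells

def effective_write_iops(cells, asm_copies=2):
    """Writes as seen by the database: ASM normal redundancy writes
    every extent twice; high redundancy writes it three times."""
    return raw_disk_iops(cells) // asm_copies

for name, cells in (("quarter rack", 3), ("half rack", 7), ("full rack", 14)):
    print(f"{name}: raw={raw_disk_iops(cells)}, "
          f"effective writes (normal redundancy)={effective_write_iops(cells)}")
```

A full rack (14 cells) comes out at the 29,400 raw IOPS mentioned above, halved again from the database's perspective under normal redundancy.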
Thanks for taking time out on a Sunday to respond to this post.
So on a half-rack Exadata machine I can safely get around 12,500 IOPS on the SAS drives after accounting for normal redundancy?
And from your post, do I understand correctly that the flash drives don't give me great write performance? I was planning to use flash grid disks to counter the high IOPS on our system.
The business is yet to give me IOPS numbers, and I just heard a rumour of 22,000 inserts per second. That number makes me nervous, as I haven't seen a system in real life that does more than 7,000 IOPS; but just because I haven't seen it doesn't mean such systems don't exist.
Your post brings up an important point: not all inserts are created equal. In a worst-case scenario, a single-row insert could require redo writes, undo writes, and datafile block writes, and multiply everything by 2 or 3 for ASM redundancy. But if you can do large, parallel direct-path inserts from, say, DBFS-hosted external tables, your throughput can approach the maximum data load rate of 12TB per hour on a full rack.
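As a rough illustration of that worst case, here is a sketch with hypothetical per-insert write counts. Real numbers depend heavily on LGWR/DBWR batching, commit frequency, and indexes, so treat every constant below as a placeholder:

```python
# Worst-case write amplification for a single-row insert.
# All unit costs are hypothetical placeholders for illustration --
# in practice redo is batched by LGWR and dirty blocks by DBWR,
# so the real physical write count per row is usually much lower.
REDO_WRITES = 1      # redo for the insert and commit
UNDO_WRITES = 1      # undo block write
DATA_WRITES = 1      # table block write (add more per index)
ASM_COPIES = 2       # normal redundancy; use 3 for high redundancy

physical_writes_per_row = (REDO_WRITES + UNDO_WRITES + DATA_WRITES) * ASM_COPIES

rows_per_second = 22_000
print(physical_writes_per_row * rows_per_second)  # worst-case IOPS demand
```

Even this crude model shows why the batching behaviour matters: taken literally, 22,000 single-row inserts per second would demand far more physical write IOPS than the disks provide.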
Ideally you'll be able to gather metrics from either an existing system or a performance test environment. You could even do this in a non-Exadata environment (with the exception of hybrid columnar compression testing): just run your data loads, and measure the IOPS volume and write throughput on disk per volume of rows inserted.
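For instance, once you have measured a test run, you can derive a writes-per-row figure and project it to the rumoured target rate. The input numbers below are made-up placeholders; substitute your own measurements (for example, the delta of an AWR physical-write-request statistic across the load window and the rows loaded):

```python
# Derive write cost per row from a measured test run, then project it.
# All inputs are hypothetical -- replace them with your own measurements.
def writes_per_row(io_requests_delta, rows_inserted):
    """Average physical write requests issued per row inserted."""
    return io_requests_delta / rows_inserted

def projected_iops(target_rows_per_sec, per_row):
    """Projected write IOPS at a target insert rate."""
    return target_rows_per_sec * per_row

# Hypothetical test: 90,000 write requests observed while loading 30,000 rows.
per_row = writes_per_row(io_requests_delta=90_000, rows_inserted=30_000)
print(projected_iops(22_000, per_row))  # projection at 22,000 rows/sec
```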
I don't generally recommend using the flash memory for permanent data storage, if only because, even with normal ASM redundancy, you're cutting usable space in half, and the only way to expand flash capacity is to buy more storage servers.
I am not in a position to do data loads, as the inserts would be done by the application, and all of them would be single-row inserts and commits.
I am also not sure how to measure the IOPS, since the AWR reports would show the IOPS of the database compute nodes, not that of the storage cells, when I run the workload.
I'd think that Exadata would be a rather large investment to make without testing of any kind :-). But for the exercise, the major inputs might be:
1. How large are the rows being inserted (and do they include any complex data types like LOBs, object types and the rest)?
2. Is the processing done sequentially or in parallel?
3. Are the inserts done by a session dedicated to data loads, or is it more of an OLTP application doing other work?
Doing single-row inserts one at a time, you won't get anywhere near Exadata's maximum data load rates. Your throughput may well be restricted by CPU capacity, so IOPS may not even be the limiting factor.
On an Exadata system, IOPS at the storage servers can be calculated from data in the cellcli LIST METRICCURRENT and LIST METRICHISTORY commands. Guy Harrison has created a perl script that aggregates this data into more readable form: http://guyharrison.squarespace.com/blog/2011/7/31/a-perl-utility-to-improve-exadata-cellcli-statistics.html
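If you dump that cellcli output to a file, a few lines of scripting can sum the per-celldisk rates into a single number. The sketch below assumes a simplified three-column layout (metric name, object, value) and hypothetical metric names following the cell-disk write-request pattern; check your actual LIST METRICCURRENT output and adjust the parsing and metric prefix to match:

```python
# Sum per-celldisk write-IOPS metrics from (simplified) cellcli output.
# Assumed input format: "<metric> <object> <value> IO/sec" per line.
# Real LIST METRICCURRENT output may differ -- adapt the parsing as needed.
sample = """\
CD_IO_RQ_W_SM_SEC CD_00_cell01 850 IO/sec
CD_IO_RQ_W_SM_SEC CD_01_cell01 910 IO/sec
CD_IO_RQ_W_LG_SEC CD_00_cell01 40 IO/sec
"""

def total_iops(text, prefix="CD_IO_RQ_W"):
    """Sum the value column for every metric whose name matches prefix."""
    total = 0.0
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0].startswith(prefix):
            total += float(parts[2])
    return total

print(total_iops(sample))  # sum of small and large write request rates
```

Running something like this per cell and adding the results gives a storage-side write IOPS figure that AWR on the compute nodes won't show you directly.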