Comment from the peanut gallery - I don't see Exadata V2 as being suitable only for OLTP and not for OLAP. There are some basic hardware changes in the architecture, if I'm not mistaken, like scaling from DDR to QDR on the InfiniBand layer. Such improvements are generic scalability ones and will be of benefit irrespective of running OLAP or OLTP. Ditto for using the new Intel CPUs (though personally, I prefer Sun AMD servers).
I think of the new Exadata database machine as being more OLTP-capable than before, not less OLAP-capable.
It is good to know that Exadata V2 will be good for OLTP as well as OLAP, but from what I hear in Larry's presentation, this one is pitched for OLTP. However, there are many realistic cases where DW/OLAP queries need to run against large OLTP databases. In such cases, DBAs face a dilemma about the optimal values for parameters specifically targeted at DW/OLAP queries, like DB_FILE_MULTIBLOCK_READ_COUNT. Is there any white paper or other resource containing such recommendations for Exadata?
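Not an official Exadata recommendation, just a general note: from 10gR2 onward, Oracle auto-tunes DB_FILE_MULTIBLOCK_READ_COUNT when the parameter is left unset, which is usually the safer choice on a mixed OLTP/DW system. A minimal sketch of checking and backing out an explicit setting (spfile-based instance assumed):

```sql
-- Show the current value (EXPLICIT in the ISMODIFIED/ISDEFAULT columns
-- of V$PARAMETER indicates someone has set it by hand).
SHOW PARAMETER db_file_multiblock_read_count;

-- Remove any explicit setting so the auto-tuned value takes effect
-- after the next restart.
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE = SPFILE;
```

Whether auto-tuning is appropriate for your specific mixed workload is something to verify against Oracle's own documentation and testing on your system.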
Exadata V1 could only run DSS workloads. With Exadata V2, OLTP workloads can now be run as well, as a result of the Smart Flash Cache, which enables the 1 million IOPS per full rack. Exadata V2 is still very much suited for DSS workloads (it runs them 2x as fast as Exadata V1), as mentioned in the [Exadata V2 product launch webcast|http://www.oracle.com/go/?&Src=6811169&Act=21&pcode=WWMK09047168MPP014]. There is also more on the DSS features in this paper:
You will see these coming online in the months ahead. The V2 system was announced in September, so the sales cycle is winding towards the implementation phase for the first sets of customers. Stay tuned to the news from Oracle on this...
I am trying to migrate from 10gR2 to Exadata V2 for an essentially OLTP environment: 2 to 3 million rows inserted per day across about 70 tables. Initially I took the export from 10gR2 (rows = n) and created the environment in Exadata. Performance was about 30-40% better. Then I recreated the tables and indexes using uniform extents and consolidated some of the indexes so that all of the columns required by the query were in the index. Performance went down by 50%!
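For anyone following along, the rebuild described above looks roughly like this; all object names here are hypothetical, purely to illustrate the technique:

```sql
-- Locally managed tablespace with uniform extent sizing, as described
-- in the post above.
CREATE TABLESPACE orders_ts
  DATAFILE SIZE 10G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 8M;

-- A consolidated "covering" index: every column the query references
-- is in the index, so the query can be answered from the index alone
-- without visiting the table.
CREATE INDEX orders_cov_ix
  ON orders (customer_id, order_date, status, amount)
  TABLESPACE orders_ts;
```

Note that on Exadata this kind of index-only access path competes with (and can preempt) offloaded full scans, which may matter for the performance regression being described.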
I am going back to the original import to revalidate the results. Nothing seems to make any sense at all. :)
Check whether the queries perform well with the indexes made invisible, and if so, drop the indexes. That will save you the index maintenance overhead, and there are times when a full table scan or fast full scan is better on Exadata due to Smart Scan. It could be that you are missing out on Smart Scan because of the changes to the indexes.
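For reference, invisible indexes (an 11g feature, so available on Exadata V2's database release) let you test this safely before dropping anything. A sketch with a hypothetical index name:

```sql
-- Hide the index from the optimizer; it is still maintained by DML,
-- but query plans will no longer use it (unless the session sets
-- OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE).
ALTER INDEX orders_cov_ix INVISIBLE;

-- Re-run the workload. If performance holds up or improves,
-- the index can be dropped for real:
DROP INDEX orders_cov_ix;

-- Otherwise, back out instantly:
ALTER INDEX orders_cov_ix VISIBLE;
```

The advantage over dropping outright is that making the index visible again is instantaneous, whereas recreating a dropped index on a large table is not.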