We have gone through the process. Here is what I sent in an email to a similar question on oracle-l:
We originally purchased a quarter rack V2, and chose to buy the half rack upgrade a year later. Because the V2 wasn't available for sale any more, we received X2 components. Purchasing an upgrade is different from a new Exadata purchase, as you will just receive the components needed to perform the upgrade (our half rack upgrade included 2 database servers, 4 storage servers, an InfiniBand switch, and a bunch of cables). It took the Sun engineers a couple of days to get everything installed in the rack. The longest wait was for the labels that identify which cable goes into which server; apparently they ship from another part of the universe.
After we had everything cabled in the rack, we were ready to configure the new components. You can either add the storage servers and database servers into the existing cluster, or create a second cluster and storage grid (that's what we did). There is nothing physical that would prevent you from connecting the V2 and X2 nodes into the same cluster. Just remember that you will have less memory on the V2 database servers compared to the X2 database servers (72GB vs 96GB).
The process of upgrading an Exadata system (1/4 to 1/2 rack, 1/2 to full rack) is to configure the griddisks on the new cells, add them into ASM, then add each of the database servers to the cluster.
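For the storage side, that boils down to a few CellCLI and ASM commands. A rough sketch of what I mean (the grid disk prefixes, sizes, and disk group name below are just examples, not your actual layout):

```
# On each new storage cell: carve the cell disks, then the grid disks
CellCLI> create celldisk all
CellCLI> create griddisk all harddisk prefix=DATA, size=423G
CellCLI> create griddisk all harddisk prefix=RECO

-- From a database server: present the new grid disks to ASM and rebalance
SQL> ALTER DISKGROUP DATA ADD DISK 'o/*/DATA*' REBALANCE POWER 11;
```

The grid disk layout on the new cells should mirror what you already have on the existing cells, so that ASM can spread data evenly across all of them.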
We have one customer who has gone through the process of a 1/4 --> 1/2 rack upgrade...they were using all X2-2 parts, running Exadata storage server version 11.2.2.x with database version 11.2.0.2 (BP5) at the time. One thing to remember is that the upgrade parts will be running the latest Exadata software versions, so if your V2 is running older software versions (an 11.2.0.1 database or an 11.2.1.X storage server version), you will need to look at patching your V2 to get it in sync with the X2 components. You definitely don't want them running different patch levels if they're going to be in the same cluster.
Hope this helps!
I am curious about one thing: I have found that all servers in the same InfiniBand network need to be on the same InfiniBand firmware level, or weird things start happening. In the case of this sort of upgrade, are the kernel and InfiniBand utilities on the existing servers first upgraded to match the new servers, and then the new ones added?
The existing equipment needs to be patched up to the levels of the new equipment if you are going to be integrating all of the components into one cluster.
For example, if you had a half rack upgrade kit installed today, it would be installed with storage server software version 11.2.2.x, and most likely bundle patch 13 on the Oracle binaries. If you were planning on using that half rack upgrade to extend your existing cluster, then you would need to patch the existing hardware to those same versions. If you were going to build a separate cluster out of the new components, they would not necessarily have to be brought up to those levels.
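A quick way to compare levels before deciding is the stock Exadata tooling (run these on each node; I'm showing the commands only, since the output obviously depends on your system):

```
# Active Exadata image version (run on each storage cell and database server)
imageinfo

# History of the images applied to the node over time
imagehistory

# Patch inventory of the Oracle home (on a database server, as the oracle user)
$ORACLE_HOME/OPatch/opatch lsinventory
```

If `imageinfo` reports the same image version on the old and new nodes, and the patch inventories match, you are in sync.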
The one exception to all of this is the InfiniBand switch software. If your existing IB switches were running an older version (say 1.1.3-2), then they would have to be upgraded to the latest version (1.3.3-2). That is because the half rack upgrade kit includes the spine switch (it is not included with a quarter rack), which arrives running the latest version, and all of the switches on the fabric need to match.
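To see what the fabric is actually running, assuming the stock OFED diagnostics on a database server and the standard switch CLI:

```
# From a database server: list every switch visible on the InfiniBand fabric
ibswitches

# Logged in to each IB switch as root: show the switch software/firmware version
version
```

Run `version` on every switch (including the new spine) and confirm they all report the same release before you put load on the fabric.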
You don't always have to go with the latest storage software/InfiniBand versions. If for some reason you decide not to upgrade to a later version of the Exadata storage software and InfiniBand software (not that I am advising that), you could ask the Oracle/Sun field engineer to do just the hardware installation and then carry out the following steps.
- Simply re-image the newly installed cells with the same version of Exadata software as the rest of your cell nodes.
- Install the same InfiniBand switch software on the newly installed switch as is on the existing switches.
- Re-image the compute nodes to the same version as the existing compute nodes. Please note that the minimal pack does not upgrade the compute node OS; it upgrades only the hardware firmware, the kernel, and the OFED (InfiniBand) RPMs, and on existing OEL 5.3 compute nodes it does not even upgrade the kernel. So you may want to re-image the compute node to the same version it started its life at. That said, the latest battery packs delivered with the X2-2 are compatible with very old cell/compute images.
- Create the cell, celldisk, griddisks, flashcache etc.
- Update the existing compute nodes' cellip.ora files to include the new cells' InfiniBand IPs.
- Expand the ASM diskgroups onto newly created griddisks.
- Expand the Grid/Database cluster by following standard node addition process.
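The last three steps above might look roughly like this (the cell IPs, disk group name, and hostnames are placeholders for your own values):

```
# On every existing database server: append the new cells' IB IPs to cellip.ora
cat >> /etc/oracle/cell/network-config/cellip.ora <<EOF
cell="192.168.10.5"
cell="192.168.10.6"
EOF

-- Grow the disk groups onto the new grid disks and rebalance
SQL> ALTER DISKGROUP DATA ADD DISK 'o/*/DATA*' REBALANCE POWER 11;

# From an existing node: extend the Grid Infrastructure to the new servers
$GRID_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={dbnode3,dbnode4}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={dbnode3-vip,dbnode4-vip}"
```

Once addNode.sh completes, you extend the database homes the same way and then add instances on the new nodes.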
Like Andy says, first upgrade your existing Exadata quarter rack to the same version that Oracle is going to install on the new components, and then ask them to expand your existing rack onto the new components. It depends on the fees you are paying to Oracle: if you are paying a lot of money for the upgrade, then make use of them. There is no reason you can't upgrade the software yourself. Though even if you are familiar with the Exadata hardware, I would advise letting the Oracle/Sun field engineers do the heavy lifting of running the cables through the tight corners of the Exadata rack.
Hope this helps.