Some clarification of the procedure for reconfiguring OEM after migrating to a new EC2 instance:
1. ORACLE_HOSTNAME must be set correctly. /etc/init.d/dbora sets it in ~oracle/.bash_profile on boot, but if the instance has been assigned an Elastic IP address, the value does not get updated to match.
2. EMKEY_LOCATION is not documented anywhere. Not in the Oracle docs, not in MetaLink, and the only hit Google returns is this thread.
I finally discovered that it is a shell environment variable by looking through ~oracle/scripts/run_dbca.sh.
Here are the amended steps:
export ORACLE_HOSTNAME=`wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname`
# substitute your key location below
export EMKEY_LOCATION=/path/to/your/emkey
emca -config dbcontrol db
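The ORACLE_HOSTNAME half of step 1 can be automated at boot. The sketch below is my own assumption, not from this thread: the idempotent grep/sed update logic and the wrapping into a function are mine; only the metadata URL and the ~oracle/.bash_profile target come from the posts above.

```shell
#!/bin/sh
# Sketch: refresh ORACLE_HOSTNAME in a profile file so it survives an
# Elastic IP reassignment. Written as a function so the same logic can be
# exercised against any file; the real target is ~oracle/.bash_profile.

update_oracle_hostname() {
    profile=$1
    host=$2
    if grep -q '^export ORACLE_HOSTNAME=' "$profile" 2>/dev/null; then
        # An old value is present: replace it in place.
        sed -i "s|^export ORACLE_HOSTNAME=.*|export ORACLE_HOSTNAME=$host|" "$profile"
    else
        # No existing setting: append one.
        echo "export ORACLE_HOSTNAME=$host" >> "$profile"
    fi
}

# On a real instance (169.254.169.254 is the standard EC2 metadata address):
# update_oracle_hostname ~oracle/.bash_profile \
#     "$(wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname)"
```

A script like this could be called from /etc/init.d/dbora before it sources the profile.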
I did this by hand about 6 weeks ago without any white paper. Once you have done it, it's very easy to recreate; you just need to be very careful to keep the inventory consistent.
I have to re-do it next week; I might PDF something, as it is fairly trivial.
It's trivial to create a database on EBS. Not so trivial to take advantage of everything AWS has to offer for Oracle customers. Amazon and Oracle are co-authoring a paper that will cover this and many other topics of interest to Oracle cloud computing. Expect to see it in mid-April.
I was thinking of creating an Oracle database system with the Oracle DBMS installed on an external EBS volume, so that the separation between the OS (and whatever is lost after terminating the instance) and the ORACLE_HOME would be as sharp as possible. With the current image, for example, you lose a lot of valuable information in the ORACLE_HOME, such as the ADR. Of course one can relocate a lot of that, but my line of thinking was that by installing on another device you capture it all. Could you describe a setup like that?
Further, I have been puzzling over how to automate the instance configuration in the boot sequence with Elastic IP addresses and EBS volumes. Is that possible, instead of scripting it externally or doing it manually?
In increasing order of difficulty in terms of setup, the storage options are:
1. Everything on Ephemeral
2. Database on EBS
3. Database and OH on EBS
4. Everything on EBS (including root)
For enterprise class solutions, you need to be able to persist changes to the root volume. The easiest way to do that is with EBS (#4). There are other ways, but they require custom solutions.
The current method for implementing #4 is to create a small AMI that does nothing other than wait for the EBS root volume to be attached, mount it, and transfer control.
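As a rough illustration of that micro AMI (my own sketch, not the method from the forthcoming papers; device names, mount points, and the timeout are placeholders), the init replacement boils down to waiting for the volume, mounting it, and pivoting:

```shell
#!/bin/sh
# Sketch of a micro-AMI init replacement that waits for the EBS root
# volume and transfers control to it. Assumption: installed as /sbin/init.

wait_for_device() {
    # Poll until the block device node appears (EBS attach is asynchronous).
    dev=$1
    tries=${2:-60}
    while [ "$tries" -gt 0 ]; do
        [ -e "$dev" ] && return 0
        sleep 1
        tries=$((tries - 1))
    done
    return 1
}

# The pivot itself (commented out: requires root and a real attached volume):
# wait_for_device /dev/sdh || exec /bin/sh          # drop to a shell on timeout
# mount /dev/sdh /new-root                          # mount the EBS root volume
# cd /new-root
# pivot_root . old-root                             # swap roots; see pivot_root(8)
# exec chroot . /sbin/init </dev/console >/dev/console 2>&1
```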
As for automating instance configuration during launch, the most commonly used methods involve creating an AMI whose sole job is to marshal the resources for building an environment and then terminate. The general idea is that you pass the configuration information and the required credentials to the marshal instance. It then launches instances, creates volumes (optionally from snapshots), allocates EIPs, assigns the EBS volumes and EIPs to each instance, and tells the instances how to find each other (notice that this works both for multiple instances that must operate as a cluster and for single instances). Once this is done, the marshal instance terminates, leaving no trace of your credentials in EC2.
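Concretely, and purely as a hedged sketch using the classic EC2 API tools (all AMI ids, volume ids, zones, and addresses below are placeholders I invented), a marshalling run might look like:

```shell
#!/bin/sh
# Sketch of a marshalling script built on the EC2 API tools
# (ec2-run-instances, ec2-create-volume, etc.). Ids are placeholders.

# The tools print one "INSTANCE i-xxxx ..." line per launched instance;
# pull the instance id out of that output.
instance_id() { awk '/^INSTANCE/ { print $2; exit }'; }

# The launch-and-wire-up sequence (commented out: needs real credentials):
# OUT=$(ec2-run-instances ami-12345678 -z us-east-1a -k my-keypair)
# IID=$(echo "$OUT" | instance_id)
# ec2-create-volume -s 100 -z us-east-1a              # 100 GiB for the database
# ec2-attach-volume vol-xxxxxxxx -i "$IID" -d /dev/sdh
# ec2-allocate-address
# ec2-associate-address -i "$IID" 192.0.2.10
# ssh root@192.0.2.10 'echo tell this instance about its peers here'
```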
This will all be documented with examples in forthcoming joint white papers from Amazon and Oracle.
Thank you for your high-quality answer. Your achievement is huge: you structured my mind (only in this respect, though ;-)). I can't wait for the white papers to be published. My approach to instance configuration until now was to write PHP code that configures the instance after startup (from another server) using the AWS EC2 APIs, but I had the feeling it should be done in the boot sequence. The idea of a marshalling instance takes the whole thing a step further and gives a lot of new food for thought.
Could you give some hints about the requirements for a minimal instance in the #4 scenario? And, if I am not asking too much, could you tell us what tools you use in the marshalling-instance scenario in the upcoming white papers (shell scripting with the API tools, Ruby, Perl?), so I can prepare myself?
You will understand that I am available for an early review of the paper(s) you are working on. ;-)
Thanks again, Taeke
The marshalling instance is minimal in that it needs only enough of the OS to run your marshalling code. It's a bare bones install with only what you need to retrieve the configuration information (usually through User Data, S3, or a combination of the two), obtain the necessary credentials, submit requests to the AWS services you'll need, and communicate with the instances. It can be as simple as a shell script that retrieves the User Data, decrypts a PK, issues calls to EC2 API tools, and connects to the instances with ssh to tell them how to find each other.
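A toy version of that bootstrap (an assumption-laden sketch: the symmetric cipher, passphrase handling, and file names are mine, not from this thread) could look like this:

```shell
#!/bin/sh
# Sketch of the marshalling bootstrap: fetch User Data, decrypt shipped
# key material, then drive the EC2 API tools and ssh.

# 1. Retrieve the User Data passed at launch (standard EC2 metadata URL):
# USERDATA=$(wget -q -O - http://169.254.169.254/latest/user-data)

# 2. Decrypt a private key shipped encrypted via User Data or S3.
#    stdin: encrypted key; $1: passphrase. AES-256-CBC is an example choice.
decrypt_key() {
    openssl enc -d -aes-256-cbc -pass "pass:$1"
}

# 3. Issue API calls and contact the instances, e.g.:
# ec2-describe-instances
# ssh -i /tmp/key.pem root@"$PEER" 'echo configure cluster membership here'
```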
The choice of implementation language will depend upon what you need the marshalling script to do and your personal/corporate standards.
I didn't have time to do #4; I did 1, 2, and 3. Doing #4 would have meant learning a lot more about managing EC2.
I am interested in the white paper too. What is its current status?
Please mail it to email@example.com
Could you give us some insight into how to get the root volume onto an EBS volume and how to use it after booting the AMI? I have been experimenting with replacing /sbin/init with a script that waits for a new EBS device containing the new root volume, mounts it, and then switches roots with the pivot_root command (see: http://developer.amazonwebservices.com/connect/thread.jspa?threadID=24091&start=15&tstart=0). It works, but somehow it doesn't give me great confidence in its robustness.
Are your lines of thinking the same, or really different? Really curious.
Forgive any lack of understanding on my part, but why do we need to create a second failover instance? If the goal is a persistent Oracle DB using EBS, would it suffice to just keep the database on the attached EBS storage and leave the DB software on the evanescent EC2 instance?
Because IMHO a bundled EC2 instance is a static thing without much space allocated. The first thing to do with an installed database is to apply all the patches [ 184.108.40.206 + recommended + yadda yadda ]. You run out of space fast if you do this.
Best to keep $OH on a sizeable EBS volume, I've found :0
Is that what was meant by "second failover instance"? A second EBS volume for the OH? If so, I agree with Chris. You definitely want to store your OH on EBS. It makes the AMI smaller, it loads faster, bundles faster (if you need to make a change), and it allows you to easily persist all of the things Oracle likes to store in a software directory :-}. It makes patching easier, and if you need multiple OHs (say for ASM and a database, or to have a 10g and 11g OH during an upgrade), store each OH on a separate EBS volume so that they can plug and play. Consider creating a "master" of each of your OH versions as snapshots so that you can easily create new EBS volumes from them.
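For instance (a sketch only: the ids are invented and the output-parsing helper is my own assumption), cloning an OH master could look like:

```shell
#!/bin/sh
# Sketch: keep "master" snapshots of each ORACLE_HOME volume and clone
# them on demand with the classic EC2 API tools. All ids are placeholders.

# The tools print one "SNAPSHOT snap-xxxx ..." line; pull out the id.
snapshot_id() { awk '/^SNAPSHOT/ { print $2; exit }'; }

# Commented out: requires real credentials and volumes.
# SNAP=$(ec2-create-snapshot vol-11111111 | snapshot_id)    # master of the 10g OH
# ec2-create-volume --snapshot "$SNAP" -z us-east-1a        # fresh copy of that OH
# ec2-attach-volume vol-33333333 -i i-44444444 -d /dev/sdg
# ...then on the instance:
# mount /dev/sdg /u01/app/oracle/product/10.2.0
```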
I thought you were talking about database failover and didn't understand how this fit the context of the thread. :-)
The approach most often used for putting the root volume on EBS is the one described in the thread you referenced; see the post by N. Martin in that thread.
In general, the idea is to create a micro AMI whose job is to mount the root EBS volume and pivot. This is the best practice for now. People have been lobbying for a native method to boot from EBS (add your voice by contacting EC2 support, if you haven't already), so perhaps we'll see it in the future.