We would like to deploy a large application written in Java EE 5 and take advantage of the automatic scalability of Amazon Web Services.
The plan is to start with as few as 5 servers and then scale up as the load increases. When the load goes back to normal, we need to "kill" the unnecessary instances.
How can that be done in a matter of seconds/minutes instead of hours?
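To make the intent concrete, this is the kind of setup I have in mind, sketched with the AWS CLI (all names, the AMI ID, and the thresholds below are placeholders, not a tested configuration):

```shell
# Launch configuration describing the WebLogic managed-server image
# (ami-12345678 and all names here are placeholders)
aws autoscaling create-launch-configuration \
  --launch-configuration-name weblogic-lc \
  --image-id ami-12345678 \
  --instance-type m1.large

# Auto Scaling group: start with 5 instances, allow growth to 20
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name weblogic-asg \
  --launch-configuration-name weblogic-lc \
  --min-size 5 --max-size 20 --desired-capacity 5 \
  --availability-zones us-east-1a us-east-1b

# Scale-out policy: add 2 instances when triggered (e.g. by a CPU
# alarm); a matching scale-in policy with a negative adjustment
# would terminate instances when the load drops again
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name weblogic-asg \
  --policy-name scale-out \
  --scaling-adjustment 2 \
  --adjustment-type ChangeInCapacity
```

The open question is what has to happen inside WebLogic when an instance appears or disappears this way.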
I believe the answer may lie with Puppet or Chef.
Has anyone tried this? Is it even possible?
Sorry again guys,
I have searched the internet a lot but haven't found anything clear enough.
I am starting to think WebLogic has some limitations when it comes to scaling out in the cloud.
But can anyone help us on that? Please.
Maybe I am missing something here, but what I would like to understand is whether WebLogic can scale out automatically in the cloud or not. How can WebLogic cope with a Slashdot effect, where you cannot predict that the load will be much higher for just a few hours?
That's where real cloud solutions like AWS (and now also Google Compute Engine) can be handy. I know other solutions, designed as "shared nothing" architectures, can spin up new instances in a matter of seconds or minutes. How does WebLogic fit into this scenario?
Does WLS 12c change that? Or is it still hard (or impossible) to start/stop instances automatically as the load increases/decreases? I am considering AWS EC2 instances, or any modern cloud compute offering such as GCE, as the "servers".
Maybe WLS 12c supports that out of the box. If it does, we would have to use sticky sessions, right? Since WLS stores the user's session in the application layer (I think).
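If the session really does live only in the application layer, I assume the load balancer would have to pin each user to one instance. With a classic ELB that would look something like this (a sketch; `weblogic-elb` and the policy name are placeholders):

```shell
# Enable load-balancer-generated cookie stickiness on a classic ELB,
# so each user keeps hitting the same WebLogic instance
aws elb create-lb-cookie-stickiness-policy \
  --load-balancer-name weblogic-elb \
  --policy-name weblogic-sticky \
  --cookie-expiration-period 3600

# Attach the stickiness policy to the HTTP listener on port 80
aws elb set-load-balancer-policies-of-listener \
  --load-balancer-name weblogic-elb \
  --load-balancer-port 80 \
  --policy-names weblogic-sticky
```

The downside, as I understand it, is that killing an instance on scale-in would then lose the sessions pinned to it, unless WebLogic replicates them somewhere.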