We are considering some use cases which would involve deploying a large number (potentially hundreds or thousands) of WAR files to a WebLogic Server instance. What are the theoretical or practical limitations to doing this? They would likely have some shared libraries in common between them. These WAR files will connect to a common database via JDBC.
I'm trying to understand the potential impact on startup time, memory utilization, the ability to manage large sets via the admin console or scripts, etc.
The potential impacts of deploying a large number of WAR files are:
- Increase in startup time.
The exact impact on startup time is hard to predict. It depends heavily on the size of each WAR file, the servlets that load as part of initialization, how many JSPs are part of the application, and so on. On average the impact is in the range of 5 to 30 seconds per application; anything beyond 30 seconds suggests a very large WAR. So with, say, 300 WAR files, you might see startup time increase by roughly 25 minutes to 2.5 hours. I would advise using a test environment to measure the exact impact for your WAR applications; that data can then be used to devise an appropriate solution.
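As a rough sketch of that arithmetic, assuming the 5-30 second per-WAR range above (which you should replace with your own measured numbers):

```python
# Rough startup-time estimator for bulk WAR deployment.
# The 5-30 s per-WAR range is the ballpark figure from this answer;
# measure your own applications to replace these assumptions.

def startup_range(num_wars, low_secs=5, high_secs=30):
    """Return (min, max) extra startup time in minutes."""
    return (num_wars * low_secs / 60, num_wars * high_secs / 60)

low, high = startup_range(300)
print(f"300 WARs: ~{low:.0f} min to ~{high / 60:.1f} hours")
```

This is only a linear extrapolation; in practice, class loading and JSP compilation may not scale linearly, which is another reason to measure in a test environment.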
- Increase in memory resources.
Heap size needs to be set higher for this kind of deployment. Again, as a rough average, 300 WAR applications may utilize 300 MB to 1 GB of heap plus about 512 MB of permanent generation. So, ideally, a server holding 300 or more WAR applications would need to run with JVM arguments such as: -Xms1536m -Xmx1536m -XX:MaxPermSize=512m (note that HotSpot does not accept fractional sizes like 1.5g, so the value is expressed in megabytes).
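As a cross-check, the per-application figures implied above (300 WARs using roughly 300 MB to 1 GB of heap) work out to about 1 to 3.5 MB per WAR. A small sketch, with those per-WAR constants as assumptions to be replaced by your own measurements:

```python
# Back-of-the-envelope heap sizing from the per-WAR averages implied
# above (roughly 1-3.5 MB of heap per deployed WAR). These constants
# are assumptions; replace them with values measured in your tests.

def estimate_heap_mb(num_wars, per_war_low_mb=1.0, per_war_high_mb=3.5):
    """Return (low, high) estimated heap consumption in MB."""
    return (num_wars * per_war_low_mb, num_wars * per_war_high_mb)

low, high = estimate_heap_mb(300)
print(f"300 WARs: ~{low:.0f} MB to ~{high:.0f} MB of heap")
```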
- Memory resource effects during runtime
We should also consider the load on these web applications during runtime: whether a single server instance can handle the request load across all of them once they are deployed. This also affects memory utilization (heap settings). Continuously utilizing more than 80% of the heap can cause frequent GC, which hurts performance as well as CPU utilization. So this should also be measured in a test environment with a load test (performance test environment).
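The 80% rule of thumb above can be expressed as a trivial check (the threshold is the guideline from this answer, not a hard WebLogic limit):

```python
# Flag sustained heap utilization likely to cause frequent GC, using
# the ~80% guideline mentioned above. The threshold is a rule of
# thumb, not a hard limit; validate it under a real load test.

def heap_under_pressure(used_mb, max_mb, threshold=0.80):
    """True if heap usage exceeds the GC-pressure threshold."""
    return used_mb / max_mb > threshold

print(heap_under_pressure(1300, 1536))  # ~85% utilized
print(heap_under_pressure(1000, 1536))  # ~65% utilized
```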
- You will have to use your test environment to validate the above estimates under real-world conditions and accordingly arrive at a threshold for how many web applications a single server instance can handle. Measure deployment time (i.e., startup time); measure memory utilization, including PermGen, during deployment and startup; and finally measure the performance of the server under a load test covering all of these web applications.
- I do not see any real issues with the admin console itself, but browsing through its pages to monitor and manage this many applications would be very cumbersome. Ideally you would want to use WLST scripts to administer the application deployments.
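For example, bulk deployments can be driven by generating WLST `deploy()` calls instead of clicking through the console. The sketch below is plain Python that emits a WLST script; the application names, staging directory, and target server name are hypothetical placeholders:

```python
# Sketch: generate a WLST script that deploys every WAR in a list to a
# target managed server. WLST's deploy(appName, path, targets=...) is
# the real command; the names and paths below are illustrative only.

def wlst_deploy_script(war_names, target="ManagedServer1",
                       stage_dir="/apps/wars"):
    """Return WLST deploy() commands, one per WAR, as a single script."""
    lines = []
    for war in war_names:
        app = war[:-4] if war.endswith(".war") else war
        lines.append(
            f"deploy('{app}', '{stage_dir}/{war}', targets='{target}')"
        )
    return "\n".join(lines)

print(wlst_deploy_script(["orders.war", "billing.war"]))
```

The generated commands would then be run inside a WLST session (after `connect()`), which is far easier to repeat and audit than console-driven deployment of hundreds of applications.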
I would also like to suggest thinking along another tangent.
Compare deploying hundreds (say, 300) of web applications on a single managed server versus deploying about 50 applications per managed server across 6 servers (perhaps 12 for high availability/redundancy):
I recommend the second option:
- It is not much different in terms of resources (for example, running 1 server with 2 GB versus 2 servers with 1 GB each consumes almost the same total resources).
- Stability is higher when each server handles an optimal load (below its threshold).
- Calls between web applications are treated the same whether the applications are on the same server instance or on different instances, so there is no inter-application-call advantage to deploying them all on one server.