
In this post I introduce Samplr: an open-source, intelligent sampling profiler that can be embedded in any Java application to automatically identify performance bottlenecks.

Tools of the Trade

If you read the Developer Power story in the July/August issue of Oracle Java Magazine, you already know that I love VisualVM. It is the best tool I know for inspecting the inner workings of a running Java system.

The feature I use most in VisualVM is the sampling profiler. This is a non-intrusive, easy-to-use profiler that works by taking regular samples of the running JVM thread stacks and aggregating the results over time. Attaching this profiler to a live JVM is a quick and effective way to pinpoint hot spots in code. The sampling profiler is lightweight enough that it can be safely attached even to production systems.
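The idea behind such a sampler is simple enough to sketch in a few lines of plain Java (illustrative only; VisualVM's implementation is far more sophisticated):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Minimal illustration of how a sampling profiler works: periodically
 * capture a thread's stack trace and count how often each method
 * appears on top. Methods that dominate the counts are the hot spots.
 */
public class StackSamplerDemo {

    /** Samples the target thread every intervalMs for durationMs, counting top frames. */
    static Map<String, Integer> sample(Thread target, long intervalMs, long durationMs)
            throws InterruptedException {
        Map<String, Integer> hotSpots = new ConcurrentHashMap<>();
        long deadline = System.currentTimeMillis() + durationMs;
        while (System.currentTimeMillis() < deadline && target.isAlive()) {
            StackTraceElement[] stack = target.getStackTrace(); // snapshot, no suspension
            if (stack.length > 0) {
                String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                hotSpots.merge(top, 1, Integer::sum);
            }
            Thread.sleep(intervalMs);
        }
        return hotSpots;
    }

    public static void main(String[] args) throws Exception {
        // A worker that spends its time in a known hot method.
        Thread worker = new Thread(StackSamplerDemo::busyWork, "worker");
        worker.setDaemon(true);
        worker.start();

        Map<String, Integer> result = sample(worker, 10, 500);
        System.out.println("Samples per top frame: " + result);
    }

    static void busyWork() {
        double x = 0;
        while (true) { x += Math.sqrt(x + 1); }
    }
}
```

Because the target thread is never suspended, the overhead is just the cost of the periodic stack snapshots, which is why this technique is safe even in production.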

VisualVM Sampler snapshot view

Working as a consultant, I have had numerous opportunities to use the VisualVM sampler to diagnose performance problems in live applications. When faced with a performance issue my usual approach is to try to reproduce the problem in a controlled test environment, and then use VisualVM sampling to locate the performance bottlenecks in code.

This approach does not always work. Sometimes the right kind of test environment is not available. Other times a test environment is available but it is not an exact replica of the production system, and hence the problem does not happen there. And even with the right kind of environment, sometimes the conditions that trigger the problem in production are not fully known. But since the sampling profiler is so lightweight, it is often possible to just connect it to a live production system and observe its behaviour firsthand.

Sampling a production system is not devoid of challenges: it requires the cooperation of Ops people (who are not always easily convinced that sampling is safe in production environments) and lots of patience, as one has to sort through a lot of thread stack information in order to locate the threads processing the relevant requests. And if your production system is clustered (which is often true) it is not practical to connect VisualVM to every single cluster node.

After some time using this great tool, I started to wonder: wouldn't it be nice if it was possible to show the sampling profiler the location of request boundaries both in space (code locations) and time (begin and end) and then have it sample just the right threads at the right times? It would be even nicer if this functionality could somehow be embedded in an application, so that slow requests would be automatically recognized and sampled in production systems without the need for Ops help or manual intervention.

A few weeks ago I stopped daydreaming and created Samplr, an open source sampling profiler based on and compatible with VisualVM that can be easily embedded into any Java application.

Samplr architecture


Samplr is designed to be included as a library in any kind of Java application. Once initialized, it starts a supervisory thread that is notified by the application when requests start and when they finish. These notifications associate each request with a specific thread and time slot.

The following sequence diagram depicts the interaction between the Samplr threads and the request processing threads:

Samplr example sequence diagram

Once the monitoring thread identifies that a request is interesting, it wakes up a sampling thread that will take regular snapshots of the target thread stack. The definition of an "interesting" request depends upon how Samplr is configured. In the example above, Samplr is configured to sample only slow requests (requests that take longer than a predefined time threshold to complete).

Using Samplr

Samplr is distributed as a self-contained JAR file which should be incorporated into the target application using the standard library inclusion methods.

The application then needs to initialize Samplr by creating an instance of RequestManager. There needs to be just one instance of this class per application, and it should be initialized with a set of parameters that tells it what requests to look for, how to sample them and how to store the results. An example initialization is provided below:


The code snippet above shows Samplr being initialized with instructions to sample requests that take longer than 30 seconds (30000 ms). It will record its results in the /home/glassfish/samplr-output directory. It will sample requests until they finish, or for at most 5 minutes (300000 ms), whichever happens first. More initialization examples can be found in the class documentation.

Once RequestManager is initialized, it needs to be notified about request boundaries (start and finish). Requests are represented by the Request class and can be anything: a method call, a web request being processed by a servlet or any other unit of work. You can subclass Request in order to provide contextual information that is recorded alongside the thread samples to help debugging. An example is provided below:


Notifying Samplr about request boundaries is very simple: just call RequestManager.requestStarting and RequestManager.requestFinished at the appropriate code locations and Samplr will do the rest:
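To make the flow concrete, here is a self-contained sketch of the request-boundary mechanism described above. The names (Request, RequestManager, requestStarting, requestFinished) follow the text, but the actual Samplr API may differ in signatures and configuration details:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** A unit of work to be monitored: a method call, a servlet request, etc. */
class Request {
    final String description; // contextual info recorded alongside the samples
    volatile Thread thread;   // the thread processing this request
    volatile long startedAt;  // wall-clock start time, in ms

    Request(String description) { this.description = description; }
}

/** Supervises requests and decides which ones are "interesting" (slow). */
class RequestManager {
    private final long slowThresholdMs;
    private final ConcurrentMap<Request, Boolean> active = new ConcurrentHashMap<>();

    RequestManager(long slowThresholdMs) { this.slowThresholdMs = slowThresholdMs; }

    /** Associates the request with the current thread and a start time. */
    public void requestStarting(Request r) {
        r.thread = Thread.currentThread();
        r.startedAt = System.currentTimeMillis();
        active.put(r, Boolean.TRUE);
        // A real implementation would also wake a monitoring thread here,
        // which starts sampling r.thread once slowThresholdMs has elapsed.
    }

    /** Marks the request finished; returns true if it exceeded the threshold. */
    public boolean requestFinished(Request r) {
        active.remove(r);
        return System.currentTimeMillis() - r.startedAt >= slowThresholdMs;
    }
}
```

Application code then just wraps each unit of work in a requestStarting/requestFinished pair, typically in a try/finally block so that the finish notification is sent even when the request fails.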


Collecting and Analyzing results

Samplr currently supports saving results to a filesystem. Once the sampling finishes, the request information and sampling results are saved into the configured output directory. Each request is saved in a unique sub-directory with the structure depicted below:

Samplr output directory structure

The most interesting file is request-sampling.nps: it contains the saved thread sampling information in the .nps snapshot format. This file can be loaded and analyzed in either VisualVM or the NetBeans IDE. The resulting visualization can be seen in the following screenshot:


Samplr .nps files contain information for a single request only: the one flagged for sampling by the configured criteria. This makes analyzing Samplr results much easier than analyzing VisualVM results, as all irrelevant information is already filtered out.

The files request-info.txt and sampling-info.txt contain contextual information about the sampled request that can further help understanding the results.


Current status and future directions

Samplr is very young, but it is already being used in one production system, with great results. It has been designed for extensibility and has a lot of room for improvement. A promising improvement area is request boundary demarcation: currently, request boundaries must be demarcated programmatically. Here are some ideas I plan to implement eventually in order to make using Samplr easier:

  • A Servlet filter that automatically initializes Samplr and registers requests, enabling automatic profiling of slow web requests.
  • An EJB 3.1 interceptor to do the same for EJB method invocations.
  • A Java agent for transparent integration into running systems through bytecode instrumentation.


Samplr is free and open source. You are welcome to try it out and also to help make it better.

My JavaOne talk about Apache Wicket went really well: the room was almost full, there were lots of interesting questions and I met a lot of nice people. Thanks to all attendees!

You can find the presentation in the link below:

Some Wicket goodies for my JavaOne talk.

If you go to my JavaOne 2011 talk "Productively Fun Web Development with Apache Wicket and Java EE 6" you will see a demo where Bean Validation is used in conjunction with Apache Wicket.

In order to accomplish that, all that you need is a single Java class that bridges the Wicket validation framework and the JSR 303 validation engine.

I am posting here not only this bridge class (JSR303Validator) but also two other utility classes: ValidationStyleBehaviour and BaseEntityForm.

ValidationStyleBehaviour is a Wicket behaviour that displays form component error messages close to the form component itself, instead of in a FeedbackPanel.

BaseEntityForm is a Wicket Form that works with any class annotated with Bean Validation annotations, applying JSR303Validator and ValidationStyleBehaviour to every form field, so that the resulting form validates according to the constraints expressed by the annotations and renders validation messages nicely.

The results can be seen below:


You can download the code here.

There is one talk I would like to comment on today: "Don't Be Pwned: A Very Short Course on Secure Programming in Java".

This talk, presented by Robert Seacord and Dean Sutherland from SEI/CERT, was the scariest Java talk I have ever been to.

Do you believe the software you write is secure enough? Whether you believe it or not, I suggest you take some time to read through the CERT Oracle Secure Coding Standard for Java. It is a guide prepared by the people at CERT that describes the main concerns you should have when writing secure code in Java. Or to put it another way, it describes the many ways you are probably writing insecure code right now.

I never imagined how dangerous class inheritance can be in bypassing security managers, or how simple it is to crash most JVM versions by feeding them specially crafted floating-point numbers. Even if you don't memorize all the guide's rules, reading it is definitely an eye opener.

This will be required literature for all my team members and students from now on.

JavaOne started for me today with the Glassfish Community Event.

It was a nice opportunity to meet the Glassfish Development Team as well as people I have interacted with but not met in person until today.

The event, as well as the community party afterwards, had more people than last year. That is also my perception for the entire conference: there are more people attending this year than last year. Whether that is true or just my impression is something only the conference organizers can answer. The Java community seems, at least in numbers, to be stronger than ever...

From the information provided at the event, I can tell you that Java EE 7 and Glassfish are moving towards the Cloud: the next version of Glassfish (Glassfish 4) will have support for managing Glassfish on virtualized environments and will support multi-tenant applications as per the Java EE 7 spec.

There were many customer testimonials about how Glassfish is performing in production environments. Common themes: the high performance of the web stack (Grizzly) and the poor formatting of the server log messages.

 Looking forward to the JavaOne keynote tomorrow...

Heroku, the PaaS cloud owned by Salesforce, announced a few days ago support for the Java language. In the process they declared Java EE to be irrelevant in the cloud world. Is that so?

I try to embrace anything that helps me develop, test or deploy code faster. Cloud Computing has the potential to bring radical improvements to all these areas, hence I have been following its developments closely over the last couple of years.

Cloud platforms come in different flavours, the main ones being IaaS (Infrastructure as a Service) and PaaS (Platform as a Service). IaaS clouds, like Amazon EC2 or Rackspace Cloud, provide infrastructure building blocks (CPUs, storage space, IP addresses, firewalls, etc.) in a programmatic way. In a sense, they turn infrastructure into code. You can then lay the software stack you need over these building blocks: operating systems, databases, virtual machines, programming languages etc. Lots of decisions to make.

PaaS clouds, like Google AppEngine or Heroku, would like to make some of these decisions for you. They restrict the menu of available stack components to a few supported configurations. In return, they relieve you from the burden of having to manage these components. You stop thinking about "servers" and think instead about "applications", "components" or other logical deployment units.

I am a big fan of IaaS clouds, but have yet to be convinced of the usefulness of PaaS. Take Heroku for example: their announcement about Java support revolves around the revolutionary idea of embedding Jetty inside your application so that your web application contains the web server, and not the other way around. They go to some lengths in criticizing Java EE for its complexity and to show how this containerless approach is going to make your life easier.

The problem is, this approach is not new: I have personally written applications using this architecture for a while now, going as far back as 2002. The idea of going containerless has a certain lure, but as the application grows you start to realize its drawbacks. The moment you need a connection pool, or transaction control, or manageability, or security, you start to recreate the container inside your application, one third-party library at a time. Fast forward a few releases and now your application is a collection of 150 MB of conflicting jar files. You have recreated something like the Java EE container inside your application, at great cost and with zero standardization.

PaaS vendors: as a user, I don't want my platform to be simple, I want it to make my life simpler. These are different goals.

Simple != Simplistic.

Next week I will be presenting a free webinar about Java EE 6 and Glassfish 3.1.

The title can be translated to English as "Java EE 6 and Glassfish 3.1: Simplicity + Lightness = Productivity". The title is in Portuguese because this webinar is targeted at Brazilian Java EE and Glassfish users - as far as I know it will be the first webinar in Portuguese on these topics.

It will be based on a talk I gave at the JustJava 2011 conference in São Paulo but will also have new content. I will talk about the new and cool features of Java EE 6 and how Glassfish enables developers to take advantage of these new features to build applications that are at the same time lightweight and highly available. The Glassfish part will focus on the new features in Glassfish 3.1: clustering, high availability and centralized management.

The webinar will take place on Wednesday, June 22 2011, 14:00 Brazilian time (10:00 a.m. PT / 1:00 p.m. ET / 19:00 CT).

Registration is free but space is limited - you can register by following this link.

Many thanks to Alexandra Huff (Principal Product Manager @ Oracle Glassfish) for making this webinar possible, and to Arun Gupta for introducing me to the Glassfish team a few months ago.

Just a quick note to announce the availability of Rain Toolkit 1.3.

It can be downloaded at:

If you manage Amazon EC2 resources from the command-line, Rain Toolkit can make your life easier by automating a lot of common repetitive tasks.

New features in this release include:

  • Support for micro instances
  • Support for EBS instances
  • Support for Amazon Virtual Private Cloud (VPC)

Rain Toolkit is written in Java, using the AWS Java SDK and is released under the GPL v2 license.

Consulting has always been a service. But the Cloud is making it evolve.

Cloud Computing has been one of my favourite topics over the last couple of years. Coming from a sysadmin background, I am frequently in awe of how easy it is nowadays to conjure up vast amounts of computing and storage resources out of thin air. Of course I know that it's not "out of thin air"... behind the scenes there is a lot of powerful hardware, hard-working people and clever software that makes it all possible. Some people say that Cloud Computing is just a bunch of computers connected to a network, but I disagree: I believe it represents a fundamental shift in how we approach computing.

Consulting, on the other hand, has been my main professional activity for the last couple of years. I love doing it, but sometimes I feel it is too bureaucratic for customers to hire a consultant: what you need is someone who can help you get your problem solved quickly, but you end up having to deal with quotations, contracts, physical access permissions etc.

Wouldn't it be nice if you could conjure up a consultant on an as-needed basis, the same way you can with hardware and software?

We at Logicstyle believe it would, and this is why we have created Expert-in-Tech. This is an online consulting branch that will provide on-demand consulting services in a wide range of technologies: Java SE, Java EE, open source frameworks, commercial and open source application servers, commercial and open source databases, and operating systems. You can create support tickets on demand and choose the support level that best suits your needs.

So if you have a tricky problem, need an urgent patch for an open source project you use or have a task you are just too busy to deal with right now, give Expert-in-Tech a try. We look forward to working with you!

Please send any suggestions or comments about the service to contact at


mvn classpath:hell

Posted by jjviana Dec 14, 2010

Apache Maven is supposed to solve classpath hell; or so I've been told...

I started using Maven 2 in a multi-pom project. One of the requirements is to be able to deploy EAR files to Weblogic.

After looking around, it seemed like the most stable way of doing this with Maven and Weblogic 10.x is to use the Weblogic Ant task. Not ideal, since we are migrating away from Ant, but still okay.

So I happily added the Maven antrun plugin to my pom.xml file:


<path id="wlappc.classpath">
    <fileset file="${wlFullClient}" />
</path>

<taskdef name="wldeploy"
         classname="weblogic.ant.taskdefs.management.WLDeploy"
         classpathref="wlappc.classpath" />

<wldeploy action="deploy" verbose="true" debug="true"
          upload="true" name="Myapp" source="myservice-ear-1.0-SNAPSHOT.ear"
          user="${deployUser}" password="${deployPassword}"
          adminurl="${deployURL}" targets="${deployTargets}" />


Great! Now I can easily deploy my EAR file before the integration tests are run.

But then, after all tests are run and Maven tries to deploy the generated artifact to the remote repository, it fails with the mysterious error:

[INFO] ------------------------------------------------------------------------
[INFO] Error deploying artifact: Failed to transfer file: Return code is: 401

After a few hours of debugging the repository user permissions and the repository installation itself (HTTP error 401 is a permission-denied error, after all) I had almost given up hope when it occurred to me to try disabling the antrun plugin execution. Suddenly the repository deployment started working again.

It turns out there is something in wlfullclient.jar (the Weblogic admin client used by the Ant task) that is messing with the Maven deployment plugin. Since there is no way to isolate the classpath between the plugins, I had to choose between deploying to the integration application server and deploying to the remote repository. For now, I ended up disabling the remote repository deployment for the EAR artifact:
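The exclusion itself is straightforward; a minimal sketch of what goes in the EAR module's pom.xml, using the standard maven-deploy-plugin skip flag, looks like this:

```xml
<!-- In the EAR module's pom.xml: skip repository deployment for this artifact only -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-deploy-plugin</artifactId>
      <configuration>
        <skip>true</skip>
      </configuration>
    </plugin>
  </plugins>
</build>
```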



This is hardly an ideal solution, as now I have to manually manage copying these files to the production system.

For a build system that is supposed to get rid of classpath hell, Maven is turning out to be a classpath devil itself (and this is not my first classpath problem with Maven).

One of the exciting things about teaching is that no matter how well you prepare for a class, events will always surprise you. Yesterday I was caught by surprise in the middle of a class by what seemed like a global Glassfish admin console outage.

I was teaching my Software Architecture students at IGTI how to change the default maximum thread pool size for the HTTP listener in Glassfish 3.0.1. We needed to do that in order to run a JMeter load test against our example application. The default maximum thread pool size in Glassfish is just 5 threads, which makes it impossible to stress the system with a decent number of concurrent users.

I tried to access the Glassfish admin console at http://localhost:4848 on my notebook. It started loading but hung after the "Admin Console is starting..." screen. I restarted Glassfish; no change.

So I thought: "well, my Glassfish instance must be broken somehow". I tried it on a student's notebook, same thing: the admin console still failed to load.

I switched to a remote virtual desktop (Ubuntu Lucid) running on Amazon EC2, started Glassfish and tried to load the management console. Same result: it hung after the starting screen.

Luckily we had a coffee break coming up, so I sent the students off to eat something while I tried to figure out what was going on. What could possibly have broken the admin console in four Glassfish instances running under two operating systems (Windows and Linux) in two different countries (Brazil and the USA)?

My sysadmin years taught me to look at the network whenever something stops working for no apparent reason. So I ran netstat and found some suspicious connections from the Glassfish process to * hosts. Then I remembered that the admin console has an auto-update feature. Maybe that had something to do with the failure to load?

To test this theory I disconnected my notebook from the network, restarted Glassfish and tried http://localhost:4848 again. It loaded promptly and worked like a charm. So I explained to my students why Glassfish was behaving like that and told them to try the same solution on their notebooks. It worked for everyone, so we were able to go on with the class.

What I explained to the students was this: the admin console must be trying to check for updates while loading. If it can't connect to the update server (a fast kind of failure) it ignores the error and finishes loading. No problem there, only a few seconds lost. But when it can connect to the update server and the server is just not responding (a slow kind of failure), it will wait forever for a reply and hence never finish loading.
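The lesson generalizes: network calls made during startup should have bounded timeouts. Here is a self-contained sketch (not Glassfish code; names are illustrative) of the slow kind of failure and its cure, using a local server that accepts connections but never replies:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

/**
 * Demonstrates the "slow failure" described above: a server that accepts
 * connections but never replies. Without a read timeout the client blocks
 * forever; with one, it fails fast and can carry on loading.
 */
public class SlowFailureDemo {

    /** Returns true if a reply arrived within timeoutMs, false on timeout. */
    static boolean tryRead(int port, int timeoutMs) throws IOException {
        try (Socket client = new Socket("localhost", port)) {
            client.setSoTimeout(timeoutMs); // bound the wait for a reply
            try {
                return client.getInputStream().read() != -1;
            } catch (SocketTimeoutException timedOut) {
                return false; // slow failure detected; skip the update check
            }
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket silent = new ServerSocket(0)) { // accepts, never answers
            int port = silent.getLocalPort();
            new Thread(() -> {
                try { silent.accept(); } catch (IOException ignored) { }
            }).start();
            System.out.println("got reply: " + tryRead(port, 200));
        }
    }
}
```

With the timeout in place, the 200 ms wait is the worst case; without setSoTimeout, the read would block indefinitely, which is exactly what the admin console appeared to be doing.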

This kind of unintended coupling is not uncommon in networked applications. But when it happens in something as big as Glassfish it gets kind of scary. I wonder how many other users were scratching their heads just then, wondering what was going on with their servers.

But this event, unpredictable as it was, presented me with the opportunity to teach one more Software Architecture lesson to my students: if your application uses a Cloud service, it had better be prepared for the service to fail in all sorts of unexpected ways.


Automated functional tests are key to ensuring the quality of large applications in incremental development processes. In contrast with unit tests, where each test is supposed to be independent of the outside environment, functional tests are really integration tests: in order for them to run properly, the process must start from a well-known state.

The majority of enterprise applications use a relational database to store application state. As one starts to automate functional tests, a problem quickly arises: the automated tests will modify the QA database, and that may have an impact on future automated test runs. For instance, you write a test for deleting a particular record and it runs fine, but that means in the next run the record won't be there to be deleted. Of course you can write your automated tests to make sure this is not a problem (for example, by ordering the tests so that the record deleted in the delete test is first inserted by a previous test), but that makes the automated tests a lot more difficult to write and run reliably.

The ideal solution for this problem (in my opinion) is to reset the QA database to a well-known state just before starting the automated tests. That's exactly what I did in a recent customer project using Ant, a little scripting and PostgreSQL:

1 - Create a template QA database

This is the database state you want to reset to at the beginning of the automated functional test suite. In PostgreSQL it can be done using a few commands:

createdb -U postgres -O <owner> -T template0 AutoQADB
pg_dump -U postgres -f source_dump.sql -b -F c QADB
pg_restore -U postgres -d AutoQADB source_dump.sql


In this example QADB is your current QA database, AutoQADB is a copy of this database to be used exclusively by the automated test process. <owner> is the database user that should own the AutoQADB database. You should configure the application instance used during the automated tests to connect to the AutoQADB database.

2 - Create a database reset script

This could be done entirely using Apache Ant, but since in this project the development environment runs under Microsoft Windows, I found it easier to create a simple BAT file (Ant has some problems with the exec task on Windows). The purpose of this script is to recreate the AutoQADB database when required:

cmd.exe  /c "C:\glassfish-3.0.1\bin\asadmin.bat" stop-domain domain1
"c:\PostgreSQL\8.4\bin\pg_dump"  -U postgres -f source_dump.sql -b -F c  QADB
"c:\PostgreSQL\8.4\bin\dropdb" -U postgres AutoQADB
"c:\PostgreSQL\8.4\bin\createdb" -U postgres -O <owner> -T template0 AutoQADB
"c:\PostgreSQL\8.4\bin\pg_restore" -U postgres -d AutoQADB source_dump.sql
del source_dump.sql
"C:\glassfish-3.0.1\bin\asadmin.bat" start-domain domain1


In order for this script to work, it is important that nobody is connected to AutoQADB, as the script drops it and re-creates it as a copy of QADB. Since this project uses Glassfish v3, I included commands to stop the Glassfish QA instance (first line) and start it again once the database is ready (last line).

3 - Invoke the reset script from your Ant build before running the automated tests:

<target name="restore-QA-database">
    <exec executable="cmd">
        <arg value="/c" />
        <arg value="refresh_QA.bat" />
    </exec>
</target>

<target depends="init,compile-test,restore-QA-database,-pre-test-run"
        if="have.tests" name="run-gui-tests">

    <webproject2:junit testincludes="**/*"
                       excludes="**/" />
</target>

In this example the reset script is called refresh_QA.bat. The run-gui-tests target runs all the tests in the automated functional test suite. By declaring that it depends on the restore-QA-database target, we make sure the tests always start with the desired database state.

The specific examples shown here work with PostgreSQL and Glassfish v3, but the technique can be adapted to other combinations of database and application server.



I had originally planned to write one blog post per day during JavaOne 2010. This being my first JavaOne, I was of course completely unprepared for the hectic routine of sessions, meetings, parties and more sessions. I wrote blog posts on the first two days and then disappeared.

I am alive, well, and back in Brazil. I had a terrific time in San Francisco, where I got to meet a lot of interesting people, some for the first time and some whom I had previously known online. What explains my silence is that I got sucked into a new consulting assignment immediately after touching ground, and only now do I have time to rest a bit and reflect on what I have seen.

JavaOne 2010 was all about the Cloud for me, and in many ways it changed my views on what cloud computing is really about. I will have to write a post specifically about that in the future, but I am now convinced that the Cloud is an umbrella concept for a number of practices and technologies related to the simplification of system deployments (in contrast to the more traditional view centered on cost benefits from economies of scale). There were a number of very interesting announcements and demos related to Cloud Computing, and I believe over the next year more and more Java-enabled cloud platforms will enter production stage (and at some point after that we will see some common APIs emerging in this space).

There was already a lot of coverage of the future of the Java platform, so I will skip the details here. The overall feeling I got is that in spite of concerns related to the JCP and the Google/Android lawsuit, the Java platform is in good hands with Oracle. The JavaOne keynote was very efficient in making the immediate direction clear. I agree with some things (like splitting Java 7 into 7 and 8) and don't understand others (and JavaFX), but the important message is that the Java platform will be moving forward again, which is a good thing.

In the Java EE space there were no big new announcements. Instead, there was a renewed commitment to the development of Glassfish as an open source product. That is fine by me: Java EE 6 is still seen as cutting edge by many organizations and has a long way to go before becoming obsolete. My top Java 8 wish (EE apps as first-class citizens in the JVM) is not going to happen anytime soon, but was hinted to be somewhere in the future of Java EE. This is an area where Java EE and the Cloud intersect. I have seen some demos of PaaS clouds based on Java technology (CloudBees and VMForce, among others) and I wonder how these services are planning to solve the problem of application isolation. Google has solved it in App Engine by using a custom JVM. If more and more companies go down this route, we may end up with a fragmented Java ecosystem in the Cloud.

I was surprised at JavaOne 2010 to not find any sessions about Apache Wicket, my favourite web framework. In order to remedy this situation at JavaOne Brazil 2010, I have submitted a session proposal about Wicket development in Java EE 6. Should it be accepted, I will present Apache Wicket and show how it provides a presentation layer option that fits the Java EE 6 philosophy even better than JSF 2.

There was a lot happening on day 1. The amount of simultaneous activity can be dazzling at times, and I felt I needed a good night's rest in order to digest the enormous amount of information fed into my brain yesterday.

I am writing this post from the Mason Street tent, and the vibe around here is electrifying. It's hard to remember, as one goes about day-to-day development activities, that there are so many Java developers in the world... Of course we interact all the time on mailing lists, Twitter and blogs, but seeing so many Java developers in person in a single place gives me an exact idea of just how big and active the Java ecosystem really is.

The main news item today was the JavaOne keynote, which finally laid out definitively what Oracle's plans for Java are for the next couple of years. There was a lot of information for an hour-and-a-half session, and more details will emerge in today's General Technical Session, but the highlights are:

  • Java 7 will be split into two releases: 7 in 2011, 8 in 2012.
  • Oracle will continue investing in JavaFX but will kill JavaFX Script. JavaFX will be written in pure Java and will be able to run on HTML 5 browsers through cross-compilation to JavaScript.
  • A new mobile initiative will take JavaME to the next level. It will feature updated APIs more suited to today's feature phones and smartphones. Maybe that explains the Google lawsuit?
  • Oracle is committed to continuing to develop Glassfish and pushing the Java EE specifications forward.

As I said, this is a lot of information, and there are a lot of gory details yet to be specified. I am particularly curious about the HTML 5 support in JavaFX, as it could enable JavaFX applications on the iPad/iPhone. Hopefully we will gain a more in-depth understanding of these features in the General Technical Session.

The overall feeling I got was that Oracle is committed to taking Java to the next level, and that they plan to listen a lot to the community as they do it. At the end of the keynote everyone was invited to wear a t-shirt with the phrase "I am the future of Java" stamped on it.

Let's go and try to make it true.

It was my first day at JavaOne 2010 today. And what a day!

The kick off for me was the Glassfish community event. The room was packed with Glassfish users and developers. It was nice meeting in person people I have been reading and interacting with online for such a long time.

There was a presentation on the Glassfish product roadmap and a breakout for discussion of many Glassfish-related topics. I ended up in a very stimulating discussion about virtualization support in Glassfish. It is even more interesting in light of what would be announced in the welcome keynote later on.

In the Oracle OpenWorld welcome keynote there were some announcements, all related to cloud computing. First, HP presented their view of the data center of the future, and in their view the data center is a private cloud. They demoed software for automating private cloud design and deployment. Their offering for the private cloud consists of a lot of boxes connected through HP networking gear, running operating systems, hypervisors and applications from multiple vendors.

Next came the Oracle keynote itself. From the moment Larry took the stage it was clear that his presentation would be all about the cloud. He started by defining cloud computing by comparing two services commonly associated with the term: Salesforce.com and Amazon EC2. His vision is that cloud computing is more like Amazon EC2 than Salesforce (in other words, infrastructure as a service as opposed to software as a service). Next he announced Oracle's entry into the private cloud market: a product that is a cloud in a box, tightly integrating hardware and software in order to deliver a high-performance computer cluster for general-purpose use.

During the announcement he stressed that this system has been heavily optimized to make Java run as efficiently as possible. In fact he mentioned Java as a key component many times, which means Oracle is not about to change their mind about the strategic importance of Java any time soon.

Going back to the Glassfish community discussion, the main question was: how can Glassfish integrate better with virtualized environments and the cloud? The question seemed strange to me at first, because I am used to thinking of application servers as being unaware of the details of the environment they live in. But as the discussion evolved, it became clear that maybe there is value to be gained in making the application server more aware of cloud resources. Things like provisioning, auto-scaling and self-healing require a tighter integration between these worlds.

One thing finally hit me today: while the public clouds are about cost savings and elasticity, the private cloud is about simplifying IT provisioning and deployment. In fact, the cloud computing paradigm can be seen as a metaphor that glues together a number of ideas related to the simplification of IT service delivery. This is the common trait that even the PaaS and SaaS definitions share.

What is clear is that, whatever happens next, the cloud is here to stay. And Java and the JVM will play a significant part in it.