
Hibernate and OSGI Implementation Made Simple

Last April I kicked off FossESI to discuss how to take existing Java applications built using older technologies and implement them using new technologies like OSGi, Spring, Camel and NOSQL databases.  At the time of the kickoff, we intended to begin comparing and contrasting 3 NOSQL databases.  And just after that started, I got a real-life opportunity to help convert an existing application over to OSGi.  What I thought would be a breeze turned out to be quite an effort!  What I'll be doing over the next few months is talking about some of the lessons I learned, some of the quirks of Karaf and OSGi, and some of the tips and tricks I learned.  This week, I'll talk about how to implement Hibernate 3.3.2.GA inside of Karaf 1.6.0 without creating any new bundles!

One of the major contributors to the OSGi movement is Peter Kriens, who created an excellent tool called BND.  The good folks at Apache Felix then created a Maven plugin that makes quick work of bundling apps, but I'll go over that in a later blog.  The reason I bring this up here is that a while ago Peter wrote a piece that discusses how to create a stand-alone bundle with all of Hibernate 3's goodness wrapped up inside of it.  This is an excellent approach, but it has a few drawbacks:

  • It requires the creation of a new .jar file which may confuse new developers,
  • It is pretty complex, and
  • The BND tool, while most excellent and wonderful, takes some time to figure out.

What I'm going to do is show you how to leverage Karaf and SpringSource bundles to do something similar, but without having to bundle Hibernate yourself.  As Peter says, "Hibernate is one of the more complicated open source projects to wrap."  To do this, I'll:

  • Talk about an excellent source for bundles, Spring Source,
  • Describe what a features.xml file is,
  • Show you a working features.xml file for Hibernate 3, and finally
  • Show you how to modify one of Karaf's configuration files to automagically start up Hibernate when you start Karaf.

First, I'd like to talk about SpringSource. These folks are doing an outstanding job of creating bundles out of commonly used Java apps, like Hibernate.  I would be remiss if I didn't mention that, without their hard work, simply deploying Hibernate using a features.xml file would not be possible.  The link points to their bundle repository; you should bookmark it, it's a great resource.

Karaf has a few core concepts that this blog will illustrate:

  • OSGi bundles,
  • Karaf Features,
  • Features.xml documents, and
  • the org.apache.felix.karaf.features.cfg file.

When you have an application like Hibernate that depends on other libraries, one way to deploy everything is to turn the libraries into OSGi bundles and then create a features.xml file. An OSGi bundle is simply a .jar file with a MANIFEST.MF file containing OSGi goodness.  There are a lot of great resources already available on the internet that go into detail about what an OSGi bundle is composed of, so I won't go further into it here.
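For a flavor of that "OSGi goodness", here is a minimal sketch of the headers a bundle's MANIFEST.MF might carry (the com.example names are illustrative, not from any real bundle):

```text
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.mybundle
Bundle-Version: 1.0.0
Export-Package: com.example.mybundle.api
Import-Package: org.osgi.framework
```

The Import-Package and Export-Package headers are what let the OSGi container wire bundles together at runtime.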

Deploying a large application composed of bundles using a features.xml file is much less time-intensive than deploying them manually.  Most of the large bundled open-source projects have created features.xml files for you, Camel being the first one that comes to mind.  Unfortunately, at the time of this writing, I was unable to find one for Hibernate, so I made my own using the list of dependencies created by the good folks at SpringSource.

First, start Karaf.  Then, to deploy a single OSGi bundle into Karaf, you simply need to invoke the following command (for the uninitiated, "karaf>> " is the prompt; don't type that):

karaf>> osgi:install <uri of the file>

In practice, the URI can be anything that can be resolved: a Maven URI, a URL, or even a file on your local file-system! Once you've typed that, Karaf will print the name of the bundle you installed on the command line.  Alternatively, you can find the number of the bundle you installed by typing:

karaf>> osgi:list | grep <bundle symbolic name>

That will give you the number of the bundle as it is installed in Karaf. Finally, you start the bundle by typing: 

karaf>> osgi:start <karaf's number for the bundle>

A bundle can be in a number of states: Installed, Active, or Active and Failed.  What you want is for your bundles to be Active; usually, anything else is a problem. This is what you'd type to install and start an OSGi bundle from a Maven repository:

karaf>> osgi:install mvn:org.dom4j/
karaf>> osgi:start 89

Now, imagine you have 90 bundles in your application, including all of the dependencies.  That's a lot of typing!  A features.xml file does all that heavy lifting for you. It is simply an XML file in which you identify a given feature, and then define the OSGi bundles and other features necessary for that feature to work inside of Karaf.
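To get a feel for what that typing adds up to, here's a tiny shell sketch that just prints the install commands for a hypothetical, abbreviated bundle list (the URIs are illustrative SpringSource-style coordinates, not the actual Hibernate dependency list):

```shell
#!/bin/sh
# Hypothetical bundle URIs -- illustrative only
BUNDLES="mvn:org.dom4j/com.springsource.org.dom4j/1.6.1
mvn:javax.persistence/com.springsource.javax.persistence/1.0.0"

# Print the Karaf console commands you would otherwise type by hand
for uri in $BUNDLES; do
  echo "osgi:install $uri"
done
```

Multiply that by 90 bundles (plus an osgi:start for each) and the appeal of a features.xml file becomes obvious.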

Karaf will read in the features.xml file, and when you install a feature, it will automatically download each bundle listed from its associated URI, install it as an OSGi bundle, and start it.  It will do this for each bundle unless it finds a problem; if it does, it will stop and uninstall each bundle it started.  If a bundle was already installed, Karaf will "refresh" it, and if there's a problem, it will leave it active. This is the features.xml file I wrote for Hibernate; the file is called hibernate-features-3.3.2-GA.xml.

<?xml version="1.0" encoding="UTF-8" ?>
<features>
   <feature name="hibernate" version="3.3.2.GA">
      ...
   </feature>
</features>
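The listing above is just the skeleton; a fuller sketch of the shape looks like this — the bundle URIs below are illustrative SpringSource-style coordinates, not my complete Hibernate list:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<features>
   <feature name="hibernate" version="3.3.2.GA">
      <!-- Illustrative entries only; the real feature lists many more bundles -->
      <bundle>mvn:org.dom4j/com.springsource.org.dom4j/1.6.1</bundle>
      <bundle>mvn:javax.persistence/com.springsource.javax.persistence/1.0.0</bundle>
      <bundle>mvn:org.hibernate/com.springsource.org.hibernate/3.3.2.GA</bundle>
   </feature>
</features>
```

Each <bundle> element holds a resolvable URI, and Karaf installs and starts them in order when the feature is installed.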

Now, copy that bad-boy into an editor, and save it in your <karaf-home>/etc directory.  Once that's done, you'll be ready for the next step!

We can make Karaf aware of this features.xml file manually by typing:

karaf>> features:installUrl file:///home/myname/apache-felix-karaf-1.6.0/etc/hibernate-features-3.3.2-GA.xml

To verify your new features.xml file is installed, type:

karaf>> features:listUrl

If you see your feature listed there, you can start your feature by typing:

karaf>> features:install hibernate

Give it a few seconds, and when you get the prompt back, if you haven't gotten any errors, you can verify that Hibernate is installed by typing:

karaf>> features:list | grep hibernate

Installing your features.xml and starting your feature this way is fun, but what if you've got a ton of features to install?  Doing it manually is fine; but let's face it, it's not very sexy... If only there were a way to make Karaf aware of the features.xml on start-up. Well, there is! We just add it to our <karaf-home>/etc/org.apache.felix.karaf.features.cfg file!  This is what that file looks like when you download and install Karaf:

#    Licensed to the Apache Software Foundation (ASF) under one or more
#    contributor license agreements.  See the NOTICE file distributed with
#    this work for additional information regarding copyright ownership.
#    The ASF licenses this file to You under the Apache License, Version 2.0
#    (the "License"); you may not use this file except in compliance with
#    the License.  You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS,
#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#    See the License for the specific language governing permissions and
#    limitations under the License.

# Comma separated list of features repositories to register by default

# Comma separated list of features to install at startup

This file contains two items that are very important for us: a list of features repositories, and a list of the features to install when we start Karaf.  Including our features.xml file is pretty straightforward: we simply append its URI to the end of the featuresRepositories line.
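As a sketch of what the edited line might look like (the stock repository URI shown is my best guess at Karaf 1.6.0's default, and the file path matches the one used earlier; your install may differ):

```properties
# Illustrative only -- the stock featuresRepositories value may differ in your install
featuresRepositories=mvn:org.apache.felix.karaf/apache-felix-karaf/1.6.0/xml/features,file:///home/myname/apache-felix-karaf-1.6.0/etc/hibernate-features-3.3.2-GA.xml
```

Note that the entries are comma-separated, all on one line.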


When I first did this, I tried to put a \ character after the first comma and start each features file on its own line, but that created issues, so now I put all of my features repositories on the same line.

When we start up Karaf, we can verify that the features file is installed by typing:

karaf>> features:listUrl

This produces a list of each features.xml file Karaf was able to successfully read.  If you don't see your features.xml file there, go look at it; there's probably an issue.  Also, check the <karaf-home>/data/log/karaf.log file to see if any errors or exceptions were reported.

OK, what if Hibernate is part of a larger set of applications, and you want them all to start up when you start Karaf?  Well, that's not too difficult.  Simply add your new feature to the featuresBoot line.
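As a sketch (the boot features ahead of hibernate are illustrative; keep whatever your install already lists and just append your feature):

```properties
# Illustrative only -- keep your existing boot features and append hibernate
featuresBoot=config,ssh,management,hibernate
```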


If everything works properly, when you start up Karaf, your happy new hibernate feature should be up and running and ready for some abuse!

That's all for tonight.  If you find an issue with this blog, have questions, or just like the cut of my jib, feel free to comment below!  Happy developing!

Last week's kickoff of FESI's research program went very well. There are a number of folks (>500) who are now following this blog, and a number who have gotten involved.  While we wait for more folks, we'll be researching new technologies, the first being NOSQL databases.

If the kinds of topics we're researching interest you, please feel free to join the project; we need to reach a critical mass of developers before we move on to our next phase of research.  While we wait, we will take a look at current open-source projects, identifying what's moving from bleeding-edge to more accepted.  This will help us choose which technologies to research in depth.  We'll start with NOSQL databases, but if anyone has a suggestion for another technology, we'll research that one next.

NOSQL databases seem to fit that definition of moving from bleeding-edge to more accepted.  A NOSQL database is a database which is designed to be "non-relational, distributed, open-source and horizontally scalable. The original intention has been modern web-scale databases." (definition blatantly stolen). For the next week I'll be looking at three strong representatives in this category: Hadoop/HBase (they've got some great merchandise in their online store; I recently got a Hadoop sweatshirt), Cassandra (what appears to be the strongest contender), and Voldemort (an open-source implementation of the Amazon Dynamo key-value store).  I chose these because they all have active, successful commercial implementations, are fairly unheard of as of yet, and I liked the names (yes, not the best criteria, but hey, it's my blog, LOL).

To start, I'll install each of these and attempt a simple read and write.  Later this week, I'll share with you my experiences with Cassandra; next week, Hadoop/HBase; and the week after that, Voldemort. While you wait for my next post, feel free to join the FossESI project, respond to this post, join our Facebook Fanpage (FossESI), follow us on Twitter, or play outside in the great summer weather.  Your choice. :-)

Once we have a core of developers, we'll begin Phase 2 of our program: implementing a Struts/servlet-based set of simple applications on an open-source JEE container using a simple MySQL backend database. After this is done, we'll begin the third phase, whose goal is to replace all of those technologies with the bleeding-edge technologies available in the open-source community. Of course, as an open-source project, we'll make our integration code available under the GPL 3.0 license.

FESI Research Program Overview

FESI is the Free and open source software Enterprise Solutions Institute. We are a research program designed to study tomorrow's internet technologies as a means to teach folks in the local workforce how to use technologies our customers will likely want to implement.  We also perform this research to prepare local engineers with the knowledge and skills to help introduce newer technologies into existing enterprises.  Finally, the integration code we create will be uploaded and made available to everyone under the GPL.

To achieve these goals, we will roll out our initial research in three phases: in phase one we will finish setting up the research program and introduce members to some of the core technologies we work with; in phase two we will construct a web application using well-established technologies; and in phase three we will begin replacing the well-established technologies with cutting-edge (but proven) technologies.

To get involved, please email me.

Phase one

Phase one is currently under way.  This blog will be where we tell the world how things are going, and our project site will be where we host our project and source all of our completed code.  Twitter, LinkedIn and Facebook will be used to further promote our efforts.  We will also use this time to recruit more researchers to FESI. We will invite new members to work through the Camel tutorial, introducing them to many of the technologies we will be using.  Once we have established all of the administrivia, phase one will be complete.

Phase two

Phase two is a planning and rapid-implementation process.  We will create a sample application using well-proven existing technologies. To do this we will:

  • implement a schema, triggers, stored procedures and views on a relational database (either MySQL or Oracle),
  • employ a middleware product to serve numerous Java servlets and old-school Struts applications,
  • use Hibernate to act as a bridge between the database and the middleware tier, and
  • use the web as our presentation layer.

Phase two should not last more than a couple of months, as the folks who are interested in FESI will already have experience using these technologies.

Phase three

Phase three will be spent planning the conversion of the servlets and struts applications into a series of endpoints. To do this we will:  

  • replace the Struts router with Camel routing and mediation,
  • convert all of the EJBs into endpoint applications,
  • replace backend RMI with JMS,
  • replace the middleware product with Apache Felix,
  • replace the relational database with Cassandra, and
  • move all stored procedures, triggers and views into Java POJOs.


The hands-on approach of actually performing the work on a sample application will teach the skills necessary to fill positions that are opening up at many of our local customers, and will provide the knowledge necessary to help steer the customers in the direction of technologies that work.  Our integration code will be available to the general community, and hopefully will serve as the basis for similar conversions.

This is the initial plan, subject to change based on the desires of the research participants. I expect this program to take the better part of a year to complete.

On April 28th, 2010, we will be kicking off the first phase of FESI's research.  This is where we set up the project and start to get our hands dirty. Attached is the flyer we will use to promote FESI and what we're trying to accomplish.  Hopefully it will help us get the word out to the local community.

Also at the kickoff, we'll be participating with a networking group that meets monthly.  I honestly don't think a ton of folks will sign up for FESI tomorrow night, and that's really not the point.  The kickoff is the start date of FESI.  We can look back on this date in years to come as the birth, the place where it all began, and a date to celebrate our accomplishments in the future.

Good luck, and we're looking forward to seeing you there!