srikanth

NOTE: A slide deck with 20 slides of "How To" JavaFX Dependency Injection with FxContainer is available here.

The world is already filled with dozens of IoC containers. Do we need another one? That is the question I pondered a lot before setting out to write a DI/IoC container in JavaFX. I will briefly cover the facts that necessitated writing one and then show how to use it.

Constructor Injection or Setter Injection?

All IoC containers provide two types of dependency injection - constructor injection and setter injection. Constructor injection is technically impossible, as the JavaFX language does not support overloaded constructors. Setter injection is possible, but it feels very artificial in JavaFX. JavaFX instance variables are typically declared as public or public-init, which means they are typically initialized as follows:

var finder = MovieFinder {
  movieDao: MovieDao { }
}

A MovieDao is provided when the MovieFinder is initialized. Notice that this is neither constructor injection nor setter injection. For lack of a better name, I call it Init Injection. If we were to use Spring to inject the dao into the MovieFinder, the MovieFinder class would have to look like this:

public class MovieFinder {
  public-init movieDao:MovieDao;
 
  public function setMovieDao(dao:MovieDao) {
    this.movieDao = dao;
  }
}

Writing setter methods for public-init instance variables feels very artificial. If we could somehow inject the dao through init, it would be ideal.

Basics of JavaFX Init Injection

The code snippet below captures this idea of Init-Injection using JavaFX reflection:

FXLocal.Context ctx = FXLocal.getContext();
FXClassType clzType = ctx.findClass("org.fxobjects.MovieFinder"); //get the class
FXObjectValue objValue = clzType.allocate(); //allocate memory for the class-instance
FXVarMember varMember = clzType.getVariable("movieDao"); //get the variable-type
FXValue varValue = ctx.mirrorOf(someStringOrObject); //create the variable-value (a mirror of the value to inject)
objValue.initVar(varMember, varValue); //initialize the variable-type in class-instance with the variable-value
//do more initVar as needed
objValue.initialize(); //Finally initialize the object

This is exactly the core idea behind dependency injection in FxContainer (on a much larger scale, and internally more complex, to address real-world problems).

NOTE: There is another additional side effect of setter injection that is covered in detail in this post - http://weblogs.java.net/blog/srikanth/archive/2010/06/12/wiring-javafx-objects-spring-tread-caution

Guice and Spring

Two of the most popular IoC containers are Guice and Spring. Spring supports both annotation-based and XML-based configuration. Guice supports annotations, programmatic binding, and a mixture of both; it is most productive when combining annotations and generics, sprinkled with programmatic binding as needed. Unfortunately, JavaFX supports neither annotations nor generics, which is a big disadvantage. That essentially means we are down to Spring. In addition to the shortcomings of Spring (in the context of JavaFX) listed earlier, the Spring jars add up (spring-core + spring-beans + spring-context + commons) and alone exceed 1 MB at minimum. That is not bad if your application is a huge one and is regularly used (and hence cached) by users. But what if you could get most of the benefits of Spring's XML-style dependency injection with a footprint under 75 KB, in a style natural to JavaFX? Well, that's FxContainer!

What is FxContainer?

FxContainer is a small-footprint, XML-driven IoC container. [For those who cry "death by XML": you can organize the XML into separate logical files and mitigate the issue quite well.]

FxContainer 1.0 can be downloaded from the project site - https://fxobjects.dev.java.net

SVN is located here - https://fxobjects.dev.java.net/source/browse/fxobjects/

FxObjects is the umbrella project that encompasses several related but cohesive JavaFX frameworks with the single goal of making enterprise JavaFX development easy. FxObjects enables a methodical, pattern-based, and test-friendly way of developing JavaFX applications, based on ideas and best practices distilled from real-life usage. For those interested in seeing how FxObjects and FxContainer ease JavaFX application development, here is the big picture: (Only a few components are ready. There is more work to do. Anybody interested in contributing?)

FxObjects Architecture Roadmap

A whirlwind tour of FxContainer

If you are familiar with Spring, you will feel right at home with FxContainer. I will use simple examples to illustrate the features. The listing below shows the wiring of objects. (This is a subset of a larger sample shipped with the framework.) Java and JavaFX objects can be mixed: a JavaFX object can hold references to Java and JavaFX objects and primitives, and a Java object can hold a reference to a JavaFX object via an interface.

Simple Dependency Injection

<fxcontainer>
    <fxobject name="movieFinder" class="org.fxobjects.samples.fxcontainer.MovieFinderImpl">
        <property name="someVar" value="Some random value"/>
        <property name="javaHelper" ref="javaHelperObj"/>
        <property name="jfxHelper" ref="jfxHelperVarObj"/>
    </fxobject>
    <fxobject name="movieLister" class="org.fxobjects.samples.fxcontainer.MovieLister">
        <property name="finder" ref="movieFinder"/>
    </fxobject>
</fxcontainer>

Import XMLs and Properties, Pattern Substitutions

Large XML files can be split into logical units and "imported" into one another. Properties can be imported the same way and substituted, Spring style, for ${ } placeholders:

<fxcontainer>
    <import resource="/org/fxobjects/samples/abc.xml"/>
    <import resource="/org/fxobjects/samples/server.properties"/>
    <fxobject name="movieFinder" class="org.fxobjects.samples.fxcontainer.MovieFinderImpl">
        <property name="someVar" value="${propKey1} has a ${propKey2} "/>
    </fxobject>
</fxcontainer>

Wired Object Lifecycle - Singleton, Lazy-Init, Init-Method etc.

All wired objects are singletons by default. Similarly, all object initializations are lazy by default; objects are loaded only when they, or an object graph containing them, are requested. Both defaults can be overridden. An init-method can be called after FxContainer finishes wiring the object. The listing below shows all three characteristics, plus an optional feature called load-order. Load-order applies only to eagerly initialized (i.e. lazy-init="false") objects and determines the order in which they are initialized: the lower the number, the earlier the initialization.

<fxcontainer>
    <fxobject name="movieFinder" class="org.fxobjects.samples.fxcontainer.MovieFinderImpl"
             init-method="someMethod" lazy-init="false" singleton="false" load-order="0">
     …
    </fxobject>
</fxcontainer>

Collections, Sequence, Map Support

Collections are supported too. In addition to the usual list, set, and map, the JavaFX-specific sequence is also supported. While a sequence can be defined only inside a JavaFX parent, the others can be defined in Java or JavaFX. The listing below shows a condensed example.

<fxcontainer>
    <fxobject name="jfxHelperVarObj" class="org.fxobjects.samples.fxcontainer.MovieFinderJFXHelper">
        <property name="movieNames">
            <sequence>
                <entry value="Terminator" />
                <entry value="Terminator 2" />
            </sequence>
        </property>
        <property name="movies">
            <sequence>
                <entry ref="${movie1Ref}"/>
                <entry ref="movie2"/>
            </sequence>
        </property>
        <property name="movieList">
            <list>
                <entry ref="movie1"/>
                <entry ref="movie2"/>
            </list>
        </property>
    </fxobject>
</fxcontainer>

Advanced features in Collections support

Below is an example of some advanced collection-support features. Notice the valueClass attribute: if you are using a primitive list within a JavaFX object, or an unparameterized (i.e. non-generic) list in a Java object, the container needs a way to figure out the element type of the property. A similar scenario exists for Map. If the entries are ref types, keyClass and valueClass are not needed.

<fxcontainer>
    <fxobject name="jfxHelperVarObj" class="org.fxobjects.samples.fxcontainer.MovieFinderJFXHelper">
        <property name="integerList">
            <list valueClass="Integer">
                <entry value="2"/>
                <entry value="5"/>
            </list>
        </property>
        <property name="codeMap">
            <map keyClass="java.lang.String" valueClass="java.lang.Integer">
                <entry key="mov1" value="1"/>
                <entry key="mov2" value="2"/>
            </map>
        </property>
        <property name="movieMap">
            <map keyClass="java.lang.String">
                <entry key="mov1" valueRef="movie1"/>
                <entry key="mov2" valueRef="movie2"/>
            </map>
        </property>
    </fxobject>
</fxcontainer>

Final word

In developing FxContainer, my main motive was to create a DI container that is easy to use, has a small footprint, and feels natural to the JavaFX style of programming, all at the same time.

If you have any ideas for improvement or questions, concerns, comments – you are most welcome to post to the project forum website - https://fxobjects.dev.java.net/servlets/ProjectForumView

I'd recommend you also check out FxObjects and see if it helps you uncomplicate enterprise JavaFX application development. In spite of being a 0.1 release, it is very stable, feature rich, and makes JavaFX application development fun.

You know, JavaFX isn’t just about jazzy interfaces. It is about elegant programs that are beautiful inside-out. FxObjects and FxContainer are a step towards that.

In the previous installment of Effective Architecture, I covered TDD with Model-View-Presenter. However the code I presented had synchronous server calls. In JavaFX (like Swing), the UI code runs in the Event Dispatch Thread (EDT). It is unwise to block the EDT. Hence it is encouraged to execute all server calls on a separate thread.

SwingWorker

Swing provides SwingWorker to execute tasks off the EDT. SwingWorker also allows switching back to the EDT for things like setting UI data. A snippet is shown below. When the button is pressed, the SwingWorker executes the code in doInBackground() on a background thread. Intermediate results published from doInBackground() are handed to process() on the EDT, and done() is called on the EDT after completion, allowing user code to update the UI.

Listing 1. SwingWorker

final JButton button = new JButton("Save");
button.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        SwingWorker<Object, Object> worker = new SwingWorker<Object, Object>() {
            protected Object doInBackground() throws InterruptedException {
                //connect to server and get data (runs on a background thread)
                return data;
            }
            protected void process(List<Object> chunks) {
                //update ui here (runs on the EDT)
            }
            protected void done() {
                //runs on the EDT after doInBackground() completes
            }
        };
        worker.execute();
    }
});

Async Tasks - The JavaFX way

JavaFX provides a similar facility, but it is more generic and lower level, and we need to customize it to suit our needs. Baechul and Clarkeman have described it in detail here - http://blogs.sun.com/baechul/entry/javafx_1_2_async. I will not repeat it here, but instead show a UML diagram with a brief summary.

JavaFX Async design

  1. To run something on a different thread, two classes are needed: the actual task performer (a Java class that implements the RunnableFuture interface) and a JavaFX class that acts as a factory for it.
  2. The task performer implements the run() method and provides the custom code in it.
  3. The caller invokes start() on the JavaFX factory.
  4. Once the task is completed, the task performer can switch back to the EDT (if needed) by using FX.deferAction(..) to perform UI modifications.

Notice that there is a lot of boilerplate code to get an async task going. Ideally, async task execution should be as simple as with the SwingWorker (but without using the SwingWorker). In a nutshell, the SwingWorker is a Command pattern: the command is configured and then execute()-ed. It is pretty trivial to create a few classes that offload this burden. A Command object can wrap all this complexity and reduce it to a simple JavaFX object with three configured functions:

Listing 2. A sample Async command that runs a service asynchronously

var command = DefaultAsyncCommand {
    runInBackground: function(modelObjectFromUI:Object) {
        //do all the server calls here
    }
    onSuccess: function(modelObjectFromServer:Object) {
        //update UI here
    }
    onError: function(e:Exception) {
        //show error dialog and do any other error processing
    }
};

When the UI wants to make the server call, it just calls:

command.execute(modelObjectFromUI);

The execute() function is already implemented by the Command; internally it initiates an AsyncTask and passes it references to the three configured functions. The async task runs the runInBackground function on a background thread and the other two on the EDT. I will not bore you with the details of DefaultAsyncCommand; it is provided in the links at the end.
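To make the idea concrete outside JavaFX, here is a rough plain-Java sketch of such a command. The class name AsyncCommand and its exact shape are mine, not the framework's, and a real DefaultAsyncCommand would also marshal onSuccess/onError back onto the EDT, which this sketch skips:

```java
import java.util.function.Consumer;
import java.util.function.Function;

// Hypothetical plain-Java analog of DefaultAsyncCommand:
// runInBackground executes on a worker thread; onSuccess/onError receive
// the outcome (a real JavaFX version would defer these back to the EDT).
class AsyncCommand<I, O> {
    Function<I, O> runInBackground;
    Consumer<O> onSuccess;
    Consumer<Exception> onError;

    public void execute(I input) {
        new Thread(() -> {
            try {
                onSuccess.accept(runInBackground.apply(input));
            } catch (Exception e) {
                onError.accept(e);
            }
        }).start();
    }
}
```

The UI configures the three callbacks once and then simply calls execute(input) whenever the server call is needed.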

AsyncCommand and Unit testing

Unit testing asynchronous calls is a major challenge. The two obvious approaches are:

  1. Trigger the call and poll.
  2. Call the service execute off the EDT and loop until the result arrives.

Neither is a good option for unit testing. Hence I resort to a third, perhaps the best, option: I do not use async calls in the unit tests at all, but rather Mock Async commands. If the async command framework itself is well tested, it can safely be replaced with a MockAsyncCommand. The MockAsyncCommand looks like a regular async command - it extends the real one - but with a twist: it runs the code synchronously on the EDT! This makes unit testing asynchronous calls a breeze. There are some more details to look at, however. The listing below shows the now-familiar Model-View-Presenter being wired by Spring (see my previous JavaFX architecture blogs for MVP details).

Listing 3. Spring xml for wiring the service, async command and the presenter

    <bean id="timeConsumingService" class="org.fxobjects.samples.async.LongRunningService"/>

    <bean id="asyncCommand" class="org.fxobjects.command.async.DefaultAsyncCommand"/>

    <bean id="helloWorldNodePresenterBean" class="org.fxobjects.samples.async.HelloWorldNodePresenter"
                                                                           init-method="init">
        <property name="id" value="HelloWorldNodePresenter"/>
        <property name="defaultNodePresenter" value="true"/>
        <property name="timeConsumingService" ref="timeConsumingService"/>
        <property name="asyncCommand" ref="asyncCommand" />
    </bean>

Notice that the remote service is injected into the presenter. (In reality this could be a Hessian proxy, etc.) Also notice that the async command is not created internally, but injected. The command needs some initialization after the beans are injected into the presenter, so the presenter has a custom init method for this. (By configuring init-method in Spring, the method is called automatically after bean injection.)

Listing 4. Initializing the Async Command in the Presenter init() method. This is called by Spring after the bean initialization

    public function init():Void {
        asyncCommand.runInBackground = function(modelObj:Object):Object {
            var toWhom:String = modelObj as String;
            var returnMessage:String = timeConsumingService.sayLongHello(toWhom);
            return returnMessage;
        };
        asyncCommand.onSuccess = function(returnModelObj:Object):Void {
            var wishMsg:String = returnModelObj as String;
            helloWorldNode.messageText.content = wishMsg;
            helloWorldNode.processingText.content = "Processing Complete";
        };
        asyncCommand.onError = function(exc:Exception):Void {
            Alert.inform("An error occurred while executing the Long Running Service");
        };
    }

Notice that I did not use the default JavaFX initialization mechanism here as I did in Listing 2. I could have created and initialized the command as shown in Listing 2, but remember - we need to write our code to be testable, don't we? Code written as in Listing 2 is not testable, because it creates and uses the AsyncCommand directly and does not allow us to inject a MockAsyncCommand. The customization above lets test code inject a mock async command while still initializing it the default way. The test code follows:

Listing 5. JUnit Testcase demonstrating the use of MockAsyncCommand to test the asynchronous calls

public class AsyncTest extends TestCase {
    var presenter:HelloWorldNodePresenter;
    var node:HelloWorldNode;
    var jMockContext:Mockery = new Mockery();

    public override function setUp():Void {
        //do all the setup that would otherwise be done by spring
        presenter = HelloWorldNodePresenter { };
        var cmd:DefaultAsyncCommand = MockAsyncCommand { };
        presenter.setAsyncCommand(cmd);
        presenter.init();
        node = presenter.getNode() as HelloWorldNode;
    }

    public function testHello():Void {
        var toWhom:String = "Srikanth";
        var expectedRetValue:String = "Sorry, It took me long to say Hello World to you {toWhom}";
        var mockService:ILongRunningService = jMockContext.mock(ILongRunningService.class);
        presenter.setTimeConsumingService(mockService);

        var expectation:Expectations = new Expectations();
        expectation.oneOf (mockService).sayLongHello(toWhom);
        expectation.will(
            expectation.returnValue(expectedRetValue)
        );
        jMockContext.checking(expectation); 

        assertEquals(expectedRetValue, node.messageText.content);
    }

  ..
}

Until now you saw how I made async calls to the server. The command is independent of the protocol used to connect to the server: be it Hessian, RMI-IIOP, or any other fancy protocol, all you need to do is wrap the service call in the generic DefaultAsyncCommand and keep going! Things are slightly different with HTTP (but not very different). I will cover that next.

Async Http calls

JavaFX provides HttpRequest out of the box for executing HTTP calls. HttpRequest is async by nature, so there is no need to write additional wrapper code for async support. However, there are a few twists in the HTTP command:

  1. There is no need to provide the runInBackground() function.
  2. The request body is generally a string (xml/json). Therefore the UI model objects have to be converted into strings by an appropriate "request builder". This applies to PUT and POST.
  3. The response also arrives as a string (xml/json). Hence a "response builder" has to be provided to convert the xml/json response into objects. (A GET HttpCommand needs only a response builder.)
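To illustrate the response-builder idea in the last point, here is a minimal sketch using only the JDK's DOM parser. The class name, the XML shape, and the decision to return plain title strings are all mine for illustration; the real sample uses JAXB classes generated from an xsd:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Hypothetical response builder: turns the raw XML response string into objects.
class TitleResponseBuilder {
    public List<String> build(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            List<String> titles = new ArrayList<String>();
            NodeList nodes = doc.getElementsByTagName("Title");
            for (int i = 0; i < nodes.getLength(); i++) {
                titles.add(nodes.item(i).getTextContent());
            }
            return titles;
        } catch (Exception e) {
            throw new RuntimeException("Could not parse response", e);
        }
    }
}
```

The command calls the builder with the response body and hands the resulting objects to onSuccess.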

As you will see, the Command pattern carries a lot of advantage here too. I will introduce a custom GetHttpCommand class below. It is the same old, familiar command object initialized in the presenter init method (but without the runInBackground section).

Listing 6. Configuring and initializing the GetHttpCommand in Presenter

public class CafeSearchNodePresenter extends NodePresenter {
    public-init var cafeSearchCommand:GetHttpCommand; //injected by spring
    public function init():Void {
        cafeSearchCommand.onSuccess = function (modelObj:Object) {
            var resultSet:ResultSet = modelObj as ResultSet;
            handleSearchResponse(resultSet.getResult()); //populate the UI with results
        };
        cafeSearchCommand.onError = function(exc:Exception) {
            //handleError(exc, "An error occurred when searching");
        };
    }
}

I have used the same principle: inject the command first, then initialize it in the init() method.

The actual HTTP endpoint to invoke is specified in Spring while wiring. (You don't necessarily have to do it this way, but it is a recommended approach.) Look at the GetHttpCommand wired below:

Listing 7. Spring xml for wiring the ResponseBuilder, GetHttpCommand and the Presenter

    <context:property-placeholder location="classpath:org/fxobjects/samples/httpcafe/fxobjects-httpcafe-config.properties" />
    <bean id="cafeSearchResponseBuilder" class="org.fxobjects.samples.httpcafe.CafeSearchResponseBuilder">
        <property name="applicationSchema" value="/org/fxobjects/samples/httpcafe/LocalSearchResponse.xsd"/>
    </bean>
    <bean id="localCafeSearchCommand" class="org.fxobjects.command.async.http.GetHttpCommand">
        <property name="urlTemplate">
            <value>${protocol}://${serverName}/${contextPath}/localSearch?appid=null&amp;query=coffee&amp;zip={zipCode}&amp;results=10&amp;output=xml</value>
        </property>
        <property name="responseBuilder" ref="cafeSearchResponseBuilder"/>
    </bean>
 
    <bean id="cafeSearchNodePresenterBean" class="org.fxobjects.samples.httpcafe.CafeSearchNodePresenter" init-method="init">
        <property name="id" value="CafeSearchNodePresenter"/>
        <property name="defaultNodePresenter" value="true"/>
        <property name="cafeSearchCommand" ref="localCafeSearchCommand"/>
    </bean>

The example above connects to the Yahoo Local service to search for coffee shops. The same example ships with NetBeans as a JavaFX sample and uses a pull parser; I instead use JAXB to generate classes from the xsd and then convert the xml into Java objects. This example uses HTTP GET and hence needs only the response builder, which performs the xml-to-object conversion as wired above.

By extending the JAXBResponseBuilder (available in the download), the response builder is really a simple class with hardly 5 lines of code. The source code for the GetHttpCommand and supporting classes is also included in the links at the end of this post.

The HTTP Command object needs some configuration: in particular, a url template and a response builder. The command thus configured is injected into the presenter. The url template is a feature that makes the HttpCommand easy to use. As you can see, the template has placeholders identified by braces; the values to substitute are passed in a Map to the execute() method of GetHttpCommand, which does the rest.
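The placeholder substitution presumably amounts to something like the following sketch (the class and method names here are mine; GetHttpCommand's actual implementation may differ):

```java
import java.util.Map;

// Hypothetical sketch of the {placeholder} expansion performed when
// GetHttpCommand.execute(paramMap) builds the final URL from the template.
class UrlTemplate {
    static String fill(String template, Map<String, String> params) {
        String url = template;
        for (Map.Entry<String, String> e : params.entrySet()) {
            url = url.replace("{" + e.getKey() + "}", e.getValue());
        }
        return url;
    }
}
```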

Listing 8. Presenter method invoked when search button is clicked (Notice the search parameters passed in a Map to the GetHttpCommand)

    public function doSearch():Void {
        //clear previous searches if any
        delete cafes;

        //do validation
        if (node.zipCodeTextBox.text.trim().length() == 0) {
            Alert.inform("I would like to search, but please provide a zip code");
            return;
        }

        var paramMap = new HashMap();
        paramMap.put("zipCode", node.zipCodeTextBox.text.trim());
        cafeSearchCommand.execute(paramMap);
    }

Testing http calls

Much like the MockAsyncCommand, I have introduced a MockGetHttpCommand as well. This mock command is set up by providing one of two expectations: expectedResult or expectedException. If expectedResult is provided, the onSuccess() function is called with that result, making tests really easy without any change to the actual code.

The test code isn't much different from our earlier test. It's just that we don't need jMock; instead, the MockGetHttpCommand is configured as needed.

Listing 9.

public class YahooCoffeeshopTest extends TestCase {
    var presenter:CafeSearchNodePresenter;
    var node:CafeSearchNode;

    public override function setUp():Void {
       presenter = CafeSearchNodePresenter { 
           cafeSearchCommand: MockGetHttpCommand { }
       }; 
       node = presenter.getNode() as CafeSearchNode;
    }

    public function testHello():Void {
       var expectedResultObj = ...
       presenter.cafeSearchCommand.expectedResult = expectedResultObj;

       node.zipCodeTextBox.text = "78729"; //Search coffeeshops in the beautiful Austin
       presenter.doSearch();
       assertTrue(node.table.results, ...);
    }
   ..
}

Additional considerations for async calls

When the async call runs off the EDT, the user is free to interact with the UI. Depending on your application, this may be good or bad: good because the user can keep working; bad because the user can change data in the current UI and assume it is saved, which is not the case. Typical applications run the server calls off the EDT but still block additional input, while providing an option to cancel the running request midstream (perhaps because it is running too long). Two common ways to do this:

  1. Use a modal dialog that accepts the user's request to stop the async task.
  2. Use a glasspane-like rectangle with a progress indicator.
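The glasspane option can be sketched in a few lines of JavaFX Script. This assumes a boolean `busy` variable that your command implementation toggles; the variable name and the fixed sizes are illustrative, not from the framework:

```
// Translucent glasspane overlaying the scene while a request is running.
// Assumes: var busy:Boolean toggled around command.execute(...).
def glassPane = Rectangle {
    width: 800  height: 600     // size to your scene
    fill: Color.color(0, 0, 0, 0.4)
    visible: bind busy          // shown only while the call runs
    blocksMouse: true           // swallow clicks while busy
}
```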

FxObjects: Incubator project

I have been discussing many interesting ways of effectively dealing with real-world JavaFX projects in a series of blog posts. I have captured all of these patterns and practices in an open source project named FxObjects - https://fxobjects.dev.java.net. It is still in its infancy, but it is stable for all the functionality provided. Take it for a spin. It is Apache licensed. Most of all, it is very simple. There are 6-7 fully working examples. As I discuss more patterns, I will add them there. Your participation is most welcome and very much valued.

Async Command Sample Project (5 MB+): fxobjects.dev.java.net/files/documents/11182/151722/async-sample.zip

HttpCommand based Yahoo Search Implementation (5 MB+): https://fxobjects.dev.java.net/files/documents/11182/151723/httpcommand-async-sample.zip

In the last installment of my post, I briefly described how to do Test Driven Development (TDD) in JavaFX using Model-View-Presenter (MVP) pattern. In this installment, I illustrate this particular piece in detail and provide working code samples. You can download the code here (Caution: 6.7 MB download).

The download is a zip file containing all 4 NetBeans projects. The code is tested with libraries compatible with JavaFX 1.3 and with NetBeans 6.9. You will also need a Glassfish v3 instance to deploy the web application. No additional configuration is needed: the application uses the out-of-the-box data source configured in Glassfish v3 (jdbc/sample) to connect to the built-in Derby database schema "SAMPLE".

Test Terminology

I have used JUnit and jMock to build the unit tests in the samples. If you are not familiar with jMock, or have only a faint idea of the difference between fake objects, test stubs, mock objects, etc., look at the brief definitions of the test terminology below [in line with their definitions from the xUnit Patterns book - a great book, by the way]:

  1. System Under Test (SUT) or Object Under Test: the actual object or combination of objects being tested.
  2. Collaborator: a SUT does not operate in isolation; it relies on other objects for input and depends on others for intermediate input/output. These objects are called collaborators.
  3. Fake Object: a much lighter-weight implementation of a component that the SUT depends on.
  4. Test Stub: test stubs provide input to the SUT. When I use a test stub, it is almost always auto-generated; that is, I use jMock to mock the object and set input expectations on it. The auto-generated stub then provides mock input to the SUT. As you will see later, there is no need for a stubbed UI in JavaFX.
  5. Mock Object: like an auto-generated stub, except that it receives output the SUT produces. The expectations set on the mock object therefore test the SUT's behavior.

Mapping Model-View-Presenter components to TDD

Figure 1 below shows these objects as color coded entities in a traditional TDD based on Model-View-Presenter.

Traditional TDD

Figure 1. Traditional TDD for UI

A few salient points about Figure 1:

  1. The UI Presenter is the SUT.
  2. The stubbed UI provides the input data. Traditional TDD for UI relies on stubbing the UI, because creating the real UI is difficult. The UI implements an interface, and the widgets are identified as Clickable, Selectable, etc.
  3. All other Collaborators are faked or mocked.
  4. The UI Presenter has reference to the UI Interface, presentation model and the service interface (one or more).
  5. The UI also holds a reference to the Presenter

JavaFX variant of TDD

In JavaFX, I treat the UI (Node) and the Presenter as a single unit (SUT Couplet) as shown in Figure 2. The JavaFX specific nuances that result in a slightly different form of test setup are as follows:

  1. In JavaFX it is cumbersome to declare the UI interface and then create the stub and actual UI class.
  2. Declaring the UI using an interface makes it more Java-ish, thus losing advantages of the JavaFX style including binding – which is crucial for UI states, enabling, disabling etc.

https://fxobjects.dev.java.net/files/documents/11182/151350/tdd2.png

Figure 2. JavaFX TDD for UI

A few salient points about JavaFX based TDD:

  1. There is no stub for the UI. The UI is created as-is, with a bunch of fields.
  2. The only logic that resides in the JavaFX UI is the state binding to the model data; the other piece residing in the UI, the layout, holds no logic. The initial focus during TDD is to get the former correct.
  3. The layout with HBox and VBox can be created after the initial functionality is complete. Initially, the UI contains a create() method returning a javafx.scene.Group with all the UI widgets as its contents.
  4. The presenter is not the sole SUT; the presenter and the Node together form the SUT couplet.

Sample Application

Now let us turn our attention to the actual application I created to demonstrate the TDD with MVP and a few other architectural aspects as well. The sample application has 2 screens. Figure 3 shows these 2 screens.

  1. The first screen allows the user to simulate login as an end user or as store manager.
  2. The subsequent screen allows the user to search for products.
  3. The end user can only search on product code. Store Manager can search on product code and quantity on hand.
  4. The end user sees 4 product fields – Code, Description, Sales Price and whether available or not.
  5. The store manager sees 5 product fields – Code, Description, Purchase Price, Sales prices and Quantity on Hand

https://fxobjects.dev.java.net/files/documents/11182/151355/tdd-pic1.jpg https://fxobjects.dev.java.net/files/documents/11182/151356/tdd-pic2.jpg

Figure 3. Screen 1 and Screen 2 of the Sample Application

Application Architecture

The application uses simplified 3-tier architecture as shown in Figure 4 below. I will describe the architecture using the product search use case.

  1. The JavaFX client side consists of ProductSearchNodePresenter and ProductSearchService Hessian Proxy wired together by Spring. (In fact all components are wired using Spring.)
  2. When  the user clicks on Search, the UI just calls presenter.doSearch(). You will notice that the node falls back to the Presenter for everything - for decision making during binding, for event handling and making backend calls and getting data and setting into the Node back again or traversing to the another node – the whole shebang.
  3. The Presenter uses the Product Search Service (already injected by Spring) to communicate with the remote services.
  4. The backend consists of a web container with a simple Hessian servlet implementing the Service interface. It receives the call and responds with data.
  5. All data exchange between the two ends is via Hessian over HTTP.
  6. The presenter creates appropriate Presentation Model objects and sets the data into the table/grid.

Sample JavaFX Application Architecture

Figure 4. Application Architecture for the Sample JavaFX Application

Spring based Injection in JavaFX

The sample JavaFX application is entirely wired using Spring. This is no accident. The benefits of using Spring in any decent sized application - whether it is JavaFX, Swing or server side - go without saying. Spring dependency injection, when used correctly, adds value to the architecture and simplifies it. Okay, enough said.

The Spring configuration XML for the JavaFX application defines the Node Presenter and the Search Service Hessian Proxy, and they are injected into one another as required. For instance, ProductSearchNodePresenter is injected with a Spring Hessian proxy implementing the IProductSearchService interface.

<beans ...>
    <bean id="productSearchService" class="org.springframework.remoting.caucho.HessianProxyFactoryBean">
        <property name="serviceUrl">
            <value>http://${serverName}:${port}/${contextPath}/ProductSearchServlet</value>
        </property>
        <property name="serviceInterface">
            <value>org.fxobjects.mvp.service.productsearch.IProductSearchService</value>
        </property>
    </bean>
    <bean id="productSearchPresenterBean" class="org.fxobjects.mvp.ui.productsearch.ProductSearchNodePresenter">
        <property name="id" value="ProductSearchNodePresenter"/>
        <property name="productSearchService" ref="productSearchService"/>
    </bean>
... .. </beans>

The Spring XML is loaded in the Main script as follows:

var ctx = new ClassPathXmlApplicationContext("org/fxobjects/mvp/mvp-spring-beans.xml");
var appController:ApplicationController = ctx.getBean("appController") as ApplicationController;

Note: If you have looked at the code already, you also get an idea of how the so-called Application Controller gets the right node and binds it to the scene graph. In the XML, you just mark the desired node presenter as the default node presenter, and the Application Controller picks it up and displays it as the first screen. This is a simplified implementation of the so-called controller chain, which I will discuss in future posts.
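For completeness, here is a minimal sketch of how the Application Controller might attach the default node to the scene graph. The names here (getDefaultNodePresenter and so on) are illustrative assumptions, not necessarily the actual sample code:

    Stage {
        title: "Sample Application"
        scene: Scene {
            // the controller hands out the node of the default node presenter
            content: [ appController.getDefaultNodePresenter().getNode() ]
        }
    }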

Netbeans Projects

Three Netbeans projects are created as shown below in Figure 5. One additional project holds the JUnit tests. Both the front end and the backend projects depend on mvp-common – a Java class library project that holds the classes for data exchange as well as the service interfaces.

https://fxobjects.dev.java.net/files/documents/11182/151367/file_151367.dat/Netbeans%20Projects%20JavaFX.jpg

Figure 5. Netbeans projects in the Sample JavaFX application and their inter-relationships

Note that the JUnit project is nothing but a JavaFX script project with a script file containing a run() function. This run() is marked as the entry point, and all tests are run from it. At this point Netbeans does not provide a mechanism to individually select a TestCase and run it; hence everything has to be run from this run() function. This JavaFX-JUnit project imports all the jars necessary for testing (JUnit, jMock) as well as Spring, mvp-javafx and mvp-common.

public function run():Void {
    var runner:TestRunner = new TestRunner();

    //Verify that the first node displayed is the LoginNode.   
    runner.run(DefaultNodeTest.class);
   
    //LoginNodePresenter and LoginNode Unit Test
    runner.run(LoginNodeTest.class);
 
   //ProductSearchNodePresenter and ProductSearchNode unit tests
    runner.run(ProductSearchNodeTest.class); 
}

Getting to TDD

Ok. Now I get to the core topic of this blog post: how to do TDD with JavaFX correctly. When I do TDD with JavaFX, here is how I proceed.

Preparations

  1. First and foremost: the interface and objects for data exchange are fleshed out to some degree.
  2. Then I create the Node with the widgets, but without its layout. I just create a Group, add all widgets to it and return it from the create() function. The widgets do not have any bind logic; they are dumb widgets at this point.
  3. Then I create a Presenter JavaFX class and create a bidirectional reference between the Presenter and the Node.
  4. I add event handling functions in the Node (for instance button clicks) that just call functions in the presenter.
  5. Then I stub out the callback methods in the presenter.
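The wiring in steps 3 and 4 can be sketched as follows. This is a hedged sketch; the class and member names are illustrative rather than the actual sample code:

    public class ProductSearchNode extends CustomNode {
        // back reference to the presenter (step 3)
        public-init var presenter:ProductSearchNodePresenter;

        public override function create():Node {
            // dumb widgets only, no bind logic yet (step 2)
            return Group { content: [ ] };
        }
    }

    public class ProductSearchNodePresenter {
        var node:ProductSearchNode;

        // the presenter creates and caches the Node
        public function getNode():ProductSearchNode {
            if (node == null) {
                node = ProductSearchNode { presenter: this };
            }
            return node;
        }
    }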

Now the real TDD fun starts

Now, let me start writing JUnit tests to verify the initial state of the UI. For example, the ProductSearchNode should not display the quantity-on-hand search box for an end user. Since my application looks for the currently logged in user in a predefined place, I do the following in my JUnit test:

public class ProductSearchNodeTest extends TestCase {
    var presenter:ProductSearchNodePresenter;
    var node:ProductSearchNode;

    public override function setUp():Void {
        presenter = ProductSearchNodePresenter { };
        node = presenter.getNode() as ProductSearchNode;
    }

    //test Quantity On hand textbox is invisible to end user
    public function testQOHInvisibleForEndUser():Void {
        ApplicationData.currentlyLoggedInUser = TestUtil.createDummyEndUser();
        assertFalse(node.quantityOnHandTextBox.visible);
    }
}

Obviously the test fails now, so I proceed to fix it. The fix involves adding the following bind condition to the JavaFX UI Node (and the test will pass after that):

    public-read var quantityOnHandTextBox:TextBox = TextBox {
        visible: bind prodSearchNodePresenter.shouldShowQuantityOnHandOption
        layoutInfo: LayoutInfo { width: 50 }
    };
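On the presenter side, this implies a variable roughly like the following. This is an illustrative sketch; the member and function names (such as isStoreManager) are assumptions and may differ in the actual sample:

    // in ProductSearchNodePresenter: the presenter, not the TextBox,
    // decides whether the quantity-on-hand option should be shown
    public-read var shouldShowQuantityOnHandOption:Boolean =
            ApplicationData.currentlyLoggedInUser.isStoreManager();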

A few points to note here:

  1. The TextBox does not decide whether it should make itself visible. Instead it relies on an appropriately exposed bind variable in the Node Presenter.
  2. Also note that the Presenter is created in the setUp() function of the testcase, and the presenter then creates the actual Node.
  3. Since Presenter is in charge, it should always create the Node. This puts the Presenter in control to decide whether the Node should be destroyed or to keep it around for future display. The Node itself should not make that decision. In this sample application, presenter.getNode() function caches the Node and returns the same instance for subsequent invocations.
  4. Since it is the presenter that creates the Node and also gets callback when button clicks occur, we can easily simulate button clicks by calling the appropriate callback methods on the presenter directly from the test. Not even a stubbed out event is required! Can’t get easier than that.

More TDD and what a Mockery!

Now let us look at another JUnit test. The Quantity On Hand is a textbox in the Search UI and the data entered should be numeric. But right now, the textbox allows anything. (Well, I could have used a TextBox that allowed only numeric characters, but then I could not have demonstrated this jMock test case to you!) Here is what I envision happening when Search is clicked, to trap those data type errors:

  1. When non numeric data is entered into the quantity on hand textbox and search button is clicked, the Presenter.doSearch() is called.
  2. The doSearch method validates the data types and then creates a list of error messages.
  3. Then it passes the error messages to the Application Controller for display.
  4. No further functionality is invoked.
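The steps above translate into a doSearch() roughly like the following sketch. The names are assumptions for illustration - owningUnit, the error message text and the specific validation may differ in the actual sample:

    public function doSearch():Void {
        var errors:List = new ArrayList();
        try {
            Integer.parseInt(node.quantityOnHandTextBox.text);
        } catch (nfe:NumberFormatException) {
            errors.add("Quantity on hand must be numeric");
        }
        if (not errors.isEmpty()) {
            // hand the errors to the Application Controller and stop
            owningUnit.displayErrors(errors);
            return;
        }
        // ... otherwise proceed with the actual search
    }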

Note that our previous test involved pure state checking. That was achieved with assertXYZ() calls. Our current test involves checking the internal behavior of the doSearch() function itself. How do we test that? That’s where jMock comes into the picture.

  1. jMock allows me to mock interfaces.
  2. This in turn allows me to plug the mock service implementations into the Presenter instead of the real implementations.
  3. Also, as you might have seen in the code, the NodePresenter is owned by the ApplicationController – a top level class that controls the entire application. ApplicationController implements the IApplicationController interface and has many methods, and one of them useful for us right now is displayErrors(). This method can be called to display the errors.

Based on this, here is how my test code looks:

  • Create a Mockery object. This acts as the context for jMock expectations.
var jMockContext:Mockery = new Mockery();
  • Set a non numeric text value in the Quantity On Hand textbox
    public function testSearchDataTypeError():Void {
        ApplicationData.currentlyLoggedInUser = 
                              TestUtil.createDummyStoreManager();
        node.productCodeTextBox.text = "SW";
        node.quantityOnHandTextBox.text = "ABCDEF";
       .. ..

    }
  • Create a mock implementation for the IApplicationController and set it on the Presenter.
        var mockAppController:IApplicationController = 
                      jMockContext.mock(IApplicationController.class);
        presenter.setOwningUnit(mockAppController);
  • Set the expectations. Expectations let me sketch the expected behavior of the SUT when interacting with a mock object – the expected method call, how many times, expected input, expected output, in what sequence and so on. The one below sets the expectation that the displayErrors() function on the mock controller is called with a list of errors.
        var expectedErrorMsgList = new ArrayList();
        expectedErrorMsgList.add("Some error message");
        var expectation:Expectations = new Expectations();
        expectation.oneOf (mockAppController).
                          displayErrors(expectedErrorMsgList);
        jMockContext.checking(expectation);
  • Make the actual doSearch() call on presenter.
        presenter.doSearch();
  • If doSearch() behaved as expected, then it should internally create the data type errors list and call ApplicationController.displayErrors(List). This verification is done by calling the following at the end. An exception is thrown if the actual behavior did not match the expected behavior.
        jMockContext.assertIsSatisfied();

NOTE:

  1. I ate my own dog food when writing this test code.  In this test I wanted to peek into the internal expected behavior of the SUT without mocking the SUT itself!!
  2. Initially I was planning to code the entire logic of validation and error display within doSearch(). With that approach, I would not be able to test the error behavior.  Hence I added the displayErrors(List) into ApplicationController.
  3. But ApplicationController was a JavaFX class. An interface would have been better. Hence I ended up extracting the IApplicationController interface from the existing ApplicationController by refactoring.
  4. Similarly the parent class NodePresenter was using the ApplicationController straight away. I ended up changing this to use the IApplicationController interface.
  5. This allowed me to mock the application controller, which is a great thing because it allowed me to plug in mock app controllers and check expected behavior, navigation behavior and other things.
  6. As they say, TDD helps you refactor code into cleaner and more elegant code. This is one good example proving that point.

More mockery and Expectations – JavaFX doesn’t like the double braces

We aren’t done with mocking yet. I now show you the test code for an actual search by mocking the IProductSearchService (and that JavaFX does not allow the double-brace syntax of jMock Expectations, and how to get around it).
To test a normal search that returns results, I set up the test code as follows. I believe you are comfortable with most of the code except the expectation setup:

    //test that a normal search returning result from server will display all the data fetched in UI
    public function testSearchAll():Void {
        ApplicationData.currentlyLoggedInUser = TestUtil.createDummyStoreManager();
        node.productCodeTextBox.text = "";
        node.quantityOnHandTextBox.text = "";

        var crit = new ProductSearchCriteria();
        crit.setProductCode("");

        var resultList:List = new ArrayList();
        var p1:Product = new Product("SW", "Software", 100, 8.00, 100);
        var p2:Product = new Product("VW", "Volkswagen", 100000, 10000.00, 100);
        resultList.add(p1);
        resultList.add(p2);

        //create the mock service and set it as the real service
        var mockSearchService:IProductSearchService = 
                         jMockContext.mock(IProductSearchService.class);
        presenter.setProductSearchService(mockSearchService);

        var mockAppController:IApplicationController = 
                         jMockContext.mock(IApplicationController.class);
        presenter.setOwningUnit(mockAppController);

        /*
            set expectations that the Search Service will be called with an empty search criteria
            after that assert the state that the results are a non zero and set in table and the presentation model
        */
        var expectation:Expectations = new Expectations();
        expectation.oneOf (mockSearchService).searchForProducts(crit);
        expectation.will(
            expectation.returnValue(resultList)
        );
        jMockContext.checking(expectation);

        presenter.doSearch();

        jMockContext.assertIsSatisfied();
        assertTrue(sizeof presenter.products == 2);
        assertTrue(sizeof node.productTable.rows == 2);
    }

The listing above also shows the expected return value from the search service. Notice that it is a bit verbose. A similar jMock expectation in Java would look like this:

Expectations expectation = new Expectations() {
    {
        oneOf (mockSearchService).searchForProducts(crit);
        will(returnValue(resultList));
    }
};

Now that is a compact and fluent interface. However, JavaFX does not allow the double braces of instance initializer blocks, so we have to break the expectation down into non-fluent method calls.

Final dose of jMock – Using named sequences for in sequence verification and failure verification

Here is my final dose of jMock to get you test infected :). This one tests the behavior of the SUT when the remote service is unreachable. The listing below shows the sequencing of expected behavior when more than one “thing” is expected.

    //test that a failure in the remote service (manifested as a Spring framework remote connect failure exception)
    //is dealt with correctly by the UI and is dispatched to the Application Controller to display the errors
    public function testSearchServiceFailure():Void {
        ApplicationData.currentlyLoggedInUser = TestUtil.createDummyStoreManager();
        node.productCodeTextBox.text = "SW";
        node.quantityOnHandTextBox.text = "";

        var innerDummyException:Exception = new Exception();
        var thrownException:RemoteConnectFailureException = 
           new RemoteConnectFailureException("Cannot connect to service", 
                                                       innerDummyException);

        var crit = new ProductSearchCriteria();
        crit.setProductCode("SW");

        //create the mock service and set it as the real service
        var mockSearchService:IProductSearchService = 
                    jMockContext.mock(IProductSearchService.class);
        presenter.setProductSearchService(mockSearchService);

        var mockAppController:IApplicationController = 
                    jMockContext.mock(IApplicationController.class);
        presenter.setOwningUnit(mockAppController);

        /*
            set expectations that when there is a failure on any kind of searching,
            the applicationController's handleError is called
            after that assert the state that the results are empty in table and the presentation model
        */
        def seqName:String = "service exception sequence";
        def exceptionSeq:Sequence = jMockContext.sequence(seqName);
        var expectation:Expectations = new Expectations();
        expectation.oneOf (mockSearchService).searchForProducts(crit);
        expectation.will(
            expectation.throwException(thrownException)
        );
        expectation.inSequence(exceptionSeq);

        //assert that the handle Error on the controller is called
        expectation.oneOf(mockAppController).
                    handleError(thrownException, 
                       ProductSearchNodePresenter.SEARCH_SERVICE_ERROR_MSG);
        expectation.inSequence(exceptionSeq);

        jMockContext.checking(expectation);

        presenter.doSearch();

        jMockContext.assertIsSatisfied();

        assertTrue(sizeof presenter.products == 0);
        assertTrue(sizeof node.productTable.rows == 0);
    }

The sequence in the above listing ties together all the expected behavior and sets the expectation that the calls are invoked in that order. Mockery.assertIsSatisfied() verifies the order of invocation as well.

Summary

This was a long blog post indeed. I am glad that you are still awake and made it to this point! Hopefully I have kept this very important concept of TDD readable, and conveyed the importance of TDD in making your system architecture elegant and in building a system that works today and works tomorrow. I have also demonstrated that MVP (Passive View MVP) is your friend in getting an elegant, TDD-friendly JavaFX architecture. Along the way you have also seen how I used Spring to loosely couple the system with dependency injection and all that jazz. You also got a sneak peek at the Controller Chain and how it is used for node navigation. I encourage you to download the accompanying application and go through it to understand the code in detail (link here). And don’t forget to write elegant TDD-style JavaFX applications!

PS: As a parting note, JFxtras – a fine collection of JavaFX controls – has its own testing framework. It is based on JUnit, but extends it in some ways to write JavaFX-ish tests. It also has an Expectations framework. However, I am not sure if it allows mocking interfaces, setting expectations, sequences etc. the way jMock does. I was familiar with jMock from my previous projects; hence I used JUnit and jMock in this post. If you are adventurous, try it out and let me know :)

What’s next

This is not the end of it! There is more on the way. Notice how I used synchronous communication to connect to the remote services. What I should really have used is asynchronous communication, but that would muddle the concepts and make things complex. In the next installment I will cover asynchronous communication with the remote services using asynchronous Command objects, and how to test them. Following that, I will show how to break a large JavaFX application into modules using the Application Controller, Module Controllers and Application Events. Another future post will cover mechanisms for data exchange – not specific to JavaFX, but in general those that play well with the overall “agility” of the architecture. Of course, security and dealing with it is also in the offing. So watch this space for more blog posts related to Enterprise JavaFX architecture – the effective way! And do not forget to provide feedback on what you would like to see covered in this exciting area.

In spite of the cool animation and glamour power of JavaFX, the largest usage of JavaFX will be for building “boring” enterprise software combined with some visualization. There is a dearth of resources exploring architecture options for building serious and large applications using JavaFX. This blog post is an attempt towards addressing that gap.

An enterprise JavaFX project can be a complex beast. No two projects are alike. You might have to build a brand new system, integrate with an existing system (partially or completely) or deal with project specific or company specific constraints. Hence there is no single architecture that satisfies all the constraints. However, some of the common criteria for architecture and design of JavaFX application are as follows:

  1. Simplicity of architecture
  2. Usage of proven design patterns
  3. Less coding, Less maintenance
  4. Domain Driven Design (where possible)
  5. Ease of Unit Testing

Some patterns lend themselves very well to the above criteria, and I will use those patterns and principles for the design and end-to-end architecture of a fairly large JavaFX application.

Front End architectural principles

The best place to start is obviously the actual front end itself. Pure Java UIs have the following options in this area.

  • Obviously there is the Swing Application Framework (JSR 296).
  • There are other options such as the Netbeans RCP, Spring RCP and Eclipse RCP frameworks, which can be combined well with a dose of OSGi for extensions.

If I were to implement a moderately large JavaFX project, none of these models would suit my needs well.

  • The former (Swing Application Framework) is too high level. A high-level application framework is a good start, but JavaFX in RIA mode has its lifecycle cut out: there is always the creation of the Stage and the Scene, and then displaying them. There isn’t much scope for enhancing this, so it is not a feasible solution.
  • At the other extreme, we have the low-level RCP style frameworks. Developing a JavaFX app in RCP style today is not possible because such a framework does not exist. In addition, the RCP style forces you to implement the UI in a certain manner, which may not be suitable for every scenario.

Typical JavaFX Node Lifecycle

Given this, my preferred solution is a middle ground: capture the commonalities in behavior across pages (a page is a Node in JavaFX parlance; from here on I will refer to these individual JavaFX pages as JavaFX nodes) into a framework. The boilerplate code that goes into every JavaFX node can thus be captured in one place that controls the lifecycle of the JavaFX node. Any good JavaFX architecture does so by capturing these fairly generic lifecycle phases:

  1. The user enters some data and submits.
  2. UI-level validation occurs.
  3. Domain objects are constructed and validated again if necessary.
  4. A request is built. Data from the UI is converted to a form appropriate for transmission.
  5. The request is sent and we wait for the response, synchronously or asynchronously (based on our choice of communication mechanism).
  6. In both cases, we might get an actual response, or an exception in some format (a 500 error, a 403 error, an authorization error etc.). If the request was asynchronous, the data from the response has to be set into the UI (the same page or a different one) in an appropriate manner. If the data has to be set into another UI, some kind of transition to another page should also occur.
  7. Server requests should generally be asynchronous, and the user should be provided with a mechanism to cancel a request that takes too long.
  8. In addition, if the user signals aborting a particular action midstream, other components should have the option to veto (dirty checking) – for instance, “You have made changes. Do you really want to move away from this page?”
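As a sketch of the veto in step 8 (the names here are hypothetical; the actual mechanism is up to the framework), a base presenter could expose a dirty check that the controller consults before navigating away:

    public class NodePresenter {
        // subclasses override this to report unsaved changes
        public function isDirty():Boolean { return false; }

        // called by the controller before transferring control elsewhere;
        // confirmLeave() is an assumed helper that would show the
        // "Do you really want to move away from this page?" dialog
        public function canLeave():Boolean {
            if (isDirty()) { return confirmLeave(); }
            return true;
        }
    }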

Now that we know what the general lifecycle should be, the question arises: where should this be captured – the View or the Controller (if there is such a thing as a Controller)? I will address that next.

Model-View-Controller or Model-View-Presenter

MVC and its variants are the de facto kings of web frontend implementations. However, it is Model-View-Presenter that lends itself well to rich UI development. The nice thing about an MVP-style UI is that it separates all the event handling aspects from the UI into the Presenter class. This makes it easier to unit test the UI in isolation (in combination with mock objects if needed).
There are two variants of MVP – Supervising Controller and Passive View. (Check these links to read about them in Martin Fowler’s essays: martinfowler.com/eaaDev/SupervisingPresenter.html and http://martinfowler.com/eaaDev/PassiveScreen.html)

If data binding worked well with JavaFX, perhaps the Supervising Controller style would have been a good fit. But since the opposite is true (as noted in my earlier blog - http://weblogs.java.net/blog/srikanth/archive/2010/06/21/javafx-bind-%E2%80%93-too-much-hype), it is Passive View MVP that best fits JavaFX. A typical division of responsibilities among the participants in a Passive View MVP implementation in JavaFX is provided below (for the JavaFX node lifecycle mentioned earlier):

http://objectsource.com/blogs/wp-content/uploads/2010/07/ClassDiagram0.pnghttp://objectsource.com/blogs/wp-content/uploads/2010/07/SequenceDiagram0.png

  1. Every JavaFX node is implemented as a subclass of CustomNode.
  2. Every such subclass will hold reference to a NodePresenter. (A question arises how will a node get reference to a node presenter. The answer is Dependency injection of course. I have personally used Spring for wiring JavaFX objects on the client side itself. Guice could perhaps be also used - http://weblogs.java.net/blog/srikanth/archive/2010/06/12/wiring-javafx-objects-spring-tread-caution)
  3. It is the NodePresenter that is notified on every event in the UI.
  4. NodePresenter then starts the lifecycle on the JavaFX node. It connects to the server, using other helper classes and waits for the response.
  5. After the response is received, it passes control to another NodePresenter (and not to another JavaFX Node), passes along any data, and asks the next NodePresenter to take over. (Indeed, the view is passive :)
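A minimal sketch of points 1 to 3 (with illustrative, assumed names): the node holds a reference to its presenter and forwards every UI event to it.

    public class SearchNode extends CustomNode {
        public-init var presenter:SearchNodePresenter;

        public override function create():Node {
            return Group {
                content: [
                    Button {
                        text: "Search"
                        // the node does nothing itself; the presenter is notified
                        action: function():Void { presenter.doSearch(); }
                    }
                ]
            };
        }
    }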

Why is MVP style JavaFX UI test friendly?

You can use JUnit to test JavaFX. If you have previously been disappointed at how JUnit cannot test a JavaFX UI, then adopt this MVP approach and soon you will find that the JavaFX UI is unit testable. (JFxtras also comes with an extended JUnit framework for unit testing. Take a look at it - jfxtras.org)

  • Notice that the UI need not be complete to start testing it. All it needs is the appropriate fields. The unit test can just instantiate the node, set the data, then call doOnSave itself and let the NodePresenter do the rest. At the end of it, the test case can assert that the data is set into the current node or some other node.
  • Similarly, the code to connect to the back end need not be present to unit test this piece. You can either stub the back end code or provide mock expectations for it; the mock expectations work against the presenter, once again letting the developer assert the data in the node widgets.

A typical Passive View style implementation in JavaFX is slightly different from what one might see in GWT or elsewhere. For instance, in GWT the UI contract is captured in a Java interface and the UI widgets also provide an interface, making it easy to stub the entire UI. Such a facility does not exist in JavaFX.

Application Event

A JavaFX UI is never going to be plain vanilla. It might even be a composite of multiple nodes. A change in one node of a UI might need to trigger changes in another node in another part of the UI. The JavaFX bind syntax is not a good candidate for this, as it tightly ties the source and the target. Hence there is a need to fire Application Events so that other nodes in a composite UI can listen for changes in the underlying model and react accordingly. Note that in standard Java this is implemented using PropertyChangeListeners etc. However, one never has control over how the backend might be designed, and the equivalent can be achieved by firing Application Events.
Not all events to be fired are Application Events. An Application Event is something that has a significant business meaning. Smaller changes can be fired as Command Events and State Events. A command event indicates that something has been triggered in the application; a state event indicates that something has changed in the application. Use these events accordingly. Event listeners can be implemented the standard Java interface way, or you can take advantage of JavaFX mixins to achieve this.
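A hedged sketch of these event types and a mixin-based listener (all names here are illustrative, not part of any existing framework):

    public abstract class ApplicationEvent { }             // significant business meaning
    public class CommandEvent extends ApplicationEvent { } // something was triggered
    public class StateEvent extends ApplicationEvent { }   // something has changed

    // a JavaFX mixin lets any node or presenter class become a listener
    public mixin class ApplicationEventListener {
        public abstract function onApplicationEvent(event:ApplicationEvent):Void;
    }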

Controller Chain
 

I am not sure if the term controller chain has been defined in industry literature, but I sure feel the need for it. In a small application, one could have a few JavaFX nodes and a few Node Presenters. As the application grows, there is a need to group the Nodes into modules and the modules into an application. In such a scenario, we need controllers for each module and for the entire application itself. Notice that I referred to these as controllers and not presenters. That is because presenters are micromanagers while controllers are hands-off managers. At the Node level there is a need for micromanagement; at the module or application level, such a need ceases to exist. Nevertheless, there needs to be a hierarchy in the chain of controllers (hence the term controller chain) with some ownership relation between them.

Advantages of having controller chain

  1. When JavaFX nodes need to transfer control to another node, the change of scene ultimately has to be effected at the Stage level. With a single-level hierarchy of presenters/controllers, soon every node presenter starts “talking” to every other node presenter and ultimately gets linked to the Stage somehow. This is definitely not desirable. Using bound functions to trigger the change of node from one node presenter to its module controller, and then a lookup at the application controller level, is a better design.
  2. Modules can be cohesively designed and the application can consist of loosely coupled modules
  3. The top level Application Controller can communicate with Stage and decide who should be the default node/module during startup
  4. ACLs can be assigned at the module controller level. (In a classic Java EE application the model is role based: when the user clicks on a link, a decision is made whether or not to allow access to the link. However, it is better to design a rich UI on an ACL-based authorization model.)
  5. Allows the application to grow bigger as it evolves. Modules are transparent to each other.

NOTE: It is easy to wonder whether OSGi could be used as an alternative to the controller chain, but it is actually complementary to it.

Event Bus

An event bus is not an absolute necessity, but if a JavaFX app really grows so big that using listeners across modules results in tight coupling, then an event bus should come into the picture. Otherwise it is just over-engineering.

What’s next

This blog post covered a lot of the basics of UI design and architecture as it relates to JavaFX. JavaFX comes with its own quirks, and the architecture and design have to adapt to that. I have not gotten into a lot of the dirty details, but I will soon provide a reference implementation covering all of the points discussed above, so stay tuned. In addition, I will cover asynchronous communication and data exchange mechanisms, the EDT in JavaFX, security considerations and testing (including testing async calls) in more detail in the coming installments. Until then, follow these rules to churn out elegant and testable JavaFX applications!



 

One cannot cruise through JavaFX land these days without hearing about JavaFX keywords “bind”, “inverse” and “on replace”.

In short, one could think of JavaFX bind as an Observer pattern at the language syntax level.  It is one of the biggest “zing thing” purported about JavaFX in common literature. It has even been claimed that the JavaFX bind can be applied for declarative automatic data binding.

[What is data binding – In a typical application the data comes from a server. A common mechanism used in Swing based UI development is to automatically tie the data from the Java based model object to UI widgets. This mechanism is called Data Binding. JGoodies data binding is used frequently with Swing. Similarly SWT and JFace data binding is popular in Eclipse RCP world]
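To make that concrete, the listener plumbing that data-binding frameworks automate can be sketched with the JDK's own java.beans support (the Employee class and the StringBuilder "widget" here are hypothetical stand-ins for a real model and text box):

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Hypothetical model object that fires an event whenever firstName changes.
class Employee {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String firstName = "";

    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    public void setFirstName(String v) {
        String old = firstName;
        firstName = v;
        // Listeners (e.g. a bound text box) update themselves from this event.
        pcs.firePropertyChange("firstName", old, v);
    }
}

public class BindingDemo {
    public static void main(String[] args) {
        StringBuilder widgetText = new StringBuilder(); // stand-in for a UI text box
        Employee emp = new Employee();
        // The "binding": model change -> widget update, done once, by hand here,
        // automatically by a data-binding framework.
        emp.addPropertyChangeListener(e -> {
            widgetText.setLength(0);
            widgetText.append(e.getNewValue());
        });
        emp.setFirstName("Ada");
        System.out.println(widgetText); // prints "Ada"
    }
}
```

Data-binding libraries generate or hide exactly this listener wiring; JavaFX bind promises the same at the language level.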

Having successfully used SWT/JFace data binding in the past, I was ecstatic about JavaFX providing language-level bind support, so I tried it out in a real JavaFX project - and found it is not as great as hyped. [One might of course say, “Yes, it has its limitations.” But what are those limitations exactly? Instead of hand waving, this blog examines them thoroughly.]

Let’s start with the first and obviously a big limitation.

Limitation 1: A JavaFX attribute cannot be bound to a Java attribute. 

A JavaFX attribute cannot be bound to a Java attribute. I understand there are underlying platform reasons for that, but regardless, that’s a big limitation as far as I am concerned. A lot of the JavaFX applications I build are large corporate applications, and my data always comes from the server as Java objects. I would have loved to bind these Java objects to the JavaFX UI widgets - ideally a bidirectional bind, or at least a unidirectional bind. Neither option works.
What makes this even more confusing is that the JavaFX compiler happily accepts the unidirectional binding syntax, but the binding does not work at runtime. The compiler does, however, throw an explicit error that inverse binding with Java objects is not possible.
This compiles, but does not work

     var firstNameTextBox:TextBox = TextBox { text: bind empJavaObj.firstName }

and this does not compile

    var firstNameTextBox:TextBox = TextBox {
        text: bind empJavaObj.firstName with inverse
    }

One might think, “Hey, no problem - I will copy the data from the Java POJO (or JPA entity) sent by the server as a response to my service call into a JavaFX presentation object.” Something like this:
 

    var firstNameTextBox:TextBox = TextBox { 
        text: bind empFxObj.firstName with inverse
    } 

where empFxObj is a JavaFX object representing an Employee. Well, you might be surprised that this also has its share of gotchas. That brings me to the next limitation.

Limitation 2: A JavaFX object must be a def for bidirectional binding to work

def is roughly equivalent to Java’s final. Wow! Take a deep breath for the repercussions of this limitation to sink in. It implies that a UI widget attribute can be bound only to a constant JavaFX presentation data object: the memory reference of that object cannot vary (though its attributes can). Well, for me, the whole point of data binding is to get data from the server as Java POJOs, construct a fresh JavaFX object and set it in the UI so that the data binding magic takes place. If the JavaFX object has to be a constant, then all of the bind effort is wasted. (Remember, folks - to reach this point, we have already gone through the effort of copying data from the Java POJO into a JavaFX object, and that copying can be error-prone, especially when the returned dataset or object graph is large and involves conditional copying of data.)

Workarounds for Limitation 2

Having said that, there are two workarounds for this limitation, and I did not like either. But I will mention them anyway.

Option 1: Dispose the UI every time

Since JavaFX expects the bound object to be a def, we can do this: on every invocation (or at the end of it), we throw away the UI itself and construct a fresh one. When the UI is newly constructed every time, the def is also a fresh one, so the def is no longer a showstopper. So, if you had a controller listening to the UI button actions, it could simply access the def JavaFX object from the UI, make the appropriate calls to the server to save the data, and then construct a new UI with the freshly returned/constructed JavaFX object.

Why I don’t like Option 1

This works, but only for simple scenarios. The whole reason one chooses JavaFX is that the UI is far richer and more complex than what can be done by traditional Web 1.0 or Web 2.0 AJAX/DOM manipulation (Very Rich Internet Applications - VRIA). This approach works like a charm for a Hello World application, but does not cut it for decently sized applications. When the UI - and hence the scene graph - is complex, disposing of it on every server call is not a wise thing to do. [Unless, of course, one likes to develop slow and unresponsive applications, with the JVM constantly garbage collecting and CPU cycles wasted constructing new scene graphs every time.]

Option 2: Write some interesting code

Here is the second workaround. I will show you the code snippet for the UI first.

public class VeryComplexUINode extends CustomNode {
    var currentModel:FxEmployee on replace {
        boundModel.firstName = currentModel.firstName;
        boundModel.lastName = currentModel.lastName;
    }
    def boundModel:FxEmployee = FxEmployee {
        lastName: ""
    };
    def lastNameTxtBox:TextBox = TextBox {
        text: bind boundModel.lastName with inverse
    };
}

The VeryComplexUINode is my simplification of what is indeed a very complex UI node. Only one text box is shown, to illustrate the workaround. Notice that the node maintains two instances of the model object: one is the boundModel and the other is the currentModel. Whenever the controller gets new data from the server and creates a new JavaFX presentation object, it sets it as the currentModel on this node. The currentModel has an on replace block that faithfully copies the data into the boundModel. When the data makes it to the boundModel, voila, the UI widgets show the latest data.

Why I don’t like Option 2

To get the JavaFX bind working, we have coded the following

  1. We copied data from the Java model object into a JavaFX presentation model object
  2. We then copied that data again from the JavaFX presentation object into another one (the bound model)

I can already hear you say: “Wait a minute!! I am jumping through hoops here to get the bidirectional syntax to work. Wasn’t the goal of data binding to prevent this kind of coding in the first place?” My point exactly. If I have to copy data from one object to another and so on just to get the JavaFX bind syntax working in my scenario, I would rather not do it. Instead I would set the UI data manually from the Java model objects into the UI widgets.

Limitation 3: bind with inverse cannot work with expressions

Consider a case when the data to be set is based on a condition as shown below

var firstNameTextBox:TextBox = TextBox {
  text: bind
        if (empFxObj.lastName == null) "N/A" else  empFxObj.lastName  with inverse;
}

But guess what? You are not allowed to do that. Bidirectional bindings don’t allow expressions. Again, I understand why it is disallowed, but it is an irritation while building business applications, as objects rarely map plain-vanilla from one tier to another. A transformation of data between tiers is always in the cards. [You might be wondering - why in the world would anybody not have a last name? Certain cultures don’t use last names. Closer to home, “The Artist Formerly Known as Prince” does not have a last name!]

NOTE:

  1. Followers of the “Presentation Model: redundant v/s necessary” discussion might say: “Hey, the presentation model needs to be dumb, and the transformation code needs to reside in the domain-to-presentation-model mapping tier.” You are right, this code should reside in the mapping tier, but the above code is meant to illustrate the JavaFX limitation - not a showstopper, but a nice-to-know limitation
  2. There is at least one case where a JavaFX presentation model is suitable. In the JFxtras libraries, the XTableView component can only be bound to a JavaFX object, and that bound JavaFX object has to extend a predefined XObject class.

Design choice: Using bind as alternative for loosely coupled Event Listeners

Now, some of you might have thought about a lot of other uses for the bind syntax, and I am one of them. I even thought I would not have to write event listeners and change listeners in my system and could just use bind. Well, a deeper look proved how wrong I was: the primary fact about JavaFX bind is that you must know whom you are binding to. That results in tight coupling between the source and the listener.

This is not an issue if the listener and the source are in the same UI. There is nothing wrong with that. (In fact this is the only case where using JavaFX bind saves additional coding and is an elegant way to do things.)

If you are trying to mimic an event bus of sorts (for instance, enabling the copy icon when text is selected somewhere in the UI), it can theoretically be achieved, but the code will not be pretty.

Summary

The possibilities with JavaFX bind are over-hyped; use bind only in limited scenarios (not many, as you saw above). There are a whole lot of cases where bind seems like a good idea at first, but this blog post shows the devils in the details. The whole point of this post is to convince you to bid goodbye to data binding in your JavaFX UI architecture using the bind syntax (at least for now and the foreseeable future). It is a slippery slope that will lead you into pointless customizations of your UI objects, plugging PropertyChangeListeners into JavaFX, forcing you to invent one hack after another to support your one-off cases, and duct-taping your application to prevent it from falling apart.
 

I recommend that you do yourself a favor and forget data binding in your application using the JavaFX bind syntax.

I am drawn to JavaFX these days. Not because it is cool (which it is), or because I want to do whiz-bang effects, but as an explorer doing an unbiased check on whether it can be a useful tool for creating regular corporate UIs in any better fashion than regular Swing. (Granted, I squirmed a bit when using it on my first project due to the lack of UI components, but JFxtras http://www.jfxtras.org has helped me on that front.)

I like the Spring framework. Who doesn't? (Well, some don't, but that’s not the point.) I wanted to see if I could combine the two into a match made in heaven. In other words, I attempted to wire JavaFX components using Spring. After trying a couple of scenarios, I found at least one where this may not work as expected. Here are the details:

I started with bare basics of using JavaFX with Spring first. Look at the code in Listing 1 for wiring a simple JavaFX class using Spring.

Listing 1

<bean id="myfxobj" class="a.b.SomeJavaFXClass">
  <property name="variable1" value="Initial Value1"/>
  <property name="variable2" value="Initial Value2"/>
</bean>

Needless to say, I cannot do constructor injection, since there is no concept of a constructor in JavaFX. So my next resort was setter injection. It worked, and Listing 2 below shows my JavaFX class.

Listing 2

public class SomeJavaFXClass {

    public var variable1:String on replace {
        println("variable1 replacement called");
    }
    public-init var variable2:String on replace {
        println("variable2 replacement called");
    }
    init {
        println("init called");
    }
    postinit {
        println("postinit called");
    }
    public function setVariable1(s:String):Void {
        println("setVariable1 called");
        variable1 = s;
    }
    public function setVariable2(s:String):Void {
        println("setVariable2 called");
        variable2 = s;
    }
}

Nothing outstanding, but here are the concessions I made (i.e., the JavaFX principles I slightly violated):

  1. It is generally not the norm to write setters in JavaFX, but I nonetheless wrote setters on the JavaFX class.
  2. Notice that variable2 is declared as public-init. In JavaFX this means the variable should be initialized only when the object is created. By providing setter injection, I am essentially violating that principle in theory, but it did not break anything.

Emboldened by the initial success, I went a step further and wired the JavaFX UI itself with Spring (normally, when I create Swing UIs, I like to wire the UIs themselves with Spring). Listing 3 shows the wiring I could have done (but did not do).

Listing 3

<bean id="uiNode" class="a.b.MyNode">
  <property name="variable1" value="Initial Value1"/>
  <property name="variable2" value="Initial Value2"/>
</bean>

In JavaFX, as you know, CustomNode is the preferred base class for all custom UI. So I created a subclass of CustomNode; Listing 4 shows it. Pretty simple - nothing fancy. Among other things, you will notice the create() function. It is an abstract function in CustomNode and has to be implemented by concrete subclasses. The create() method is your passport to creating all the UI, so the JavaFX runtime can call it when needed.

Listing 4

public class MyNode extends CustomNode {

    public-init var variable1:String on replace {
        println("variable1 replacement called");
    }
    public var variable2:String on replace {
        println("variable2 replacement called");
    }
    init {
        println("init called");
    }
    postinit {
        println("postinit called");
    }
    public override function create() {
        println("create called");
        return HBox {
            content: [ Text { content: "Default Node" } ]
        }
    }
    public function setVariable1(s:String):Void {
        println("setVariable1 called");
        variable1 = s;
    }
    public function setVariable2(s:String):Void {
        println("setVariable2 called");
        variable2 = s;
    }
}

Instead of calling Spring, I used a traditional JavaFX invocation to look at the order of invocation, as in Listing 5 - i.e., initializing the node the traditional way and then calling the setters.

Listing 5


var myNode:MyNode = MyNode {
    variable1: "init blah1"
    variable2: "init blah2"
};
myNode.setVariable1("Later blah1");
myNode.setVariable2("Later blah2");
Stage {
    title: "Application title"
    width: 250
    height: 80
    scene: Scene {
        content: [ myNode ]
    }
}

The listing below shows the output I got.

variable1 replacement called
variable2 replacement called
create called
init called
postinit called
setVariable1 called
variable1 replacement called
setVariable2 called
variable2 replacement called

Look at the output carefully. This is the key takeaway from this blog.

Notice that create() gets implicitly called even before init(), postinit() and the setters. That means if the UI creation logic in create() depends on the initialization of the variables for some reason, then Spring setter injection results in bugs.

Granted, UIs do not always depend on initialization variables, but there are enough cases where they do. In such cases, keep this gotcha in mind and tweak your implementation to rely on other mechanisms.

One such mechanism is to write a plain-vanilla create() and then bind the visibility of UI elements to the instance variables. These variables will be initialized in the very next step anyway.

This way, you keep an eye out for this Spring “gotcha” coding pattern in create() and quash the problem before it even occurs.

Happy JavaFX’ing with Spring
 

Bean Validation is a nice API for validating Java objects and is included in Java EE 6. But it can also be used anywhere, regardless of the layer - with or without JPA, and even in standalone Java SE.

  • It formalizes and encourages the validation approach at the domain model level.
  • It helps de-duplicate the validation logic that we are accustomed to having all over the place - UI, business logic and elsewhere - and brings it back to the domain model, where it really belongs

In the past, people used (and a lot of them still do) anemic model objects, without ever giving a thought to the fact that these objects were central to their domain. A lot of validation is central to the domain, but was written elsewhere. I have noticed that people are starting to think that validation (and more) should be brought into the fold of the model objects. Bean Validation could serve as a selling point for Domain-Driven Design. It worked for me.

Back to my point - I was designing a JavaFX application. Not one with animation - a regular, boring app, but with a horrendous set of validations. Writing all of them in the UI would be such a pain, in addition to just not being right. Unknown to me, a backend was taking shape in another part of the world, where, among other things, validation was implemented as business logic in Spring POJOs wrapped in session EJBs (you know, the typical hangover from the early Spring-J2EE days). There were JUnit tests in place. That emboldened me to offer to refactor the validation into the domain classes - which were already JPA entities anyway. Not surprisingly, I was met with resistance, but a bit of explanation around Domain-Driven Design and the "insurance" of the JUnit tests convinced the sponsors to give it a shot. A few sessions of refactoring (based on guesswork) and JUnit regression tests later, the validation logic was sitting pretty in the domain model, as a combination of standard annotations and custom constraints. This impressed the client so much that they started taking an active interest in domain-driven design in other parts of the system.

With the validations in the domain model, I added the domain jar to the JavaFX bundle. The required validation libraries from Hibernate were also added, and the JavaFX application jar was signed (to grant the JVM permissions the validator needs). The UI fields were populated to and from the domain model, and Hessian remoting over HTTP allowed the objects to be dispatched to the server. The validations were shared between the JavaFX app and the backend. A happy ending indeed.

Moral of the story:

  • You can easily use Bean Validation in JavaFX. Don't be scared - it would be a lost opportunity, and lost time, if you don't
  • You can use Bean Validation to sell the value of Domain-Driven Design to your team and managers. It is one of the easy ways to convince them, because it is an easy concept to understand even for non-technical managers.

Before you jump off and start doing everything with Bean Validation - remember that there are four types of validation

  1. Data type validation
  2. Basic domain value validation - trivial checks such as NotNull, x less than y, etc.
  3. Cross-field domain validation
  4. Complex business-rules validation crossing multiple domain objects that are not in the same object graph

Remember that the first type should be done in the UI. Verifying whether an input is a String or an Integer, and parsing the int, can only be done in the UI. Better yet, customize your input fields to accept only certain keys, thereby effectively eliminating this type of validation. Bean Validation is ideally suited for types 2 and 3. The fourth type should be evaluated case by case; it is generally real business logic that belongs in EJBs or, sometimes, rule engines.
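To show how type 2 constraints read on a domain class, here is a toy, stdlib-only re-implementation of the idea. This is NOT the real javax.validation API - just the declare-on-the-domain-class mechanism it formalizes; the Employee class and the local @NotNull annotation are made up for illustration:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the Bean Validation idea: constraints are declared on the
// domain class and checked reflectively, so every tier shares the same rules.
public class ToyValidation {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface NotNull {}

    public static class Employee {          // hypothetical domain object
        @NotNull String lastName;
        @NotNull java.util.Date startDate;
        public Employee(String lastName, java.util.Date startDate) {
            this.lastName = lastName;
            this.startDate = startDate;
        }
    }

    // Returns a violation message for every @NotNull field that is null.
    public static List<String> validate(Object bean) throws IllegalAccessException {
        List<String> violations = new ArrayList<>();
        for (Field f : bean.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            if (f.isAnnotationPresent(NotNull.class) && f.get(bean) == null) {
                violations.add(f.getName() + " must not be null");
            }
        }
        return violations;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(validate(new Employee(null, new java.util.Date())));
        // prints [lastName must not be null]
    }
}
```

The real API adds much more (constraint composition, class-level cross-field constraints, groups, messages), but the shape is the same: annotations on the model, one validator call anywhere in the stack.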

Happy JavaFX'ing with Bean Validation

Struts is a very mature framework. Some may think it is old-fashioned or not the coolest kid on the block, but like it or not, it is a force to reckon with. If I were running a business requiring solid web infrastructure, I would bet on Struts. After all, the bottom line for a business is project success, not playing with cool bleeding-edge frameworks (that's the passion for us developers). And that's probably why Struts is so popular.

Anyway, I have been using Struts ever since it came out. I have seen developers use Struts in many ways - some right, others blatantly wrong. A bunch of best practices emerged in my mind out of common sense and experience. And so, I decided to document them.

What started as personal notes grew into a full-fledged book. And then I decided to try my luck with self-publishing. About a year back, I self-published the book. The book, Struts Survival Guide: Basics to Best Practices, as I called it, was successful (by my yardstick). I did not make any profit in the process, but I did not incur any loss either. It was a labor of love and a very rewarding experience at the end of the day.

I sold off all the copies of the book. And now, the ebook is available free for download here

What follows is my experience with self-publishing

As soon as I started writing the full-fledged book, I realized that writing it was going to be tougher than my little notes. I had written articles before, but book authoring was a different ball game altogether.

For starters, I was my own editor, reviewer and graphics editor. That means I not only had to write, but also cross-check the facts, fix the grammar, and create graphics and illustrations. And I was doing all of this after my day job. (Needless to say, there are still a bunch of grammar mistakes, but no factual errors to my knowledge.) I spent countless weekends and evenings working on all aspects of the book. It was painful and rewarding at the same time. It was a tough job to get reviewers for a fledgling book with an uncertain future, but I did get a few of my acquaintances to review some chapters.

Finally, the book was completed. Now came the task of getting it printed. I realized I had to launch a company to publish it and retain the rights. The rule says a book can be published only by the entity owning its ISBN. And so, I registered an LLC in Texas over the Thanksgiving weekend of 2003. It was so easy to do over the internet; I didn't even have to get off my chair.

I also purchased ISBNs from R. R. Bowker. The ISBN is that number on the book one hardly notices - the equivalent of the UPC in the product world. Bowker sells ISBNs in chunks of 10.

With the ISBN in hand, I shopped for somebody who could create the cover page for me. Luckily there are a lot of businesses out there that create cover pages for a decent price. The number of pages in the book, the quality of paper to be printed on, etc. have to be known before the cover page is created. With their software, the cover designers feed in the page count, paper thickness and book size to create a template, draw some eye-catching pictures, and plug in the ISBN to create a bar code out of it.

Next came printing. The printing cost directly depends on the number of copies (or print run, as they call it): the higher the print run, the lower the per-copy price. I did not want to print a lot and let copies rot in my warehouse (read: apartment ;-D). I did not want to print too few and pay too much per copy either. Finally I printed enough copies to break even when most of them were sold. And boy, was I lucky...

Next came copyright registration. It was pretty easy: fill out a form and send it to the Library of Congress.

A book is of no use without a credible and established sales channel. For book publishers, the sales channel comes in three forms: third-party retailers, distributors, and direct sales. Large third-party retailers such as Amazon and Barnes & Noble tend to buy directly from the publisher. Smaller retailers and libraries buy through the distributors. Then there are direct sales from the publisher's web site. Distributors and third-party retailers take as much as 60% of the total list price as their commission.

I tried like crazy to get a distributor, but in vain. Luckily for me, Amazon.com is very friendly to small publishers. I set up an account with them to sell my books. They take 55% of the sales price, but are very prompt with payments. Plus, they turned out to be my biggest source of sales. If it were not for Amazon, I would be sitting on a pile of books.

No good book sells without marketing. I joined the Publishers Marketing Association (PMA), an entity that provides marketing for small publishers. They hooked me up with Baker & Taylor, the largest book wholesaler in the US. A lot of libraries buy their books from Baker & Taylor, and they accounted for my second-largest sales after Amazon.

All said and done, people buy a book only if they come to know about it. Here too, PMA provided options to bundle my flyers with other publishers' and mail them to libraries, colleges and so on for a small fee. I don't know definitively, but all my Baker & Taylor orders might have been from libraries.

Another source of marketing for me was the book promotion at JavaRanch (http://www.javaranch.com). Those folks organize book promotions every week, and I booked a slot in advance. During the week my book was up for promotion, I answered a bunch of questions posted by their readers. Four lucky readers got a free copy of the book. I think it is a great idea, and it worked out really well for me. A lucky Slashdot review of my book also worked its magic.

The toughest part was establishing credibility for the book. One needs reviews and forewords from well-known folks for that. It goes without saying that I did not get any. I sent out book copies and previews to a bunch of people, but only one person was kind enough to review it. Jessica Sant (again from JavaRanch) did an independent review and gave it 9 out of 10 horseshoes (4 out of 5 stars on Amazon). And I thought getting reviewers for the initial chapters was tough.

One final piece of the puzzle was direct sales. I set up my web site and sold the ebook and paperback through it. Selling the paperback was easy: I hooked up with PayPal and linked my site to it. When a payment is made, PayPal sends me an email. At the end of the day, after my day job, I reply to all of them and mail the books via USPS.

Selling the ebook was a challenge. The norm is that people buying an ebook get it immediately, and I did not have the infrastructure to set up my own credit card processing. I learnt that when a payment is made, PayPal not only sends me an email but also posts (HTTP) the buyer data to a URL I provide (a poor man's web service). I signed up for Java hosting. My hosting provider gave me a Tomcat, where I deployed a Struts application (yeah... eating my own dog food) that persists the PayPal-posted data to a MySQL database. The buyer could then immediately download the ebook. Problem solved.

Marketing, sales and customer support were difficult, and had it not been for my wife, I would have been left with a bunch of angry customers. These tasks were tougher than writing the book in the first place. If one counts time as money, I made a huge loss. But then, the experience was its own reward. There are some things money cannot buy. For everything else, well, you know.....

I often hear from readers: why not just a best-practices book? There are other books that explain the basics. From my perspective, it would be a non-seller. When readers buy a book, they expect complete coverage of the subject. And that's exactly what I did in my book.

Another thing I often hear is: What is the best practice for task XYZ? Why is it not covered in the book?
My answer is: it is simply impossible to cover all best practices in such a small book. Moreover, there are very few absolute best practices; the rest are best practices relative to a project. The book lays the foundation and prepares your mindset about best practices. Use your judgement in all other cases.

Enterprise applications are all about data manipulation. Data flows through the system from one tier to another.

Consider a typical J2EE application built using Struts. The user fills out the HTML form. The data makes its way into an ActionForm. Then it is copied into Value Objects using Value Object Assemblers (or DTO Assemblers). Then the data is copied into Entity EJBs (if used) and persisted. The whole process is reversed when the user retrieves data.

Traditionally, value object assemblers are hand-coded. More recently, Commons BeanUtils has been used to simplify this tedious and error-prone task of copying data from one object to another. Remember BeanUtils.copyProperties()? However, BeanUtils has some important limitations.

BeanUtils can automatically copy (and convert) properties between objects only when the property names are the same in the source and target objects. In large projects, the names frequently differ, as the web-tier objects do not match the persistence objects in either name or structure, simply because they were developed by different groups.

The BeanUtils package provides a PropertyUtils class that can copy any property between source and target, but you have to explicitly invoke PropertyUtils.getProperty() and PropertyUtils.setProperty(). This works very well in many scenarios (generic frameworks, for example), but without mapping metadata it amounts to hand-coding the copy - almost like coding target.setXXX(source.getXXX()).

BeanUtils (actually the ConvertUtils class) can only convert between String and primitives (such as int, boolean) or basic java.lang objects (such as Integer, String) - and nothing else. If you want to convert from java.sql.Timestamp to long (say), or from foo.bar.Blah to java.util.Date, you simply cannot. The only extensibility in BeanUtils is writing a custom converter from String to foo.bar.Blah and vice versa.
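The name-matching behavior described above can be sketched with the JDK's own java.beans introspection. This is a simplified stand-in for copyProperties, not the actual Commons BeanUtils code, and the WebForm/Entity classes are hypothetical:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class NaiveCopier {
    // Source and target deliberately share only one property name.
    public static class WebForm {
        private String name = "", zip = "";
        public String getName() { return name; }
        public void setName(String v) { name = v; }
        public String getZip() { return zip; }
        public void setZip(String v) { zip = v; }
    }
    public static class Entity {
        private String name = "", postalCode = "";
        public String getName() { return name; }
        public void setName(String v) { name = v; }
        public String getPostalCode() { return postalCode; }
        public void setPostalCode(String v) { postalCode = v; }
    }

    // Copies every readable source property onto a same-named writable target
    // property; mismatched names (zip vs postalCode) are silently skipped --
    // exactly the limitation discussed above.
    public static void copyProperties(Object target, Object source) throws Exception {
        PropertyDescriptor[] targets =
                Introspector.getBeanInfo(target.getClass()).getPropertyDescriptors();
        for (PropertyDescriptor s :
                Introspector.getBeanInfo(source.getClass()).getPropertyDescriptors()) {
            if (s.getReadMethod() == null || "class".equals(s.getName())) continue;
            for (PropertyDescriptor t : targets) {
                if (t.getName().equals(s.getName()) && t.getWriteMethod() != null) {
                    t.getWriteMethod().invoke(target, s.getReadMethod().invoke(source));
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        WebForm form = new WebForm();
        form.setName("Ada");
        form.setZip("78701");
        Entity entity = new Entity();
        copyProperties(entity, form);
        // name copies over; zip is silently dropped because postalCode doesn't match
        System.out.println(entity.getName() + "/" + entity.getPostalCode());
    }
}
```

Mapping metadata - the thing OTOM adds - is what tells a copier that zip and postalCode are the same logical field.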

If you are facing scenarios like the following, a new framework called OTOM (which stands for Object TO Object Mapping - pronounced "Autumn" :-D) can be a big help.

  • Source and target objects have different names and structures
  • You have to convert data from and to arbitrary data types
  • You want to map data conditionally

Like all of us, I have faced this problem. Being the lazy person that I am, I didn't do anything about it until recently, when such a need at one of my clients scratched the itch.

And so, last week, I rolled out OTOM 0.5. As the version number suggests, it is not complete yet, but it has most of the features needed for the above scenarios. So, how does OTOM work? It is pretty simple, actually.

OTOM has a Swing-based GUI that lets you graphically map properties from a source class to a target class. A variety of mappings is allowed - direct mapping, type-conversion mapping, collection mapping (to be added) and even conditional mapping. The GUI generates an XML mapping file that stores the conversion metadata. The GUI looks as follows:

https://otom.dev.java.net/images/screenshot1.jpg 

The mapping metadata is stored in XML format. OTOM uses the concept of repositories to store the metadata. A repository consists of multiple class mappings; a ClassMapping stores the mapping information from a source class to a target class. Each ClassMapping has a collection of PropertyMappings corresponding to the mapping of JavaBean properties in the source class to those in the target class. The mapping can be conditional too. All this can be done through the GUI itself. Writing the GUI as an Eclipse plugin is still on my radar, but given that my immediate priority was an IDE-independent GUI, I chose Swing.

The source code for the mapping can then be generated using a provided Ant task. The actual generation is done via Velocity templates. One of the nice features is that the generated code has absolutely no dependency on the OTOM framework. The generated code also does not use reflection, as the mapping metadata makes it unnecessary.

So, the next time you have to copy data from one object to another, consider whether you can use OTOM. Typical usages include mapping data from a Struts ActionForm to a ValueObject (or POJO), from JAXB-generated objects to POJOs, and so on. If you have suggestions for improvements, please post them in the project forums.

Spring break might be just a week away, but it still feels like OTOM to me ;-D. But then hey, I am from Austin, TX and there is no such weather as winter down here.

Logging with Log4J is simple, seems trivial, and doesn't appear to warrant a blog. However, logging in enterprise projects raises interesting requirements and possibilities.

The first question is where to put your logging library. With JDK Logging, you pretty much have no choice: it is always located in the classpath and loaded by the bootstrap classloader, the mother of all classloaders.

Log4J brings two choices to the table. You can put it in the application server's classpath, or package it as a dependency library along with the EAR.

If yours is the only application hosted on the server, either choice means the same thing. However, if there are multiple applications hosted on the same VM, care must be taken before putting the Log4J jar in the system classpath. In Log4J, all Loggers are singletons. This means that if you have Loggers with the same name in multiple EARs, the Logger defined later overwrites the earlier one. In other words, you might find that the logs from your application end up in another application's logs.

This can be a problem even when no two loggers share a name. The catch-all root logger that exists in every log4j configuration can pose a threat. If none of the defined logger categories is able to log the message, the burden falls on the root logger - and the root logger might be configured by the "other" application hosted on the shared server. In other words, never rely on the root logger; always define a logical root logger. If you are using the fully qualified class name as your logger name, then define the top-level package name uniquely identifying your application as the logical root logger. For instance, "com.mycompany.myapplication" can be the logical root logger.
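A minimal log4j.properties sketch of the logical-root-logger idea might look like this (package, appender and file names are hypothetical); setting additivity to false keeps your messages from bubbling up to whatever root logger the "other" application configured:

```properties
# Keep the real root logger quiet; never rely on it.
log4j.rootLogger=ERROR, console

# Logical root logger for this application: every logger named after a
# class under com.mycompany.myapplication inherits this configuration.
log4j.logger.com.mycompany.myapplication=INFO, myAppFile
log4j.additivity.com.mycompany.myapplication=false

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p [%x] %c - %m%n

# %x in the pattern prints the NDC, discussed below.
log4j.appender.myAppFile=org.apache.log4j.RollingFileAppender
log4j.appender.myAppFile.File=logs/myapplication.log
log4j.appender.myAppFile.layout=org.apache.log4j.PatternLayout
log4j.appender.myAppFile.layout.ConversionPattern=%d %-5p [%x] %c - %m%n
```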

You might say, "Hey, I have Log4J packaged in each EAR. So the Loggers are singletons at the EAR level, and I don't care about the names I assign to them." Before you go that route, consider how you aggregate messages on a per-user basis. With Log4J, you are most probably using the Nested Diagnostic Context (NDC), aren't you? Chances are that you are using a Servlet Filter to set the session id as the NDC contextual identifier. If your application is standalone, then bundling Log4J in your EAR is the right option.

However, if your application collaborates with other applications (EARs), and tracking user activity across applications with NDC is important to you, then you are out of luck with Log4J bundled in the EAR. NDC manages a static stack of contextual information per thread. When your application makes calls into another application's EJBs, you are cutting across classloaders, and the NDC from the caller is not available in the callee. The only way it can be made available across applications is when Log4J is loaded by the parent classloader.

Well, I might say, "Put your Log4J library in the system classpath and your problems will be solved." But the reality is that you often have to live alongside other applications that have bundled Log4J in their EARs. Worse, you might have to collaborate with them. Most likely you will not have the liberty to change their logging logic or configuration.

One solution that comes to my mind is using AOP in conjunction with ThreadLocal. For example, if yours is the calling application and the callee relies on NDC, you can store the identifier in a ThreadLocal variable. Using advice, you can then associate the ThreadLocal value with the callee's NDC, and thus effectively carry the unique identifier for the user activity across the thread of execution. The class using ThreadLocal should be loaded from the system classpath, though.
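The ThreadLocal half of this idea can be sketched in plain Java. The class name is hypothetical, and the AOP advice wiring is omitted; the point is a small class, loaded from the system classpath, that both EARs can see:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical holder for the per-thread correlation identifier.
// Because it is loaded from the system classpath, the caller and the
// callee EARs (each with its own classloader) share the same class
// and therefore the same per-thread value.
class CorrelationContext {
    private static final ThreadLocal<Deque<String>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    static void push(String id) { STACK.get().push(id); }

    static String peek() { return STACK.get().peek(); }

    static void pop() {
        STACK.get().pop();
        if (STACK.get().isEmpty()) {
            STACK.remove(); // avoid leaking the ThreadLocal on pooled threads
        }
    }
}
```

An around advice on the callee's entry points would then push `CorrelationContext.peek()` onto the callee's own NDC on the way in, and pop it on the way out.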

No matter how you log in your system, you might have run into situations needing to filter logs across multiple files, possibly across multiple applications, for a given user at various times based on NDC. Utilities like Chainsaw or LogFactor5 do this for a single file. There is a need for a broad-based tool that does time-based NDC filtering across multiple files. Perhaps there is an open source tool out there satisfying my requirements.

Another question, perhaps outside the realm of Log4J itself, is "How do you correlate logging that occurs across VMs?" - a question that needs to be addressed in distributed environments. This may be impossible without encapsulating the correlation identifier in the invocation itself. The collaborating systems (caller and callee) should be able to interpret the correlation identifier. But then, that also results in tight coupling. I don't know if there is really a good solution to this problem.

Imagine you entered a retail outlet whose sign says “OPEN”. What would your reaction be if something suddenly threw you out of the shop - no reasons given - and you then found the outlet with a sign saying “CLOSED”? You would be frustrated, wouldn't you? You'd expect the outlet to let you finish shopping, since you entered before the "CLOSED" sign went up, right?

Guess what - a lot of J2EE systems in production might not be doing that at all. When new deployments go out, active users on the system are unceremoniously ousted.

What we want is:

  • A few minutes before the site is brought down, stop new users from entering the system.
  • However, let the existing users continue to use the system, with a message that the system will go down in 15 minutes.
  • When the 15 minutes pass, bring down the system.



As with any other problem, there are many solutions at different levels.

At the hardware level, you can configure your router to stop accepting new requests for the application. You can also configure this at the web server level. In this blog, I walk through a J2EE-based solution to this problem.

I'd like to call this the "Retail Outlet" pattern, since it mimics the real-world shopping experience: when the “OPEN” sign becomes “CLOSED”, it lets existing shoppers continue their unfinished work for another 15 minutes without letting any new shoppers in.

The implementation requires a JMX MBean that holds a flag indicating whether the shop should be closed. Why JMX? Well, a lot of the people who start and stop production application servers are administrators, not developers. MBean properties can be edited from any SNMP console, which all of them are familiar with, and some application servers allow editing of MBeans from their own console. Most of all, changes to JMX attributes are reflected immediately. (Another area where I like to use JMX is to control the Log4J log level. More on that later.)
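A minimal sketch of such an MBean, using the standard JMX APIs shipped with the JDK. The names (ShopStatus, the ObjectName) are hypothetical; note that for a real deployment the MBean interface must be declared public in its own file - it is package-private here only so the sketch fits in one listing:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean: the "-MBean" suffix on the interface name is the
// naming convention the JMX spec requires for standard MBeans.
interface ShopStatusMBean {
    boolean isOpen();
    void setOpen(boolean open);
}

class ShopStatus implements ShopStatusMBean {
    // volatile: the flag is flipped from a management-console thread
    // and read by request-processing threads.
    private volatile boolean open = true;

    public boolean isOpen() { return open; }
    public void setOpen(boolean open) { this.open = open; }
}

// Registration, e.g. from a startup class or ServletContextListener.
class ShopStatusRegistrar {
    static ShopStatus register() throws Exception {
        ShopStatus status = new ShopStatus();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(status,
                new ObjectName("com.mycompany:type=ShopStatus"));
        return status;
    }
}
```

Once registered, flipping the Open attribute from a management console takes effect immediately on the next request.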

The other component is a Servlet Filter that interacts with another object (a session lifecycle listener) that counts the number of active users and keeps track of it. When the MBean's switch is flipped to indicate the “CLOSED” status, the Filter blocks all new requests (those without a session), but lets those with an active session continue for another (say) 15 minutes.
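The Filter's admission decision can be kept separate from the servlet API in a small class like the following sketch (the class name and method signatures are my own invention; the Filter would call admit() with request.getSession(false) != null and the current time):

```java
// Decides whether a request may proceed, given the OPEN/CLOSED switch
// and a grace period for users who already hold a session.
class AdmissionGate {
    private final long gracePeriodMillis;
    private volatile boolean open = true;
    private volatile long closedAtMillis;

    AdmissionGate(long gracePeriodMillis) {
        this.gracePeriodMillis = gracePeriodMillis;
    }

    // Called when the MBean switch is flipped to CLOSED.
    void close(long nowMillis) {
        open = false;
        closedAtMillis = nowMillis;
    }

    // New users (no session) are turned away as soon as the shop
    // closes; existing users get the remainder of the grace period.
    boolean admit(boolean hasSession, long nowMillis) {
        if (open) {
            return true;
        }
        return hasSession && nowMillis - closedAtMillis < gracePeriodMillis;
    }
}
```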

A word of caution, though: I have never used this approach before, but I would like to try it out. Any thoughts on your experiences?

Entity EJBs send shivers down my spine, I have to admit. Recently, I had to evaluate Entity EJBs (2.0) for a client of mine. I have used proprietary Entity EJB extensions to implement persistence in enterprise applications, but the standard and portable Entity EJB of today is still disappointing and in no way a serious candidate.

The immaturity of EJB-QL alone was enough to convince me that Entity EJBs are not the way to go. I would get nowhere with a syntax that does not support the SQL semantics we have all come to live with - ORDER BY, GROUP BY, HAVING, etc. And then there is, of course, the lack of support for Date in EJB-QL. There was more, of course, but I had to do an unbiased evaluation, keeping aside my personal preferences and prejudices. It was not a question of if, but when, I would stumble into a show stopper. And then came inheritance. That was an interesting beast, worth a blog of its own.

We all know the EJB specification does not support component inheritance. Per the spec, the Bean provider can take advantage of the Java language's support for inheritance in the remote and local interfaces and the implementation class. What does this really boil down to for application developers like me?

It means one thing. Suppose you have a finder method on the Home interface of a parent EJB. You would expect a finder on the parent Home interface to be capable of returning the child EJBs (remote or local) too. The lack of component inheritance means this is not possible. A couple of workarounds have emerged to address this scenario.

Adapter Strategy:

This strategy has many variations; I will go over one. An Adapter for the Base EJB is introduced, which implements the finders on the Base Home interface. All invocations of the Base EJB's finders go through the Adapter. The Adapter then issues queries on the Homes of each subclass EJB and aggregates the results. That this is a performance hog goes without saying: what should have been accomplished with one query results in n + 1 queries, where n is the number of subclasses.
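Stripped of the EJB plumbing, the adapter's aggregation logic amounts to the following sketch (the finder interface and names are hypothetical stand-ins for the subclass Home interfaces):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the finder exposed by each subclass Home.
interface AccountFinder {
    List<String> findByOwner(String owner);
}

// Adapter for the base EJB's finder: it fans out one query per subclass
// Home and merges the results - hence n + 1 queries where a single
// polymorphic query would have sufficed.
class BaseAccountFinderAdapter implements AccountFinder {
    private final List<AccountFinder> subclassFinders;

    BaseAccountFinderAdapter(List<AccountFinder> subclassFinders) {
        this.subclassFinders = subclassFinders;
    }

    public List<String> findByOwner(String owner) {
        List<String> results = new ArrayList<>();
        for (AccountFinder finder : subclassFinders) {
            results.addAll(finder.findByOwner(owner)); // one query each
        }
        return results;
    }
}
```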

Containment Strategy:

I think this name was first coined in a ServerSide article (http://www.theserverside.com/articles/article.tss?l=EJBInheritance) by Daniel O’Connor. Some vendors also advise using this strategy with their application servers. In this strategy, the Base EJB has CMRs with the local interfaces of its subclass EJBs. The finder on the Base EJB performs a database “batch read” to get the subclass EJBs. The business methods on the Base EJB then aggregate the results from the children (CMRs) to return the EJB(s).

Then there is the third option: “flatten the EJBs, get rid of your inheritance”. I don’t like any of these. With the first option, performance is compromised, while the richness of the domain model is compromised in the others. I believe any technology (not just EJB) should be non-invasive. This example of EJBs (read: technology) driving the domain modeling and making it unnecessarily complex is just one instance of invasiveness. A domain model should be logical and simple (and no simpler).

Bottom line:

Technologies come and go. If your business needs to survive, don’t tie its heart and core - your domain model and business logic - to a technology. Technology should complement, not compete with or alter, your design.

Imagine a Struts 1.0 world where an ActionForm was absolutely needed even for prototyping an HTML form in JSP using Struts custom tags. Things were good until separation of concerns came into the picture. In real-life projects, different people play different roles: application developers have the responsibility of developing Java code, while page authors exclusively prototype the page and its navigation using JSP markup tags. Since the Java code being developed is constantly changing, the developer does local builds on his machine. Similarly, the page author would certainly like to add or remove fields from the prototype during page design. Since the HTML forms map to ActionForms, the above scenario implies one of two things.

  1. The page author constantly pesters the Java application developer to modify the ActionForm.
  2. The page author develops the ActionForm all by himself.

While the former hampers developer productivity, the latter leads to an overlap of responsibilities and headaches. Neither option is ideal. Struts 1.1 solved this problem by introducing DynaActionForm. Although originally designed for developer
