
An interesting article, Social Enterprise Architecture, describes the idea of using social technologies to improve the engagement and interactions between business and IT, and the Enterprise Architecture organization to provide more structure and semantics to the interactions and collaboration tools in the enterprise. There is a tool called Semantic Center, written in Java and ADF, that makes the concept a reality. It is built with WebCenter and SharePoint integration capabilities. The solution has inference, so you can get answers to questions like: what projects are affecting which pieces of the infrastructure in the next 3 months? It is a very neat solution, worth looking at.

I have been working a lot with the Swing APIs for a while now; I have created my own custom components, built filthy rich applications, and written a few blogs on the different things I found interesting.

As soon as the first Android phone was available, I went out to get my hands on it (the G1).

Since then, I have always wanted to at least start looking into the APIs and write my first Android app. For different reasons, I wasn't able to until a couple of weeks back.

And I finally did it!

I created a really cool Mortgage Calculator for the Android Market. I called it: Mortgage Shark Calculator. It allows you to compare 56 loans in one screen.

Android is easy to learn, especially for anyone who has been playing with the Java 2D APIs.

http://www.zilonis.org/screen1.jpg

I created a custom View, in which I am drawing the different payment amounts for all the mortgages. I wrote a simple event handler that listens for touch events on the screen, and when the touch event lasts more than 400 milliseconds I draw a popup on the screen.

http://www.zilonis.org/screen2.jpg

It was extremely easy to make all this custom behavior with the APIs.

I am using the alpha bits of the colors to make the background of the popup semi-transparent, giving it a more professional look.
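To give an idea of how this kind of custom View can be wired up, here is a minimal sketch (the class, fields, and sizes are hypothetical, not the actual app code): it measures how long the finger stays down to detect the 400 ms long press, and paints the popup background with a semi-transparent ARGB color.

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.graphics.RectF;
    import android.view.MotionEvent;
    import android.view.View;

    // Hypothetical sketch of a comparison view with a long-press popup.
    public class ComparisonView extends View {
        private static final long LONG_PRESS_MS = 400;
        private long downTime;
        private boolean showPopup;
        private float popupX, popupY;
        private final Paint popupPaint = new Paint();

        public ComparisonView(Context context) {
            super(context);
            // An alpha of 180 out of 255 makes the popup background semi-transparent.
            popupPaint.setColor(Color.argb(180, 30, 30, 30));
            popupPaint.setAntiAlias(true);
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN:
                    downTime = event.getEventTime();
                    return true;
                case MotionEvent.ACTION_UP:
                    // Treat a touch held longer than 400 ms as a request for the popup.
                    if (event.getEventTime() - downTime > LONG_PRESS_MS) {
                        showPopup = true;
                        popupX = event.getX();
                        popupY = event.getY();
                    } else {
                        showPopup = false;
                    }
                    invalidate(); // trigger a repaint
                    return true;
            }
            return super.onTouchEvent(event);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            // ... draw the payment amounts for all the loans here ...
            if (showPopup) {
                canvas.drawRoundRect(new RectF(popupX, popupY,
                        popupX + 220, popupY + 120), 12, 12, popupPaint);
                // ... draw the popup text on top ...
            }
        }
    }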

You can also change the parameters of the comparison matrix by pressing the menu button on your phone, and then editing all the attributes of the loan.

http://www.zilonis.org/screen4.jpg

Give it a try and let me know if you would be interested in knowing the details of how I coded any specific piece of it.

Imagine you get to an organization that has several applications accounting for more than 800,000 lines of code. There are defects everywhere, release after release, lots of developers cranking out code, and after every release more defects... how do you stop the spiral?

This might sound like a spiral of death, or a suicidal path perhaps ;)

Before I continue, I have to say that the number one requirement to succeed in a situation like this one is a strong leadership team. By strong leadership team I mean a team of managers who understand technology. "Professional managers" not aware of technology would not be enough.

That being said, let's go back to how to stop the spiral:

1) Make sure you have some basic tools in place (i.e. version management system, automated build script, etc.)

I had to include this one, as I have interviewed candidates for senior IT positions who did not know what a version control system is.

2) Get a continuous integration system like Hudson, or any commercial alternative.
Hudson is a great project. If you haven't seen it, go download it, and play with it. It is a great tool!

3) Include as part of your build some "code complexity metrics" utility.
You can use Sonar, which is available as a plugin for Hudson. This is another great tool. I use the complexity metrics as indicators, as I explain below. You can monitor how your code is evolving and answer questions like: is the code getting worse? These indicators are not going to tell you whether the code is improving, but if it is going south you are going to know, and you will be able to take action to correct it.

4) Find a utility to monitor the activity in your version control system.
There are a couple of commercial solutions (I don't want to advertise any in particular), or you can use statcvs (you can find it on SourceForge). The only problem with statcvs is that it does not work on branches, and it is only useful if you were using CVS in the first place. If you don't want to spend the few bucks the commercial tools cost, you can write your own scripts.

5) Code reviews
Use your repository monitor to see what files are getting modified every week. Join that data set with the "code complexity metrics" extracted in step 3, and schedule deep code reviews of the most complex objects. If you have the bandwidth, you should review every single change that goes into the repository. If not, set a goal, and use the complexity metrics as a guideline for the areas to focus on the most.

6) Prepare for the code reviews well.
My intention in this blog is not to explain how to conduct a code review. There is plenty of material on the web; just google for it. Every organization is different, so I suggest you develop your own code review process with your team. A checklist is very handy. There are some tools as well that will help you document the results and measure the effectiveness.

7) Make sure that every bit of changed code has a well-written automated unit test created, and that it works.
This is the best way to start. If you try to create unit tests for every single method of your 800,000 lines of code, you will never finish, and because of that you will probably never start either.
There are several challenges in writing good automated unit tests. I will not cover those in this blog; perhaps I will write about them in a later one. However, I can tell you that it is not an easy task. One of the most difficult things to resolve is that the code is rarely stateless. There is state stored somewhere, and the test cases are dependent on that state. There are different ways of addressing this problem. I don't like the "mock" strategy. The ideal situation is to have a "static" data set against which you execute your automated test cases. There are some commercial solutions that will help you with that.
I like DbUnit. It has the ability to recreate a database from a previously recorded one.
Another strategy can be to have the test cases set up the data they need to execute properly. But again, this is a topic for an entire blog, so I will move on for now. Use Hudson to automate the execution of your test cases and the reporting.
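As a rough illustration of the DbUnit approach, here is a minimal sketch of a test that resets the database to a known state before each run; it assumes a reasonably recent DbUnit version, and the JDBC URL, credentials, and file names are placeholders.

    import java.io.FileInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;
    import org.junit.Before;

    public class CustomerDaoTest {

        @Before
        public void resetDatabase() throws Exception {
            // Placeholder JDBC URL and credentials.
            Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
            IDatabaseConnection connection = new DatabaseConnection(jdbc);

            // The "recorded" data set: a flat XML snapshot of the tables the tests depend on.
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(new FileInputStream("src/test/resources/baseline-dataset.xml"));

            // Wipe the tables and reload the known state before every test.
            DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
            connection.close();
        }

        // ... test methods that rely on the baseline data go here ...
    }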

8) Promote learning in the organization
Guide the developers with good practices, and document them. The best practices can change from organization to organization. Yes, I am not crazy. For some organizations there are things that might be important that for others are not even relevant. Alright, here is an example: flexibility. For an organization that sells software that needs to be configurable for any customer, flexibility might be the most important requirement from an architectural perspective. In that case, if the software is not flexible enough it might mean that it will not be successful.
However, in an organization that builds software for a specific niche, you might be wasting your time making it super flexible; more importantly, you are making it more complex and harder to maintain than necessary.
In summary, make sure your team understands what is important for your business. And that should be your guiding principle number one always: adding value to your business, the reason your organization exists.
A pretty cool idea that I am starting to adopt, and I am waiting to see widely adopted, is to leverage the multimedia we have today for this purpose.
Instead of *only* writing tons of documents on guidelines and standards that very few will ever read, you can record videos of best practices and examples of using the IDE and configuring the projects. Videos are a lot easier to follow, and are not that difficult to create. Every new member of the team has a video library to watch and follow to get set up and running. And any developer can always refer to the videos to see how to do certain things, especially with the latest 4GL-like IDE features for JSF/JavaFX/Swing development.

9) If you can, standardize the IDE.
A lot of people will want to shut me up when I say this one. But a standard IDE has a lot of value for an organization. Yes, every developer has their preferences, and is more efficient with the tools they are familiar with. But the organization as a whole will benefit tremendously from standardizing the IDE. When a developer has a problem, anyone will be able to help resolve it. The code will be more consistent, especially when the advanced features of the IDE are leveraged.

10) Classify properly the defects.
Use a good "task" management system that allows you to keep track of the issues and their assignment. Do your research: there are some tools that are really powerful, and there are some that are going to become a bottleneck. One thing is for sure, the whiteboard in the manager's office is not good enough.

11) Understand dependencies and interfaces.
I did not find a tool that does this, so I went out and created one. I parsed all 800,000 lines of code and extracted the dependencies. Then I used Graphviz to create nice dependency diagrams. They helped a lot in understanding the underlying complexities and interactions. The developers were able to see graphically what objects would be dependent on the changes they were performing. Document the interfaces, and make sure that they are well defined. Simplicity overall is the most important principle for me. Again, this will change from organization to organization. If a service provided by a component can be stateless, then make it stateless.
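The post does not include the tool itself, but the general approach can be sketched in a few lines of modern Java (all names and the package filter below are made up): walk the source tree, collect import statements as dependencies, and emit a Graphviz DOT file that can be rendered with the dot command.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Hypothetical sketch: extract class-level dependencies from import statements
    // and write a Graphviz DOT file.
    public class DependencyGraph {

        private static final Pattern IMPORT = Pattern.compile("^import\\s+([\\w.]+);");

        public static void main(String[] args) throws IOException {
            Path srcRoot = Paths.get(args.length > 0 ? args[0] : "src");
            try (Stream<Path> paths = Files.walk(srcRoot);
                 PrintWriter out = new PrintWriter("dependencies.dot")) {

                List<Path> sources = paths
                        .filter(p -> p.toString().endsWith(".java"))
                        .collect(Collectors.toList());

                out.println("digraph dependencies {");
                for (Path source : sources) {
                    String from = source.getFileName().toString().replace(".java", "");
                    for (String line : Files.readAllLines(source)) {
                        Matcher m = IMPORT.matcher(line.trim());
                        // Only keep imports from our own code base, e.g. com.mycompany.*
                        if (m.find() && m.group(1).startsWith("com.mycompany")) {
                            String to = m.group(1).substring(m.group(1).lastIndexOf('.') + 1);
                            out.printf("  \"%s\" -> \"%s\";%n", from, to);
                        }
                    }
                }
                out.println("}");
            }
            // Render with: dot -Tpng dependencies.dot -o dependencies.png
        }
    }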

12) Don't rewrite that app, please.
If the app is working, don't rewrite it, unless you are very certain that this time it is going to be significantly better. If you have the same constraints (i.e. no time, same people, etc.) it is going to be difficult to make it a lot better this second time. I have seen developers rewrite the same app several times, and they always end up writing it with similar problems. Yes, I know that the second time you write an application you will write it better because you are aware of all the aspects of the problem. But still, I have seen people rewriting applications again and again with the same or similar problems. Remember, good enough exists; there is no perfection.

13) Be careful about creating your own framework.
You will find yourself surrounded by developers who say they want to build a framework that will be configured with an XML file and will be super flexible. I remember when I wrote my first Servlet 2 architecture application. I built my own Hashtable at the time (there was no HashMap back then) with a keyword -> EventHandler map. EventHandler was an interface with one method: processRequest.
A few months later, Struts was out there, and I was lucky enough not to get caught in the pride trap, and moved to Struts.
Nowadays, most of the infrastructure components you will need are already there. If you think you have something unique, go for it. But please, do your due diligence and make sure you are not creating the next big framework or "wheel".

14) Use a tool to manage deployments.
When you have multiple projects in parallel, how do you know what versions of what objects to move between the environments? I haven't found a tool that does that really well yet. Perhaps I will publish an open source tool for this soon...
Vote for Zilonis, the only open source multithreaded rules engine written in Java.

Finally, after about 7 months, I am getting back to writing a new blog entry. I changed jobs, and it hasn

The JavaOne slides are available now for download. My session on how to extend the swing API creating your own components can be reached at:
Extending Swing: Creating your own components

Thank you all for the positive feedback you have given me; it was a great experience!
It has been a while since my last post. As you can imagine, I have been really busy. Last week I was presenting at JavaOne, and it was amazing. Thank you all for the positive feedback on my session. I will most likely write something about the demos I showed, and maybe I will post them here with Java Web Start. That will have to wait for now...

Now back to the blog that I have been thinking to write:

Form hidden fields, query string arguments, and cookie values are frequently used as parameters to keep session state on the client of web-based applications. In this blog, I explore an option for securing those values. The key to this approach is that even if the developers are unaware of the problem, they are safe, thanks to the underlying framework.

The problem is that it is extremely easy for an attacker to change the values of any of these parameters, making the web application vulnerable if the developers are not careful.

My first recommendation to address this problem is to avoid using those parameters in the first place. If that is not possible, then strong validation is necessary.

However, there are some cases where that validation is not easy to do. How do you achieve security?

In this blog, I am going to describe a solution for the parameters in the query string. It would be trivial to extrapolate it and apply it to the other cases.

The idea is to use a keyed-Hash Message Authentication Code (HMAC).

HMAC is just a hash value computed with a private key. In this case we are going to use it as a checksum.

The idea is to take the URL with its parameters, pass it through the HMAC algorithm, and append this "checksum" to the query string. This way, when we receive a URL we can calculate the checksum again. If it matches, we know that the URL was generated by our application. If it doesn't, ALARM! Someone modified the query string.

A problem that we would still have is that if someone were able to get to the computer of an authorized user, after looking at the browser history the attacker could replay the requests and potentially violate the system's security.

A way to stop that from happening is to include a timeout as part of the query string before computing the HMAC. The validation phase would then include checking that the timeout has not expired.

There is no constraint that does not have side effects. In this case, a bookmark would only be good for the lifetime of the timeout embedded in the query string. I personally don't think of this as a problem; on the contrary, I see it as a benefit. The links that we would protect using this technique are the type of links we would not want the user to bookmark in the first place.

Now that we have gone through the theory, let's see some code showing what it would take to implement something like this. In this case, I will extend the Struts framework. Obviously, the same idea can be applied to almost any other framework as well.

The first step is to identify which Actions are going to have this security. For that we can extend the ActionMapping class to include a requiresHMAC property:

 
public class SecureActionMapping extends RequestActionMapping {

   protected boolean requiresHMAC;

   public boolean getRequiresHMAC() {
      return requiresHMAC;
   }

   public void setRequiresHMAC(boolean requiresHMAC) {
      this.requiresHMAC = requiresHMAC;
   }

}

Then in the struts-config.xml file we need to configure struts to use this as the ActionMapping class: 
<action-mappings type="org.zilonis.hmaclinks.SecureActionMapping">
     .... all the action mappings
   <action path="/editUser" type="EditUserAction">
      <set-property property="requiresHMAC" value="true"/>
   </action>
</action-mappings>



In the Struts html tag library we have the custom tag html:link. To generate the links all we need to do is extend it to include the HMAC and the timeout as part of the link.

To generate the HMAC:

 
   // Mac, SecretKey, and KeyGenerator come from javax.crypto
   private final static String JTIME = "&time=";
   private final static String HMAC = "&jval=";

   public static String appendHMACSecurity(String url) {
      url += JTIME + System.currentTimeMillis();
      url += HMAC + HMACGenerator.genHMac(url);
      return url;
   }

   public static class HMACGenerator {
      private final static SecretKey key = genKey();

      public static String genHMac(String url) {
         Mac mac = Mac.getInstance(key.getAlgorithm());
         mac.init(key);
         byte[] utf8 = url.getBytes("UTF8");
         byte[] digest = mac.doFinal(utf8);
         // any Base64 encoder will do here
         return URLEncoder.encode(new BASE64Encoder().encode(digest));
      }

      private static SecretKey genKey() {
         KeyGenerator keyGen = KeyGenerator.getInstance("HmacMD5");
         return keyGen.generateKey();
      }
   }



I have purposely omitted exception handling for readability.

An interesting trick here is that I am generating the key in a static final field. That means that the private key is generated the first time the application starts. This way, there is no need to have any key management procedures with the operations team. If you are in a situation where your application does not get restarted for a long time (more than 15 days), you might want to consider putting some code in there to generate a new key every now and then...
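A minimal sketch of what that periodic regeneration could look like, assuming a daily rotation; this is not from the original post, and note that any link signed just before a rotation will simply fail verification and force a re-navigation.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    // Hypothetical key holder that rotates the HMAC key once a day.
    public class RotatingKeyHolder {
        // volatile so signing/verification threads always see the latest key
        private static volatile SecretKey key = newKey();

        static {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Regenerate the key once a day.
            scheduler.scheduleAtFixedRate(() -> { key = newKey(); }, 1, 1, TimeUnit.DAYS);
        }

        public static SecretKey currentKey() {
            return key;
        }

        private static SecretKey newKey() {
            try {
                return KeyGenerator.getInstance("HmacMD5").generateKey();
            } catch (java.security.NoSuchAlgorithmException e) {
                throw new IllegalStateException(e);
            }
        }
    }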

To verify the HMAC we just need to extend the FrontProcessor servlet to check whether the action requires an HMAC, and then use a similar piece of code to generate the HMAC and compare it with the one we are receiving. Notice that we are not decrypting the HMAC; the HMAC never gets decrypted. All we do is generate our "checksum" again and verify that it is the same one we are receiving.

Finally we just need to check the timeout.
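Here is a minimal sketch of what that verification could look like, building on the snippet above; the method name and the timeout value are hypothetical, not from the original post.

    // Hypothetical verification helper: returns true only if the "jval" HMAC matches
    // and the "time" value embedded in the URL has not expired.
    private static final long TIMEOUT_MS = 10 * 60 * 1000; // example: 10 minutes

    public static boolean verifyHMACSecurity(String url) {
       int hmacIndex = url.lastIndexOf(HMAC);           // HMAC == "&jval="
       if (hmacIndex < 0) {
          return false;                                 // no checksum present
       }
       String signedPart = url.substring(0, hmacIndex); // everything that was signed
       String receivedMac = url.substring(hmacIndex + HMAC.length());

       // Recompute the checksum over the same string; nothing is ever "decrypted".
       String expectedMac = HMACGenerator.genHMac(signedPart);
       if (!expectedMac.equals(receivedMac)) {
          return false;                                 // ALARM: the URL was tampered with
       }

       int timeIndex = signedPart.lastIndexOf(JTIME);   // JTIME == "&time="
       long issuedAt = Long.parseLong(signedPart.substring(timeIndex + JTIME.length()));
       return System.currentTimeMillis() - issuedAt <= TIMEOUT_MS;
    }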

This technique might be overkill for some environments. But there are some situations where other options are not really feasible. And it is easy enough to turn on/off for any use case your application might have.
Being a server side developer, I never had to worry about "help". I mean, about how to provide help to the users of my applications. They were simply just another JSP with ... well, that's not what I want to blog about this time.

I have been working with client side Java for a while now, learning different interesting things that you can do with Swing. The time came to put together a "help".

As I always do before writing a line of code, I started to google around to find out how others have done it. Other people had to include help in their rich Java client applications at some point, right?

I found a project called JavaHelp. I started reading about it. All I could find was extremely old. All the documentation seemed to be outdated. I wondered: is this project still alive? Is this the way to go?

I continued doing more research, and I decided I had to give it a try. I downloaded an open source authoring tool, which did not end up being really good. That's why I will not bother mentioning it here.

Continuing to read any documentation I was able to find, I encountered some Java code showing how to call the JavaHelp system.

Finally, I got it to work.

Then I discovered that you can generate indexes to include search capabilities. And you know what? It works great!

As I mentioned before the documentation was not really good.

To get started, just create your content in HTML. Use a page per section of your help.

Then you need to create some configuration files to define how the table of contents is going to be structured, and which views you want to include.

Here is what I did:

1) Created the helpset definition file: master.hs

<?xml version="1.0" encoding="ISO-8859-1"?>
<helpset>
   <title>reis-help</title>
   <maps>
      <mapref location="map.jhm"/>
   </maps>
   <view>
      <name>Table of Contents</name>
      <label>Table of Contents</label>
      <type>javax.help.TOCView</type>
      <data>toc.xml</data>
   </view>
   <view>
      <name>Search</name>
      <label>Search</label>
      <type>javax.help.SearchView</type>
      <data engine="com.sun.java.help.search.DefaultSearchEngine">chapters/searchDb</data>
   </view>
</helpset>

2) Create the table of contents in the toc.xml file

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE toc PUBLIC "-//Sun Microsystems Inc.//DTD JavaHelp TOC Version 2.0//EN" "http://java.sun.com/products/javahelp/toc_2_0.dtd">
<toc version="2.0">
   <tocitem target="Module_1_help_id" text="Module 1 Label"/>
   <tocitem target="Module_2_help_id" text="Module 2 Label"/>
</toc>


3) Create the mapping file. As you probably saw in step 2, each entry in the table of contents references its document with an id. Those ids are defined in the mapping file, which references the real file. Here is a sample one:

<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<!DOCTYPE map PUBLIC "-//Sun Microsystems Inc.//DTD JavaHelp Map Version 1.0//EN" "http://java.sun.com/products/javahelp/map_1_0.dtd">
<map version="1.0">
   <mapID target="Module_1_help_id" url="chapters/3. module1Help1.html"/>
   <mapID target="Module_2_help_id" url="chapters/4. module1Help2.html"/>
</map>

4) Create the index (only if you want to use the search, which I recommend).

run:

jhindexer chapters


Where chapters is the directory where you have your content. It will go recursively through the directory structure and index all the files.

Note that jhindexer has to be in the PATH environment variable. It is installed in the bin directory of the JavaHelp distribution.

That's it!

Now all you have to do is call the help component from your Java code.

All the documentation I found uses the broker to create the JavaHelp component. It is like a level of indirection between your code and the code that creates and opens the JavaHelp dialog. The problem is that you don't have control over the dialog it generates. You cannot set the icons, and you cannot get to the GlassPane if you want to do something fancy there.

I did a little bit of scanning of the source code (open source advantage!) and found that there is a class called JHelp. It is a Swing component that does everything you will need.

Here is my Java code that opens the JavaHelp window:

 
        String pathToHS = "/master.hs";
        HelpSet hs;
        try {
            URL hsURL = MainFrame.class.getResource(pathToHS);
            hs = new HelpSet(null, hsURL);
        } catch (Exception e) {
            e.printStackTrace();
            return;
        }
        JHelp jHelp = new JHelp(hs);
        JDialog dialog = new JDialog(this, "Help", true);
        dialog.add("Center", jHelp);
        dialog.setSize(new Dimension(950, 700));
        dialog.setLocationRelativeTo(null);
        dialog.setVisible(true);
That should do it for you too.

There are other things I read about that I haven't tried. You can merge different help files dynamically. This way the user can install some modules now, and later, when other modules are downloaded/installed, you can dynamically merge in the help for those new modules.
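I haven't tried it myself, but based on the JavaHelp API the dynamic merge appears to boil down to something like this sketch; the path and the masterHelpSet variable are illustrative only.

    // Sketch: merge the help of a newly installed module into the master help set.
    // Assumes masterHelpSet is the HelpSet the JHelp component was created with.
    URL moduleHsURL = MainFrame.class.getResource("/module2-help/module2.hs"); // hypothetical path
    HelpSet moduleHelpSet = new HelpSet(null, moduleHsURL);
    // The merged views should pick up the new content; the exact merge behavior
    // depends on how the helpsets are configured.
    masterHelpSet.add(moduleHelpSet);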
Finally, the slides from the Zilonis Rules Engine JavaOne presentation are available. In the slides I presented the internals of how Zilonis uses the Java Concurrency APIs to manage concurrent access to the working memory.

The Java Concurrency API and Deadlock Prevention in a RETE Rules Engine to Implement a Pricing Service

elevy

Power-N Architectures Blog

Posted by elevy Sep 21, 2007
Early in the dot-com boom era, a lot of companies implemented the popular portals. Those portals allowed us to "integrate" multiple applications into a single screen. I write "integrate" in quotes because there was nothing really integrated; it was just that the set of applications was displayed on the same screen. The reason for the low level of integration was not lack of need, or lack of will. It was primarily because the applications at the time were written to produce data mixed with rendering information, losing all the semantics of the "data" that was being produced as a result of a request from the clients.

With the services and technologies that are available today I see a completely different future.

The ability to create a Rich Client user interface, with most of the benefits that a Thin Client offers, is an amazing technological resource. Think about it for a second. You can create a Rich Client application that has zero maintenance cost. It is downloaded from the web, it runs on your client machine, and when there is a newer version, it downloads an update automatically (auto-update). For those of you that haven
elevy

Single Sign On Blog

Posted by elevy Aug 29, 2007
In the last few weeks I was asked to help integrate a set of built-in-house web applications with a Single Sign On (SSO) solution. After working with people from different teams, I realized that it would be a good idea to write a brief description of how SSO solutions work in general. Perhaps this will help you get started if you have to do something like this at some point.

SSO is by no means a new technology. It has been in use for a long time, even before web applications existed.

The most primitive of SSO systems is a piece of paper per user with a small table listing systems and their user names and passwords. This list is generally taped to the user's monitor. Later on it can evolve: instead of a simple piece of paper, it can be a post-it.

passwords.jpg
(for those of you interested in how I created this picture, I did it using the napkin look and feel)

Yes, you might be thinking that I am kidding here. And to some extent I am. However, this has been a big concern in the corporate world. That's the way it used to be, not by design, and it still is in some companies. Lots of applications, managed by different teams in the famous "silos", not integrated, each requiring the user to authenticate with its own username/password... you know the picture.

I think that that's how the need for SSO got started.

Early SSO systems worked like the post-its that users were sticking to their monitors.

They were repositories of username/password pairs protected by a password. Before users would authenticate to the destination system, they would first access the SSO repository, fetch their passwords, and then continue authenticating with the system they intended to work on in the first place.

Lately, in a web-based environment, this can be extremely simplified with a well-known device: cookies.

 

Example

We will go over an SSO implementation with an example. Let's have three major components: the SSO server, Application A, and Application B.

Here is how the system would work:

1) The user tries to access the application A.

2) Application A realizes that the user has not been authenticated. (See "user has been authenticated" for details).

3) Application A sends an HTTP redirect to the SSO server.

4) The SSO server sees that the user is not authenticated (again, See "user has been authenticated" for details).

5) The SSO server requires the user to authenticate.

6) The user submits username/password

7) SSO Server validates username/password. If they are valid, the user is "granted permission".

8) The user is redirected to Application A.

9) Application A sees that the user has been authenticated, and proceeds.

 

Granting permission:

When the username and password are validated by the SSO server, a unique, large token is generated for the user. The token acts as a unique identifier for the user's session. The SSO server keeps a list of the tokens associated with the credentials of the user that owns them. This token is set by the SSO server in the user's browser as a cookie.
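As a rough illustration (not from the original post), generating and remembering such a token could look like this sketch; the cookie name and class are made up.

    import java.security.SecureRandom;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical sketch of the "granting permission" step on the SSO server.
    public class TokenIssuer {
        private final SecureRandom random = new SecureRandom();
        // token -> user credentials (here simply the username)
        private final ConcurrentMap<String, String> sessions = new ConcurrentHashMap<>();

        public void grant(String username, HttpServletResponse response) {
            // A large random value: 32 bytes of entropy rendered as hex.
            byte[] bytes = new byte[32];
            random.nextBytes(bytes);
            StringBuilder token = new StringBuilder();
            for (byte b : bytes) {
                token.append(String.format("%02x", b));
            }
            sessions.put(token.toString(), username);

            // Set the token in the user's browser as a cookie.
            Cookie cookie = new Cookie("SSO_TOKEN", token.toString());
            cookie.setPath("/");
            response.addCookie(cookie);
        }

        public String credentialsFor(String token) {
            return sessions.get(token);  // null means invalid token / not authenticated
        }
    }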

 

User has been authenticated:

For an application to validate that a user has been authenticated, it has to follow these steps:

1) Check for the token in the cookies.

2) Query the SSO server for the credentials associated with the token. If the token is valid, the SSO server returns the credentials of the user and the application continues. If the token is not present, or is invalid, the application knows that the user has not been authenticated, and redirects the user to the SSO server.

This makes it look like there is a lot of work in getting this type of setup. Luckily, it is not complicated at all. Most SSO servers come with a plug-in that is installed in the application/web server, intercepts all the requests, and performs the logic just described. Any application deployed in such a server will automatically get the user credentials, populated by the plug-in, just as if the user had been authenticated locally using the JAAS framework.
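To make the flow concrete, here is a minimal sketch of what such an interception could look like as a servlet filter; the cookie name, the SSO URL, and the SsoClient helper are all hypothetical, not part of any particular SSO product.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class SsoFilter implements Filter {

        private static final String SSO_LOGIN_URL = "https://sso.example.com/login";

        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // 1) Check for the token in the cookies.
            String token = null;
            Cookie[] cookies = request.getCookies();
            if (cookies != null) {
                for (Cookie cookie : cookies) {
                    if ("SSO_TOKEN".equals(cookie.getName())) {
                        token = cookie.getValue();
                    }
                }
            }

            // 2) Ask the SSO server who owns this token (SsoClient is a hypothetical helper).
            String user = (token != null) ? SsoClient.credentialsFor(token) : null;

            if (user == null) {
                // Not authenticated: redirect to the SSO server, asking it to send the user back here.
                response.sendRedirect(SSO_LOGIN_URL + "?returnTo=" + request.getRequestURL());
                return;
            }

            // Authenticated: expose the credentials to the application and continue.
            request.setAttribute("sso.user", user);
            chain.doFilter(req, res);
        }
    }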

 

A Note on Cookies

As most of you know, the capabilities for setting and reading cookies are restricted by domain. A web server that does not belong to the domain where the cookie was set will not be able to read the cookie.

For that reason, the applications and the SSO server have to belong to the same domain. If they cannot be part of the same domain, cookies will not work; in that case the URL rewriting technique can be used.

Development TIP:

When you are developing your app, there is no need to authenticate with the SSO. Just have each developer work with a simple JAAS authentication against a local flat file (most of the IDEs have this by default). Get them to complete the development, and when you are ready to test, deploy it in your testing environment using the SSO plugin.
In my last blog I explained how I built a Dock bar using the Timing Framework and the glass pane. This one is a continuation, where I briefly explain a slight improvement to the bouncing effect.

You can try the new version with Java Web Start: http://www.zilonis.org/samples/demobutton.png (again jdk 1.6+).

Here is the source code under GPL.

The improvement is to get the icons to bounce lower and lower after each cycle. I am pretty sure that I could have gone overboard and used some physics equations here. However, I followed the keep-it-simple principle.

Instead of having one Animator object that repeats itself 6 times - 3 bounces - I created three different Animator objects. Each one goes from 0 to a different height. Then I used the Triggers available in the Timing Framework to start the next animator after the current one finishes. I was surprised by the simplicity of the API.

 
     TimingTrigger.addTrigger(animator[0], animator[1],
                              TimingTriggerEvent.STOP);
     TimingTrigger.addTrigger(animator[1], animator[2],
                              TimingTriggerEvent.STOP);
     TimingTrigger.addTrigger(animator[2], animator[3],
                              TimingTriggerEvent.STOP);

The other improvement I added is that after the bouncing animation is completed, I get the bar to animate into the standby state, instead of what I was doing before, which was to make the glass pane not visible. This way it has a smoother transition.

    animator[3].addTarget(new TimingTarget() {
       public void begin() {}
       public void end() {
          glass.disolve();
       }
       public void repeat() {}
       public void timingEvent(float arg0) {}
    });


Here you can see the disolve method: 
   public void disolve() {
      if ((disolver == null) || (!disolver.isRunning())) {
         for (int i = 0; i < iconsOnBar.size(); ++i) {
            IconOnBar iconOnBar = iconsOnBar.get(i);
            iconOnBar.setMouseLocation(MOUSE_OUT);
         }
         disolver = PropertySetter.createAnimator(500, glass,
                                                  "progress", 0f, 1f);
         disolver.setAcceleration(0.3f);
         disolver.setDeceleration(0.2f);
         disolver.addTarget(new TimingTarget() {
            public void begin() {}
            public void end() {
               glass.setVisible(false);
            }
            public void repeat() {}
            public void timingEvent(float t) {}
         });
         disolver.start();
      }
   }


We just make sure that there is no other disolver animation in the works, update the state of all the icons to be "mouse out", and start the animation.

elevy

Java Dock (Launch Bar) Blog

Posted by elevy Jul 25, 2007
With the Timing Framework and the glass pane, you can create almost any UI component, offering cool and complex behaviors.

In this blog I present a version of a launch bar (Dock).

Here is a screen shot of it in action:

http://www.zilonis.org/samples/dock/dock.jpg

To do it justice, you should try running the demo with Java Web Start: http://www.zilonis.org/samples/demobutton.png (again jdk 1.6+).

Here is the source code under GPL.

The component extends JPanel. That provides the foundation to paint the background and the icons. It has an inner class that is responsible for the animations, acting as the glass pane on the parent frame. This is where all the magic happens.

In the default conditions, when the user is not interacting with the component, the glass pane is not visible.

The Dock component paints the background, which is a shape with rounded corners, using anti-aliasing to create a smooth effect on the edges. It was tricky to get the anti-aliasing working.

You probably noticed that it uses a nice gradient.

The gradient is cached in a buffered image with the width of the image and just a couple of pixels in height. This way we can recreate the entire gradient using less memory, and the painting is a lot faster.
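A minimal sketch of this kind of gradient caching, assuming a horizontal gradient that is stretched vertically when painted; the class name, sizes, and colors are made up, not taken from the Dock source.

    import java.awt.Color;
    import java.awt.GradientPaint;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    // Sketch: render the gradient once into a thin strip, then stretch it when painting.
    public class GradientCache {
        private BufferedImage gradientStrip;

        private void buildCache(int width) {
            gradientStrip = new BufferedImage(width, 2, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g2 = gradientStrip.createGraphics();
            g2.setPaint(new GradientPaint(0, 0, new Color(60, 60, 60, 200),
                                          width, 0, new Color(20, 20, 20, 200)));
            g2.fillRect(0, 0, width, 2);
            g2.dispose(); // always dispose graphics objects we create
        }

        public void paintBackground(Graphics g, int width, int height) {
            if (gradientStrip == null || gradientStrip.getWidth() != width) {
                if (gradientStrip != null) {
                    gradientStrip.flush(); // release the old image's resources
                }
                buildCache(width);
            }
            // Stretch the 2-pixel-tall strip to cover the full background.
            g.drawImage(gradientStrip, 0, 0, width, height, null);
        }
    }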

An important principle that I try to follow is to keep the code as simple as possible, not introducing optimizations before I know they are necessary. In this particular case, I followed the same philosophy until I saw that the animation was not performing well, at which point I started optimizing the code, as I describe later in this blog.

On top of the background, the component just paints the icons in the order they were added, one next to the other.

There is a mouse listener that is waiting for the user to move the mouse over the component. At that point, we make the glass pane visible and start animating!

The glass pane is the one that paints the animations.

The trick is to record, for each icon, its current state and the state after the animation. This way we can calculate its size and location based on how far the animation has progressed.

Each icon has an Id which maps to its location in the Dock. When the mouse moves over the Dock, we determine the icon id that the mouse is over. We are going to call it mouseOverId.

In the glass pane's paint method we iterate over all the icons. To determine the size of each icon after the transition is completed, we use the Math.abs of the difference between the mouseOverId and the iconId. That gives us the distance between the mouse location and the icon being painted.
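In other words, something along these lines; the constants are illustrative, not the actual values used in the Dock.

    // Distance from the icon under the mouse, in icon positions.
    int distance = Math.abs(mouseOverId - iconId);

    // The closer the icon is to the mouse, the larger its target size.
    // BASE_SIZE, MAX_GROWTH and FALLOFF are hypothetical tuning constants.
    int targetSize = BASE_SIZE + Math.max(0, MAX_GROWTH - distance * FALLOFF);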

One tricky thing was determining the horizontal position of each icon. The solution was to make sure that the entire dock is always centered on the screen, and that the icons are centered in the dock. With that in mind, every time an icon expands or retracts, the location of the dock is recalculated, and the locations of the icons in the dock too. It actually works really well.

Then we use the timing framework to animate the transition from the previous size to the next size. Using acceleration and deceleration allows the animation to look very smooth.

There is another event that we listen for: the mouse click. In that case we start another animation with the Timing Framework, this time to simulate a bouncing effect. For that we use a deceleration of 9.8 ~ 10 (gravity!). It looks very real. I just need to make the height of the bounce lower and lower after each cycle.

Every icon is appended with its mirror image, using the SwingX ReflectionRenderer.appendReflection(...)

In the case of the bouncing, we create the reflection, and then append it at a distance proportional to the bouncing effect, to make it look more real.

When I finished coding it, it was working great in the demo app (similar to the one that you are seeing in the Web Start). However, as soon as I tried to integrate it with a more elaborate user interface with lots of windows and components in the background, the animation started to be really slow.

I discovered a couple of things that I was doing wrong. Hopefully my learning experience will help you avoid making the same mistakes:

1) Every time you create a Graphics object, you need to call the dispose method as soon as you are done with it.

2) When you don't need a BufferedImage anymore, call the flush method.

Those are two things that were sort of surprising to me; coming from a background of server side development, I did not expect that I would have to free resources explicitly. I understand that working with graphics is resource intensive and it is extremely difficult, if not impossible, to actually leave it to the GC to do it efficiently for you.

3) Probably the most important thing: when you call repaint from the animation trigger, call it with the clip, especially when you are working on a transparent glass pane and there are a lot of components underneath it. The repaint with the clipping area is going to make sure that only the components in that area get repainted, not the entire application. Even if your paint method does not take advantage of the clip, you should still pass it along (see the sketch below).
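A minimal sketch of point 3, using made-up fields for the area the dock animation actually touches.

    // Inside the animation callback: repaint only the rectangle the dock occupies,
    // instead of the whole glass pane. dockX/dockY/dockWidth/dockHeight are
    // hypothetical fields tracking the dock's current bounds.
    public void timingEvent(float fraction) {
        currentProgress = fraction;
        // Repaint with a clip: only components intersecting this rectangle are repainted.
        glass.repaint(dockX, dockY, dockWidth, dockHeight);
    }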
If you went to the Extreme Makeovers JavaOne 2007 presentation, you probably enjoyed as much as I did the fancy table sorting animations that were presented. I got impatient waiting for them to release their code, so I went and wrote my own.

I did not write it exactly as it was presented there, but I followed the same principles. The layout of the cells was done with GroupLayout instead of GridBagLayout. I haven't finished the "spanning" effect yet. Here you can download the src under GPL.

And for those of you that want to see it in action, you can try it now: http://www.zilonis.org/samples/demobutton.png (yes, sorry, you need jdk1.6)

Try clicking the column headers to see how the table gets sorted. It is very fancy. After using it for a while, when I click on a table that does not have this effect, I feel like it did not sort the rows!

I can't wait to use this effect in one of my projects.  
The Zilonis Rules Engine is a thread-safe Java rules engine. It was presented yesterday at JavaOne. The presentation discussed the challenges of implementing a pricing service in retail, why using a rules engine would be a challenge, and the way Zilonis solves those scalability issues.
We finished the session with the details of how it uses read-write locks to achieve multithreading with a high degree of concurrency. The Zilonis Analysis Tool (100% written in Swing) also made its debut, and was used as a way to explain how the RETE algorithm works.
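The slides cover the actual implementation, but as a rough illustration of the read-write lock pattern (none of these names come from the Zilonis code): many threads can read the working memory concurrently, while a fact assertion takes the exclusive write lock.

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Hypothetical sketch of guarding a working memory with a read-write lock.
    public class WorkingMemoryGuard {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        public void assertFact(Object fact) {
            lock.writeLock().lock();     // exclusive: modifies the RETE network state
            try {
                // ... propagate the fact through the network ...
            } finally {
                lock.writeLock().unlock();
            }
        }

        public void queryMatches() {
            lock.readLock().lock();      // shared: many threads can evaluate concurrently
            try {
                // ... read the agenda / matched rules ...
            } finally {
                lock.readLock().unlock();
            }
        }
    }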

After cleaning up the Analysis Tool a little bit, I will be updating the Zilonis Rules Engine web site and its repository, releasing all the updates that I have been working on for quite a while.

Some attendees approached me after the presentation to tell me that they enjoyed the session. Thank you guys for your feedback.

I am planning to write some blogs explaining some of the optimizations that I have included in the engine.

Also, I will be describing in detail all the different parts of the implementation for those of you that were not able to make it to the presentation. Most likely this part will make it into the documentation at the Zilonis web site.
