
For more than 25 years now I have been writing computer programs. More than a decade of that time I have spent on programs accessing databases, virtually always relational ones. I soon learned that this is rather hard work. Not only do you need to know the theory behind the RDBMS itself, but also the technical APIs (like ODBC, ADO, RDO, JDBC, JDO, JPA, CMP, ...), the structure of the database itself ("schema": table names, keys, data types, etc.) and its management system (like Oracle, Microsoft, Sybase, etc.). And certainly there are lots of tools. I've seen so many different tools come and go, all of them bloated with features that never get used. Each one different to install and to use.


What I always missed, as a coder, was some "small" solution. I just wanted to look up the tables, the names of the columns, the data types. I didn't need a graphical map ("ER diagram"), just something I could start quickly to peek into the schema. But almost all of the tools I had were just too heavyweight. They took forever to start, or were buggy, or you needed far too many mouse clicks to get to your information. My dream was to have something like a type or cat command which just prints some information, without further stuff around it. So I checked the web and noticed that there was actually not much in that direction. So I started some experiments months back, and what I came up with a short time later was a database scanner: a CLI tool that scans a database once and prints the result as XML on the console. Cool, finally I had my quick one-shot solution, and it didn't cost a buck. Great!


I used it a lot in different projects and it actually turned out to be really handy, since I could just go to the command line to get a schema overview in a few seconds. No installation. No bloated IDEs. Just text-based "database source code", just as I always liked it to be. But one day a friend told me that it would be smart if the file could be saved to disk so he could look at it offline later, and that there should be something like a tree viewer, since it's just easier to browse through the schema by clicking in a tree than by looking at XML nodes. Well, I didn't like the idea at first, but after I finished a prototype and tried it for some days, I actually had to confess that he was right: it was just fun to double-click the pre-scanned schema file and see the structure of the schema in a nice GUI tree instead of an XML text file. It was just handy to carry the file around, or send it to partners who had no idea of the XML syntax but just used XSLT or my GUI-based tree viewer to inspect it - without the need for expensive, proprietary team-based modelling tools. We even stored the XML in SVN to see the differences between versions and branches: we were now able to see an actual history of database changes just by storing the XML schema instead of binary models or proprietary table definition scripts. And all with a really small tool utilizing some JDBC metadata commands and a bit of abstraction and JAXB on top of it: Java is just great!
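Those JDBC metadata commands really are all it takes. A rough sketch of such a one-shot scanner (class names and the XML element layout here are illustrative only, not the actual tool's code):

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Minimal sketch of a one-shot schema scanner: connect, walk the
// JDBC metadata, print simple XML, exit.
final class SchemaScanner {

    /** Renders one column as a simple XML element. */
    static String columnXml(final String name, final String type) {
        return "    <column name=\"" + name + "\" type=\"" + type + "\"/>";
    }

    /** Prints every table and its columns of the connected database as XML. */
    static void scan(final Connection connection) throws Exception {
        final DatabaseMetaData metaData = connection.getMetaData();
        System.out.println("<schema>");
        try (ResultSet tables = metaData.getTables(null, null, "%", new String[] {"TABLE"})) {
            while (tables.next()) {
                final String table = tables.getString("TABLE_NAME");
                System.out.println("  <table name=\"" + table + "\">");
                try (ResultSet columns = metaData.getColumns(null, null, table, "%")) {
                    while (columns.next())
                        System.out.println(columnXml(columns.getString("COLUMN_NAME"),
                                columns.getString("TYPE_NAME")));
                }
                System.out.println("  </table>");
            }
        }
        System.out.println("</schema>");
    }

    public static void main(final String[] args) throws Exception {
        // JDBC URL, user and password are expected on the command line.
        try (Connection connection = DriverManager.getConnection(args[0], args[1], args[2])) {
            scan(connection);
        }
    }
}
```

Redirect the output to a file and you have exactly the kind of diffable, SVN-friendly "database source code" described above.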


As it turned out that more and more people wanted to try my tool when I told them about it at after-work parties, I saw the need to really publish it instead of just giving it to friends on their USB sticks, so I made it available for download from my personal web site Head Crashing Informatics for evaluation purposes. As some of them had been really persistent about getting personal help, I abstained from open source in the first place (actually I am planning to open source it when I find the time to set that up) and set up a commercial offer for just around nine Euros (a price nobody really cares about, as a daily lunch is more expensive). As the evaluation is not limited in any way (you can use all features virtually endlessly, but there is neither liability nor support) I should actually call it the "Free Edition".


So as the next step, before thinking about publishing as open source, I'd first like to enrich it with more features. There is my private favourites list already, but maybe my wishes are not as essential as yours. So here is what I'd like to get from you: please download it, try it out in daily use, and tell me what you think about it. Is it useful? Small and fast enough? What would you like to see in the next (or any following) release? What is essential, what is optional? Are there any bugs? What databases did you try (currently I know it is running on Oracle, Sybase and Microsoft)? What about the price? Too much, or OK for you (maybe you'd like to order now before the price rises)? Would you like an unsupported free edition instead of the nine-Euro supported one? Test it and send all your comments to me by email!




Note: An overview of all my recently published blog entries and printed articles (and more stuff) can be found on my private web site Head Crashing Informatics.

After more than a decade in the Java universe, today I had just enough of remembering where my executable JARs are located and typing all the lengthy path names, so I finally taught Windows to deal with Java archives just the same way as it deals with its native executables, EXE and CMD. The trick is so simple that I actually do not understand why the JRE installer isn't applying it automatically, to prevent everybody from reinventing the wheel.

Nobody wants to type so much

Two of my most needed programs when coding on Windows, even in the Eclipse era, are the command line interface (CLI) and the text editor. As programmers are always in a hurry, and as I am faster at typing than at clicking, I typically start those by entering their respective binary image names at the start line (thanks, Microsoft, for not completely dropping it even in Windows 7). After decades of Windows, I meanwhile know that those images are "%WINDIR%\System32\cmd.exe" and "%WINDIR%\System32\notepad.exe". In fact, this was not always the case. I can remember that before Windows NT it was not cmd.exe but COMMAND.COM (which actually still exists even on Windows 7). But actually, I never typed the complete path, and I typically do not type the extension. Actually I don't care where to find the program, and I don't care whether the image is an EXE, COM, CMD or whatever. Windows knows where to find them, even if I am in a different current working directory, even if I omit the file extension. So now I want that magic for JARs, too. I want my Java applications to be found and started without giving the absolute path, and without giving the .jar file extension.

Getting rid of the file extension

For this to work, the first step is to tell Windows to consider a particular file extension as searchable. This is as easy as adding the extension ".jar" to the list of searchable extensions, which is stored in the PATHEXT environment variable. At the command line, this can be done temporarily (i. e. for just the current CLI session) using SET PATHEXT=%PATHEXT%;.JAR, but to make it persistent for future sessions, it makes sense to instead set it system-wide (e. g. using the "Advanced System Settings" GUI, which stores it in the Windows registry). That's half the battle already. Now Windows knows that you will omit this extension from now on.
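Both variants can be sketched like this (note that SETX writes the value to the registry for future sessions, and that %PATHEXT% expands to whatever your machine currently has set, so the exact result may differ):

```
REM Temporary, for the current CLI session only:
SET PATHEXT=%PATHEXT%;.JAR

REM Persistent for all future sessions (written to the registry):
SETX PATHEXT "%PATHEXT%;.JAR"
```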

Getting rid of the path

The second step is to tell Windows to consider a particular path as searchable. This is as easy as appending the location to the PATH environment variable, which can be done in several ways, e. g. at the command line for a temporary change using SET PATH=%PATH%;C:\WhereMyJarIsLocated, or using the mentioned GUI. That's it. Now Windows will search those places for my JARs. Great.
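Put together, and assuming an executable archive named myapp.jar in C:\WhereMyJarIsLocated (both names are placeholders), starting it then shrinks to this:

```
SET PATHEXT=%PATHEXT%;.JAR
SET PATH=%PATH%;C:\WhereMyJarIsLocated

REM Windows now finds and starts the archive without path or extension:
myapp
```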

No magic included, unfortunately

As I wrote, it's just two little tweaks. No tricks. No magic. All the rest is done by the operating system's intrinsic functionality, plus a trick applied by the JRE installer: the installer already was kind enough to register the "javaw.exe" program as the executor for JAR files. So Windows knows what to do with our JAR once it is found. In the very early days of Java, one had to do that manually, which was not complicated, but just one more GUI click to do.

But the world is not perfect yet. When running our executable JAR, what Windows actually executes is not our JAR but the Java VM launcher (i. e. %WINDIR%\System32\javaw.exe). That launcher is interpreting (or compiling and then executing) the content of our JAR (for those who forgot). So the operating system's list of processes does not contain our JAR; it just contains javaw.exe, once for each started executable JAR. This is rather annoying, as one cannot easily find out which PID in fact is executing which JAR. You certainly can configure Task Manager to display the complete (and rather lengthy) command line invoked to run a process, but it is just not as smart as with real EXEs, which directly appear under their own name in the process list. Sad but true, there is no simple help for this. Using a hard or soft link is not enough to rename the process (it will still show the target's name, not the source's name). In fact, to solve that, one would have to write a wrapper (or copy javaw.exe, or use one of those fancy wrappers available on the web). It would just be nice if javaw would create such a copy on the fly and hand over execution to the newly created copy, but in fact I doubt that Oracle will do that anytime soon...
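The hand-over part of such a wrapper does not need much code. A minimal sketch (all names here are made up for illustration; a real wrapper would rather be a native stub compiled under the desired process name, so that the process list shows it instead of javaw.exe):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of a launcher that hands a JAR over to the Java VM launcher.
final class JarLauncher {

    /** Builds the javaw command line for the given JAR and its arguments. */
    static List<String> buildCommand(final String javaw, final String jar, final String... args) {
        final List<String> command = new ArrayList<>();
        command.add(javaw);
        command.add("-jar");
        command.add(jar);
        command.addAll(Arrays.asList(args));
        return command;
    }

    public static void main(final String[] args) throws Exception {
        // First argument is the JAR to start; the rest is passed through.
        new ProcessBuilder(buildCommand("javaw", args[0],
                Arrays.copyOfRange(args, 1, args.length)))
                .inheritIO()
                .start();
    }
}
```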

Regards, Markus.

An overview of all my different publications and products can be found on my personal web site Head Crashing Informatics.

It actually happened a few weeks ago already, but I simply didn't find the time to spread the word earlier -- just too much other stuff to do (see end of posting), so I tell you now: WebDAV Support for JAX-RS 1.2 is out! What has happened since 1.1? First of all, 1.2 is a complete internal overhaul, which not only is finally covered rather completely with unit tests (which revealed several previously unknown bugs, now fixed thanks to those tests); moreover, the overhaul provides improved performance and simplicity of use. This comes mostly from the fact that lots of WebDAV XML elements are actually intended to be thought of as enumerations. Unfortunately the original WebDAV DTD does not clearly express this (to be understood as: DTDs cannot express that, only XSDs could, but WebDAV is still defined in terms of a DTD, unfortunately), so it is impossible to implement them as Java enums. So the solution used in 1.2 (and the current maintenance release 1.2.1) is to use JAXB technology to replace equal instances on the fly by Java singletons. That said, memory consumption of parsed WebDAV XML bodies is obviously lower with 1.2 than with 1.1 -- without changing the application itself, solely by replacing the WebDAV library! On top of that, when willing to modify the application, comparisons on such "pseudo enums" can be simplified: using == instead of equals() provides a correct and valid result since 1.2, and certainly improves execution speed. And as more singletons have been added to the library, there is no more need to create lots of empty instances like new Write() -- this can just be written as WRITE now, which is way better to read. Check the JavaDocs to find the places where singletons have been added! Second, the new code provides a starting point for using header values in a more readable manner.
Instead of declaring @HeaderParam as String and then manually dealing with String alchemy or conversion to Java objects, it is now possible to natively declare these headers as WebDAV types. This is implemented for TimeOut currently, but more will follow in future releases. No more String algorithms, no more manual conversion, all done under the hood automatically! But one warning: there was a bug in Jersey preventing this feature from working well. So to use this on Jersey, an upgrade to Jersey 2.6 is essential, unfortunately (as this feature implies a recompilation anyway, that shouldn't be a real drawback). So how to obtain 1.2? Using Maven, all one has to do is declare the following dependency, as WebDAV Support for JAX-RS 1.2.1 is found on the Maven Central Repository: <dependency> <groupId></groupId>

One day I found myself in the situation that I had to write a unit test which checks whether my code is annotated in a particular way. I wondered how one could do that without an integration test that actually processes those annotations. My first idea was to use the Reflection API, which in fact worked, but did not look smart. Actually, I wanted to have a Hamcrest matcher instead, since tests are much more readable that way. So before turning my code into a matcher (which is easy but has potential for code duplication with other projects), I did as I always do: I googled for some open source. And luckily, such a project existed: Hamcrest Reflection. The project, founded by Hamcrest author Nat PRYCE, was in a very beta state: almost done, not published, no downloads, no site, no build scripts, a few bugs; in short: rather dead. What a good chance for me to contribute to open source! So I just adopted the project to bring it back to life, allowing people to download and use it easily, and making it pretty simple for others to contribute. Or in other words: I adopted a zombie project to reanimate it. Here is what I did:

1. Build and publish to Maven Central unchanged. I needed a working JAR in Maven Central really quickly to finish my own project. So I checked out the code from Nat's SVN repo, then checked that the original source worked correctly (which it did, although due to a wrong test assumption I had to @Ignore one test), packed a JAR by hand, wrote a minimalistic POM, and published everything on Maven Central. For that, I applied for another project hosting and synchronization at the Sonatype OSS Maven Repository, which syncs to Maven Central rather frequently. It is the fastest way to deploy open source to Maven Central, and it allows publishing even non-mavenized projects. In my case, I used mvn deploy:deploy-file to simply upload the manually built JAR, waited some hours until the next sync was done, and was able to include Hamcrest Reflection in my own project by simply adding a Maven dependency entry. Yippee!

2. Request owner status from the original author. Next I wanted to pimp the web site and upload some fixes. So I needed at least contributor status, but hey, this is a zombie project, so I directly proposed to take ownership. I asked Nat PRYCE in a concise email, and a few hours later I was the project owner. Cool.

3. Publish an initial web site. To get a zombie back to life again, people must learn what the stuff is good for, where to get it and how to use it. So the next step was to upload some initial web site, containing really only that minimum information. It contained the download link from Maven Central, and the GAV coordinates, so everybody was able to obtain and use it. Great.

4. Start using the tracker. As soon as people start using the software, they will find bugs or have ideas for new features. So they need a way to tell you. And typically you want to avoid telling lots of people "I already know this". Typically, this is done using an issue tracker: people can check it before posting another bug report. As Nat hosted the project on Google Code, there was a tracker set up already, but unused so far. So I just added the sole known bug to the tracker to prevent more and more people telling me the same problem. Strike.

5. Provide a build script. Now that I had a public list of open bugs, I wanted to be able to merge fixes. For that, I needed to get rid of building JARs by hand. So I needed a build script. In my case, it was simple: as I had an initial POM already from the Maven Central upload, I just needed to extend it with a few lines so that it actually builds the JAR and runs the tests. The main changes were adding the existing source paths and dependencies, and after a few minutes I had mvn clean test running perfectly. Yes!

6. Mavenization. As Maven is a de-facto standard (and due to its declarative style my absolute favourite) I wanted to Mavenize the project, which effectively results in more people being able to contribute faster, thanks to convention over configuration. In fact this was as simple as moving files to the default places, getting rid of Nat's self-invented folder structure. Now everybody familiar with Maven would find things where expected. Nice.

7. Code cleanup. One thing which always happens as soon as you accept contributions from other people is formatting mismatch. Someone has set his line length to 80 while you work with 160, you both auto-format, and you're screwed, as the diff won't be concise to read anymore. That was the case here, where Nat obviously had used blanks while I use tabs, and I love to have parameters and variables final where Nat did not care. So I used Eclipse's automatic code cleanup and formatting to get all the missing finals, this-qualifiers, tabs, and so on. This is a one-time burden for SVN, but after that, all commits will consist solely of the real changes, as I am the only active committer and have saved an Eclipse profile for this project. Fine.

8. Tell people how to contribute. To wake up a zombie, you need the help of others, as a living project is hard to drive alone. But how shall people know how they can help? So as the project was now utterly simple to obtain and build, I added the absolutely essential information about contributing to the single-page web site on Google Code. Now I just need to wait for contributions, and nobody will ever again need to ask me by email how he can help. Good.

9. Fix bugs. The sole bug was rather simple to fix, so I removed the @Ignore and added the fix. Now the way is free for an initial automated release. Yay!

10. Release. Certainly it is not very smart to upload files by hand to Maven Central. In fact, I want to get this done simply by mvn release:prepare release:perform. This was the most simple thing to do: marking the version as SNAPSHOT, committing, running Maven, done. So here it is: Hamcrest Reflection 0.1-3, fully automatically built, signed and published on Maven Central.

11. What next? Well, for me, the project does what I need, so I just wait for feature requests and bug reports. Or for someone who has great plans with it and wants to take over ownership and possibly drive more automation, like using a public CI service. For me, I think my job is done: Hamcrest Reflection turned from a half-baked zombie into a fully functional project infrastructure. Time to look out for the next zombie!

12. I want you for zombie reanimation! Why not join me? If you know a zombie which you would like to see maintained again, just follow my example and adopt that project. It is simpler than you think. The complete steps above demanded just a few hours of my spare time. If I can do it, you can do it, too! :-)
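The one-time manual upload from step 1 can be sketched as follows (the file names, repository ID and URL below are placeholders; check the Sonatype OSS instructions for the actual staging URL and the credentials setup in settings.xml):

```
mvn deploy:deploy-file \
  -Dfile=hamcrest-reflection-0.1.jar \
  -DpomFile=pom.xml \
  -DrepositoryId=sonatype-oss \
  -Durl=https://...
```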
Some time ago, I had the impression that everywhere I stepped in the endless Java universe, I came across rather outdated technology. Things that were hyped years back, but for some reason had been left behind by mankind in the course of time. Disenchantedly roaming that programming desert, I almost went depressive looking at all the rusty wrecks of formerly featured APIs lying around all along. The end of Java seemed to be pretty near. But things have changed. The gods of programming remembered that there is life beside iOS and PHP, with a still bravely beating JRE at its heart. They pumped up the old lungs again with a fresh breath of Java. A breath smelling like Androids, Raspberry Pis, and work-stealing lambda expressions. And they inspired the remaining followers by sharing a vision of HTML5 and JavaScript hauled by rhinos with German names. So in the end, the augurs of the apocalypse apparently failed once more. At least until Oracle's next dark foreboding. (Video by courtesy of inviticon / All rights reserved.)
You can find a list of all my publications on my personal web site Head Crashing Informatics.
As this JavaRanch article by Mark Spritzler proves, there seem to be some people who would like to have a generic visitor pattern, so I decided to open source mine (LGPL), which has been lying around on my disk for some time. Have fun using it; it is as simple as linking to:
Regards Markus 
A complete history of all my publications can be found on my web site Head Crashing Informatics.  
Swing is not dead, still. While a whole lot of evangelists try to talk it dead, it is still part of the JRE. While SWT is not, still. And while JavaFX is not, still. Despite all hypes and rumors. It is not even declared deprecated or obsolete. So in fact, there is no real alternative to Swing as long as the GUI must work solely with JRE means (I won't say AWT is an alternative). And as long as that is the case, I (and lots of others) will stick with Swing. At least until JavaFX is completely open sourced and part of the official JRE specification. So if you're one of those "few" people (like me) who "still" work with Swing, you're probably also a user of JGoodies. Karsten Lentzsch's excellent open source libraries extend Swing with validation, beans binding, and other great features (things supposed to be part of Swing for years, but still not done in the JRE. Shame on you, Swing team!). I do not know the reason, but several years ago publishing the JGoodies libraries to Maven Central was stopped, so it was rather a pain to always find, download and link against the latest release. So I asked Karsten to push to Maven Central again, but he phoned me and said he fears the stress of dealing with GPG and Maven (which I can really understand, as GPG, and particularly its integration into Maven and the upload of certificates, is really a pain). OK, so I agreed to sign and upload in his name, as for me it makes no difference whether I upload to my own Nexus instance or to Sonatype's open source instance. So here it is: today I uploaded the current release of the libraries to The Central Repository. You can find all of the libraries (and, for the first time, JGoodies Common!) when searching for the group ID "com.jgoodies". As long as I am using Swing, be assured that I will always continue signing and uploading (unless Karsten does it on his own). Promised!
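To pull one of the libraries in via Maven, a dependency along these lines should do (the artifact ID and version below are examples only; search for the group ID com.jgoodies on Maven Central to find the exact coordinates and latest release of each library):

```xml
<dependency>
    <groupId>com.jgoodies</groupId>
    <artifactId>jgoodies-common</artifactId>
    <version>1.8.1</version>
</dependency>
```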
--- A complete history of all my publications can be found on my web site Head Crashing Informatics.  
I hate adding lots of huge multi-JAR all-purpose common libraries to rather small projects! A huge footprint just for a single class is, unfortunately, a side effect of many popular frameworks, due to rather coarse-grained modularity. So I started to publish some of my commons (LGPL'ed) code as single-class, self-contained artifacts on The Maven Central Repository. You simply need a Range<T> class (as the JRE still doesn't contain one and Apache's is numeric-only)? No problem. Here it comes:

<dependency>
    <groupId>eu.headcrashing.treasure-chest</groupId>
    <artifactId>RangeClass</artifactId>
    <version>[1.2.2, 2)</version>
</dependency>

Have fun with it, and stay tuned for more of my Maven Central uploads... Next to come is JGoodies (yes, Swing is still not dead!). Regards Markus
An overview of all my publications can be found on my web site Head Crashing Informatics.  
It eventually happened that I had to ensure that a class of mine is annotated in a particular way (I didn't want to bind the whole framework that uses the annotation just to check this single issue, as this was a unit test, not an integration test). So I wrote my own Hamcrest matcher with a few pieces of reflection inside. A short time later I noticed that Hamcrest co-author Nat PRYCE had already done the same and published his stuff on Google Code in a rather disregarded project named hamcrest-reflection, so I decided to get rid of my own hack and use his library instead. Unfortunately it was not found on Maven Central so far, so there was a good opportunity to give a micro-contribution back to the open source community: I contributed a POM and uploaded the library to the Sonatype OSS repository, so a few minutes later it was synced to Maven Central. Now everybody can easily replace his own stuff (like I did) with a rather well-tested common piece of open source, simply by adding the following dependency to his own POM:
        <version>[0.1, 1)</version>
This is a simple example of how open source helps to reduce the amount of one's own code (and in turn reduces the amount of possible bugs), and how easy it is to contribute to open source (everybody can write a POM for a library and upload it). Support Maven Central: adopt an open source project and upload the artifacts!
A collection of all my publications can be found on my personal web site Head Crashing Informatics.  

You want JAXB to unmarshal singletons? You already spent lots of time coding rather complex workarounds applying XmlAdapters and afterUnmarshal callbacks? The solution is astonishingly simple. Possibly so simple that nobody in the JAXB team ever thought it would be necessary to put the word "singleton" somewhere next to the JavaDocs for this... Anyways, here is the solution:

import javax.xml.bind.annotation.*;

@XmlRootElement
@XmlType(factoryMethod = "createMySingleton")
public class MySingleton {

  public static final MySingleton MY_SINGLETON = new MySingleton();

  private MySingleton() {}

  // Called by JAXB instead of the (private) constructor, so every
  // unmarshalling yields the very same instance.
  private static MySingleton createMySingleton() {
    return MY_SINGLETON;
  }

  @Override
  public boolean equals(final Object obj) {
    return obj instanceof MySingleton;
  }

  @Override
  public int hashCode() {
    return 0;
  }
}

Hope this is of any use for you.


A collection of all my blog entries can be found on my web site Head Crashing Informatics.

JAX-RS 2.0: A first interim report

It's been a few months already since the expert group of JSR 339 started discussing the details of JAX-RS 2.0. The targets defined by spec lead Oracle are clear: Java EE 7 shall have a RESTful API that augments the current JAX-RS 1.1 API by (among others) a Client API, HATEOAS support and asynchronous invocations. So what's the current status?

As three cornerstones of the RESTful team at Sun, Marc Hadley, Paul Sandoz and Roberto Chinnici, left the team around the time of JAX-RS 2.0 project planning (Marc left before, Paul and Roberto soon after), Oracle needed some time to find adequate replacements. So the first thing that happened in the project was the installation of new project leads, Santiago Pericas-Geertsen and Marek Potociar. It then took quite a few weeks until Oracle came up with a first discussion topic for the experts: the Client API.

There are several RESTful clients out there already; for example, there is one bundled with Jersey. The problem is that those do not follow a common normative API, so applications using them are necessarily bound to one particular JAX-RS implementation. For in-house projects this might be fine; for ISVs it is not, as it breaks the WORA principle. So the idea of having all stakeholders (read: vendors and key users) gathered around a single desk to define an API that satisfies all of them is brilliant, but obviously a job not to be done over a lunch break. It took more weeks to get many of those into the team, particularly those that had had some trouble with Sun / Oracle in the past. But in the end it is good to see that besides Oracle, RedHat and others, Apache is meanwhile back on board - if not officially, then at least physically.

The Client API was discussed very intensively and from a lot of different angles. There have been several proposals for how such an API should look, how it shall work, and so on. The discussion is not yet at its end. In fact, it is still going on, and it is good that it is going on: as long as all vendors keep talking to each other, chances are good that in the end users can expect to get something really useful and portable. I don't want to go into much detail as long as talks are running, but what people can expect is that the API will be rather fluent, will allow the implementing engine to work as efficiently (say: fast) as possible (e. g. by reusing many parts of the calling chain that possibly are rather expensive to recreate each time), and will allow enabling features in a portable way (like local caching, if provided by the particular engine). Also, support for asynchronous execution and non-blocking calls is under discussion (and I expect both to definitely be in the final API in a portable way), which allows implementing much more efficient client applications compared to the current, proprietary clients. Looking back at the features we discussed so far, and at the code one had to write in the past, the client API will become something to really look forward to.

An essential, but often missing, part of REST is HATEOAS (or: support for hypermedia). The discussions about that have just started, but are on a good way. Oracle came up with the interesting view that there are different types of links: those that change application state, and those that "just" describe structure. Both have to be supported in the best possible way, but as both depend on totally different circumstances, that way has to be different. Structural links are simply static: an order references contained items by some kind of embedded URL or header. This reference is to be expressed as a URL in the representational state, and the technology to provide that is under current discussion. State changes are more complex. They follow business rules, and the representative links do not link to other business objects but are actions that imply changes to the model's state (like adding more items). These imply a more complex API, so the discussion about that has just started and will go on for a while. For both types of links no decision has been made yet, but it is clear that both should be representable both ways, as embedded URLs or as Link headers. While the first is typical in XML entities, the latter is useful for any kind of entity, including binaries like images.
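To illustrate the two places such a structural link could live (a purely hypothetical wire example, not an API proposal of the expert group), the very same reference may appear as an HTTP Link header or embedded in the entity body:

```
HTTP/1.1 200 OK
Link: <http://example.com/orders/17/items>; rel="items"
Content-Type: application/xml

<order id="17">
  <items href="http://example.com/orders/17/items"/>
</order>
```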

I will try to update this thread when the expert group comes up with notable news. Meanwhile, I'd be glad if you'd post your comments on the current situation. Is that going in the direction you need? Is there anything you want to tell the experts?


An overview of all my publications can be found on my web site Head Crashing Informatics.

Sometimes I wonder why rather good technology suddenly dies. Does anybody remember InfoBus? JavaBeans? Swing? Java?

All of those had been brilliant technologies, enabling programmers to do things really easily. But one day, news about those technologies just stopped. People tend to say that those technologies "died". Well, what does that mean, and is it true?

Let's start with InfoBus. It made it pretty simple to forward messages within a software system (within a VM or across a network) and the trick was that sender and receiver didn't know each other. So ther was just some "bus" and every component could send messages to the bus or react to messages found on the bus. Very useful, especially when dealing with plug-in extendable systems. But then, somebody decided that JMS and OSGi are hip and InfoBus is dead. Actually I didn't find a good reason for that, since JMS is just an API for drivers wrapping existing MOM products, while InfoBus was working on a much higher (and simpler to use) level. And OSGi is way too complex compared to InfoBus. So who decided that it is "dead"? I would love to use it today, but beside outdated web sites there seems to be no support anymore. JavaBeans, same game. Also Swing. Very useful technology, just some day told to be "dead" by someone apparently having the power to do so. In fact, we just produced a new product (QUIPSY Control Plan, see built on Java 6 and both, JavaBeans (especially property change listeners) and Swing (especially supported by jGoodies), had been exptremely useful and made my day. I understand that OSGi might be a superior technology than JavaBeans, but it is far more complex (and I don't needed the features). Also I understand that Sun had the (possibly not so brilliant) idea to provide JavaFX as a competitor to Flash and Silverlight. But this shouldn't be an excuse the cut investments into Swing, as actually the mass of software still is fat client based, SWT is not a standard, so Swing still is used heavily. For years lots of efferts had been spend into a new product line, which now officially is cancelled. Did Sun really not see that HTML 5 + JavaScript 5 + CSS 3 will be the dead of that complete type of Software? Both, Adobe and Microsoft also committed to a strong support of HTML 5 instead of further putting efforts into proprietary stuff. 
While Adobe and Microsoft not only built a platform but also invested in a successful toolset, Sun was so busy with the platform itself that they forgot about the toolset (the same happened before with the Java IDE; that's why everybody is using Eclipse and only a minority prefers NetBeans, which just came out years too late). But while Adobe and Microsoft can now just add HTML 5 output drivers to their existing toolsets and drop their runtimes without any real harm, Oracle had to cancel JavaFX completely since they just do not have any widely used tools - and the future rich client platform itself will be HTML 5, no doubt about that. But that is no excuse to cut Swing development until HTML 5 is finally there. And that is no excuse to cut Swing at all, as even with HTML 5 there will, for many years, possibly decades, be a huge mass of existing Swing-based applications that are still maintained and further developed.

So I hope that Oracle will do better in future and understand what a treasure they actually obtained from Sun. I expect Swing not to be replaced but to be safely extended with technology obtained from the JavaFX line. And I expect Oracle to provide better tools to support people in using their technology. If Java doesn't learn where to fit into the future's technology stack made up of HTML and JavaScript, there may soon be another "dead" technology: the one with the cup and Duke and its gravestone. And the reason will not be that the technology was "bad" or "not useful". The reason will solely be ignorance about what's going on outside. Needing several years to incorporate the core benefits provided by languages like Scala and Clojure unfortunately is not looking very promising. I hope they get the curve soon.

An overview of all my publications can be found on my web site Head Crashing Informatics.

Markus KARG

Generic Range Class Blog

Posted by Markus KARG Jan 1, 2011

Often code has a bad smell; then it's time to replace custom lines by common patterns. Sometimes it even makes sense to replace a single line of code by a class just wrapping that single line (which actually increases code size), if that makes readers better understand what the code does. Unfortunately, such patterns are often publicly known but do not exist as ready-to-use classes in the JRE, so one needs to write them again and again. To spare people from typing them again and again, I typically upload mine to the web, so everybody is free to share them. One example is the Range pattern (see Martin FOWLER's web site for a deeper introduction to that pattern).

The Range Pattern

In short, a Range is something that is described by an upper and a lower bound. Sometimes a range is open, i. e. it only has one bound or even none at all. Typically, ranges are used to check single values against them, like "Is my birthday while I am on holidays?" (the holidays are a range described by the first and last date; the birthday is a single date). Or, "Is this car in the wanted price range (neither too crappy, i. e. cheap, nor too expensive)?" (the price range is described by the lowest sensible price and the maximum amount you want to pay; the actual price of a specific car is a single price). So a range is a pattern, and it is independent of the value's type. It fits nicely into a generic class, as it is hell to write that if...&&...||...&&... again and again, particularly with possibly open ranges (open ranges make things rather complex). Sad but true, there is no such class in the JRE, and Oracle has not picked up my RFE to add one so far.

The Generic Range<T> Class

So I just wrote a class that fills this gap. Since it is generic, you can safely use it with any information type you like - Integers, Dates, Prices, anything that implements the Comparable interface to indicate that it can be compared against a boundary. The class is able to deal with open ranges by passing null as a limit, but it doesn't allow checking a null value against its borders (since the result wouldn't be intuitive). If you need a Range class for your GPL'ed project, just download it from my web site:
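For the impatient, here is a minimal sketch of how such a generic class with null-means-open bounds can look. This is my illustrative reconstruction, not the published class from my web site:

```java
// Minimal sketch of a generic Range with open (null) bounds.
// Illustrative reconstruction, not the actual downloadable class.
public final class Range<T extends Comparable<T>> {
    private final T lower; // null means open at the lower end
    private final T upper; // null means open at the upper end

    public Range(final T lower, final T upper) {
        if (lower != null && upper != null && lower.compareTo(upper) > 0)
            throw new IllegalArgumentException("lower bound greater than upper bound");
        this.lower = lower;
        this.upper = upper;
    }

    /** True if the value lies within the (inclusive) bounds; null is rejected. */
    public boolean contains(final T value) {
        if (value == null)
            throw new IllegalArgumentException("value must not be null");
        return (lower == null || lower.compareTo(value) <= 0)
            && (upper == null || upper.compareTo(value) >= 0);
    }
}
```

Note that the checks compare against 0 rather than against -1 and 1, since compareTo is only specified to return a negative, zero, or positive value.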

What Next?

If you have any ideas what the Range class should be able to do (like being Comparable on its own, intersect or merge other Range instances, etc.), just add a comment. If you have a finished algorithm or test case, this would be even better. I will update the class, so everybody can share the benefit.



  1. Version 1.0.1: Improved JavaDocs regarding open ranges.
  2. Version 1.0.2: Fixed a bug in Range&lt;T&gt;.contains(T) returning wrong results due to the assumption that Comparable&lt;T&gt;.compareTo(T) would return only -1 and 1, while it actually can return -N and N.
  3. Version 1.1.0: Merged an anonymous author's contribution Range&lt;T&gt;.contains(Range&lt;T&gt;) to check whether a range lies within another range. This is useful e. g. if you like to know whether a meeting is within someone's free time in a calendar (the meeting entry in the calendar has a Range&lt;Date&gt;, and so has the free-time entry). Also uploaded a unit test for the Range&lt;T&gt; class to support more contributions.
  4. Version 1.2.0: Merged an anonymous author's contribution Range&lt;T&gt;.overlaps(Range&lt;T&gt;) to check whether a range overlaps (intersects, touches) another range. This is useful e. g. if you like to know whether a meeting will run over the start of another meeting in a calendar (the meeting entries in the calendar have Range&lt;Date&gt;s).
  5. Version 1.2.1: Fixed a bug in the Range&lt;T&gt;(T, T) constructor not reporting interchanged limits, again due to the assumption that Comparable&lt;T&gt;.compareTo(T) would return only -1 and 1, while it actually can return -N and N.
  6. Version 1.2.2: Uploaded to Maven Central, so using the class is as simple as adding the following dependency to a project's POM: &lt;dependency&gt;&lt;groupId&gt;eu.headcrashing.treasure-chest&lt;/groupId&gt;&lt;artifactId&gt;RangeClass&lt;/artifactId&gt;&lt;version&gt;[1.2.2, 2)&lt;/version&gt;&lt;/dependency&gt;
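For illustration, the semantics of the two contributed range-vs-range checks can be sketched like this. This is a hypothetical reconstruction assuming the null-means-open convention, not the published code:

```java
// Sketch of range-vs-range checks where null means an open bound.
// Hypothetical reconstruction for illustration, not the published class.
final class Bounds<T extends Comparable<T>> {
    final T lower, upper; // null = open end

    Bounds(final T lower, final T upper) {
        this.lower = lower;
        this.upper = upper;
    }

    /** True if the other range lies completely within this one. */
    boolean contains(final Bounds<T> other) {
        final boolean lowerOk = lower == null
            || (other.lower != null && lower.compareTo(other.lower) <= 0);
        final boolean upperOk = upper == null
            || (other.upper != null && upper.compareTo(other.upper) >= 0);
        return lowerOk && upperOk;
    }

    /** True if the two ranges share at least one point (touching counts). */
    boolean overlaps(final Bounds<T> other) {
        final boolean startsBeforeOtherEnds = other.upper == null
            || lower == null || lower.compareTo(other.upper) <= 0;
        final boolean otherStartsBeforeThisEnds = upper == null
            || other.lower == null || other.lower.compareTo(upper) <= 0;
        return startsBeforeOtherEnds && otherStartsBeforeThisEnds;
    }
}
```

So a meeting from 10:00 to 11:00 is contained in free time from 9:00 to 12:00, and two meetings overlap exactly when each starts no later than the other ends.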

An overview of all my publications, presentations, code snippets or complete products can be found on my personal web site Head Crashing Informatics.

Meanwhile I am looking back at more than 25 years of programming, and more than a decade of it I spent in a very sensitive area where quality (in the sense of zero failures) plays a big role. So call me "sensitive" about quality. For long years "we" (i. e. developers) had hard work to do using simple command-line tools like vi etc., but meanwhile there are great, even free, tools making our lives much easier. So one should think that the freed time was spent on quality. The reverse is the case. The more I look at professional software products, the more I see, sorry to be rude, simply: crap!

The question is: why is that? Having some insight into several bigger and smaller stakeholders, including my own participation in some Open Source projects led by myself, Sun, Oracle, Bull, Microsoft, and some more, and looking at the work we are doing in my company and its affiliates and subsidiaries, gives me some feeling for where the root of the evil is located: in the organization of the project.

Gee! I can tell you how I hated attending those boring, mandatory "business organization" classes at college back in the 90ies. But hey, looking back from now I have to say: those profs were just right! Full stop. Quality is directly related to your project's organization (which is the way you organize people's way of working, not the way you imagine the resulting artefacts). If you don't organize for quality, you won't get quality. Think you already organized for quality? Let's see: You can write thousands of tests. You can do pair programming. You can do Scrum. You can do Kanban. You won't get quality from that. Throw all that away. It has nothing at all to do with the quality you'll actually get in the end. You only get quality in the end if you actively plan quality from the start. That means you have to enact measures which will result in quality getting actively produced while designing. That's the reason why systems like ISO 9000 etc. enforce FMEA (Failure Mode and Effects Analysis) and other methodologies to be applied at the start and regularly throughout the complete project, instead of enforcing just "testing". Not just doing TDD while writing code; that's far too late. Not while testing (that's just to find bugs you already made). Not during the beta phase (that's to find bugs you were too lazy to find on your own). Not by fixing reported bugs (that's just to wipe up the bugs your beta testers were too lazy to report and which will break your neck otherwise). Quality starts at the very beginning of everything. And it doesn't start in your typing hands. It starts in your brain.

To gain quality, you first need to accept that a program is not "good" if it merely does what it shall do. A program is "good" if you cannot crash it even if you want to. Let's make an example: I just downloaded the latest JDK, GlassFish and Eclipse (including the GlassFish plugin). Within one hour I managed to screw it up completely. All I did was use it. I did not at all have in mind to break it. It just happened. I just used the wizards to create an EAR, an EJB incl. EJB client and an app client, and deployed / redeployed a few times. Crash. Bang. Boom. Now it's so screwed up that neither will Eclipse be able to compile a freshly set up, empty project anymore, nor will GlassFish deploy X.EAR since it cannot find Y.EAR (which I undeployed before deleting the projects, but it seems to keep more than one redundant record of that which I cannot find). I tried to clean up, but gave up after another hour. And no, again, I had not done anything to crash it by intention. Is that "good" software? Well. Think of people wanting to crash it by intention (like hackers and unsatisfied employees) and then answer the question again. Don't get me wrong, I appreciate the work the GlassFish team does. I just don't think that from a quality perspective it is "good" software. It's just "cheap" software from a big stakeholder. But "good" is far different (and would be much more expensive).

So there we are at the key point. Quality is expensive. Yes, I know, everybody goes round and round telling you that quality saves money. What a hoax. Quality does not save money. Quality costs money. Lots of money. Everybody in the CAQ crowd ("Computer Aided Quality Assurance") talks about quality providing increased revenue in the end. That is wrong. Quality will typically lower revenue - at least as long as you are not obligated to correct failures at your own cost (which is only the case in some countries of the world, so one can guess how most global companies will act). In fact, even in countries where you are obligated to correct failures on your own (like Germany), big stakeholders just sit back and wait for the one customer that dares to complain. Will he get a fix for free? Nope. I tried for months to get one from Sun Microsystems, Microsoft, Hewlett-Packard, and others. And what did I get? If I paid for the fix, I would get it. So why should any of those vendors ever care for quality in the end, if they get paid to fix their own faults? In fact, they all talk about quality, but when it comes to organizing for quality, they don't care in the end. Maybe the boss cares, but the many managers do not. They care for short-term revenue, which is better the worse the product is. Sad, but true. BTW, the funny thing is that companies everybody knows for horrible bugs have improved their quality. It seems they have meanwhile understood the benefit of long-term revenue. But exactly those companies that provided quality for a long time are now producing more and more crap. Strange, isn't it?

The reason is simple, and let's say it clearly: off-shoring. I really don't want to blame globalization, but in fact it is nearly impossible to organize people located on different continents to work in a way that will result in something worth being called "quality". How should that ever work? Sorry to say, but Sun, Siemens and others did not move to India because American or German engineers were too dumb. They were too expensive. Or rather, it's not even that American or German engineers were too expensive; the companies just wanted to make even more money. Certainly everybody told them that quality would suffer. But who cares if customers have no choice? From a colleague I know for sure that his company (which is a global player) for more than five years only got pure crap from Bangalore. And nobody cared. Not even the customers. Because they had no choice. Sink or swim. Example: I wanted to buy a car radio that is able to switch from album to album while showing the cover ("Cover Flow"). Well, first I thought my local dealer was dumb when he told me that even with a 350 US$ specimen from Pioneer, Alpine or Kenwood, I have to go through plain text menus to select the album. No, this is not a hoax. This was Christmas 2010, and I have meanwhile received letters of confirmation from those vendors that the dealer is not dumb: you cannot switch to the next album with a single click. Definitively. That is not quality. That is crap by design. Ten years back, user experience was at the center of car radio development. Ten years back, development was located in Germany and the USA. Today development is in India and China, and I doubt that the majority of those engineers could actually afford to buy their own product (or only just). Who goes through menus while driving? They just didn't think. And why? Because their target is to make money. Not to provide quality. Just one example.
BTW, the excuse of the vendors was not "we are too dumb" but "car radio development these days is bound to some monetary boundaries". We know what that means. If they paid more for their engineers, they could just move back to the USA or Germany. "But it was a strategic decision and will not be changed, even if it will not result in any revenue at all." (quoting a development engineer).

So what to do? Go back to the roots! First, you have to reduce product development to a few core people, located in a first-world country. Those people have to be experts. They have to be well paid (the major reason why lots of Sun people quit after the Oracle takeover was reduced income, BTW). Those people have to have enough time, equipment and silence. Don't bother them with revenue, release dates, and so on. Just wait until the software is done. No need for thousands of engineers in India. No need for "taming the masses" things like XP, Scrum and Kanban. Just let them sit in a quiet chamber, wait, and do not disturb them. And neither ask your channel partners nor your customers what they want to get. Ask professors what the future looks like and do that. Don't go for hypes. Just provide the best possible quality. That must be your target, independent of the actual product. And: release late, release rarely. No need for another release every month. A release once a year is well enough. Don't beat around the bush at lots of conferences. No need for "community". Just work and present it when it's done. That's the way to get quality in the end. Everything else is just marketing show.

I know that 99% will hate me now for saying this simple truth. But that won't change the facts. Quality is not provided by asking the masses. It is provided by few experts having time and peace.



An overview of all my publications, presentations, code snippets or complete products can be found on my personal web site Head Crashing Informatics.

Markus KARG

Excited about cite Blog

Posted by Markus KARG Nov 12, 2010

There are times in a career when you get excited about having an experience for the first time. I well remember how I got excited about seeing my first self-coded shell node popping up in the Windows Explorer (a.k.a. a custom shell namespace). A bit of excitement I noticed when seeing my first reader's comment printed in iX. And I was really excited about receiving my first printed articles in JavaMagazin and iX. Or when I published my first online article on their well-known site. I really got excited when I was asked by the JAX management to speak there, which was an honour back then in the first days of that conference. I got excited when I was told that I was nominated as the CEO of an affiliate. And certainly I was excited when I was awarded the "GAP" (GlassFish Community Award).

So today once again was one of these days when I got excited about a new experience. This time it was about someone citing and discussing a blog article of mine (even if it was just in a blog rollup - hey, they picked up MY blog entry for that!). And it was not just someone. It was JavaWorld. So I was not only excited, but also proud. A bit, at least.

While obviously all of this is nothing compared to being nominated for an Oscar(R) or a Nobel Prize, it actually is really fun to see myself still getting excited. Ain't this part of that abstract idea of "worth living for"? I do think so. Excitement drives life (at least mine) and the development of people, companies, products and markets. So I do hope that I will get excited again soon. Let's see what comes next. :-)

Regards, Markus.

An overview of all my different publications and products can be found on my personal web site Head Crashing Informatics.