

One day I found myself in the situation that I had to write a unit test which checks whether my code is annotated in a particular way. I wondered how one could do that without writing an integration test that actually processes those annotations. My first idea was to use the Reflection API, which in fact worked, but did not look smart. What I really wanted was a Hamcrest matcher, since tests are much more readable that way. So before turning my code into a matcher (which is easy but has potential for code duplication with other projects), I did as I always do: I googled for some open source. And luckily, such a project existed: Hamcrest Reflection. The project, founded by Hamcrest author Nat PRYCE, was in a very beta state: almost done, not published, no downloads, no site, no build scripts, a few bugs; in short: rather dead. What a good chance for me to contribute to open source! So I just adopted the project to bring it back to life, allowing people to download and use it easily, and to make it pretty simple for others to contribute. Or in other words: I adopted a zombie project to reanimate it. Here is what I did:

1. Build and publish to Maven Central unchanged

I really quickly needed a working JAR in Maven Central to finish my own project. So I checked out the code from Nat's SVN repo, then checked that the original source worked correctly (which it did, but due to a wrong test assumption I had to @Ignore one test), packed a JAR by hand, wrote a minimalistic POM, and published everything on Maven Central. For that, I applied for another project hosting and synchronization at the Sonatype OSS Maven Repository (https://docs.sonatype.org/display/Repository/Sonatype+OSS+Maven+Repository+Usage+Guide), which syncs to Maven Central rather frequently. It is the fastest way to deploy open source to Maven Central, and it allows publishing even non-mavenized projects. In my case, I used mvn deploy:deploy-file to simply upload the manually built JAR, waited for some hours until the next sync was done, and was able to include Hamcrest Reflection in my own project by simply adding a Maven dependency entry. Yippee!

2. Request owner status from the original author

Next I wanted to pimp the web site and upload some fixes. For that I needed at least contributor status, but hey, this is a zombie project, so I directly proposed to take ownership. I asked Nat PRYCE in a concise email, and a few hours later I was the project owner. Cool.

3. Publish an initial web site

To bring a zombie back to life, people must learn what the stuff is good for, where to get it and how to use it. So the next step was to upload an initial web site containing really only that minimum of information. It contained the download link from Maven Central and the GAV coordinates, so everybody was able to obtain and use it. Great.

4. Start using the tracker

As soon as people start to use the software, they will find bugs or have ideas for new features. So they need a way to tell you. And typically you want to avoid telling lots of people "I already know this". Typically, this is done using an issue tracker: people can check it before posting another bug report. As Nat hosted the project on Google Code, there was a tracker set up already, but unused so far. So I just added the sole known bug to the tracker to prevent more and more people telling me about the same problem. Strike.

5. Providing a build script

Now that I had a public list of open bugs, I wanted to be able to merge fixes. For that, I needed to get rid of building JARs by hand. So I needed a build script. In my case, it was simple: as I had an initial POM already from the Maven Central upload, I just needed to extend it with a few lines so that it actually builds the JAR and runs the tests. The main changes were adding the existing source paths and dependencies, and after a few minutes I had mvn clean test running perfectly. Yes!

6. Mavenization

As Maven is a de-facto standard (and due to its declarative style my absolute favorite), I wanted to Mavenize the project, which effectively results in more people being able to contribute faster, thanks to convention over configuration. In fact this was as simple as moving files into the default places, getting rid of Nat's self-invented folder structure. Now everybody familiar with Maven would find things where expected. Nice.

7. Code cleanup

One thing which always happens as soon as you accept contributions from other people is formatting mismatch. Someone has set his line length to 80 while you work with 160, you both auto-format, and you're screwed, as the diff won't be concise to read anymore. That was the case here: Nat obviously had used blanks while I use tabs, and I love to have parameters and variables final where Nat did not care. So I used Eclipse's automatic code cleanup and formatting to get all the missing finals, this-qualifiers, tabs, and so on. This is a one-time burden for SVN, but after that, all commits will contain solely the real changes, as I am the only active committer and have saved an Eclipse profile for this project. Fine.

8. Tell people how to contribute

To wake up a zombie, you need the help of others, as a living project is hard to drive alone. But how shall people know how they can help? So, as the project was now utterly simple to build, I added the absolutely essential information about contributing to the single-page web site on Google Code. Now I just need to wait for contributions, and nobody will ever again need to ask me by email how he can help. Good.

9. Fixing bugs

The sole bug was rather simple to fix, so I removed the @Ignore and added the fix. Now the way is free for an initial automated release. Yay!

10. Releasing

Certainly it is not very smart to upload files by hand to Maven Central. In fact, I want to get this done simply by mvn release:prepare release:perform. This was the most simple thing to do: marking as SNAPSHOT, committing, running Maven, done. So here it is: Hamcrest Reflection 0.1-3, fully automatically built, signed and published on Maven Central.

11. What next?

Well, for me, the project does what I need, so I just wait for feature requests and bug reports. Or for someone who has great plans with it and wants to take over ownership and possibly drive more automation, like using a public CI service. For me, I think my job is done: Hamcrest Reflection turned from a half-baked zombie into a fully functional project infrastructure. Time to look out for the next zombie!

12. I want you for zombie reanimation!

Why not join me? If you know a zombie which you would like to see maintained again, just follow my example and adopt that project. It is simpler than you think. The complete steps above demanded just a few hours of my spare time. If I can do it, you can do it, too! :-)
As this JavaRanch article by Mark Spritzler proves, there seem to be some people that would like to have a generic visitor pattern, so I decided to open source mine (LGPL), which had been lying around on my disk for some time. Have fun using it; using it is as simple as linking to a single small artifact.
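For readers who have not used the pattern before, here is a minimal sketch of what a generic visitor looks like. This is my own illustration of the general idea, not the published artifact itself, and the element classes (Circle, Rectangle) are made up for the example; the types are shown together only for brevity.

// The visitor is generic in its result type R, so the same interface
// works whether a visit produces a String, a Double, or nothing (Void).
public interface Visitor<R> {
  R visit(Circle circle);
  R visit(Rectangle rectangle);
}

// Every visitable element simply hands itself to the visitor.
public interface Visitable {
  <R> R accept(Visitor<R> visitor);
}

public final class Circle implements Visitable {
  @Override public <R> R accept(final Visitor<R> visitor) {
    return visitor.visit(this);
  }
}

public final class Rectangle implements Visitable {
  @Override public <R> R accept(final Visitor<R> visitor) {
    return visitor.visit(this);
  }
}

A caller then implements Visitor<String> (or any other result type) once and gets compile-time safe double dispatch without instanceof cascades.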
Regards Markus
A complete history of all my publications can be found on my web site Head Crashing Informatics.
Swing is not dead, still. While a whole lot of evangelists try to talk it to death, it is still part of the JRE. While SWT is not, still. And while JavaFX is not, still. Despite all the hype and rumors. It is not even declared deprecated or obsolete. So in fact, there is no real alternative to Swing as long as the GUI must work solely with JRE means (I won't call AWT an alternative). And as long as that is the case, I (and lots of others) will stick with Swing. At least until JavaFX is completely open sourced and part of the official JRE specification. So if you're one of those "few" people (like me) that "still" work with Swing, you're probably also a user of JGoodies. Karsten Lentzsch's excellent open source libraries (developed here on java.net!) extend Swing with validation, beans binding, and other great features (things supposed to be part of Swing for years, but still not done in the JRE. Shame on you, Swing team!). I do not know the reason, but several years ago publishing the JGoodies libraries to Maven Central was stopped, so it was rather a pain to always find, download and link against the latest release. So I asked Karsten to push to Maven Central again, but he phoned me and said he fears the stress of dealing with GPG and Maven (which I can really understand, as GPG, and particularly its integration into Maven and the upload of certificates, is really a pain). OK, so I agreed to sign and upload in his name, as for me it makes no difference whether I upload into my own Nexus instance or into Sonatype's open source instance. So here it is: today I uploaded the current release of the libraries into The Central Repository. You can find all of the libraries (and for the first time, JGoodies Common!) by searching for the group ID "com.jgoodies". As long as I am using Swing, be assured that I will continue signing and uploading (unless Karsten takes over on his own). Promised!

A complete history of all my publications can be found on my web site Head Crashing Informatics.
I hate adding lots of huge multi-JAR all-purpose commons libraries to rather small projects! A huge footprint just for a single class is an unfortunate side effect of many popular frameworks, due to rather coarse-grained modularity. So I started to publish some of my commons (LGPL'ed) code as single-class, self-contained artifacts on The Maven Central Repository. You simply need a Range<T> class (as the JRE still doesn't contain one and Apache's is numeric-only)? No problem. Here it comes:

  <dependency>
    <groupId>eu.headcrashing.treasure-chest</groupId>
    <artifactId>RangeClass</artifactId>
    <version>[1.2.2, 2)</version>
  </dependency>

Have fun with it, and stay tuned for more of my Maven Central uploads... Next to come is JGoodies (yes, Swing is still not dead!). Regards Markus
An overview of all my publications can be found on my web site Head Crashing Informatics.
It eventually happened that I had to ensure that a class of mine is annotated in a particular way (I didn't want to bind the whole framework that uses the annotation just to check this single issue, as this was a unit test, not an integration test). So I wrote my own Hamcrest matcher with a few pieces of reflection inside. A short time later I noticed that Hamcrest co-owner Nat PRYCE had already done the same and published his stuff on Google Code in a rather disregarded project named hamcrest-reflection, so I decided to get rid of my own hack and use his library instead. Unfortunately it was not to be found in Maven Central so far, so there was a good opportunity to give a micro-contribution back to the Open Source Community: I contributed a POM and uploaded the library to oss.sonatype.org, so a few minutes later it was sync'ed to Maven Central. Now everybody can easily replace his own stuff (like I did) with a rather well-tested common piece of open source, simply by adding the following dependency to his own POM:
        <version>[0.1, 1)</version>
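For comparison, the kind of hand-rolled matcher this dependency replaces looks roughly like the following sketch. It is my own illustration built on plain Hamcrest and reflection, not the hamcrest-reflection API itself, and HasAnnotation / hasAnnotation are names I made up for the example:

import java.lang.annotation.Annotation;
import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;

// Usage in a test: assertThat(MyService.class, hasAnnotation(Deprecated.class));
public final class HasAnnotation extends TypeSafeMatcher<Class<?>> {
  private final Class<? extends Annotation> annotation;

  private HasAnnotation(final Class<? extends Annotation> annotation) {
    this.annotation = annotation;
  }

  public static Matcher<Class<?>> hasAnnotation(final Class<? extends Annotation> annotation) {
    return new HasAnnotation(annotation);
  }

  @Override public boolean matchesSafely(final Class<?> type) {
    return type.isAnnotationPresent(this.annotation);
  }

  @Override public void describeTo(final Description description) {
    description.appendText("a class annotated with ").appendText(this.annotation.getName());
  }
}

Writing this once is easy; maintaining a copy of it in every project is exactly the duplication the shared library avoids.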
This is a simple example of how open source helps to reduce the amount of your own code (and in turn the number of possible bugs), and how easy it is to contribute to open source (everybody can write a POM for a library and upload it to oss.sonatype.org). Support Maven Central: adopt an open source project and upload the artifacts to oss.sonatype.org!
A collection of all my publications can be found on my personal web site Head Crashing Informatics.

JAXB Singletons Made Easy

Posted by mkarg Jan 13, 2012

You want JAXB to unmarshal singletons? You already spent lots of time coding rather complex workarounds applying XmlAdapters and afterUnmarshal callbacks? The solution is astonishingly simple. Possibly so simple that nobody in the JAXB team ever thought it would be necessary to put the word "singleton" somewhere next to the JavaDocs for this... Anyways, here is the solution:

import javax.xml.bind.annotation.*;

// factoryMethod tells JAXB to call the named static method instead of a
// constructor, so unmarshalling always returns the one and only instance.
@XmlRootElement @XmlType(factoryMethod = "createMySingleton")
public class MySingleton {
  private MySingleton() {}
  @Override public boolean equals(final Object obj) {
    return obj instanceof MySingleton;
  }
  @Override public int hashCode() {
    return 0;
  }
  public static final MySingleton MY_SINGLETON = new MySingleton();
  private static MySingleton createMySingleton() {
    return MY_SINGLETON;
  }
}
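A quick way to convince yourself that it works (a minimal sketch; "mySingleton" is JAXB's default root element name derived from the class name, adjust it if you chose a different name in @XmlRootElement):

import java.io.StringReader;
import javax.xml.bind.JAXBContext;

public class MySingletonDemo {
  public static void main(final String[] args) throws Exception {
    final JAXBContext context = JAXBContext.newInstance(MySingleton.class);
    final Object unmarshalled = context.createUnmarshaller()
        .unmarshal(new StringReader("<mySingleton/>"));
    // The factory method hands out the one and only instance,
    // so this prints "true" (identity, not just equality):
    System.out.println(unmarshalled == MySingleton.MY_SINGLETON);
  }
}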

Hope this is of any use for you.

A collection of all my blog entries can be found on my web site Head Crashing Informatics.

JAX-RS 2.0: A first interim report

It has been a few months already since the expert group of JSR 339 started discussing the details of JAX-RS 2.0. The targets defined by spec lead Oracle are clear: Java EE 7 shall have a RESTful API that augments the current JAX-RS 1.1 API with (among other things) a Client API, HATEOAS support and asynchronous invocations. So what is the current state?

As three cornerstones of the RESTful team at Sun, Marc Hadley, Paul Sandoz and Roberto Chinnici, left the team right around the time of JAX-RS 2.0 project planning (Marc left before, Paul and Roberto soon after), Oracle needed some time to find adequate replacements. So the first thing that happened in the project was the installation of new spec leads, Santiago Pericas-Geertsen and Marek Potociar. It then took quite a few weeks until Oracle came up with a first discussion topic for the experts: the Client API.

There are several RESTful clients out there already; for example, there is one bundled with Jersey. The problem is that those do not implement a common normative API, so applications using them are necessarily bound to one particular JAX-RS implementation. For in-house projects this might be fine; for ISVs it is not, as it breaks the WORA principle. So the idea of gathering all stakeholders (read: vendors and key users) around a single desk to define an API that satisfies all of them is brilliant, but obviously not a job to be done during a lunch break. It took more weeks to get many of those onto the team, particularly those that had had some trouble with Sun / Oracle in the past. But in the end it is good to see that besides Oracle, RedHat and others, Apache is meanwhile back on board - if not officially, then at least physically.

The Client API was discussed very intensively and from a lot of different angles. There have been several proposals for how such an API should look, how it shall work, and so on. The discussion is not yet at its end. In fact, it is still going on, and it is good that it is going on: as long as all vendors keep talking to each other, chances are good that in the end users can expect to get something really useful and portable. I don't want to get into much detail as long as talks are running, but what people can expect is that the API will be rather fluent, will allow the implementing engine to work as efficiently (say: fast) as possible (e.g. by reusing parts of the calling chain that are possibly rather expensive to recreate each time), and will allow enabling features in a portable way (like local caching, if provided by the particular engine). Also, support for asynchronous execution and non-blocking calls is under discussion (and I expect both to definitively be in the final API in a portable way), which allows implementing much more efficient client applications compared to the current, proprietary clients. Looking back at the features we have discussed so far, and at the code one had to write in the past, the client API will become something to really look forward to.

An essential, but often missing, part of REST is HATEOAS (or: support for hypermedia). The discussions about that have just started, but are on a good way. Oracle came up with the interesting view that there are different types of links: those that change application state, and those that "just" describe structure. Both have to be supported in the best possible way, but as both depend on totally different circumstances, that way has to be different. Structural links are simply static: an order references contained items by some kind of embedded URL or header. This reference is to be expressed as a URL in the representational state, and the technology to provide that is under current discussion. State changes are more complex. They follow business rules, and the corresponding links do not link to other business objects but are actions that imply changes to the model's state (like adding more items). These require a more complex API, so the discussion about them has just started and will go on for a while. For both types of links no decision has been made yet, but it is clear that both should be representable either as embedded URLs or as Link headers. While the former is typical for XML entities, the latter is useful for any kind of entity, including binaries like images.

I will try to update this thread when the expert group comes up with notable news. Meanwhile, I'd be glad if you'd post your comments on the current situation. Is that going in the direction you need? Is there anything you want to tell the experts?


An overview of all my publications can be found on my web site Head Crashing Informatics (http://www.headcrashing.eu).

Sometimes I wonder why rather good technology suddenly dies. Does anybody remember InfoBus? JavaBeans? Swing? Java?

All of those had been brilliant technologies, enabling programmers to do things really easily. But one day, news about those technologies just stopped. People tend to say that those technologies "died". Well, what does that mean, and is it true?

Let's start with InfoBus. It made it pretty simple to forward messages within a software system (within a VM or across a network), and the trick was that sender and receiver didn't know each other. So there was just some "bus", and every component could send messages to the bus or react to messages found on the bus. Very useful, especially when dealing with plug-in extendable systems. But then, somebody decided that JMS and OSGi are hip and InfoBus is dead. Actually I didn't find a good reason for that, since JMS is just an API for drivers wrapping existing MOM products, while InfoBus was working on a much higher (and simpler to use) level. And OSGi is way too complex compared to InfoBus. So who decided that it is "dead"? I would love to use it today, but besides outdated web sites there seems to be no support anymore.

JavaBeans, same game. Also Swing. Very useful technology, just some day declared to be "dead" by someone apparently having the power to do so. In fact, we just produced a new product (QUIPSY Control Plan, see http://www.quipsy.de/en/caq-software/products/advanced-quality-planning/quipsy-cpl.html) built on Java 6, and both JavaBeans (especially property change listeners) and Swing (especially supported by JGoodies) had been extremely useful and made my day. I understand that OSGi might be a superior technology to JavaBeans, but it is far more complex (and I didn't need the features). Also I understand that Sun had the (possibly not so brilliant) idea to provide JavaFX as a competitor to Flash and Silverlight. But this shouldn't be an excuse to cut investments into Swing, as actually the mass of software is still fat client based, SWT is not a standard, so Swing is still used heavily. For years lots of effort had been spent on a new product line, which now officially is cancelled. Did Sun really not see that HTML 5 + JavaScript 5 + CSS 3 will be the death of that complete type of software? Both Adobe and Microsoft have also committed to strong support of HTML 5 instead of putting further effort into proprietary stuff. While Adobe and Microsoft both not just built a platform but also invested in a successful toolset, Sun was so busy with the platform itself that it forgot about the toolset (the same happened before with the Java IDE; that's why everybody is using Eclipse and only a minority prefers NetBeans, which just came out years too late). But while Adobe and Microsoft can now just add HTML 5 output drivers to their existing toolsets and drop their runtimes without any real harm, Oracle had to cancel JavaFX completely since they just do not have any widely used tools - and the future rich client platform itself will be HTML 5, no doubt about that. But that is no excuse to cut Swing development until HTML 5 is finally there. And that is no excuse to cut Swing at all, as even with HTML 5, for many years, possibly decades, there will be a huge mass of existing and still maintained and further developed Swing based applications.

So I hope that Oracle will do better in the future and understand what treasure they actually obtained from Sun. I expect Swing not to be replaced by, but to be safely extended with, technology obtained from the JavaFX line. And I expect Oracle to provide better tools to support people in using their technology. If Java doesn't learn where to fit into the future's technology stack made up of HTML and JavaScript, there will possibly soon be another "dead" technology: the one with the cup and Duke and its gravestone. And the reason will not be that the technology was "bad" or "not useful". The reason will solely be ignorance of what's going on outside. Needing several years to incorporate the core benefits provided by languages like Scala and Clojure unfortunately does not look very promising. I hope they make the turn in time.

An overview of all my publications can be found on my web site Head Crashing Informatics (http://www.headcrashing.eu).


Generic Range Class

Posted by mkarg Dec 31, 2010

Often code has a bad smell, and then it is time to replace custom lines with common patterns. Sometimes it even makes sense to replace a single line of code with a class that just wraps that single line (which actually increases code size), if that makes readers understand better what the code does. Unfortunately, such patterns are often publicly known but do not exist as ready-to-use classes in the JRE, so one needs to write them again and again. To spare people from typing them again and again, I typically upload mine to the web, so everybody is free to share them. One example is the Range pattern (see Martin FOWLER's web site for a deeper introduction to that pattern).

The Range Pattern

In short, a Range is something that is described by an upper and a lower bound. Sometimes a range is open, i.e. it has only one bound or even none at all. Typically, ranges are used to check single values against them, like "Is my birthday while I am on holidays?" (the holidays are a range described by the first and last date, the birthday is a single date). Or, "Is this car in the wanted price range (neither too crappy, i.e. too cheap, nor too expensive)?" (the price range is described by the lowest sensible price and the maximum amount you want to pay, the actual price of a specific car is a single price). So a range is a pattern, and it is independent of the value's type. It lends itself nicely to being implemented as a generic class, as it is hell to write that if...&&...||...&&... again and again, particularly with possibly open ranges (open ranges make things rather complex). Sad but true, there is no such class in the JRE, and Oracle has not picked up my RFE to add one so far.

The Generic Range<T> Class

So I just wrote a class that fills this gap. Since it is generic, you can safely use it with any information type you like: Integers, Dates, Prices, anything that implements the Comparable marker interface to indicate that it can be compared against a boundary. The class is able to deal with open ranges by passing null as a limit, but it doesn't allow checking a null value against its bounds (since the result wouldn't be intuitive). If you need a Range class for your GPL'ed project, just download it from my web site: http://www.headcrashing.eu.
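A minimal usage sketch, relying on the constructor and methods described here and in the version history below; the import for the Range class is omitted, as its package depends on where you put the downloaded file:

import java.util.Date;

public class RangeDemo {
  public static void main(final String[] args) {
    // Closed range: both bounds given.
    final Range<Integer> price = new Range<Integer>(100, 500);
    System.out.println(price.contains(250)); // true
    System.out.println(price.contains(999)); // false

    // Open range: passing null means "no bound on this side".
    final Range<Date> holidays = new Range<Date>(new Date(), null);
    System.out.println(holidays.contains(new Date()));

    // Range-against-range checks (contributed in 1.1.0 and 1.2.0):
    final Range<Integer> budget = new Range<Integer>(0, 1000);
    System.out.println(budget.contains(price)); // price lies completely within budget
    System.out.println(budget.overlaps(price)); // the two ranges intersect
  }
}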

What Next?

If you have any ideas what the Range class should be able to do (like being Comparable on its own, intersecting or merging other Range instances, etc.), just add a comment. If you have a finished algorithm or test case, that would be even better. I will update the class, so everybody can share the benefit.



  1. Version 1.0.1: Improved JavaDocs regarding open ranges.
  2. Version 1.0.2: Fixed bug with Range<T>.contains(T) returning wrong results due to the assumption that Comparable<T>.compareTo(T) would return only -1 and 1, while it actually can return -N and N.
  3. Version 1.1.0: Merged an anonymous author's contribution Range<T>.contains(Range<T>) to check whether a range is within another range. This is useful e.g. if you'd like to know whether a meeting is within someone's free time in a calendar (both the meeting entry and the free-time entry have a Range<Date>). Also uploaded a unit test for the Range<T> class to support more contributions.
  4. Version 1.2.0: Merged an anonymous author's contribution Range<T>.overlaps(Range<T>) to check whether a range overlaps (intersects, touches) another range. This is useful e.g. if you'd like to know whether a meeting will run over the start of another meeting in a calendar (the meeting entries in the calendar have Range<Date>s).
  5. Version 1.2.1: Fixed bug with Range<T>(T, T) not reporting interchanged limits due to the assumption that Comparable<T>.compareTo(T) would return only -1 and 1, while it actually can return -N and N.
  6. Version 1.2.2: Uploaded to Maven Central, so using the class is as simple as adding the following dependency to a project's POM: <dependency><groupId>eu.headcrashing.treasure-chest</groupId><artifactId>RangeClass</artifactId><version>[1.2.2, 2)</version></dependency>

An overview of all my publications, presentations, code snippets or complete products, can be found on my personal web site Head Crashing Informatics.


Meanwhile I am looking back on more than 25 years of programming, more than a decade of which I spent in a very sensitive area where quality (in the sense of zero failures) plays a big role. So call me "sensitive" about quality. For long years "we" (i.e. developers) had to work hard using simple command line tools like vi etc., but meanwhile there are great, even free, tools making our lives much easier. So one should think that the freed-up time was spent on quality. The reverse is the case. The more I look at professional software products, the more I see, sorry to be rude, simply: crap!

The question is: why is that? Having some insight into several bigger and smaller stakeholders, including my own participation in some Open Source projects led by myself, Sun, Oracle, Bull, Microsoft, and some more, and looking at the work we are doing in my company and its affiliates and subsidiaries, gives me some feeling for where the root of the evil is located: in the organization of the project.

Gee! I can tell you how I hated attending those boring, mandatory "business organization" classes at college back in the 90ies. But hey, looking back from now I need to say: those profs were just right! Full stop. Quality is directly related to your project's organization (which is the way you organize people's way of working, not the way you imagine the resulting artifacts). If you don't organize for quality, you won't get quality. Think you have already organized for quality? Let's see...: You can write thousands of tests. You can do pair programming. You can do Scrum. You can do Kanban. You won't get quality from that. Throw all that away. It has nothing to do with the quality you'll actually get in the end. You will only get quality in the end if you actively plan quality from the start. That means you have to enact measures which will result in quality being actively produced while designing. That's the reason why systems like ISO 9000 etc. enforce FMEA (Failure Mode and Effects Analysis) and other methodologies to be applied at the start and regularly throughout the complete project, instead of enforcing just "testing". Not just doing TDD while writing code. That's far too late. Not while testing (that's just to find the bugs you already made). Not during the beta phase (that's to find the bugs you were too lazy to find on your own). Not by fixing reported bugs (that's just to wipe up the bugs your beta testers had been too lazy to report and which will break your neck otherwise). Quality starts at the very beginning of everything. And it doesn't start in your typing hands. It starts in your brain.

To gain quality, you first need to accept that a program is not "good" merely because it does what it shall do. A program is "good" if you cannot crash it even if you want to. Let's make an example: I just downloaded the latest JDK, GlassFish and Eclipse (including the GlassFish plugin). Within one hour I managed to screw it up completely. All I did was use it. I did not at all have in mind to break it. It just happened. I just used the wizards to create an EAR, an EJB incl. EJB client, an app client, deployed / redeployed a few times. Crash. Bang. Boom. Now it's so screwed up that Eclipse will not even compile a freshly set up, empty project anymore, and GlassFish will not deploy X.EAR since it cannot find Y.EAR (which I undeployed before deleting the projects, but it seems to keep more than one redundant piece of information about that, which I cannot find). I tried to clean up, but gave up after another hour. And no, again, I had not done anything to crash it intentionally. Is that "good" software? Well. Think of people wanting to crash it by intention (like hackers and unsatisfied employees) and then answer that question. Don't get me wrong, I appreciate the work the GlassFish team does. I just don't think that from a quality perspective it is "good" software. It's just "cheap" software from a big stakeholder. But "good" is far different (and would be much more expensive).

So there we are at the key point. Quality is expensive. Yes, I know, everybody goes around telling you that quality saves money. What a hoax. Quality does not save money. Quality costs money. Lots of money. Everybody in the CAQ crowd ("Computer Aided Quality Assurance") talks about quality providing increased revenue in the end. That is wrong. Quality will typically lower revenue - at least as long as you are not obligated to correct failures at your own cost (which is only the case in some countries of the world, so one can guess how the most global companies will act). In fact, even in countries where you are obligated to correct failures at your own cost (like Germany), big stakeholders just sit back and wait for the one customer that dares to complain. Will he get a fix for free? Nope. I tried for months to get one from Sun Microsystems, Microsoft, Hewlett-Packard, and others. And what did I get? If I paid for the fix, I would get it. So why should any of those vendors ever care for quality in the end, if they get paid to fix their own faults? In fact, they all talk about quality, but when it comes to organizing for quality, they don't care in the end. Maybe the boss cares, but the many managers do not. They care for short term revenue, which is better the worse the product is. Sad, but true. BTW, the funny thing is that companies that everybody knows for horrible bugs have improved quality. It seems they have meanwhile understood the benefit of long term revenue. But exactly those companies that provided quality for a long time are now producing more and more crap. Strange, isn't it?

The reason is simple, and let's state it clearly: off-shoring. I really don't want to blame globalization, but in fact it is nearly impossible to organize people located on different continents to work in a way that will result in something that is worth being called "quality". How should that ever work? Sorry to say it, but Sun, Siemens and others did not move to India because American or German engineers were too dumb. They were too expensive. Or rather, it is not even that American or German engineers were too expensive: the companies just wanted to make even more money. Certainly everybody told them that quality would be affected. But who cares if customers have no choice? From a colleague I know for sure that his company (which is a global player) for more than five years only got pure crap from Bangalore. And nobody cared. Not even the customers. Because they had no choice. Sink or swim. Example: I wanted to buy a car radio that is able to switch from album to album while showing the cover ("Cover Flow"). Well, first I thought my local dealer was dumb when he told me that even with a 350 US$ specimen from Pioneer, Alpine or Kenwood, I have to go through plain text menus to select the album. No, this is not a hoax. This was Christmas 2010, and I have meanwhile received letters of confirmation from those vendors that the dealer is not dumb: you cannot switch to the next album with a single click. Definitively. That is not quality. That is crap by design. Ten years ago the user experience was at the center of car radio development. Ten years ago the development was located in Germany and the USA. Today the development is in India and China, and I doubt that the majority of those engineers could actually afford to buy their own product (or only just barely). Who goes through menus while driving? They just didn't think. And why? Because their target is to make money. Not to provide quality. Just one example. BTW, the excuse of the vendors was not "we are too dumb" but "car radio development these days is bound to some monetary boundaries". We know what that means. If they paid more for their engineers, they could just move back to the USA or Germany. "But it was a strategic decision and will not get changed, even if it will not result in any revenue at all." (quoting a development engineer).

So what to do? Go back to the roots! First, you have to reduce product development to a few core people, located in a first world country. Those people have to be experts. They have to be well paid (the major reason why lots of Sun people quit once Oracle took over was reduced income, BTW). Those people have to have enough time, equipment and silence. Don't bother them with revenue, release dates, and so on. Just wait until the software is done. No need for thousands of engineers in India. No need for "taming the masses" things like XP, Scrum and Kanban. Just let them sit in a quiet chamber, wait, and do not disturb them. And do not ask your channel partners or customers what they want to get. Ask professors what the future looks like and do that. Don't go for hypes. Just provide the best possible quality. That must be your target, independent of the actual product. And: release late, release rarely. No need for another release every month. A release once a year is well enough. Don't beat around the bush at lots of conferences. No need for "community". Just work and present it when it's done. That's the way to get quality in the end. Everything else is just a marketing show.

I know that 99% will hate me now for saying this simple truth. But that won't change the facts. Quality is not provided by asking the masses. It is provided by a few experts having time and peace.



An overview of all my publications, presentations, code snippets or complete products can be found on my personal web site Head Crashing Informatics.


After more than a decade in the Java universe, today I finally had enough of remembering where my executable JARs are located and typing all the lengthy path names, so I taught Windows to deal with Java archives the same way as it deals with its native executables EXE and CMD. The trick is so simple that I actually do not understand why the JRE installer isn't applying it automatically to prevent everybody from reinventing the wheel.

Nobody wants to type so much

Two of my most needed programs when coding on Windows, even in the Eclipse era, are the Command Line Interface (CLI) and the text editor. As programmers are always in a hurry, and as I am faster at typing than at clicking, I typically start those by entering their respective binary image names at the Start menu's run line (thanks, Microsoft, for not completely dropping it even in Windows 7). After decades of Windows, I know by now that those images are "%WINDIR%\System32\cmd.exe" and "%WINDIR%\System32\notepad.exe". In fact, this was not always the case. I can remember that before Windows NT it was not cmd.exe but COMMAND.COM (which actually still exists even on Windows 7). But actually, I never typed the complete path, and I typically do not type the extension. Actually I don't care where to find the program, and I don't care whether the image is an EXE, COM, CMD or whatever. Windows knows where to find them, even if I am in a different current working directory, even if I omit the file extension. So now I want that magic for JARs, too. I want my Java applications to be found and started without giving the absolute path and without giving the .jar file extension.

Getting rid of the file extension

For this to work, the first step is to tell Windows to consider a particular file extension as searchable. This is as easy as adding the extension ".jar" to the list of searchable extensions, which is stored in the PATHEXT environment variable. At the command line, this can be done temporarily (i.e. for just the current CLI session) using SET PATHEXT=%PATHEXT%;.JAR, but to make it persistent for future uses, it makes sense to instead set it in the Windows registry (e.g. using the "Advanced system settings" GUI). That's half the battle already. Now Windows knows that you will omit this extension from now on.

Getting rid of the path

The second step is to tell Windows to consider a particular path as searchable. This is as easy as appending the location to the PATH environment variable, which can be done in several ways, e.g. at the command line for a temporary change using SET PATH=%PATH%;C:\WhereMyJarIsLocated, or using the aforementioned GUI. That's it. Now Windows will search those places for my JARs. Great.

No magic included, unfortunately

As I wrote, it's just two little tweaks. No tricks. No magic. All the rest is done by the operating system's intrinsic functionality, plus a trick applied by the JRE installer: the installer already was kind enough to register the "javaw.exe" program as the executor for JAR files. So Windows knows what to do with our JAR once it is found. In the very early days of Java, one had to do that manually, which was not complicated, but just one more GUI click to do.

But the world is not perfect yet. When running our executable JAR, what Windows actually executes is not our JAR but the Java VM launcher (i.e. %WINDIR%\System32\javaw.exe). That launcher is interpreting (or compiling and then executing) the content of our JAR (for those who forgot). So the operating system's list of processes does not contain our JAR. It just contains javaw.exe, once for each started executable JAR. This is rather annoying, as one cannot easily find out which PID in fact is executing which JAR. You certainly can configure Task Manager to display the complete (and rather lengthy) command line invoked to run the process, but it is just not as smart as with real EXEs, which directly appear as the process name. Sad but true, there is no simple help for this. Using a hard or soft link is not enough to rename the process (it will still show the target's name, not the source's name). In fact, to solve that, one would have to write a wrapper (or copy javaw.exe, or use one of those fancy wrappers available on the web). It would just be nice if javaw created such a copy on the fly and handed over execution to the newly created copy, but I doubt that Oracle will do that any time soon...

Regards, Markus.

An overview of all my different publications and products can be found on my personal web site Head Crashing Informatics (http://www.headcrashing.eu).



Excited about cite

Posted by mkarg Nov 12, 2010

There are times in a career when you get excited about having an experience for the first time. I well remember how I got excited about seeing my first self-coded shell node popping up in the Windows Explorer (a.k.a. a custom shell namespace). I noticed a bit of excitement when seeing my first reader's comment printed in iX. And I was really excited about receiving my first printed articles in JavaMagazin and iX. Or when heise.de published my first online article on their well-known site. I really got excited when I was asked by the JAX management to speak there, which was an honour back then in the first days of that conference. I got excited when I was told that I was nominated as the CEO of an affiliate. And certainly I was excited when I was awarded the "GAP" (GlassFish Community Award).

So today once again was one of those days when I got excited about a new experience. This time, it was about someone citing and discussing a blog article of mine (even if it was just in a blog rollup. Hey, they picked up MY blog entry for that!). And it was not just someone. It was JavaWorld. So I was not only excited, but also proud. A bit, at least.

While obviously all of this is nothing compared to being nominated for an Oscar(R) or a Nobel Prize, it is actually really fun to see myself still getting excited. Ain't this part of that abstract idea of "worth living for"? I do think so. Excitement drives life (at least mine) and the development of people, companies, products and markets. So I do hope that I will get excited again soon. Let's see what comes next. :-)

Regards, Markus.

An overview of all my different publications and products can be found on my personal web site Head Crashing Informatics (http://www.headcrashing.eu).

When the iPhone came to market, Sun Microsystems announced that there would soon be Java for the iPhone. They were stopped by Apple's licence terms, which banned both interpreted languages and code written in languages other than C, C++, Objective-C and JavaScript. Lately I read in the news that Apple changed the licence terms, and I found a promising statement on Apple's web site, so interpreted code is now actually allowed. So when will Oracle deliver Java SE for the iPhone? And is this Apple's surrender to the rise of Android's market share?

A complete overview of all my postings can be found on my web site Head Crashing Informatics (http://www.headcrashing.eu).
