
What it means to speak German fluently and to be capable of C++

Several years ago one of our key coders moved from the south of Germany (where our HQ is located in the Black Forest) to the cold and rainy north, so we had to find a suitable substitute. After screening lots of applications, we picked a few candidates to invite for an interview. One application declared the candidate's ability to speak German and to program in C++, so she was the first one invited into our office to defend her claim of what she would be able to do.

You must lie, sometimes.

We were somewhat shocked to find that the person actually spoke Russian fluently, but her German was rather, well, a mess. It was hard to interview her even about core details like where she came from and what she graduated in without falling back to English or drawing sketches. Her German was actually so bad that she had problems understanding many of our simple questions, and we had problems understanding her answers. Asked why she had declared herself able to speak German, she told us that she understood that without this declaration it would obviously have been impossible to get invited for the interview (which in fact was true, and very frank of her), and that the fact that we understood this justification was actual proof that she was able to speak German - at least somewhat. As we couldn't help but agree on both points, we hired her, and she has been working for us as a programmer for several years now. It seems one must lie (at least in details) sometimes.

Good enough for daily use.

Years later I passed her desk by chance and looked at her screen. I noticed that her C++ code actually looked very much like well-structured C, but less like plus-plus. So over a cup of coffee I started a discussion about several aspects of the difference between C and C++ to find out the reason, as I had always thought that she was much better at C++ than I am, since I turned to Java more than a decade ago and left C++ behind. It turned out that we had never noticed that she in fact knew only some of C++'s features and applied just those few regularly, while others were not known to her at all or never got used because she had misunderstood them ("seldomly" used things, as she called them, like exception handling or reference types). From an OOP point of view, her code was just a complete mess and showed clearly that she did not actually understand the key idea of OOP; what she did was C-with-classes, but not C++. So I asked her why she never told us that she knew C++ only a bit, so we could have sent her to a C++ training. She said that she actually didn't know that she only knew half of C++, and that what she was doing every day was not OOP. When I asked her whether she hadn't noticed that her code looks completely different from mine, she answered that she did notice that mine is much easier to read and much more stable and applies many more patterns and so on, but she just never had the idea to copy my style, as she thought what she did was "good enough to get the everyday tasks done". I didn't know what to answer, actually. It seems one need not really understand something completely to get the daily work done more or less successfully.

What I learned from this anecdote is that in the real world you sometimes must lie about what you are or what you can do, and that it is not an actual problem to declare things that you cannot fulfill. You possibly won't reach your target without lying. And it often is not necessary to actually be able to do the declared things to reach the target. So false declarations are more common than one might think or confess, and we are all used to them and live with them without a real problem.

A Pack of RESTful Lies

These days lots of applications claim to be RESTful. RESTfulness is the overall buzzword for virtually every new service. If you write something that can be connected to another computer, call it RESTful. This is at least what happens "out there" every day. Sometimes without knowing that it is untrue (thanks to marketing teams applying the abovementioned idea of "Somewhat German is German!"). But more often knowing full well that it is at least not completely true (thanks to engineering teams applying the abovementioned idea of "Good enough for everyday use."). What a pack of RESTful lies!

Certainly, marketing is a hard job, and telling "not the whole truth" is part of it. Whether this implies that it is useful to put exactly the "REST" badge on your package while in fact you do nothing else but simple GET and PUT is doubtful. Your customers will learn some day, and maybe they might opt to cancel the contract (and justifiably so!) in case they relied on real RESTfulness, since they actually need it. What have you gained then but costs? At least my company has so far abstained from using the word "REST" in marketing overhastily, due to this possible risk induced by the technical fact of missing HATEOAS.

Also, yes, it is certainly very pragmatic to only implement those parts of an idea that you need right now. But possibly tomorrow you will notice that your system would be much easier to finish if it had been completely RESTful right from the start (possibly because a new use case has to be implemented)? I often saw people collapse under scarce time trying to add the missing bricks to a nearly finished application after noticing that now, "unforeseeably", the need came up to have the rest of REST, too. Damn, if only anyone had ever told them before! Well... didn't Fielding do exactly that back in the year 2000? Yes, he did. You just won't listen.

So while it might look good at first glimpse to use the word "RESTful" even if it is not completely true, doing so will imply problems. Whether those are severe or not is up to your personal decision. For me, I would never call a system "RESTful" if it actually is so only in part.

The Lesser RESTful Application

At this point I'd like to point out that there was an interesting discussion on the rest-discuss mailing list on Yahoo this week about how to title such applications with another buzzword instead of "RESTful", more or less for the sole sake of marketing, as nobody actually would know the actual technical content of such a "less-RESTful" application. I doubt that people will agree upon another buzzword as smart as "RESTful", or at least agree not to write "RESTful" on their applications anymore, as it is just too tempting for marketing guys to stick a well-known badge on the package instead of a less-well-known-but-technically-correct one. Let's see what they finally come up with.

Resist the Beginnings!

Also there was another discussion on users@jersey.dev.java.net this week, kicked off by myself, about the fact that a proposed Hypermedia API extension to JAX-RS (Java API for RESTful Web Services) possibly allows people to write less-RESTful applications or (in theory) non-RESTful ones. As it is the Java API for REST (and not just some API for REST written in Java), I opted not to adopt such an extension, as people would write RPC-style applications and justify the REST badge on them by the fact that they were written with JAX-RS, which "by definition" results in RESTfulness. The discussion moved on to the more general question whether the Java API for RESTful Web Services shall enforce RESTful style, and I was more or less horrified by the "official declaration" of the spec leads' goals: their target is "just" to support RESTful style if wanted by the user of the library, but they won't enforce it. Moreover, they like to move the responsibility, and thus the decision whether or not an application built on this framework is RESTful, to the developer using their library, and they like the idea of actively supporting non-RESTful approaches. Wow. So in the end JAX-RS develops into a "Servlets API on steroids", far beyond the core definition of REST. So what comes next, an OpenGL API that allows you to not do graphics at all? If I had that freedom in interpreting a project's goal at my employer, I would be really, really glad. In fact, he would stop the project since I missed the target.

Since I am not the project lead of JAX-RS but just some small committer, and since there are more people that love buzzwords and pragmatic solutions much more than real RESTfulness, we have to accept this "open interpretation" of the specification title. Whether this is sensible, will gain a real benefit, or possibly has an actual negative side effect as I suppose, the future will show. Currently it seems I am the only one having a problem with putting the REST badge on such an API, but let's see what happens. Maybe my worries are all wrong, and all programmers, especially the RESTful beginners, will completely and correctly understand that they have to resist the beginnings on their own while the API provides no guidance in the RESTful direction. If not, we'll possibly soon find another JSR on the heap of APIs that started with a good idea but just went in the wrong direction.


Note: An overview of all publications, including blogs and printed magazine articles, can be found on my web site Head Crashing Informatics.

RESTless about RESTful

These days there is much discussion about REST and HATEOAS, and many people feel urged to reinterpret what HATEOAS means, what Roy Fielding's often-cited dissertation allegedly says in their understanding, and how HATEOAS should therefore be implemented. While at first I felt amused by this "dispute about nothing" (just ask Mr Fielding if you don't understand what his dissertation tells us; no need to guess), the longer I follow those (in part ridiculously wrong) assumptions and myths, the more I feel urged to stop them and shout: "Guys, before discussing your ideas, first learn what Mr Fielding's idea was!" There is nothing to interpret or construe. His words are clear and unambiguous. Just read them, if necessary twice. They tell us everything we would like to know. Really.

Flashback

In his dissertation, Roy Thomas Fielding explained RESTful architecture (actually it seems that it even introduced the word REST), including hypermedia as the engine of application state (HATEOAS):

"The next control state of an application residesin the representation of the firstrequested resource, … The application state is controlled and stored by the user agent… anticipate changes to that state (e.g., link maps and prefetching of representations) … The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations."

Okay, so what does that mean, and why is most of what is currently discussed as proposed implementations of HATEOAS wrong?

To understand Fielding's explanation above, we have to remember what his dissertation was about. Fielding was a contributor to the HTTP standard. In his research he discovered that the method of operation of the world wide web can be abstracted into a general architecture he called REpresentational State Transfer (REST). The thesis behind REST is: since the WWW is working and scaling so perfectly, while REST is the WWW's architecture, REST itself will work and scale well in other domains too, possibly outside of the WWW. In fact he is right, which is why we all are so crazy about REST these days. In detail he identified four key constraints that REST comprises:

"REST is defined by four interface constraints:identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state."

Speaking in techniques of the WWW (which is the implementation mostly used to apply the abstract idea of REST to real, physical applications), those four core ideas are the combination of: using URIs to transfer MIME-typed documents with GET / PUT / POST / DELETE (just as their counterparts SELECT / UPDATE / INSERT / DELETE apply the same idea to SQL) and …? It is this last ellipsis that this blog entry (and HATEOAS) is about.

  WWW := URI + HTTP + MIME + Hypermedia
 

What are HATEOAS and Hypermedia?

HATEOAS is the short form of "hypermedia as the engine of application state", as we learn from the dissertation. But what does it mean? Let's start with "state". "State" means the current status of the sum of information found in the system at a particular point in time. For example, if we have an order, it will have (at least) two states: either it is sent to the customer, or it is not (certainly "state" is neither restricted to a single variable nor to a particular type like boolean; typically state is a complex construct of several pieces of information). So what is an "engine of state"? As the example shows, most objects typically do not have statically one state for an infinite time, but will change their state from time to time. An order was not sent to the customer, then got sent, so its new state is "sent". It transitioned its state due to an action. The move from one state to another is called a "state transition", and the part of the system that controls the state transitions (i. e. applies a rule set defining which action will result in which state transition, e. g. "if current state is 'new' and action is 'wear' then new state is 'used'") is called a state engine.
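The rule-set idea above can be sketched in a few lines of Java. The states and actions here are invented for illustration, not taken from any real order system:

```java
import java.util.HashMap;
import java.util.Map;

// A minimal state engine: a rule set mapping (current state, action)
// to the next state, as described in the text above.
public class OrderStateEngine {
    public enum State { NEW, SENT, USED }

    private static final Map<String, State> RULES = new HashMap<>();
    static {
        // "if current state is 'new' and action is 'send' then new state is 'sent'"
        RULES.put(State.NEW + ":send", State.SENT);
        RULES.put(State.NEW + ":wear", State.USED);
    }

    /** Applies the rule set; rejects transitions no rule allows. */
    public static State next(State current, String action) {
        State target = RULES.get(current + ":" + action);
        if (target == null)
            throw new IllegalStateException("No transition for " + current + " + " + action);
        return target;
    }
}
```

The point of HATEOAS, as discussed below, is where this engine lives and where the permitted transitions are advertised - not how the rule table itself is coded.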

So now that we know what a state engine is, let's look at hypermedia:

Hypermedia is used as a logical extension of the term hypertext in which graphics, audio, video, plain text and hyperlinks intertwine to create a generally non-linear medium of information. (from WIKIPEDIA)

Or in clear words: if two or more documents are related by a link, this medium is called hypermedia. But this is not all that hypermedia in the narrower sense of the WWW means, so let's once more cite Mr Fielding's dissertation:

"The model application is therefore an engine that moves from one state to the next by examining and choosing from among the alternative state transitions in the current set of representations."

The bold "in" is essential to understand what actually is HATEOAS and what is not: only if the alternative state transitions are found in the representations (i. e. in the MIME-typed documents that actually render state, e. g. XML, HTML or PDF documents, audio files or video streams) - but not aside or just near them - then HATEOAS is true. Why? Because exactly that is what the word HATEOAS itself tells:

  HATEOAS := Hypermedia as the engine of application state

Hypermedia as the engine of state. It is neither "the transport protocol as the engine of state", nor is it "something beside the representation as the engine of state". It is clearly hypermedia and nothing else, and even more clearly it is exactly the representation. And the representation is only the MIME-typed document, but not any headers, URIs, parameters, or whatever people currently are discussing. Unfortunately Mr Fielding used two separate sentences to explain the concept. It would be much clearer and free of discussion if he had written what he actually meant to say:

  HATEOAS := hypermedia documents as the engine of state

Why didn't he do so? Let's again check his background: he analyzed the WWW, which comprises mostly HTML files. HTML files are hypermedia documents. They contain links (relations) to other documents via <a> elements. I really can understand that he never would have imagined that anybody would ever have the really strange idea to cut the links out of the hypermedia documents, store them elsewhere, and still call that document hypermedia. It would break the whole idea of the WWW if you removed all <a> elements from all HTML files and stored them somewhere else. But that is exactly what people currently are discussing (not for HTML but for different formats)! Moreover, I suspect that he was just used to calling even one single HTML file hypermedia due to its theoretical possibility to point to a second one by an <a> element. Or in other words: Fielding's "hypermedia" is in fact not any different part of the system, but solely the document. This is why he wrote in the above quote explicitly that the state transitions are found in the representations. It was just absolutely clear that it makes no sense to have the links outside, as that is not common in the WWW.

Update: BTW, yes, it is RESTful to put links in the "Link" header when using HTTP, as Mr Fielding told me. But don't expect any REST client to take care of that; few of them will, unless there is a good reason (like the entity being an image), just as a browser ignores any <LINK>s in HTML unless it is a relation it likes to handle (like RSS feeds). So it is valid, but of potential risk, to do so.

What to learn from Mr Fielding?

There is only and exactly one valid implementation of HATEOAS: Having links inside of the document (or, with care, in the "Link" HTTP header, if HTTP is used).

Just like HTML files link to each other with <a> tags, or just like XML files link to each other with XLink, all state transfer has to be done solely as a reaction to a link found inside of a representation (i. e. inside of a MIME-typed document). Any other technique, like passing the state, possible actions or links outside of the representation, e. g. in HTTP headers etc., is by definition not HATEOAS.

Moreover, the weird idea of having explicit URIs solely for state transition, without passing any document in or getting any document back, is not HATEOAS. Looking once more at the concept of the WWW, there typically are no such "pure action links". The typical state transfer in HTML is done either by passing a modified version of the document, or by passing a form containing information to be merged into the new state. But never will it be HATEOAS to invoke a link without passing new state. Why?

Once more, the dissertation provides an absolutely clear and unambiguous definition:

"The application state is controlled and stored by the user agent …  freeing the server from the scalability problems of storing state "

When you try to execute a typical "solution" that is currently widely discussed,

  
  POST http://resource/id/the-action-to-execute

then this asks the server to do the state transition. This is in diametrical contrast to the above quote from the dissertation, which clearly says that it is not the server but solely the client that stores and controls state, and thus it is explicitly not HATEOAS. It just makes no sense to call a special URI to trigger a server-side action if the client has already switched to the new state. And it shall be the client that does the switch, not the server. You can just call the "normal" URI of the resource and pass the already state-transitioned version of the resource. By doing so, the server will implicitly learn the new state. There is no need to tell the server which action was responsible for that (if that is needed from a business aspect, then it must be part of the uploaded document, not of the used transfer protocol)!

So how to do HATEOAS the right way?

In short: let your client read the possible actions as URIs out of the received document, set up the modified document, and then invoke those links.

Not clear yet? Let's make an example. We have an order that is not shipped, and now we want to issue shipping. So the client GETs the order (e. g. as an XML document) and inspects its content. Inside, it finds XLinks for various actions. One of them is for shipping. The client puts together the needed information for shipping and POSTs that to the XLink's target. In RESTful terms, what we do is create a new instance of a shipping instruction by uploading an XML document containing shipping details (typically containing the order document itself as a child element, or more simply its URI as a reference). How does our client know which of the contained XLink URIs is the one for shipping? This is a matter of definition. XLink, for example, comes with the role attribute, so we could define that it must be the one with the role set to "ship".
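A minimal sketch of that client-side lookup step, using the standard XLink namespace. The order document and its element names are invented for illustration:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Finds the state-transition link inside the representation itself,
// as HATEOAS demands: the client scans the received document for the
// XLink whose role is "ship" and takes its href as the POST target.
public class ShipLinkFinder {
    static final String XLINK_NS = "http://www.w3.org/1999/xlink";

    /** Returns the href of the first XLink with role "ship", or null. */
    public static String findShipLink(String orderXml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
                .parse(new InputSource(new StringReader(orderXml)));
        NodeList all = doc.getElementsByTagName("*");
        for (int i = 0; i < all.getLength(); i++) {
            Element e = (Element) all.item(i);
            if ("ship".equals(e.getAttributeNS(XLINK_NS, "role")))
                return e.getAttributeNS(XLINK_NS, "href");
        }
        return null;
    }
}
```

The client would then POST the shipping details to the returned URI; the actual HTTP call is left out here, as the HATEOAS-relevant part is reading the link out of the representation.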

It is just similar in case your client is a WWW browser and your document format is HTML: you download the order as an HTML file containing <form> links. You click on the button that has the title "ship", which performs an action like PUT, containing the shipping details you filled in manually. How did you know which button to press? You don't. You just guessed well, or you had clear instructions.

So to come back to XML, it is just a question of clear instructions, which means definitions: your client just needs to know that the XLink to search for is called "ship". There is no technical solution; at least this one piece of information must exist. If a man does not know that the English word for sending something away is "ship", he wouldn't find the button either.

And other media types? Well, what will a man do when receiving a media file that he has no player for? Nothing! The same holds for machines. The client needs to be aware of the media types used. It is impossible for the client machine to deal with unknown media types, just as it is impossible for the browser. A good idea would be to use an abstract API that allows plugging in media handlers, just as browsers do.
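Such a pluggable media-handler API could be sketched like this; the interface and method names are my own invention, not any real library's API:

```java
import java.util.HashMap;
import java.util.Map;

// A pluggable media-handler registry: the client registers one handler
// per media type and dispatches on the representation's MIME type,
// much as a browser delegates unknown content to plug-ins.
public class MediaHandlerRegistry {
    public interface MediaHandler {
        String handle(byte[] representation);
    }

    private final Map<String, MediaHandler> handlers = new HashMap<>();

    public void register(String mediaType, MediaHandler handler) {
        handlers.put(mediaType, handler);
    }

    /** Dispatches to the registered handler; unknown media types are rejected. */
    public String dispatch(String mediaType, byte[] representation) {
        MediaHandler h = handlers.get(mediaType);
        if (h == null)
            throw new UnsupportedOperationException("No handler for " + mediaType);
        return h.handle(representation);
    }
}
```

A client built this way degrades gracefully: representations in known media types are processed, everything else is explicitly refused instead of being misinterpreted.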

Tell the truth!

It's not the case that people don't know all of what I wrote above. Most of the people participating in the discussions have read and well understood the content of the dissertation. But what they have not understood, or what they just won't believe (in opposition to Fielding's thesis that an idea that works in the WWW will work everywhere), is that Fielding is just right, and that all the problems they had in the past were often caused not by REST or HATEOAS, but by not applying it 100% correctly. Also, we all are used to imperative programming, which means RPC-style programming. We all are used to calling methods from the client and waiting for the server to react. This is what we have done since the early days of programming and Client/Server, and this is what looks just so easy and simple to do even in RESTful days. But this is neither RESTful nor HATEOAS, and it is not scalable. So this might be useful and easy to do in many programming projects, but you'll end up with an HTTP-based RPC application, not with a RESTful / HATEOAS one.

If you want to gain all the benefits of REST, you need to apply all four constraints, not just three of them. And this clearly implies real HATEOAS in the meaning of the dissertation, not in the interpretation of anybody besides Mr Fielding himself, and it explicitly doesn't mean HTTP-based RPC. If you do not believe that this is the right way, write your own dissertation and defend the thesis "HATEOAS is not feasible". But meanwhile, please stop claiming that it would be HATEOAS to have URIs named like verbs, or that it would be HATEOAS to pass state or possible actions as HTTP headers, or whatever strange idea you might find on the web declared as HATEOAS. This is not HATEOAS. It is even neither hypermedia nor RESTful. It is just some use of HTTP that possibly makes your life (currently) easier. Name it what it is, but don't name it HATEOAS.

And please, don't ask "How to do server-side workflows and transactions then?". The question is invalid, as it wouldn't be RESTful to have server-side workflows. Again, read the dissertation, which says that it is the client that modifies and stores state - so it is the client's job to run the workflow, either by doing modifications to a document directly or by asking stateless servers to send back modified documents (single transaction steps) while tracking internally in the client what the next step would be, and how to undo it in case of a failing transaction. So there are workflows and transactions, but the sole control is on the client side. Every attempt to control this on the server side wouldn't be RESTful by definition. If you are unable to turn workflow control from the server over to the client (I cannot understand why, actually), then don't do REST: you will fail.


Back in the early 80s "of the past millennium" (as journalists call it these days - don't you feel as old as I do when reading that phrase? For me it is just "childhood" and doesn't feel so far ago. At least not a millennium ago.), when I was a young boy, I taught myself BASIC programming on my father's Sinclair ZX Spectrum 48K and started coding small arcade games (what else would ten-year-old boys do with a micro computer? The web was not invented back then.). That wonder machine unfortunately had everything but the possibility to do pixel or vector graphics in pure BASIC. You had to learn Assembler for that. But first, at ten years old I did not understand what that was good for, and second, I just had no Assembler software - so I had to stick with BASIC. As arcade games aren't any fun when consisting solely of alphanumerical characters (not even back in those days), I had to find a solution to get smart effects.

What I "invented" (later I learned that it was the default solution to the problem) was to use normal characters but modify their glyphs (a glyph is the graphical representation of a character, like the three bars of an 'A' being the graphical representation of that character, while 65 is the ASCII representation of the same). This could be done easily (I could hardly believe that the trick is actually described on the web). Using this trick, I was able to provide pleasant game graphics without the need to learn Assembler. I just had to type lots of zeros and ones into the machine using its meanwhile legendary rubber keyboard (don't tell my father - it actually was his rubber keyboard - but in fact I once used it as an eraser, which worked pretty well -- much better than the idea I once had at ten or twelve years old to directly attach a small light bulb, which effectively killed the Z80 CPU. Possibly the cause why my web site is called Head Crashing Informatics [www.headcrashing.eu]).

I even kept that trick when I later actually did learn Assembler and moved over to the more popular and powerful Commodore C64, which came with much better graphics support in its BASIC dialect (frankly spoken, I did not move to the C64 because of the improved BASIC but because of the plethora of computer games available for that machine: yes, I was part of that long-haired sneakers generation hanging around with my pals playing video games for hours). The trick was typical in the games industry and still worked well, even in conjunction with more sophisticated approaches like sprites.

When I got older, I forgot about computer games and did more "serious" programming. I bought "a real PC" around 1990 and wrote business applications, studied informatics, and have since made a living from developing "serious" software. So far I never needed to replace glyphs in a font again.

So what the heck does that have to do with Java? Read on.

Some months back a customer told me that he needed to type "ISO 1101" characters into a text field. Well, actually I had no clue what "ISO 1101" is and what the customer's problem was. I expect you don't either, so let me explain. Think of the case that you want people to check whether a produced part is actually round (but not elliptic), or actually even (but not wavy). You could write the English words in the task description, but there will be two problems. First, not everybody can read (even in the so-called "First World"). Second, not everybody would understand what "round" means unless you write "round in contrast to elliptic". So the clever guys at ISO defined symbols for "roundness" (?), "evenness" (?) and other geometric words (don't wonder if you do not see them here). In industrial design and production those symbols are just as common as the "male" symbol (♂) is to everybody looking for a restroom. As the symbols have to be used together with alphanumeric characters in running text, there was a need for a font containing "ISO 1101" characters.

We did not expect that "MS Sans Serif" would contain these characters (actually some European citizens are happy if they find their particular umlauts in fonts, so chances are low to find such specific symbols). So what to do? The customer came up with the information that he had bought a special font containing only those symbols, so we had to add a second text field (since that other font did not contain any Latin characters, typographic symbols or numbers). While it was a strange solution, it actually worked, and the customer was happy with it.

Another idea we had was to do what I did in childhood: copy the special "ISO 1101" characters into the "MS Sans Serif" font. Unfortunately, first, this is not allowed since it would infringe Microsoft's copyright, and second, there is not enough room in the Microsoft font to host all the new symbols without discarding other possibly useful characters. It was about that moment when I remembered that I had had exactly the same problem on my ZX Spectrum twenty years before, and I noticed that the actual cause of the problem is not the missing glyph but rather the fact that there are just eight bits to select one of them. So when you want to keep the original 256 characters, you just have no code left over to select any additional glyphs. History is repeating, damn!

As we write all new software solely in Java, we had the idea to write the complete software from scratch, replacing the existing code with Java. As Java is based on Unicode™, and as Unicode™'s target is to contain all symbols ever invented by mankind, all "ISO 1101" symbols should be found in a Unicode™ font. Actually Unicode™ really contains all of them, as can be checked on rainer-seitel.onlinehome.de/unicode-de.html#vorhandene (sorry, German only). You can imagine that we were really happy, as a switch to Java was planned anyway, so the solution would come for free.

Write Once, Read Nowhere.

Have you ever tried to type "ISO 1101" characters into a Java program (or into any other Unicode™-enabled software)? Try it, here are some: ? ? ? ?. Don't be disappointed if you see either nothing, or just cryptic placeholders, or only a few but not all four symbols here. This is what many readers not running a Windows® machine will see.

The problem is that the Java standard declares a "Write Once, Run Anywhere" paradigm which only covers the "runnability" but does not define what actually has to be seen on the screen: Java does not enforce that you will really see the actual glyphs defined by Unicode; it only enforces that the code representation ("the integer value") is processed unchanged. Neither does Unicode™. There is no law that says that an application that is able to process Unicode™ or claims Unicode™ compatibility is also able to render all glyphs on the screen, nor that a Unicode™ font must contain all glyphs. As a result, virtually every font contains only a very limited subset of Unicode™ glyphs - typically only the most often used ones (it would be just too expensive to add cuneiform or Egyptian hieroglyphs to every font on earth). You'll have to search for a long time to find a font containing all "ISO 1101" characters, and you'll have to search even longer to find a JRE that comes with that font bundled.
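The gap between processing a code point and rendering its glyph can at least be detected at runtime with the standard java.awt.Font.canDisplay method. A small sketch; the font name and the second code point are arbitrary examples:

```java
import java.awt.Font;

// Checks whether a given font can actually render a code point.
// Java guarantees the code point is *processed* unchanged, but only
// canDisplay tells you whether the user will *see* a glyph for it.
public class GlyphCheck {
    public static boolean canRender(String fontName, int codePoint) {
        Font font = new Font(fontName, Font.PLAIN, 12);
        return font.canDisplay(codePoint);
    }

    public static void main(String[] args) {
        // The logical "SansSerif" font can virtually always show ASCII...
        System.out.println(canRender("SansSerif", 'A'));
        // ...but whether it shows a rarely used symbol such as U+23E5
        // depends entirely on the platform's installed fonts.
        System.out.println(canRender("SansSerif", 0x23E5));
    }
}
```

An application that needs exotic symbols can use such a check to fall back to a bundled font instead of silently showing placeholder boxes.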

The end of the story.

So we're back where we started more than twenty years ago: if you want to have a smart GUI showing pleasant symbols and not only "ABC" and "123", you have to tweak your existing fonts with manually added glyphs. Sad, but true. BTW, why ever, many fonts contain three symbols for different types of snowflakes: ❄ ❅ ❆. Maybe different types of snowflakes are more essential for mankind than geometric tolerances.

Apropos snowflakes: as I am already disappointed and frustrated now, let's invest some time in Sisyphean work. Winter is back in the Black Forest, so I'll grab my snow shovel and try to dig out my car. I am rather sure that I left it somewhere here this morning...

Regards

Markus


The winter photo actually shows me on a walk through the Black Forest today. Photo used by courtesy of Stefanie of inviticon (http://www.inviticon.eu).

So we're back in the stone age of computing, where people have to save their work every other minute so as not to lose everything.

Today I lost a really pleasant blog article (and with it, two hours of work) by pressing the "Preview" button in java.net's everything-but-superb editor: it told me that I was logged off (why ever, since my connection was still creating traffic), threw away the content authored so far, and let me start from scratch. Thank you, dear anonymous genius, whoever decided to automatically log off people while JavaScript actually still records user activity...

If you don't want to have to type everything from scratch a second time (as I have to do now), always write your complete blog entry outside of java.net's graphical editor and just paste it in when you're finished - or press the Save button every other minute, just as we all were used to doing back in the computing stone age around ten years ago.

I have been using XSLT for many years now for a lot of tasks in both development and runtime environments: source generation, creating HTML from XML data, or even rendering SVG vector graphics from XML finance data. But what really bothered me was that the XSLT transformer contained in Java (even in Java 6's latest release) was just able to do XSLT 1.0 but not XSLT 2.0. XSLT (and XPath) 2.0 comes with such a plethora of features that make coding so much easier, like calling XSLT-written functions from XPath, "real" loops (instead of recursive calls), dealing with sequences, and many more. I couldn't wait any longer to get it, so the question was: what to do?

I knew that for several years the SAXON product had been around, and that it was free for open source use. Okay, since my only need is open source, the licence won't be a problem. But I was uncertain about technical constraints: will I have to change my code? Will my XSLT still work? Do I have to rewrite half of my app? I didn't like the idea of having to rewrite lots of code or of asking all users to overhaul their XSLT documents.

Nothing at all. Today, after months of senseless waiting for an XSLT 2.0 update to Java 6 (suspended for a long time while waiting for the Oracle deal), I didn't want to wait any longer! So I just downloaded the latest open source release of SAXON 9.2 from SourceForge, copied the sole saxon9he.jar to the JRE's lib/ext folder, and restarted my app... And it worked perfectly, just out of the box!

There's nothing to change or to configure; just drop the JAR in the right folder and you're done. Great! How could deployment be any easier?
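The drop-in behavior works because Saxon registers itself through JAXP's TransformerFactory lookup, so existing JAXP code keeps working unchanged. A minimal sketch with a trivial stylesheet (the JDK's built-in XSLT 1.0 engine runs it too):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Standard JAXP transformation code: TransformerFactory.newInstance()
// picks whatever XSLT engine is registered, so dropping Saxon onto the
// lookup path upgrades this code to XSLT 2.0 without any change.
public class TransformDemo {
    public static String transform(String xml, String xslt) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }
}
```

The same code also works with explicit files or DOM sources; only the StreamSource arguments change.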

So don't wait any longer - download SAXON and enjoy XSLT 2.0's benefits.