Lombok and AspectJ are two very powerful Java tools that work in a similar way in some respects: both hook into the compilation process and manipulate the code being generated (Lombok is an annotation processor that generates boilerplate code when certain annotations are found, while aspects are a way to add behaviour to classes). Unfortunately, this similarity is also the source of a problem. If you configure them with the defaults in a Maven build (for Lombok, just putting the .jar into the classpath; for AspectJ, adding the specific aspectj-maven-plugin to the build workflow), you'll experience the loss of all the Lombok-generated code.
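
As a reminder of what's at stake, consider a minimal illustrative class (hypothetical, not from the original post): Lombok generates getName() into the bytecode at compile time, and it's exactly this generated code that disappears.

import lombok.Getter;

public class Person
  {
    @Getter private String name; // getName() is generated by Lombok during compilation
  }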

The problem is that iajc, the AspectJ compiler, by default recompiles the sources itself without enabling Lombok. So, what's happening is:

  1. javac compiles the code enabling Lombok;
  2. iajc kicks in and recompiles everything, throwing away the previously generated bytecode and not enabling Lombok when creating the new bytecode.

Binding the aspectj-maven-plugin to the process-classes phase of Maven (which happens just after compile) doesn't change anything, as iajc still overwrites the previously generated bytecode. We have to prevent this in order to solve the problem.
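
For the record, the attempted binding looks something like this (a sketch assuming the Codehaus aspectj-maven-plugin and its compile goal):

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>aspectj-maven-plugin</artifactId>
    <executions>
        <execution>
            <phase>process-classes</phase>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>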

The fun thing is that iajc can work in a different way, ignoring source files and post-processing the bytecode already generated by javac. Unfortunately, the aspectj-maven-plugin has a bug that prevents this approach from working: basically, you can't properly pass the required option (-inpath) to the iajc compiler.

The workaround is to call iajc "manually", that is, without going through the aspectj-maven-plugin. A possible solution is to use the maven-antrun-plugin, which makes it possible to embed an Ant run attached to a specific Maven phase.

Before getting into the details, let me stress that while this is clearly a workaround, using Ant inside Maven is not necessarily a bad thing. Some people scream about it because it would be a "mixed way to do things". That's not true. I'm not jeopardizing the Maven way of working - it's just business as usual. Ant only performs a specific and limited task in a specific phase of the Maven workflow. I'm just using Ant as a scripting language to write a sort of Maven plugin of my own. I could do the same by coding a real Maven plugin in Java, or by embedding a Groovy script; it's just that I'm more proficient with Ant scripting than with Groovy - and of course, developing and releasing a Java-based plugin would be a waste of time.

Now, the gory details:

<profile>
    <id>aspectj</id>
    <activation>
        <file>
            <exists>src/config/activate-aspectj-profile</exists>
        </file>
    </activation>
    <build>
        <plugins>
            <!-- Will no longer be needed when http://jira.codehaus.org/browse/MASPECTJ-9 is fixed -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-antrun-plugin</artifactId>
                <dependencies>
                    <dependency>
                        <groupId>org.aspectj</groupId>
                        <artifactId>aspectjrt</artifactId>
                        <version>${tft.aspectjrt.version}</version>
                    </dependency>
                    <dependency>
                        <groupId>org.aspectj</groupId>
                        <artifactId>aspectjtools</artifactId>
                        <version>${tft.aspectjrt.version}</version>
                    </dependency>
                </dependencies>
                <executions>
                    <execution>
                        <id>weave-classes</id>
                        <phase>process-classes</phase>
                        <goals>
                            <goal>run</goal>
                        </goals>
                        <configuration>
                            <target>
                                <taskdef resource="org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties"
                            classpathref="maven.plugin.classpath" />
                                <iajc
                            destDir="${project.build.directory}/classes"
                            inpath="${project.build.directory}/classes"
                            source="${tft.javac.source}"
                            target="${tft.javac.target}"
                            aspectPath="${org.springframework:spring-aspects:jar}"
                            classpathRef="maven.compile.classpath"
                            Xlint="ignore" />
                            </target>
                        </configuration>
                    </execution>
                    <execution>
                        <id>weave-test-classes</id>
                        <phase>process-test-classes</phase>
                        <goals>
                            <goal>run</goal>
                        </goals>
                        <configuration>
                            <target>
                                <taskdef resource="org/aspectj/tools/ant/taskdefs/aspectjTaskdefs.properties"
                            classpathref="maven.plugin.classpath" />
                                <iajc
                            destDir="${project.build.directory}/test-classes"
                            inpath="${project.build.directory}/test-classes"
                            source="${tft.javac.source}"
                            target="${tft.javac.target}"
                            aspectPath="${org.springframework:spring-aspects:jar}"
                            classpathRef="maven.compile.classpath"
                            Xlint="ignore" />
                            </target>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</profile>

From the user's perspective, we're just binding a plugin to a phase (actually two phases: process-classes and process-test-classes, which are invoked just after the compiler has compiled the main code and the test code), and a developer using this configuration, once he has understood what's happening, can fire and forget it.

As you can see, the configuration is wrapped into a profile, which is activated by a file. By creating the file src/config/activate-aspectj-profile in the modules needing AspectJ, I enable all the needed stuff. This allows me to put the above POM fragment in a single place, my superPOM, as I described in my previous blog post, avoiding copy-pasting the workaround in multiple places. When the aspectj-maven-plugin is fixed, I'll just release a new superPOM containing the default AspectJ configuration and my projects won't need any change.

I have to thank the people on the Lombok mailing list for suggesting how to solve the Lombok+AspectJ incompatibility with Ant; I just added the Maven integration part.

While we're more or less all well-informed people, if you dare to read the general comments related to the Oracle-Google affair - not only by casual readers, but also by supposed-to-be professionals writing in generic newspapers and news sites - you'll find lots of wrong, or at least badly written, assertions. One of the most recurring misconceptions concerns the use of the Apache license header, as has been pointed out by Apache in an official blog post and by Stephen Colebourne in many comments in the blogosphere (the latest one here). Indeed, the problem seems to be that the Apache license header has been naively copied by somebody who committed to Android (as I said before, there's also the problem with the package name of the infamous class, which seemed to refer to Harmony).
Ok, big fault at Google. But, what about us? Are we applying the correct license headers to our code? Are we checking them?

For instance, if you download the original Apache License you can read at the bottom:

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.



Note that this is a very simple but complete how-to: it explains that you should add a specific copyright notice with your own name / reference. Paraphrasing Cay, you shouldn't just "slap the Apache license header" onto your files. For instance, this is my standard header:

/***********************************************************************************************************************
 *
 * blueBill Mobile - open source birdwatching
 * ==========================================
 *
 * Copyright (C) 2009, 2010 by Tidalwave s.a.s. (http://www.tidalwave.it)
 * http://bluebill.tidalwave.it/mobile/
 *
 ***********************************************************************************************************************
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 *
 ***********************************************************************************************************************
 *
 * $Id: ControlFlow.java,v 67dcb737b74e 2010/06/09 01:20:14 fabrizio $
 *
 **********************************************************************************************************************/


Note that nobody would have any doubt that this code is mine and not by the Apache Software Foundation. Is this a correct way to write a license header?

Are you handling your headers with the needed care? There are even Ant- and Maven-based tools to automatically check that every file is properly equipped with a header (of course, it's up to you to define it); as far as I know, at least some of the official Apache projects include this check in their automated builds (something we should all do... it's easy).
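
For instance, a minimal sketch with the Apache RAT Maven plugin (version and exclusion lists omitted; check the plugin documentation for the details), binding the header check to the verify phase:

<plugin>
    <groupId>org.apache.rat</groupId>
    <artifactId>apache-rat-plugin</artifactId>
    <executions>
        <execution>
            <id>check-license-headers</id>
            <phase>verify</phase>
            <goals>
                <!-- fails the build when a file lacks an approved license header -->
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>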

When somebody steals your credit card:

Hell, where's my credit card? They can drain my account! I'll phone immediately to my bank and block it!


When somebody steals your phone-as-a-credit-card:

Hell, where's my phone? They can drain my account! I'll phone... oh, FUCK!

 

:o)

PS I'm not demonizing the new feature from Android. Somebody will find it useful. I won't.


 

Maven's fundamental principle is to translate your experience and best practices into machine-readable configuration, so as to enforce them in your build process. As Wikipedia says:

A maven (also mavin) is a trusted expert in a particular field, who seeks to pass knowledge on to others. The word maven comes from the Hebrew, via Yiddish, and means one who understands, based on an accumulation of knowledge.

That's one of the reasons superPOMs exist: to accumulate your plugin configurations and practices in a reusable way by means of inheritance. In fact, my personal superPOM has grown quite large (1400+ lines), and it's valuable to be able to refer to it from all my projects without cut & paste.

But one limitation is that Maven only supports single inheritance, just like Java. Let's suppose I have some specific configuration for very different scenarios/tools, such as:

  • working with webapps and Jetty or Tomcat
  • using AspectJ
  • using Android
  • having a specific, customized release process

While all my projects share the customized release process, some don't use Jetty, or Android, or AspectJ. On one side this means that the Jetty, Android or AspectJ stuff shouldn't be in the superPOM; but since I can't do multiple inheritance, how do I reuse my knowledge?

The answer is: with profiles. A profile is a subsection of a POM that is usually ignored, but can be made part of the effective POM by means of, say, a command-line switch. For instance, this is my Jetty configuration:

<profile>
    <id>jetty</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.mortbay.jetty</groupId>
                <artifactId>maven-jetty-plugin</artifactId>
                <configuration>
                    <stopPort>${tft.jetty.stopPort}</stopPort>
                    <stopKey>${tft.jetty.stopKey}</stopKey>
                    <scanIntervalSeconds>${tft.jetty.scanIntervalSeconds}</scanIntervalSeconds>
                    <webAppConfig>
                        <contextPath>${tft.webapp.contextPath}</contextPath>
                        <baseResource implementation="org.mortbay.resource.ResourceCollection">
      <!-- Workaround for Maven/Jetty issue http://jira.codehaus.org/browse/JETTY-680 -->
      <!-- <resources>src/main/webapp,${project.build.directory}/${project.build.finalName}</resources> -->
                            <resourcesAsCSV>src/main/webapp,${project.build.directory}/${project.build.finalName}</resourcesAsCSV>
                        </baseResource>
                    </webAppConfig>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>

I've left in the comments about a bug workaround (copied from the web) because they are part of the point. Your knowledge of Maven and its plugins also includes bug workarounds (I'll illustrate a moderately complex one in the next blog post). As those workarounds usually evolve and - hopefully - sooner or later disappear, you don't want to copy & paste them in multiple places.

Back to the profile shown above: it can be enabled by running Maven with the -Pjetty command-line switch. Now, the problem is that I want it always enabled in some modules and not in others (while Jetty support might only be needed for some tasks, such as testing, making a command-line switch acceptable, think of AspectJ: it's required during the compilation of the modules using it, or you'll produce invalid artifacts). The command line activates a profile for the whole build, so if you're launching a build from the root of a complex, multi-module project, the profile will be enabled even where you don't need it. It's an all-or-nothing kind of thing.

Profiles can be enabled in other ways too, for instance by checking the existence of a system property. Unfortunately, that's a system property, not a property in the POM itself (since a profile can control the values of POM properties, it can't be controlled by them at the same time). And we're back to square one, as system properties can't be individually enabled or disabled for each module.
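
For reference, this is what property-based activation looks like (the property name jetty is arbitrary; its presence would be set on the command line with -Djetty):

    <activation>
        <property>
            <name>jetty</name>
        </property>
    </activation>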

Fortunately, profiles can be also activated when Maven finds a certain file. That's the solution. My Jetty profile actually contains this section:

    <activation>
        <file>
            <exists>src/config/activate-jetty-profile</exists>
        </file>
    </activation>
Now, I just need to create an empty file named src/config/activate-jetty-profile in the modules where I want Jetty support. Of course, the Tomcat, AspectJ etc. profiles work in the same way, just with different activating file names. By creating the right files, I can activate profiles in any combination I like.
Even though I'm still working with inheritance, as it's the only tool that Maven offers to reuse stuff in POMs (*), this approach gives me a sort of composition feature, as I can compose as many pieces of POMs as I want. Composition is better than inheritance, even in POMs.
This will keep me happy until Maven 3.1 is released, bringing POM "mixins" out of the box.

 

(*) Actually, there's a form of composition that can be done between POMs (by means of the "import" scope), but it only relates to the dependencyManagement section.
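
For completeness, this is what it looks like: a "bom" POM (the coordinates below are hypothetical) is imported, and its dependencyManagement entries are merged into the current POM:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>it.tidalwave.example</groupId>
            <artifactId>my-bom</artifactId>
            <version>1.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>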

Here's my tentative agenda for Jazoon 2010. For some slots, there are still conflicts among equally interesting talks, so I'll probably decide at the last minute.


Tuesday, 1 June 2010

Wednesday, 2 June 2010

Testing for features is not enough: you should also test for performance. For instance, a test could assert that a given task executes under a certain time (which is not always easy: if you run tests under CI, that is on a server performing multiple tasks, things can be slower or faster depending on the available CPU). Testing for performance is important because sometimes you can degrade the performance of your code during a refactoring, or just by updating a library or the Java runtime.
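
For instance, a minimal sketch of such an assertion with JUnit 4 (the class and method names are hypothetical):

import org.junit.Test;

public class FingerprintPerformanceTest
  {
    @Test(timeout = 5000) // fails if the method doesn't complete within 5 seconds
    public void mustComputeFingerprintsInTime()
      throws Exception
      {
        // invoke the task under measurement here
      }
  }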

A manual performance test is usually easier (and in simpler scenarios it can be enough), because you can make sure that the computer you're running it on is always in a well-known state. My SolidBlue application computes batches of MD5 fingerprints of files for integrity verification purposes, and I usually test it against my 400GB directory of photos.

SolidBlue is optimized and designed around an actor system. In particular, a class named FileLoaderActor receives messages notifying that a certain file has been discovered in a scan, reads its contents, and fires another notification.

The code is optimized to have a single instance of the actor (magnetic discs usually perform better when you do a single operation at a time) that tries to spend most of its time loading from the disk, thus saturating the I/O channel. MD5 is computed by other actors in separate threads. So far I've been able to achieve 90% of the speed of my disk, which sounds good.

To my surprise, when I ported the application to OpenJDK 7u4 for Mac OS X, I noticed a dramatic performance hit (roughly 10x slower). In the limited tests I've tried, the same happens with the Oracle JDK 7u4.

Thanks to the help from the macosx-port-dev(@)openjdk.java.net mailing list, I've found that the problem is a regression of JDK 7 that, fortunately, can be worked around.

I perform I/O by means of NIO channels:

FileInputStream fis = new FileInputStream(file);
ByteBuffer byteBuffer = fis.getChannel().map(READ_ONLY, 0, file.length()).load();

This is usually the fastest method for I/O, and it also gives the advantage that mapped byte buffers don't consume heap memory: file contents are instead mapped to virtual memory. This means that I can process very large files (e.g. a 500MB video), perhaps many at the same time, without having to allocate a large heap. This is possible because methods for computing MD5 can operate directly on a ByteBuffer, including a mapped one.
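
For instance, a minimal sketch of the whole computation (reusing file from the snippet above; exception handling omitted):

MessageDigest digest = MessageDigest.getInstance("MD5");
FileInputStream fis = new FileInputStream(file);
ByteBuffer byteBuffer = fis.getChannel().map(READ_ONLY, 0, file.length()).load();
digest.update(byteBuffer); // consumes the mapped buffer directly, no intermediate heap array
byte[] md5 = digest.digest();
fis.close();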

The load() method, according to the javadoc, "makes the best effort" to bring the file contents into physical memory. It works by accessing a single byte in each physical page (a bunch of consecutive bytes whose size is defined by the operating system). This approach also allows for very fast I/O, since disks are faster when you perform sequential reads.

Note this: the task was performed by native code in JDK 6, but was translated into Java code in JDK 7:

Unsafe unsafe = Unsafe.getUnsafe();
int ps = Bits.pageSize();
int count = Bits.pageCount(length);
long a = mappingAddress(offset);
for (int i=0; i < count; i++) {
    unsafe.getByte(a);
    a += ps;
}

What I've discovered is that with OpenJDK 7u4 (at least on Mac OS X, but I think it's a general problem) load() doesn't do anything: a quick test reveals that the method returns immediately and no disk activity is recorded. From a functional point of view the application still works, because the data in the ByteBuffer is loaded on demand by the MD5 code. But the MD5 library probably doesn't access the data sequentially, thus triggering a very inefficient sequence of page loads.

It turns out that the "culprit" is HotSpot, which is able to detect that the accessed bytes aren't really used - so it doesn't generate native code for the loop at all! Of course, it's not a bug in HotSpot, which is just performing an advanced optimization; it's the code in the load() method that needs to be fixed.

A way to work around the problem is to add this command line option:

-XX:CompileCommand=exclude,java/nio/MappedByteBuffer,load

which instructs HotSpot not to compile the load() method to native code. Fun, isn't it? In this strange scenario disabling HotSpot makes for faster code :-)

 

You can reproduce the problem by testing the following code without and with the -XX option described above.

import java.util.Random;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import static java.nio.channels.FileChannel.MapMode.READ_ONLY;

/***********************************************************************************************************************
 *
 * A self-contained test file to exercise an OpenJDK 7u4 bug:
 * 
 * http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7168505.
 * 
 * @author  fritz
 * @version $Id: IoPerformanceTest.java,v 21d49fdefe53 2012/05/14 12:52:11 fabrizio $
 *
 **********************************************************************************************************************/
public class IoPerformanceTest 
  {
    private final static double MEGA =  1024 * 1024;
    private final static int MIN_FILE_SIZE =  10 * 1000 * 1000;
    private final static int MAX_FILE_SIZE = 100 * 1000 * 1000;
    
    private File testFileFolder;
    
    public static void main (final String ... args)
      throws Exception 
      {
        final IoPerformanceTest test = new IoPerformanceTest();
        test.createTestFiles();
        test.test();
      }
            
    private void createTestFiles() 
      throws IOException
      {
        System.err.println("Creating test files...");
        testFileFolder = new File(System.getProperty("java.io.tmpdir"));
        testFileFolder.mkdirs();
        final Random random = new Random(342345426536L);
        
        for (int f = 0; f < 20; f++)
          {
            final File file = new File(testFileFolder, "testfile" + f);
            System.err.println(">>>> creating " + file.getAbsolutePath());
            int size = MIN_FILE_SIZE + random.nextInt(MAX_FILE_SIZE - MIN_FILE_SIZE);
            final byte[] buffer = new byte[size];
            random.nextBytes(buffer);
            final FileOutputStream fos = new FileOutputStream(file);
            fos.write(buffer);
            fos.close();
          }
      }
    
    public void test()
      throws Exception
      { 
        final long startTime = System.currentTimeMillis();
        long size = 0;
        
        for (int f = 0; f < 20; f++)
          {
            final File file = new File(testFileFolder, "testfile" + f).getAbsoluteFile();
            final FileInputStream fis = new FileInputStream(file);
            final ByteBuffer byteBuffer = nioRead(fis, (int)file.length());
            fis.close();
            size += file.length();
          }
        
        final long time = System.currentTimeMillis() - startTime;
        System.err.printf("Read %d MB, speed %d MB/sec\n", (int)(size / MEGA), (int)(((size / MEGA) / (time / 1000.0))));
      }
    
    private ByteBuffer nioRead (final FileInputStream fis, final int length) 
      throws IOException
      {
        return fis.getChannel().map(READ_ONLY, 0, length).load();
      }
    
    private ByteBuffer ioRead (final FileInputStream fis, final int length) 
      throws IOException
      {
        final byte[] bytes = new byte[length];
        fis.read(bytes);
        return ByteBuffer.wrap(bytes);
      }
  }
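
You can compile and run it as follows, comparing the reported speeds (the figures will of course depend on your disk and platform):

javac IoPerformanceTest.java
java IoPerformanceTest
java -XX:CompileCommand=exclude,java/nio/MappedByteBuffer,load IoPerformanceTest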

As I said in my previous post, a few months ago I put a number of my projects under CI to be tested in parallel with JDK 6 and JDK 7. After a few minor issues they were ok, and they have even been working in production under (Open)JDK 7 since the end of last year. Given that Oracle has now started releasing JDK 7 for Mac OS X too, and that the latest NetBeans 7.1.2 works fine under Mac OS X and JDK 7, I'm going to start dropping support for JDK 6 and gradually move all my projects to JDK 7. This means setting -source 1.7 and -target 1.7, so I'll be able to start using the new Java 7 features.

But yesterday night the new -source and -target options gave me a strange result: all of a sudden, some tests failed with funny messages such as:

Failed tests:

  setUp(it.tidalwave.northernwind.core.impl.filter.NodeLinkMacroFilterTest):
    Expecting a stackmap frame at branch target 94 in method
    it.tidalwave.northernwind.core.impl.filter.MacroFilter.filter(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;
    at offset 22

  setupFixture(it.tidalwave.northernwind.core.impl.filter.XsltMacroFilterTest):
    Expecting a stackmap frame at branch target 21 in method
    it.tidalwave.northernwind.core.impl.filter.XsltMacroFilter.filter(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;
    at offset 6

  setupFixture(it.tidalwave.northernwind.core.impl.model.DefaultSiteProviderTest):
    Expecting a stackmap frame at branch target 14 in method
    it.tidalwave.northernwind.core.impl.model.DefaultSite.r(Ljava/lang/String;)Ljava/lang/String;
    at offset 6

...

It turns out they are errors from the bytecode verifier. Now, before starting the migration to JDK 7 you should read the migration document prepared by Oracle: section 4.1.1 says something about the new bytecode format and bytecode verifier. To keep things simple: the new bytecode verifier performs stricter checks and relies upon a changed data format in the bytecode. The VM in JDK 7 uses the new verifier for code compiled in Java 7 mode, but falls back to the old one for code compiled in Java 6 mode. Thus, in theory, there should be no problem.

Problems actually arise if you're using bytecode-manipulating tools, such as AspectJ in static weaving mode (I do), that haven't been updated yet. They basically read the bytecode, tagged as Java 7 bytecode, perform changes in Java 6 mode, and save the result still tagged as Java 7. Thus, the VM in JDK 7 sees Java 7 bytecode and activates the new Java 7 verifier, which fails (or can fail) when it meets bytecode manipulated in Java 6 mode.

It seems complicated, but the simple solution is to force the use of the old verifier in JDK 7 by adding this VM runtime option: -XX:-UseSplitVerifier.
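
Since in my case the failures show up in tests, the option must reach the forked test JVM; a minimal sketch, assuming the tests run through the maven-surefire-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- force the old bytecode verifier until the weaving tools support Java 7 -->
        <argLine>-XX:-UseSplitVerifier</argLine>
    </configuration>
</plugin>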

When AspectJ is updated (AspectJ 1.7.0.M1 is the first release targeted at JDK 7, but it's still a milestone, so it's probably not a good idea to use it in production), everything will be fine again and there will be no more need for the -XX option.

With the first release ever of a Java VM by Oracle for Mac OS X, a long-standing problem has probably been solved. Now we have a single VM producer, Oracle, that can produce bits for all the major operating systems (Windows, Linux, Mac OS X) and release them at the same time. In the past, Apple was entirely responsible for the bits on Mac OS X, with a chronic delay with respect to the other operating systems.

This is also the first official, non-early-access release of a JDK 7 for Mac OS X, and it makes it possible to get our projects aligned to Java 7. Java 6 is going to end its life in a few months, and my intention is to have all my projects relying on Java 7 within this summer. Some of them have already been under CI with parallel Java 6 and Java 7 builds for some time, as well as running in production with JDK 7. Now that I can compile them in Java 7 on my Mac OS X laptop too (NetBeans 7.1.2 also runs fine with JDK 7 on Mac OS X), I can start dropping support for Java 6, setting -source and -target to 1.7. Which also means that I can start using the new Java 7 features.

The process is not necessarily without problems, and in my next post I'll blog about one that I faced (and solved) yesterday.

Java is great. But sometimes you get caught in a trap of mud and you don't see it coming.

Context: I'm working with XSLT to manipulate XHTML, and it works great. Well, 99% of it works great; the remaining 1% is problematic. Xalan (the XSLT processor in the JDK) knows how to produce output in HTML and XML modes, by means of the <xsl:output .../> directive. HTML mode doesn't work, because XHTML is not HTML. XML should work, because XHTML is XML. Really? No, indeed. XHTML differs from XML in some serialization details. In particular, while empty elements in XML can always be serialized with a shortcut (e.g. <element/>), this is not the case for XHTML. Some elements, such as <br/>, must be serialized with the shortcut; some must not, such as <a>, <p>, <script>, <textarea>, etc. This means that an empty anchor must always be serialized as <a></a>.
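
For illustration, here is the same fragment as a generic XML serializer would emit it, and as proper XHTML requires (the element contents are hypothetical):

<!-- generic XML serializer: well-formed, but trips up browsers and DOM-manipulating scripts -->
<br/>
<a id="placeholder"/>
<script src="app.js"/>

<!-- proper XHTML serialization -->
<br/>
<a id="placeholder"></a>
<script src="app.js"></script>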

Such empty elements are frequent when a JavaScript library is used, since they are usually placeholders to be dynamically populated. Even recent browsers screw things up when they see a <script src="..."/> element, and some JavaScript tools which manipulate the DOM screw things up when they see something such as <a/>.

In spite of this, Xalan is not able to perform proper XHTML serialization. There are a number of blog posts by annoyed people, and an official Xalan bug opened... in 2004 and never fixed. Clearly the Xalan community thinks that manipulating XHTML is a niche activity (Sun has been bashed for years because of relevant bugs not fixed after a long time, but clearly they weren't alone).

Trying to patch the internal serialization classes of Xalan didn't work, as they are tightly coupled with a lot of stuff in the com.sun.* packages. Apache used to provide a number of serializers for XML, but they were deprecated in favour of TrAX (that is, Xalan) or LSSerializer. Unfortunately, the latter doesn't seem to provide any flexibility in XML serialization and can't be used for proper XHTML serialization.

So, the only solution I've found so far is to resurrect an old deprecated class named XHTMLSerializer, which does almost everything right (the missing parts can be easily fixed by subclassing). Actually, looking at the source, specific care was paid to the XHTML issues, demonstrating that the problem was well understood by the Xalan authors. Somebody was probably too quick on the trigger when deciding to deprecate some stuff without ensuring that all the features had found their way into the new classes.

Done? Not at all. XHTMLSerializer works fine with JDK 6 but miserably fails with JDK 7. It seems that JRE 7 is missing a resource used by some of the inner classes. Probably these classes haven't been tested with JDK 7 since they are deprecated (but then, what's the point in including them in the runtime?).

At the second attempt, I copied three other classes from JDK 6 into my application, together with the missing resource. Fortunately, they are not tightly coupled with other stuff and can live on their own.

Hours wasted for such a silly thing.

If this can be helpful to you, the details are filed in my project's issues NW-96 and NW-99 (they include links to patches).


I think that Maven is a great tool for development. But it can be used for more. For instance, I've just prepared things so that, starting from a clean room, you can try out my lightweight CMS, NorthernWind, by just invoking a couple of commands. They will install and run NorthernWind, an embedded server and an example site. More information at the NorthernWind blog.

Actors in Java

Recently there has been renewed interest in the Actor programming model. The Actor Model actually dates back to the '70s, but as far as I'm aware it has been used only in a very limited subset of industrial projects outside the area of telecoms. Erlang (not by chance developed at Ericsson, a telecom company), a language whose concurrency model is actor-oriented, is getting some attention, but it's definitely a niche language. More interesting is the fact that the Scala community developed its own actor-based platform, Akka, which is also available to Java (and, generally speaking, to JVM-based languages). While there might (a-hum) be a lot of hype about Scala, and I'm pretty inclined to apply a big low-pass filter to trendy and cool stuff, actors definitely have their place in the world when you're dealing with complex concurrency. Once upon a time complex concurrency was a matter only for specific industrial segments (e.g. the previously cited telecom industry), but since things such as multi-cores and "the cloud" are invading our battlefields, it's better to get prepared.
 

BTW, cloud and multi-core apart, I've already seen a lot of production code with scattered synchronized sections that "perhaps shouldn't be there, but the thing works and it's better to let them stay". Note that this argument is the typical smell of a lack of testing; but concurrency is such a complex thing to test that even when the customer is a good guy and achieves very high coverage, often he can't be sure. The risk is having some unprotected section that sooner or later will lead to an inconsistency or raise an exception, up to the scary possibility of a deadlock. The better way to deal with concurrency is to pick a programming model that avoids problems by construction, and actors can be a good choice.
 

I've started doing some coding in the area and while I'm still far from a conclusion it's high time I posted some considerations.
 


Actors and Akka
 

So, what are actors about? These are the basic principles:
 

  1. No shared (mutable) data. Each actor manages its own slice of data and nothing is shared with others. Thus no need for synchronized.

  2. Lightweight processes. Threads are still good, since they scale well; but, as per the previous point, they must be isolated.

  3. Communicate through asynchronous messages. What is traditionally done by invoking a method on another object is now done by sending an asynchronous message. The sender is never blocked, and the receiver gets all inbound messages enqueued into a 'mailbox', from which they are consumed one at a time per actor instance.

  4. Messages are immutable. This could be inferred from the previous points (otherwise bye bye "no shared mutable data"), but it's good advice to stress the point.
     

With actors, the contract of a computational entity is no longer represented by an interface (that is, an enumeration of methods) plus behaviour, but by the set of messages that the actor can receive and send, plus behaviour. Copying the first example in the Java API for Akka, a simple actor can be:

import akka.actor.UntypedActor;
import akka.event.EventHandler;

public class SampleUntypedActor extends UntypedActor
  {
    public void onReceive (Object message) throws Exception
      {
        if (message instanceof String)
          {
            getContext().replyUnsafe("Hello " + message);
          }
        else
          {
            throw new IllegalArgumentException("Unknown message: " + message);
          }
      }    
  }

Actors, being managed objects (and possibly distributed on a network), can't be directly accessed, but must be handled by references. For instance, to send a message to the previous actor:

import akka.actor.ActorRef;
import static akka.actor.Actors.*;

final ActorRef myActor = actorOf(SampleUntypedActor.class);
myActor.start();
...
myActor.tell("world");

Now, Akka has considerable value in being tested and optimized: performance is one of the reasons you might want to use actors, it's not easy to achieve, and Akka is known to scale to impressive numbers. But while messages can be delivered in different fashions, unfortunately Akka doesn't seem to support publish & subscribe, which is my preferred way to go (I've seen references to extensions, and possibly this feature could be introduced in the future). This is a first showstopper for me. Furthermore, the basic concepts are simple but, as unfortunately often occurs, things get more complicated as you start digging into the details. Also, note the total lack of syntactic sugar: the incoming message is a vanilla Object and you need a cascade of if (message instanceof ...) to deal with multiple messages. At least, that's true for the things Akka calls untyped actors: there are typed actors too, which can even be defined by an interface to mimic method calling, and which Akka manipulates through AOP to transform method invocations into message passing. That's an approach I don't like, because I think that in message-passing software the approach must be evident in the source code, not masked under something that pretends to be method invocation; furthermore, method invocation semantics are incompatible with publish & subscribe. So, I'd say that with Akka either you have no sugar or you're going to sink into honey.
 

Anyway, just before publishing this blog post, I've learned that Akka 2.0 is going to have publish & subscribe. Excellent news. In the meantime?
 


Understanding what I want
 

In the meantime, I've started sketching some code on my own, to understand how I'd like to use actors. The idea is to learn things and then try to converge on Akka, eventually with a thin layer of syntactic sugar. There's enough work so far that it's worth a blog post, both for my personal need to write things down and to discuss with others.
 

Since things must be given a real context, I've started thinking of desktop applications, in particular two simple tools that I need: SolidBlue computes the fingerprints of the files contained in a directory (I use it for periodically checking the sanity of my photos and related backups), and blueShades (formerly blueArgyll) provides a GUI for Argyll, a tool for color management, suited to my needs. Both are being implemented with the NetBeans Platform.
 

So let's focus on SolidBlue and its main feature. The scenario is:
 

  1. You select a directory.
  2. The application scans the directory recursively.
  3. For each discovered file, contents are loaded and the MD5 fingerprint is computed.
  4. Each MD5 computed fingerprint is stored into a flat file.
  5. A UI monitors the workflow displaying the current status on the screen.

The flow can be parallelized. For instance, recursive directory inspection can be performed by multiple threads, as well as the MD5 computation, which only involves the CPU. On the other hand, loading from the disk is something that is better serialized (that is, reading only one file at a time) in order to maximize the disk throughput (which is the true bottleneck of the application). Indeed, things are a bit more complex than that, but I'll come back to the issue later.
 


Primary message flows in SolidBlue
 

Note: in the following code samples, I'm making use of Lombok annotations for some of the plumbing. This is not only great for everyday work, but it also makes the code samples much easier to read. Also, I'm only listing the significant import statements and skipping some irrelevant code chunks.
 

Everything starts by sending the system a message asking for a scan:
 

FileScanRequestMessage.forFolder("/Volume/Media/Photos")
                      .withFilter(withExtensions("NEF", "JPG", "TIF"))
                      .send();

As you can see, there's no target actor: following the "publish and subscribe" approach, the message just sends itself. The listing of FileScanRequestMessage is below:
 

package it.tidalwave.integritychecker;

import it.tidalwave.actor.MessageSupport;
import it.tidalwave.actor.annotation.Message;

@Message @Immutable @RequiredArgsConstructor(access=PRIVATE) @EqualsAndHashCode @ToString(callSuper=false)
public class FileScanRequestMessage extends MessageSupport
  {
    public static interface Filter
      {
        // Filter.ANY is referenced by forFolder() below; its definition was elided in the
        // original listing - this reconstruction simply accepts every file
        public static final Filter ANY = new Filter()
          {
            public boolean accepts (final @Nonnull FileObject fileObject)
              {
                return true;
              }
          };

        public boolean accepts (@Nonnull FileObject fileObject);
      }
     
    @Nonnull @Getter
    private final FileObject folder; 
   
    @Nonnull @Getter
    private final Filter filter; 
   
    @Nonnull
    public static FileScanRequestMessage forFolder (final @Nonnull FileObject fileObject)
      {
        return new FileScanRequestMessage(fileObject, Filter.ANY);
      }   
   
    @Nonnull
    public FileScanRequestMessage withFilter (final @Nonnull Filter filter)
      { 
        return new FileScanRequestMessage(folder, filter); 
      }
  }


Somewhere there's a FileObjectDiscoveryActor that listens to that message:

package it.tidalwave.integritychecker.impl;

import it.tidalwave.actor.annotation.Actor;
import it.tidalwave.actor.annotation.ListensTo;

@Actor @ThreadSafe
public class FileObjectDiscoveryActor
  {
    public void onScanRequested (final @ListensTo FileScanRequestMessage message)
      {
        for (final FileObject child : message.getFolder().getChildren())
          {
            if (message.getFilter().accepts(child))
              {
                (child.isFolder() ? FileScanRequestMessage.forFolder(child).withFilter(message.getFilter())
                                  : FileDiscoveredMessage.forFile(child)).send();
              }
          }
      }
  }

You can see here my syntactic sugar: the actor is a POJO, and the methods designated to receive messages are annotated with @ListensTo. Multiple methods can listen to multiple messages. The annotation is similar to CDI's @Observes (JSR-299) - actually, I'd love to reuse it, but it seems to have different semantics (e.g. by default it can activate managed objects that are the target of an inbound message; furthermore, there are multiple delivery options related to transactionality that I'm not interested in at the moment).
 

Recursive directory navigation is implemented by sending another FileScanRequestMessage. When a regular file is detected, a FileDiscoveredMessage is fired (there's no need to see its listing, as it's straightforward).
 

Inbound messages are first enqueued and then dispatched in a FIFO fashion (with exceptions explained later). If the actor is not thread safe, there's the guarantee that it can be engaged by a single thread at any time (thus, messages are processed one at a time). In this way there's no need to have synchronized sections. If the actor is thread safe (the default), such as FileObjectDiscoveryActor, which is even stateless, it can be engaged by multiple threads at the same time. The thread-safety property is specified as an attribute of @Actor (unfortunately @ThreadSafe can't help, since its retention is compile time).
 

The second actor is the one that reads data from files:
 

package it.tidalwave.integritychecker.impl;

import it.tidalwave.actor.annotation.Actor;
import it.tidalwave.actor.annotation.ListensTo;

@Actor(initialPriority=Thread.MAX_PRIORITY) @ThreadSafe
public class FileLoaderActor
  {
    public void onFileDiscovered (final @ListensTo FileDiscoveredMessage message)
      {
        final FileObject fileObject = message.getFileObject();

        try
          {
            final File file = FileUtil.toFile(fileObject);
            final @Cleanup RandomAccessFile randomAccessFile = new RandomAccessFile(file, "r");
            final MappedByteBuffer byteBuffer = randomAccessFile.getChannel().map(READ_ONLY, 0, file.length()); 
            byteBuffer.load();
            randomAccessFile.close();
            new FileContentsAvailableMessage(fileObject, byteBuffer).send();
          }
        catch (Exception e)
          {
            FileDamageDetectedMessage.forFile(fileObject).withCause(e).send();
          }
      }
  }

It uses memory-mapped I/O for speed and fires a FileContentsAvailableMessage carrying the bytes, or a FileDamageDetectedMessage in case of error.
 

The next stage is the actor that computes the fingerprint:
 

package it.tidalwave.integritychecker.impl;

import it.tidalwave.actor.annotation.Actor;
import it.tidalwave.actor.annotation.ListensTo;

@Actor @ThreadSafe
public class FingerprintComputerActor
  {
    public void onFileContentsAvailable (final @ListensTo FileContentsAvailableMessage message)
      throws NoSuchAlgorithmException
      {
        final String algorithm = "MD5";
        final MessageDigest digestComputer = MessageDigest.getInstance(algorithm);
        digestComputer.update(message.getContents());
        final Fingerprint fingerprint = new Fingerprint(algorithm, toString(digestComputer.digest()));
        FingerprintComputedMessage.forFile(message.getFileObject()).withFingerprint(fingerprint).send();
      }
  }

Again, a FingerprintComputedMessage carrying the result is fired. The last stage is the persistence actor:
 

package it.tidalwave.integritychecker.impl;

import it.tidalwave.actor.CollaborationStartedMessage;
import it.tidalwave.actor.MessageSupport;
import it.tidalwave.actor.annotation.Actor;
import it.tidalwave.actor.annotation.ListensTo;
import it.tidalwave.actor.annotation.Message;
import it.tidalwave.actor.annotation.OriginatedBy;

@Actor(threadSafe=false) @NotThreadSafe
public class PersistenceManagerActor
  {
    @Message(outOfBand=true) @ToString
    static class FlushRequestMessage extends MessageSupport
      {
      }
   
    private final Map<String, Object> map = new TreeMap<String, Object>();
   
    private File persistenceFile;
   
    private boolean flushPending;
   
    public void onScanStarted (final @ListensTo CollaborationStartedMessage cMessage,
                               final @OriginatedBy FileScanRequestMessage message)
      {
        final File folder = FileUtil.toFile(message.getFolder());       
        final String fileName = new SimpleDateFormat("'fingerprints-'yyyyMMdd_HHmm'.txt'").format(new Date());
        persistenceFile = new File(folder, fileName);
        map.clear();
      }
   
    public void onFileDiscovered (final @ListensTo FileDiscoveredMessage message)
      {
        final String name = message.getFileObject().getNameExt();
       
        if (!map.containsKey(name)) // message sequentiality is not guaranteed
          {
            map.put(name, "unavailable"); 
            requestFlush();
          }
      }

    public void onFingerprintComputed (final @ListensTo FingerprintComputedMessage message)
      {
        final String name = message.getFileObject().getNameExt();
        final Fingerprint fingerprint = message.getFingerprint();
        map.put(name, fingerprint);
        requestFlush();
      }
   
    private void onFlushRequested (final @ListensTo FlushRequestMessage message)
      throws InterruptedException, IOException
      {
        flushPending = false;
        // writes data to file
     }
   
    private void requestFlush()
      {
        if (!flushPending)
          {
            new FlushRequestMessage().sendLater(5, TimeUnit.SECONDS);
            flushPending = true;
          }
      }
  }

The basic idea is not to write every piece of data to disk as it's generated, since that might be expensive (well, perhaps writing to a relational database wouldn't be, but for a simple tool such as SolidBlue it makes more sense to write to a flat file). So, data are first put into an in-memory map (a record is generated as soon as a file has been discovered, and later overwritten with the final result) that is periodically flushed to disk. The actor is therefore stateful, hence not thread safe (there must be only one instance). Whenever the map is changed, it's marked 'dirty' and a request to flush it to disk is generated. The flush will happen within 5 seconds, but it's not a good idea to use a Java Timer, as I'd run into the very threading problems that I want to avoid. Instead, a private FlushRequestMessage is sent back to the actor itself with some delay, triggering the write to disk. This message is annotated as 'out-of-band', meaning that it must be delivered with high priority (in practice, it's placed at the head of the incoming message queue rather than at the tail). Since the runtime guarantees that only a single message can be processed by any actor instance at any time, there's no need to protect the map, or the flushPending flag, with synchronized sections.

The onScanStarted() method is interesting, because it introduces the concept of collaborations.

 

Collaborations
 

Actually, asynchronicity is excellent for massively concurrent systems (as well as, needless to say, for modelling things that are asynchronous in nature, and there are many). But it's nice to know, in some way, when a sequence of things that happened in response to a message of yours has been completed. In my code I've called this concept a "Collaboration":
 

  1. Any message is always part of a Collaboration.
  2. If the thread creating a certain message is not bound to a Collaboration, a new Collaboration is created and the message is considered its originator.
  3. The originator message will be delivered in threads bound to its Collaboration. This means that any further message, created as a consequence of the reception of the originator message, will share its Collaboration.
  4. A Collaboration keeps track of all the related messages and threads by means of reference counting.
  5. When there are no more pending messages or working threads for a Collaboration, it is considered completed.
  6. Two special messages, CollaborationStartedMessage and CollaborationTerminatedMessage, are fired by the runtime to notify of the creation and completion of Collaborations.

When a message is sent, you can get its Collaboration object:
 

final Collaboration collaboration = new MyMessage().send();

and any listening method can get the Collaboration as well:
 

public void onMyMessage (final @ListensTo MyMessage message)
  {
    final Collaboration collaboration = message.getCollaboration();
    ...
  }

By construction, then, a Collaboration is attached to all the threads that are directly or indirectly triggered by the originating message. This could be used for managing regular transactions (for instance, attaching a javax.transaction.UserTransaction to the Collaboration). But actors seem to fit better with Software Transactional Memory, so I'll drop this topic for now.
 

It's even possible to wait synchronously for a Collaboration to complete:
 

collaboration.waitForCompletion();
 

even though I think it's only useful for tests. More interesting is the case in which a Collaboration includes bidirectional interactions with a user: for instance, making a dialog box pop up and waiting for a selection. In these circumstances, a Collaboration can be suspended (meaning it is not considered terminated even though there are no pending messages or working threads) and later resumed. More details in a further post.
 

Collaborations are something that I don't expect to find in platforms such as Akka, since I'm not sure it's feasible to have them work efficiently in a distributed context (I mean: maybe yes, maybe not, I haven't studied the problem yet). But they are a nice feature to have if possible, and perhaps they could be implemented on top of an existing actor platform.
 

Back to the method onScanStarted(): it is clear now that it listens to the CollaborationStartedMessage messages originated by a FileScanRequestMessage to initialize the storage of the fingerprints, which must be located in the top scanned directory.
 

In a similar way, the IntegrityCheckerPresentationControllerActor listens to the various messages being delivered, including those notifying the beginning and the end of the Collaboration, to update the user interface:
 

package it.tidalwave.integritychecker.ui.impl;

import it.tidalwave.actor.MessageSupport;
import it.tidalwave.actor.Collaboration;
import it.tidalwave.actor.CollaborationCompletedMessage;
import it.tidalwave.actor.CollaborationStartedMessage;
import it.tidalwave.actor.annotation.Actor;
import it.tidalwave.actor.annotation.ListensTo;
import it.tidalwave.actor.annotation.Message;
import it.tidalwave.actor.annotation.OriginatedBy;

@Actor(threadSafe=false) @NotThreadSafe
public class IntegrityCheckerPresentationControllerActor
  {
    private static final double K10 = 1000;
    private static final double M10 = 1000000;
   
    @Message(outOfBand=true, daemon=true) @ToString
    private static class RefreshPresentationMessage extends MessageSupport
      {
      }
           
    private final IntegrityCheckerPresentationBuilder presentationBuilder = Locator.find(IntegrityCheckerPresentationBuilder.class);
   
    private final IntegrityCheckerPresentation presentation = presentationBuilder.createPresentation();
   
    private final Statistics statistics = new Statistics();
   
    private boolean refreshing = false;
   
    private int totalFileCount;
   
    private long totalDataSize;
   
    private int processedFileCount;
   
    private long processedDataSize;
   
    public void onScanStarted (final @ListensTo CollaborationStartedMessage cMessage,
                               final @OriginatedBy FileScanRequestMessage message)
      {
        totalFileCount = 0;
        totalDataSize = 0;
        processedFileCount = 0;
        processedDataSize = 0;
        refreshing = true;
        new RefreshPresentationMessage().send();
      }
   
    public void onScanCompleted (final @ListensTo CollaborationCompletedMessage cMessage,
                                 final @OriginatedBy FileScanRequestMessage message)
      {
        refreshing = false;
        presentation.setProgressLabel("done");
        new RefreshPresentationMessage().send();
      }
   
    public void onFileDiscovered (final @ListensTo FileDiscoveredMessage message)
      {
        totalFileCount++;
        totalDataSize += message.getFileObject().getSize();
      }
   
    public void onFingerprintComputed (final @ListensTo FingerprintComputedMessage message)
      {
        processedFileCount++;
        processedDataSize += message.getFileObject().getSize();
        final Collaboration collaboration = message.getCollaboration();
        final long elapsedTime = collaboration.getDuration().getMillis();
        final double speed = (processedDataSize / M10) / (elapsedTime / K10);
        final int eta = (int)(((totalDataSize - processedDataSize) / M10) / speed);
        ... // and update the statistics object
      }
   
    private void updatePresentation (final @ListensTo RefreshPresentationMessage message)
      {
        presentation.updateStatistics(statistics);
       
        if (refreshing)
          {
            new RefreshPresentationMessage().sendLater(1, TimeUnit.SECONDS);
          }
      }
  }

Actually, another advantage of message-based designs is that the user interface can be easily updated without having to wire up a large number of listeners.
 


Activation
 

Actors are managed objects and must never be accessed directly. To accomplish this, they are declared by means of activators, which are typically grouped together, as in:
 

package it.tidalwave.integritychecker.impl;

import org.openide.util.lookup.ServiceProvider;
import it.tidalwave.actor.spi.ActorGroupActivator;
import static it.tidalwave.actor.spi.ActorActivator.*;

@ServiceProvider(service=IntegrityCheckerActivator.class)
public class IntegrityCheckerActivator extends ActorGroupActivator
  {
    public IntegrityCheckerActivator()
      {
        add(activatorFor(FileObjectDiscoveryActor.class).withPoolSize(8));
        add(activatorFor(FileLoaderActor.class).withPoolSize(1));
        add(activatorFor(FingerprintComputerActor.class).withPoolSize(4));
        add(activatorFor(PersistenceManagerActor.class).withPoolSize(1));
      }
  }

Activators specify the pool size of each actor. Given that, it's possible to activate and deactivate the group:

Locator.find(IntegrityCheckerActivator.class).activate();
Locator.find(IntegrityCheckerActivator.class).deactivate();

 

For people not used to the NetBeans Platform: @ServiceProvider is a compile-time annotation that generates a service declaration into META-INF/services, while Locator is a facility that retrieves the reference at runtime.
 

Actors can have methods annotated with @PostConstruct and @PreDestroy, which will be called just before activation and just after deactivation, of course in a thread-safe way. There's a guarantee that messages can't be delivered before activation is completed or after deactivation.
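For instance, a hypothetical actor with lifecycle methods might look like the following sketch (the class name and method bodies are invented; only the annotations and the delivery guarantees come from the framework as just described):

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

public class SampleActor
  {
    @PostConstruct
    private void initialize()
      {
        // e.g. open resources: guaranteed to complete before any message is delivered
      }

    @PreDestroy
    private void dispose()
      {
        // e.g. release resources: no message can be delivered from now on
      }
  }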
 


Simpler than Akka... for how long?
 

This works: I enjoy the fact that I can avoid synchronized blocks, and so far I don't mind that this thing is clearly much less performant than Akka. Everything fits into a jar file of less than 40 kB, and it's much simpler than Akka (whose complexity is justified by the fact that it offers many more features). So what? Distribution over a network is something we can't ignore in a cloud-oriented world, so it's likely that, going on, I'll see the need for more features: you often discover that you need new features as you go. Smell of reinventing the wheel ahead!
 

For instance, consider the FileLoaderActor. I've previously said that experimental evidence demonstrates that it makes sense (at least with large files) to have a single instance of it, thus serializing data access to the disk. This makes sense given the physical nature of a magnetic disk. But it only holds for a single disk: if you have more than one, it probably makes sense to have one instance per disk; and having a Solid State Drive (SSD) changes the rules again. This means that you need some more complex way to dispatch messages, a form of routing. Akka offers it, and it would be unwise to rewrite this feature from scratch.
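Just to make the idea concrete, here is a rough, hypothetical sketch of per-disk routing in plain Java (this is neither the actual framework nor the Akka API): tasks are partitioned by the disk owning the file, so each device gets its own single-threaded worker and accesses stay serialized per disk rather than globally.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerDiskRouter
  {
    private final Map<String, ExecutorService> workersByDisk = new ConcurrentHashMap<String, ExecutorService>();

    public void dispatch (final String diskId, final Runnable loadTask)
      {
        ExecutorService worker = workersByDisk.get(diskId);

        if (worker == null) // lazily create a dedicated single-threaded worker per disk
          {
            final ExecutorService candidate = Executors.newSingleThreadExecutor();
            worker = workersByDisk.putIfAbsent(diskId, candidate);

            if (worker == null)
              {
                worker = candidate;
              }
            else
              {
                candidate.shutdown(); // another thread won the race
              }
          }

        worker.execute(loadTask);
      }
  }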
 

So now I'm going to start phase two, that is reimplementing this stuff on top of Akka (using the 2.0 milestone that's already available). In a few weeks I'll let you know how the story goes.

The code described in this blog post is available at the Mercurial repository http://bitbucket.org/tidalwave/solidblue-src, tag 1.0-ALPHA-4.
 

In a further post I'll talk about how Collaborations are used by blueArgyll, which features more complex interactions with the user. See you later.

I might be wrong, but Google just published what appears to be, at least to my knowledge (I could have missed some other in the past), the first interactive Doodle. It's in honor of Stanisław Lem, the sci-fi writer of Solaris fame (please, if you have time, watch Tarkovskij's movie rather than - or in addition to - Soderbergh's, even though Tarkovskij is harder to follow). At a certain point during an animation a robot appears: it challenges you with a math quiz and you can interact with it to go on. Then a further challenge appears... but at this point I have to go back to a meeting. ;-)

Monday evening I was back home from Devoxx '11. Excellent conference and excellent people; I'm so happy I went back to it after two years. I'm full of sensations and things to think about for the next weeks, also for the non-technical part of the journey: I drove for more than 2,800 kilometers full of beauty, from magnificent Brugge to the cathedral of Reims, passing through the Lac du Der-Chantecoq filled with cranes, and the ever beautiful Bourgogne. This made for 1,536 photos and videos, whose postprocessing will keep me busy for some time. 

I even survived a disk crash just the evening before my talk - fortunately I had recently fitted two disks into my laptop, and the backup was fine. I'll post more about this later, just in case the setup tips are interesting for somebody.

This long introduction :o) was just to say that my slides about "Building Android apps with Maven" are available at SlideShare.


Back to Devoxx!

Posted by fabriziogiudici Nov 15, 2011

2011, and I'm back to my favourite conference. In the past two years I attended JavaOne and Jazoon, but for different reasons I wasn't able to go to Antwerpen. A number of things have changed in the meantime. Sun is no longer here, but I had already absorbed the shock at JavaOne 2010. Devoxx has been held in November for a couple of years (previously it was in December). Since I like to combine the conference with a photographic trip through France, this is definitely a plus: the landscape looks better one month earlier, and the weather seems definitely better too (or maybe it's just chance?): four days for the trip to Antwerpen and four days of excellent weather in a row (with just a few hours of fog through Picardie). 

Now, the conference. As usual, everything is smooth and well organized. Yesterday's speakers' dinner was an excellent occasion to meet (again) some people you usually interact with only over the internet (the food was fine too, BTW). There are differences through the years. My first Devoxx editions (JavaPolis at the time) were packed with Italians, especially JUG leaders. For a number of reasons, this no longer happens. I also recall a large pack of Brazilians, led by Bruno - well, this morning Stephan Janssen is trying to confuse me, as he's wearing a large Brazilian flag on his back (I suppose I'll figure out why now that he's starting the introductory talk).

The technical novelty for getting in this year is a wristband, a plastic band wrapped around your wrist carrying identification information. I suppose it's better for the organizers - what I don't like is that you're supposed to keep it on until the conference is over. Well, it's waterproof (no problems with the shower), but I really hate having something on my wrist when I go to sleep. Fortunately, with a little trick, it's possible to remove and re-attach it (working around a junction designed to self-destruct when detached). 

Yesterday I also met some guys from the NetBeans Dream Team (and Geertjan of course). Over a beer offered at the Oracle stand there was a quick discussion about Swing, JavaFX 2.0, HTML 5 and tablets. More about this in an upcoming blog post. 

So big is my excitement that I even managed to wake up at the proper time this morning, and I'm ready to attend the first keynote speech (I usually miss those in the first time slot). Some numbers: Devoxx celebrates its tenth birthday with 3,350 attendees from 40 countries; 150 sessions, 200 hours and 170+ Rock Star speakers. 60 JUGs endorsed the event. Google is a sponsor for the first time. And an announcement: we have a spin-off in Paris: Devoxx France, 18th to 20th April 2012! As far as I understand, a significant percentage of presentations will be held in French. 

Enough for this morning. Oh, of course, let's not forget my speech: "Building Android apps with Maven", which is scheduled for tomorrow afternoon (Nov 17) at 14:00. 

PS Ok, the Brazilian flag is in honour of Bruno, who wasn't able to be there. Bruno is even depicted in the large "map of the world" that is this year's official graphic of the conference (it has been on the conference home page for months). The graphic itself, BTW, is an allegory with a lot of details inside, much in the way the famous Flemish painter Hieronymus Bosch made his masterworks. Only a bit less worrying.  

Last summer I announced that I was going to move all my websites to a new CMS, and blueBill has indeed been moved since then. Unfortunately I had some trivial problems while migrating the others, and then a number of accidents - the latest one being a major flood in the town where I live - robbed me of the time to complete the operation. Thus, you'll see that blueMarine and forceTen are broken at the moment (pointing to an error page or a Jetty welcome page) and jrawio has some broken links and missing CSS/media. Please bear with me a while longer, until I fix everything.