

There are several tutorials out there which explain in more or less detail how to build OpenJDK on Windows. The problem with most of them is that they are either outdated (which will happen to this blog as well:) or they use compilers which either aren't available anymore or aren't free. This blog will describe how to build both a 64-bit and a 32-bit version of OpenJDK 8 on a plain, vanilla Windows XP 64-bit operating system using only free (as in free beer) tools.

Unfortunately, building OpenJDK on Windows is still far from straightforward, as can be seen from the regular desperate help requests on the OpenJDK mailing lists. There are many reasons for this, and I neither want to discuss them nor do I want to blame anybody for this fact. Instead, I hope this post can help to improve the OpenJDK build documentation and perhaps even the build process itself.

Getting started

As stated above, I started with a fresh, 64-bit Windows XP installation. In fact, I used a VMware image in Oracle VirtualBox 4.1.4 on 64-bit Ubuntu 10.04 and in VMware Player 3.1.4 on 64-bit Windows 7. The first step was to install the free Microsoft C/C++ compilers and various dependencies required by OpenJDK. If not mentioned otherwise, I installed all the packages mentioned here in the default location suggested by the respective installers.

Visual C++ 2010 Express

Download and install the free Visual C++ 2010 Express compilers. Notice that this will require the installation of the "Windows Imaging Component (64-bit)", which is not present in a clean XP installation, but the Visual C++ installer will point you to the URL from where you can get it.

Windows SDK for Windows 7 and .NET

Unfortunately, the "Visual C++ 2010 Express" package only contains 32-bit compilers, so if we want to build a 64-bit JVM we additionally have to install the "Windows SDK for Windows 7 and .NET". (Notice that the Windows SDK also contains the Itanium cross compiler, so should there be a native IA64 port of OpenJDK anytime soon, it will be possible to build it with this setup as well.)

Microsoft DirectX 9.0 SDK

Another build requirement is the "Microsoft DirectX 9.0 SDK header files and libraries". It can be easily installed; just be sure to check the corresponding Microsoft DirectX section in the OpenJDK README in case the required version has changed.


Mercurial

To get the OpenJDK source code, we need a Mercurial client. TortoiseHg is an easy-to-install Mercurial distribution for Windows; the installer will update the system path automatically.

Java 7 JDK

As bootstrap JDK for a JDK 8 build we need at least a JDK 7, so download and install a Java 7 JDK. Notice that it is very important to install the JDK into a directory WITHOUT spaces (for this blog I'll use c:\OpenJDK\jdk1.7.0_01)! It is not necessary to install an extra JRE as part of the JDK installation. After the installation, add the path to the java executable to the PATH environment variable (under Start->Control Panel->System->Advanced->Environment Variables->System Variables->Path).


Apache Ant

The Ant build system is needed for parts of the JDK class library build. It can be downloaded as a .zip file and simply unpacked into a directory whose path contains no white space (for this blog I use c:\OpenJDK\apache-ant-1.8.2). Afterwards, the path to the ant executable (c:\OpenJDK\apache-ant-1.8.2\bin) should be added to the system-wide PATH environment variable as explained in the previous step.


Cygwin

Now comes the slightly tricky part of the setup. Because the JDK was initially developed in a Unix-like environment, its build still relies on one. In order to build it on Windows, we have to emulate such an environment with the help of Cygwin.

Cygwin can be installed by launching the setup program from the Cygwin home page. Choose the default base installation plus the packages described in the OpenJDK README (ar.exe, make.exe, m4.exe, cpio.exe, gawk.exe, file.exe, zip.exe, unzip.exe, free.exe, …). Notice that they can be easily located by typing their names into the Search field of the Cygwin Setup program.

In addition, we also have to install the GCC base package in order to compile GNU make in the next step.

GNU make

Update (2012-02-09): Cygwin now comes with GNU make 3.82.90, which should compile OpenJDK just fine, but of course you can still compile the newest version yourself as described below.

There's an issue with GNU make 3.81: it has problems with drive letters in Windows path names and, to make a long story short, it will not work with the current OpenJDK build. For some unknown reason, the version of GNU make shipped with the standard Cygwin distribution did not support drive letters in path names for a long time. Fortunately, it is very easy to compile a private version of GNU make 3.82 which supports drive letters.

Download and unpack GNU make 3.82 (for this tutorial I'll put it into c:\OpenJDK\make-3.82). Then execute the following commands (the usual GNU configure-and-make sequence) in a Cygwin bash shell:

cd /cygdrive/c/OpenJDK/make-3.82
./configure
make

The last command will build c:\OpenJDK\make-3.82\make.exe.


FreeType

The OpenJDK build depends on the FreeType library. Unfortunately, there are no binary development packages available for Windows, but building FreeType is not hard at all.

Download FreeType, extract it to c:\OpenJDK\freetype-2.4.7 and double-click c:\OpenJDK\freetype-2.4.7\builds\win32\vc2010\freetype.vcxproj to open the FreeType project in "Visual C++ 2010 Express".

From the projects properties do the following:

  • Configuration Manager -> Active Solution Platform -> Type or select the new Platform -> x64
  • Configuration -> Release Multithreaded
  • Platform -> x64
  • Output Directory -> rename ".\..\..\..\objs\win32\vc2010\" to ".\..\..\..\objs\win64\vc2010\"
  • Intermediate Directory -> rename ".\..\..\..\objs\release_mt\" to ".\..\..\..\objs\release_mt_64\"
  • Target Name -> rename to "freetype"
  • Platform Toolset -> Windows7.1SDK

Then choose "Release Multithreaded"/"x64" in the menu bar and build the project by choosing "Build" from the project menu. This will create the 64-bit freetype.lib in the corresponding output directory. Now change the "Configuration Type" to "dll" in the project properties and build again. This time the 64-bit freetype.dll will be built in the output directory.

For the 32-bit build we have to go back to the project properties and do the following changes:

  • Configuration -> Release Multithreaded
  • Platform -> win32
  • Target Name -> rename to "freetype"
  • Platform Toolset -> v100

Now we can choose "Release Multithreaded"/"Win32" in the menu bar and start the build to create the 32-bit freetype.lib in the output directory. After we've changed the "Configuration Type" to "dll" in the project properties the next build will finally create the 32-bit freetype.dll in the corresponding output directory.

PATH handling

Path handling is very sensitive and the root cause of many build problems! Actually we have to deal with three different path categories: the default Windows System PATH setting (which now also contains the path of the JDK and the Mercurial client), the path setting for the Microsoft compiler and build tools which will be set up right before the build (see next step) and finally the path to the Cygwin tools which are needed for the build.

In general, the system path part should come before the compiler path part, which in turn should come before the Cygwin path. But there are a few exceptions - for now exactly two: the Cygwin find.exe utility should be found before the one from the Windows system path, and our newly compiled make.exe should be found before the one from the Cygwin path.

To solve this problem I created an extra directory c:\OpenJDK\PATH_PREPEND to which I copied the respective executables:

copy c:\cygwin\bin\find.exe c:\OpenJDK\PATH_PREPEND
copy c:\OpenJDK\make-3.82\make.exe c:\OpenJDK\PATH_PREPEND

Before we can start the actual build, we have to set the correct compiler environment by calling SetEnv.cmd from the Windows SDK with the right parameters. For this task I created two shortcuts which start a command shell for a 64-bit and a 32-bit environment respectively:

C:\WINDOWS\system32\cmd.exe /E:ON /V:ON /K "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /Release /xp /x64
C:\WINDOWS\system32\cmd.exe /E:ON /V:ON /K "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /Release /xp /x86

In such a command shell we can now start a bash shell and export the final path settings:

$ export PATH=/cygdrive/c/OpenJDK/PATH_PREPEND:$PATH:/cygdrive/c/cygwin/bin

Downloading the OpenJDK sources

We will use Mercurial to get the latest (and greatest) OpenJDK 8 sources. Notice that you have to set the http_proxy environment variable if your machine does not have direct access to the Internet:

$ export http_proxy=http://proxy:8080
$ hg clone

The previous command will only fetch the JDK 8 base directory along with some README and script files. In order to get the full-blown source tree, you can either clone the sub-repositories manually or, better, use the handy helper script:

$ cd jdk8
$ ./


JAXP and JAX-WS sources

The OpenJDK Mercurial repositories do not contain the JAXP and JAX-WS sources because they are developed in different projects. It is therefore necessary to download these sources and place them into a directory which is passed to the OpenJDK build through the ALT_DROPS_DIR environment variable.

Of course, it is necessary to get the right version of the sources for a successful build. The exact file names and download URLs can be found in the respective Ant build files:

mkdir c:\OpenJDK\ALT_DROPS_DIR
cd /cygdrive/c/OpenJDK/jdk8/
grep -E "|jaxp_src.master.bundle.url.base" jaxp/
grep -E "jaxws_src.master.bundle.url.base|||jaf_src.master.bundle.url.base" jaxws/


So I downloaded the JAXP, JAX-WS and JAF source bundles from the respective URLs and stored them in the newly created directory c:\OpenJDK\ALT_DROPS_DIR.

Notice that it is also possible to let the make process download these source drops automatically by adding the ALLOW_DOWNLOADS=true parameter to the make command line. But first of all this is not recommended in the OpenJDK README, and second it didn't work for me (probably just because I hadn't properly configured the proxy settings for Ant).


msvcr100.dll

The build system also needs to find the msvcr100.dll runtime library. Because it couldn't be automatically detected on my system, and because I wanted to pass its location to the build (through ALT_MSVCRNN_DLL_PATH) as a path without spaces, I copied the DLL to c:\OpenJDK\ALT_DROPS_DIR as well:

cp /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ Visual\ Studio\ 10.0/Common7/Packages/Debugger/X64/msvcr100.dll \

Building the OpenJDK

This is finally the last step of this tutorial - building the OpenJDK is just two commands away! Before we start the actual build, we set the environment variable WINDOWSSDKDIR. Normally this is already done by the compiler setup script SetEnv.cmd which we've called before, but on my machine the script actually defined WindowsSDKDir. While this is no problem in a Windows command shell, where environment variables are case insensitive, the GNU makefile used for the OpenJDK build only looks for WINDOWSSDKDIR and will not find WindowsSDKDir. (Unfortunately, setting ALT_WINDOWSSDKDIR on the make command line doesn't seem to work either, at least for me. I therefore think this is definitely a point that should be fixed in the Makefiles!)

$ export WINDOWSSDKDIR=$WindowsSDKDir

Now we can finally fire up the make command along with several configuration variables. We want to build a 64-bit VM first, so we set ARCH_DATA_MODEL to 64. We choose a different output directory, so we set ALT_OUTPUTDIR to c:/OpenJDK/output_amd64. With ALT_FREETYPE_LIB_PATH and ALT_FREETYPE_HEADERS_PATH we specify where the FreeType libraries and headers are located (be careful to use the path to the 64-bit libraries that you've built before). ALT_BOOTDIR denotes the location of the bootstrap JDK and ALT_DROPS_DIR the directory where we've stored the JAXP and JAX-WS sources. Finally, ALT_MSVCRNN_DLL_PATH indicates where a copy of the msvcr100.dll library can be found; this will be copied into the newly created JDK images.

The build can be sped up considerably by setting HOTSPOT_BUILD_JOBS (for the HotSpot build) and PARALLEL_COMPILE_JOBS (for the JDK build) to the number of CPUs if you are building on a multi-core machine. In my experience it may even help to set the numbers slightly higher than the actual number of CPUs, because on new machines the build time is usually IO- and not CPU-bound.
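If you want to derive a job count programmatically rather than counting cores by hand, the JVM can report the number of logical CPUs it sees. The following is a small, hypothetical helper (the class name and the "+1 over-subscription" policy are my own illustration, not part of the OpenJDK build):

```java
public class BuildJobs {
    public static void main(String[] args) {
        // Number of logical CPUs visible to the JVM
        int cpus = Runtime.getRuntime().availableProcessors();
        // Slightly over-subscribe, since the build is often IO-bound
        int jobs = cpus + 1;
        System.out.println("HOTSPOT_BUILD_JOBS=" + jobs);
        System.out.println("PARALLEL_COMPILE_JOBS=" + jobs);
    }
}
```

The printed values can then be pasted onto the make command line.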

I also like to save the whole build logs into a file so I end the make command line with '2>&1 | tee c:/OpenJDK/output_amd64.log'. This will print the build logs to the console and save them to c:/OpenJDK/output_amd64.log at the same time. Also notice that I use path names with DOS-style drive letters but Unix-like forward slashes as path separators on the make command line. This is advised in the OpenJDK Readme and gave the best results for me.

$ make ARCH_DATA_MODEL=64 \
  ALT_OUTPUTDIR=c:/OpenJDK/output_amd64 \
  ALT_FREETYPE_LIB_PATH=c:/OpenJDK/freetype-2.4.7/objs/win64/vc2010 \
  ALT_FREETYPE_HEADERS_PATH=c:/OpenJDK/freetype-2.4.7/include \
  ALT_BOOTDIR=c:/OpenJDK/jdk1.7.0_01 \
  ALT_DROPS_DIR=c:/OpenJDK/ALT_DROPS_DIR \
  ALT_MSVCRNN_DLL_PATH=c:/OpenJDK/ALT_DROPS_DIR \
  2>&1 | tee c:/OpenJDK/output_amd64.log

If you closely followed my instructions, and if I've made no mistake while writing this down, you should see this wonderful success message after a few hours:

##### Leaving jdk for target(s) sanity all docs images             #####
##### Build time 05:17:03 jdk for target(s) sanity all docs images #####

-- Build times ----------
Target all_product_build
Start 2011-10-27 09:15:08
End   2011-10-27 14:41:50
00:03:54 hotspot
00:01:28 jaxp
00:01:47 jaxws
05:17:03 jdk
00:01:13 langtools
05:26:43 TOTAL
make[1]: Leaving directory `/cygdrive/c/OpenJDK/jdk8'

Congratulations - you've successfully built OpenJDK 8 on Windows!

Now, building a 32-bit JDK is just a few commands away. First we have to start the right command shell with the settings for the 32-bit compiler and build tools (SetEnv.cmd /Release /xp /x86). Then we type (note the parameters which have changed with respect to the 64-bit build):

$ export PATH=/cygdrive/c/OpenJDK/PATH_PREPEND:$PATH:/cygdrive/c/cygwin/bin
$ export WINDOWSSDKDIR=$WindowsSDKDir
$ make ARCH_DATA_MODEL=32 \
  ALT_OUTPUTDIR=c:/OpenJDK/output_x86 \
  ALT_FREETYPE_LIB_PATH=c:/OpenJDK/freetype-2.4.7/objs/win32/vc2010 \
  ALT_FREETYPE_HEADERS_PATH=c:/OpenJDK/freetype-2.4.7/include \
  ALT_BOOTDIR=c:/OpenJDK/jdk1.7.0_01 \
  ALT_DROPS_DIR=c:/OpenJDK/ALT_DROPS_DIR \
  ALT_MSVCRNN_DLL_PATH=c:/OpenJDK/ALT_DROPS_DIR \
  NO_DOCS=true \
  2>&1 | tee c:/OpenJDK/output_x86.log

Notice that it is a known problem that building the JavaDoc documentation during a 32-bit build on a 64-bit system (actually, with a 64-bit bootstrap JDK) will fail. To work around this issue we use the additional parameter NO_DOCS=true.

Updated Feb. 22nd 2011: after some very good feedback from (among others) Mark Wielaard, Florian Weimer, Roman Divacky and Chris Lattner himself, I decided to re-run my tests with new compiler versions (Clang trunk rev. 125563 and GCC 4.5.2) and an improved Clang configuration which now finally fully enables precompiled header support for the Clang build.

At FOSDEM 2011 I heard Chris Lattner's very nice "LLVM and Clang" keynote. The claims he made in his talk were very impressive: he spoke about Clang being a "production quality" "drop-in replacement" for GCC with superior code generation and improved compile speed. Already during the talk I decided that it would be interesting to test these claims on the HotSpot VM, which is generally not known as the world's simplest C++ project. In the following you can find my experiences with Clang and a new Clang patch for the OpenJDK, in case you want to do some experiments with Clang yourself.

GCC compatibility

GCC is the standard C/C++ compiler on Linux and is available on virtually every Unix platform. Any serious challenger should therefore have at least a GCC compatibility mode to ease its adoption. Clang claims to be fully GCC compatible, so I just created a new Clang configuration by changing some files and creating some new, Clang-specific ones from their corresponding GCC counterparts:

> hg status -ma
M make/linux/makefiles/buildtree.make
M src/os_cpu/linux_x86/vm/os_linux_x86.cpp
M src/share/vm/adlc/output_c.cpp
M src/share/vm/utilities/globalDefinitions.hpp
A make/linux/makefiles/clang.make
A make/linux/platform_amd64.clang
A src/share/vm/utilities/globalDefinitions_clang.hpp

and started a new build (for a general description of the HotSpot build process see either the README-builds file or the more detailed but slightly outdated explanation in my previous blog):

> ALT_BOOTDIR=/share/software/Java/jdk1.6.0_20 \
  ALT_OUTPUTDIR=../output_x86_64_clang_dbg \
  make jvmg USE_CLANG=true

One of the very first observations is the really HUGE number of warnings issued by the compiler. Don't get me wrong here - I really regard this as a major feature of Clang, especially the clear and well-arranged fashion in which the warnings are presented (e.g. syntax colored, with macros nicely expanded). But for the current HotSpot code base this is simply too much. Especially the issue "6889002: CHECK macros in return constructs lead to unreachable code" leads to a bunch of repeated warnings for every single compilation unit which make the compilation output nearly unreadable. So before I started to eliminate the warnings step by step, I decided to turn the warnings off altogether in order to get a first impression of the overall compatibility and performance:

> ALT_BOOTDIR=/share/software/Java/jdk1.6.0_20 \
  ALT_OUTPUTDIR=../output_x86_64_clang_dbg \

Except for the -fcheck-new option, Clang seems to understand all the other compiler options used during the HotSpot build process. For -fcheck-new, a warning is issued stating that the option will be ignored, so I just removed it from make/linux/makefiles/clang.make. I have also removed obvious workarounds for some older GCC versions in the new Clang files which were derived from their corresponding GCC counterparts. The following compiler options have been used in the dbg and opt builds respectively:

dbg-options: -fPIC -fno-rtti -fno-exceptions -m64 -pipe -fno-omit-frame-pointer -g -MMD -MP -MF
opt-options: -fPIC -fno-rtti -fno-exceptions -m64 -pipe -fno-omit-frame-pointer -O3 -fno-strict-aliasing -MMD -MP -MF

Besides this, I only had to change the source code of two files to make HotSpot compilable by Clang. The first change was necessary only because the ADLC part of the make does not honor the general warning settings of the HotSpot build and always runs with -Werror. Here's the small patch which prevents a warning caused by an assignment being used as a Boolean value:

--- a/src/share/vm/adlc/output_c.cpp    Tue Nov 23 13:22:55 2010 -0800
+++ b/src/share/vm/adlc/output_c.cpp    Wed Feb 09 16:39:30 2011 +0100
@@ -3661,7 +3661,7 @@
     // Insert operands that are not in match-rule.
     // Only insert a DEF if the do_care flag is set
-    while ( comp = comp_list.post_match_iter() ) {
+    while ( (comp = comp_list.post_match_iter()) ) {
       // Check if we don't care about DEFs or KILLs that are not USEs
       if ( dont_care && (! comp->isa(Component::USE)) ) {

Updated Feb. 22nd 2011: I decided to leave the file output_c.cpp untouched and instead change the ADLC make file adlc.make to use the same warning flags as the main HotSpot make instead of using -Werror.

--- a/make/linux/makefiles/adlc.make    Wed Feb 16 11:24:17 2011 +0100
+++ b/make/linux/makefiles/adlc.make    Tue Feb 22 12:59:37 2011 +0100
@@ -60,7 +60,7 @@
 # CFLAGS_WARN holds compiler options to suppress/enable warnings.
 # Compiler warnings are treated as errors
-CFLAGS_WARN = -Werror

The second change was necessary because of a strange inline assembler syntax which was used to assign the value of a register directly to a variable:

diff -r f95d63e2154a src/os_cpu/linux_x86/vm/os_linux_x86.cpp
--- a/src/os_cpu/linux_x86/vm/os_linux_x86.cpp  Tue Nov 23 13:22:55 2010 -0800
+++ b/src/os_cpu/linux_x86/vm/os_linux_x86.cpp  Wed Feb 09 16:45:40 2011 +0100
@@ -101,6 +101,10 @@
   register void *esp;
   __asm__("mov %%"SPELL_REG_SP", %0":"=r"(esp));
   return (address) ((char*)esp + sizeof(long)*2);
+#elif CLANG
+  intptr_t* esp;
+  __asm__ __volatile__ ("movq %%"SPELL_REG_SP", %0":"=r"(esp):);
+  return (address) esp;
   register void *esp __asm__ (SPELL_REG_SP);
   return (address) esp;
@@ -183,6 +187,9 @@
   register intptr_t **ebp;
   __asm__("mov %%"SPELL_REG_FP", %0":"=r"(ebp));
+#elif CLANG
+  intptr_t **ebp;
+  __asm__ __volatile__ ("movq %%"SPELL_REG_FP", %0":"=r"(ebp):);
   register intptr_t **ebp __asm__ (SPELL_REG_FP);

Updated Feb. 22nd 2011: to compile the newest HotSpot tip revision, another small change was necessary to overcome a problem with the name lookup of a non-dependent method name in dependent base classes (see Marshall Cline's C++ FAQ 35.19 for a nice explanation). This was wrongly accepted by GCC (see GCC bug 47752) but is correctly rejected by Clang. The problem is tracked as bug 7019689 and will hopefully be fixed soon in the HotSpot code base:

diff -r 55b9f498dbce -r c83e921b1bf7 src/share/vm/utilities/hashtable.hpp
--- a/src/share/vm/utilities/hashtable.hpp      Thu Feb 10 16:24:29 2011 -0800
+++ b/src/share/vm/utilities/hashtable.hpp      Wed Feb 16 11:09:16 2011 +0100
@@ -276,7 +276,7 @@
   int index_for(Symbol* name, Handle loader) {
-    return hash_to_index(compute_hash(name, loader));
+    return this->hash_to_index(compute_hash(name, loader));

In summary, the overall compatibility can be rated as very good. Taking into account that the newly built VM could successfully run the SPECjbb2005 * benchmark, it seems that the code generation also went mostly well, although more in-depth tests are probably required to ensure full correctness (well - at least the same level of correctness known from GCC).

Compilation performance and code size

After the build succeeded, I started to do some benchmarking. I measured the time needed for full debug and opt builds with one and three parallel build threads respectively. As you can see in Table 1, the results are very clear: Clang 2.8 is always significantly (between two and three times) slower than GCC 4.4.3:

Table 1: Resulting code size and user (wall) time for a complete HotSpot server (C2) build compared to GCC 4.4.3

             GCC 4.4.3 (1)   GCC 4.5.2 (1)   Clang 2.8 (2)   Clang trunk (3)
opt, 1 job   5m04s           4m55s (97%)     10m45s (212%)   3m10s (63%)
opt, 3 jobs  3m05s           3m03s (99%)     6m12s (201%)    2m01s (65%)

(the dbg rows, the "Clang trunk (4)" column and the code size (5) row of the original table could not be recovered)

Honestly speaking, these numbers were somewhat disappointing for me - especially after Chris Lattner's talk at FOSDEM. I haven't done more in-depth research into the reasons, but I suspect the shiny results presented at the conference are mainly based on the fact that they focus more on Objective-C than on C++ and that they have been measured against the older 4.0 and 4.2 versions of GCC. This assumption was also confirmed after looking at the Clang Performance page.

Updated Feb. 22nd 2011: I wrote the previous paragraph under the impression of my first measurements. It turned out, however, that the Clang build was not using precompiled headers properly. This is because Clang is not fully GCC compatible with respect to precompiled header files: GCC transparently searches for a precompiled version of directly included header files, whereas Clang only considers a precompiled version for headers which are included explicitly on the command line as prefix headers with the -include option (see the Precompiled Headers section of the Clang User's Manual). The HotSpot project uses a precompiled header file which is directly included in most of the source files, but for the reasons just mentioned this has no effect with Clang - it just uses the bare header file instead of the precompiled version.

To successfully enable PCH support for Clang, I had to change the Clang configuration such that it emits a corresponding "-include precompiled.hpp" compiler flag for the files (and only for the files) which include precompiled.hpp directly. This didn't work correctly with Clang 2.8, where it led to strange errors during compilation, but with a brand-new trunk version from SVN (rev. 125563) the problems were gone. As you can see in the columns labeled "Clang trunk (3)", this roughly doubled the compilation speed in the debug build and made the opt build more than three times faster! Compared to GCC 4.4.3, this still ranks Clang at about 150% for the debug build, but for the opt build Clang now considerably outperforms GCC and uses only 65% of the time required by GCC for the full build.

Another point that concerned me during the first measurements was the size of the resulting shared library. While the size was basically the same for the opt build, the Clang debug build produced a huge, ~700MB file which is nearly seven times larger than the result produced by GCC. It turned out that this was a known problem which can be partially worked around by using the -flimit-debug-info flag. As you can see in the columns labeled "Clang trunk (4)", this not only reduces the size of the resulting shared library by about 50%, it also makes the debug build up to 15% faster compared to the corresponding GCC build.

Runtime performance

After I had successfully compiled HotSpot, I decided to run some benchmarks to see how good the code quality of the Clang-generated HotSpot is. Because I know that for the SPEC JVM98 benchmark the VM spends most of the time (about 98% given a proper warm-up phase) in compiled code, I decided to use SPECjbb2005 *, which at least does a lot of garbage collection - and the GC is implemented in C++ inside the HotSpot VM.

For the tests I used an early access version of JDK 7 (b122) with a recent HotSpot 20. The exact version of the JDK I used is:

> java -version
java version "1.7.0-ea"
Java(TM) SE Runtime Environment (build 1.7.0-ea-b122)
Java HotSpot(TM) 64-Bit Server VM (build 20.0-b03, mixed mode)
> java -Xinternalversion
Java HotSpot(TM) 64-Bit Server VM (20.0-b03) for linux-amd64 JRE (1.7.0-ea-b122),
built on Dec 16 2010 01:03:29 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)

As you can see, the original HotSpot was compiled with GCC 4.3.0, while I used GCC 4.4.3 on my local machine. The SPECjbb2005 benchmark was configured to use 16 warehouses. I compared the scores of the two versions compiled by me with GCC and Clang respectively against the score achieved by the original HotSpot version from the early access binary package:

Table 2: SPECjbb2005 score for JDK 1.7.0-ea-b122, HotSpot 64-Bit Server VM (20.0-b03)

(columns: GCC 4.3.0, GCC 4.4.3, GCC 4.5.2, Clang 2.8, Clang trunk; the individual scores could not be recovered from the original table)

Again, the Clang-compiled code loses against its GCC counterpart: it is approximately 4% slower. One feature which was actively promoted in the FOSDEM presentation was link-time optimization (LTO). Unfortunately, I couldn't get this running with Clang 2.8 on my Linux box. I searched the web a bit and found the following interesting blog: "Using LLVM's link-time optimization on Ubuntu Karmic". However, it only describes how to get LTO working with llvm-gcc, which is a GCC front end based on LLVM. Clang itself only seems to support LTO out of the box on Mac OS X.

Updated Feb. 22nd 2011: I also did performance measurements for the two new compiler versions, but here the results didn't change significantly, so I just add the new numbers for reference. (Notice that the results oscillated by +/-1% during benchmarking, so the actual differences shouldn't be taken too seriously.)


Updated Feb. 22nd 2011: While the overall GCC compatibility is excellent and the compile times are impressive, the performance of the generated code still lags behind a recent GCC version. Nevertheless, Clang has an excellent C/C++ front end which produces very comprehensible warnings and error messages. If you are developing macro-intensive C or heavily templatized C++ code, this feature alone can save you much more time than you lose through longer compile times. Taking into consideration Clang's nice design and architecture, and the fact that it must still be considered quite new, I think it may become a serious challenger for the good old GCC in the future.


Please note that the SPECjbb2005 results published on this page come from non-compliant benchmark runs and should be considered as published under the "Research and Academic Usage" paragraph of the "SPECjbb2005 Run and Reporting Rules".

Usually it's not much fun to be "supporter of the week", but recently, when I was on duty, I got a somewhat unusual request on our support queue. If you're interested in bytecode instrumentation and rewriting, class loaders and instrumentation agents, read on to hear the full story...

How everything started..

So what was the problem that required such an unusual solution? An engineer explained that they had delivered version 1.0 of their API with a method void foo(String arg) (see Listing 1):

Listing 1: The original, old API
package api;
public class API {
  public static void foo(String arg) {
    System.out.println("Now in : void");
    System.out.println("arg = " + arg);
  }
}

Some time later, they delivered version 2.0 of the API where they had accidentally changed the signature of foo to Object foo(String arg) (see Listing 2):

Listing 2: The new version of the API
package api;
public class API {
  public static Object foo(String arg) {
    System.out.println("Now in : Object");
    System.out.println("arg = " + arg);
    return null;
  }
}

Unfortunately, they didn't realize this until a client complained that one of their applications didn't work anymore, because a third-party library the client was using (and of which they had no source code!) had been compiled against version 1.0 of the API. The situation was similar to the test program shown in Listing 3:

Listing 3: The test program (compiled against the old API)
import api.API;
public class Test {
  public static String callApiMethod() {
    System.out.println("Calling: void");"hello");
    System.out.println("Called : void");
    return "OK";
  }
  public static void main(String[] args) {
    callApiMethod();
  }
}

If compiled and run against the old API the Test class will run as follows:

> javac -cp apiOld
> java  -cp apiOld Test
Calling: void
Now in : void
arg = hello
Called : void

However, if compiled against the old API shown in Listing 1 and run against the new API from Listing 2, it will produce a NoSuchMethodError:

> javac -cp apiOld
> java  -cp apiNew Test
Calling: void
Exception in thread "main" java.lang.NoSuchMethodError:;)V
    at Test.callApiMethod(
    at Test.main(

Unfortunately, at this point it was already impossible to revert the change to foo's signature, because a considerable number of new client libraries already existed which had been compiled against version 2.0 and depended on the new signature.
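The root cause of the NoSuchMethodError is that, at the bytecode level, a method is identified by its full descriptor, and the descriptor includes the return type. This can be illustrated with java.lang.invoke.MethodType (available since Java 7); the class name DescriptorDemo is my own, for illustration only:

```java
import java.lang.invoke.MethodType;

public class DescriptorDemo {
    public static void main(String[] args) {
        // Descriptor the old client code was compiled against: void foo(String)
        String oldDesc = MethodType.methodType(void.class, String.class)
                                   .toMethodDescriptorString();
        // Descriptor of the new API method: Object foo(String)
        String newDesc = MethodType.methodType(Object.class, String.class)
                                   .toMethodDescriptorString();
        System.out.println(oldDesc);  // (Ljava/lang/String;)V
        System.out.println(newDesc);  // (Ljava/lang/String;)Ljava/lang/Object;
    }
}
```

Because the call site in the old client library embeds the first descriptor, the JVM's method resolution no longer finds a match in version 2.0 of the API.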

Our engineer now asked us to "hack" the Java VM such that calls to the old version of foo get redirected to the new one if version 2.0 of the API is used. Hacking the VM for such a purpose is of course out of the question. But they asked so nicely, and I had already heard about bytecode instrumentation and rewriting so many times in the past without ever having the time to try it out, that I finally decided to help them out with the hack they requested.

Two possible solutions

There were two possible solutions I could think of. The first was to statically edit the offending class files and rewrite the calls to the old API into calls to the new one (remember that the client had no sources for the library which caused the problems). This solution had two drawbacks: first, it would result in two different versions of the library (one compatible with the old API and one compatible with the new one), and second, it would have to be repeated manually for each affected library, and it was unknown what other libraries could cause this problem.

A better solution would be to dynamically rewrite the calls at runtime (at load time, to be more exact), and only if needed (i.e. if a library which was compiled against the old API is running with the new one). This solution is more general, but it has the drawback of introducing a small performance penalty because all classes have to be scanned at load time for calls to the old API method.

I decided to use dynamic instrumentation, but then again there were (at least) two possibilities for how this could be implemented. First, Java 5 introduced a new Instrumentation API which serves exactly our purpose, namely to "instrument programs running on the JVM. The mechanism for instrumentation is modification of the byte-codes of methods". Second, there has always been the possibility to use a custom class loader which alters the bytecodes of classes while they are loaded. I'll detail both approaches here:

Using the Java Instrumentation API

The Java Instrumentation API is located in the java.lang.instrument package. In order to use it, we have to define a Java programming language agent which registers itself with the VM. During this registration, it receives an Instrumentation object as argument which among other things can be used to register class transformers (i.e. classes which implement the ClassFileTransformer interface) with the VM.

A Java agent can be loaded at VM startup with the special command line option -javaagent:jarpath[=options] where jarpath denotes the jar-file which contains the agent. The jar-file must contain a special attribute called Premain-Class in its manifest which specifies the agent class within the jar-file. Similar to the main method in a simple Java program, an agent class has to define a so-called premain method with the following signature: public static void premain(String agentArgs, Instrumentation inst). This method will be called when the agent is registered at startup (before the main method) and gives the agent a chance to register class transformers with the instrumentation API. The following listing shows the Premain-Class class of our instrumentation agent:

Listing 4: The instrumentation agent
package instrumentationAgent;
import java.lang.instrument.Instrumentation;

public class ChangeMethodCallAgent {
  public static void premain(String args, Instrumentation inst) {
    inst.addTransformer(new ChangeMethodCallTransformer());
  }
}

A class file transformer has to implement the ClassFileTransformer interface which defines a single transform method. The transform method takes quite a few arguments, of which we only need the classfileBuffer which contains the class file as a byte buffer. The class transformer is now free to change the class definition as long as the returned byte buffer contains another valid class definition. Listing 5 shows our minimal ChangeMethodCallTransformer. It calls the real transformation method, Transformer.transform, which operates on the bytecodes and replaces calls to the old API method with calls to the new version of the method. The Transformer class will be described in a later section of this article (see Listing 8).

Listing 5: Our class file transformer
package instrumentationAgent;

import bytecodeTransformer.Transformer;
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.IllegalClassFormatException;
import java.security.ProtectionDomain;

public class ChangeMethodCallTransformer implements ClassFileTransformer {
  public byte[] transform(ClassLoader loader, String className,
          Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
          byte[] classfileBuffer) throws IllegalClassFormatException {
    return Transformer.transform(classfileBuffer);
  }
}

For the sake of completeness, Listing 6 shows the manifest file which is used to create the instrumentation agent jar-file. ChangeMethodCallAgent is defined to be the premain class of the agent. Notice that we have to put asm-3.1.jar on the boot class path of the agent jar-file, because it is needed by our actual transform method.

Listing 6: The manifest file for our instrumentation agent
Manifest-Version: 1.0
Premain-Class: instrumentationAgent.ChangeMethodCallAgent
Boot-Class-Path: asm-3.1.jar

If we run our test application with the new instrumentation agent, we will not get an error anymore. You can see the output of this invocation in the following listing:

> java -cp apiNew:asm-3.1.jar:bytecodeTransformer.jar:. -javaagent:instrumentationAgent.jar Test
Calling: void
Now in : Object
arg = hello
Called : void

Using a custom class loader

Another possibility to take control over and alter the bytecodes of a class is to use a custom class loader. Dealing with class loaders is quite tricky and there are numerous publications which deal with this topic (e.g. References [2], [3], [4]). One important point is to find the right class loader in the hierarchy of class loaders which is responsible for loading the classes we want to transform. Especially in Java EE scenarios, which can involve a lot of chained class loaders, this may not be an easy task. But once this class loader is identified, the changes which have to be applied in order to make the necessary bytecode transformations are trivial.
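To make the class loader hierarchy concrete, here is a small standalone sketch (not part of the article's code) that walks the delegation chain of a class's loader up to the bootstrap loader, which is represented as null:

```java
public class LoaderChainDemo {
  // Print the delegation chain of class loaders, starting at the loader
  // of the given class and walking up via getParent(). The bootstrap
  // loader is represented as null and printed as "<bootstrap>".
  public static String chainOf(Class<?> cls) {
    StringBuilder sb = new StringBuilder();
    ClassLoader cl = cls.getClassLoader();
    while (cl != null) {
      sb.append(cl.getClass().getName()).append(" -> ");
      cl = cl.getParent();
    }
    return sb.append("<bootstrap>").toString();
  }

  public static void main(String[] args) {
    // java.lang.String is loaded by the bootstrap loader, so its chain is trivial
    System.out.println("java.lang.String: " + chainOf(String.class));
    // Our own class is loaded by the system (application) class loader
    System.out.println("LoaderChainDemo : " + chainOf(LoaderChainDemo.class));
  }
}
```

The exact class names printed for the application loader depend on the JDK version, but every chain ends at the bootstrap loader.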

For this example I will write a new system class loader. The system class loader is responsible for loading the application and it is the default delegation parent for new class loaders. If the system property java.system.class.loader is defined at VM startup, then the value of that property is taken to be the name of the system class loader. It will be created with the default system class loader (which is an implementation-dependent instance of ClassLoader) as the delegation parent. The following listing shows our simple system class loader:

Listing 7: A simple system class loader
package systemClassLoader;

import bytecodeTransformer.Transformer;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class SystemClassLoader extends ClassLoader {

  public SystemClassLoader(ClassLoader parent) {
    super(parent);
  }

  public Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    if (name.startsWith("java.")) {
      // Only the bootstrap class loader can define classes in java.*
      return super.loadClass(name, resolve);
    }
    try {
      ByteArrayOutputStream bs = new ByteArrayOutputStream();
      InputStream is = getResourceAsStream(name.replace('.', '/') + ".class");
      byte[] buf = new byte[512];
      int len;
      while ((len = is.read(buf)) > 0) {
        bs.write(buf, 0, len);
      }
      byte[] bytes = Transformer.transform(bs.toByteArray());
      return defineClass(name, bytes, 0, bytes.length);
    } catch (Exception e) {
      return super.loadClass(name, resolve);
    }
  }
}

In fact we only have to extend the abstract class java.lang.ClassLoader and override the loadClass method. Inside loadClass, we immediately bail out and return the output of the superclass version of loadClass if the class name is in the java package, because only the bootstrap class loader is allowed to define such classes. Otherwise we read the bytecodes of the requested class (again by using the superclass methods), transform them with our Transformer class (see Listing 8) and finally call defineClass with the transformed bytecodes to generate the class. The transformer, which will be presented in the next section, takes care of intercepting all calls to the old API method and replaces them with calls to the method in the new API.

If we run our test application with the new system class loader, we will succeed again without any error. You can see the output of this invocation in the following listing:

> java -cp apiNew:asm-3.1.jar:bytecodeTransformer.jar:systemClassLoader.jar:. \
       -Djava.system.class.loader=systemClassLoader.SystemClassLoader Test
Calling: void
Now in : Object
arg = hello
Called : void

Finally: rewriting the bytecodes

After having demonstrated two possibilities of how bytecode instrumentation can be applied to a Java application, it is finally time to show how the actual rewriting takes place. Fortunately this is quite easy today, because with ASM, BCEL and SERP, to name just a few, there exist some quite elaborate frameworks for Java bytecode rewriting. As detailed by Jari Aarniala in his excellent paper "Instrumenting Java bytecode", ASM is the smallest and fastest of these libraries, so I decided to use it for this project.

ASM's architecture is based on the visitor pattern, which makes it not only very fast, but also easy to extend. Listing 8 finally shows the Transformer class which was used in the instrumentation agent (see Listing 5) and in our custom class loader (see Listing 7).

Listing 8: the bytecode transformer
package bytecodeTransformer;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassWriter;

public class Transformer {
  public static byte[] transform(byte[] cl) {
    ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
    ChangeMethodCallClassAdapter ca = new ChangeMethodCallClassAdapter(cw);
    ClassReader cr = new ClassReader(cl);
    cr.accept(ca, 0);
    return cw.toByteArray();
  }
}

The public static transform method takes a byte array with a Java class definition as its input argument. These bytecodes are fed into an ASM ClassReader object which parses the bytecodes and allows a ClassVisitor object to visit the class. In our case, this class visitor is an object of type ChangeMethodCallClassAdapter which is derived from ClassAdapter. ClassAdapter is a convenience class visitor which delegates all visit calls to the class visitor object it takes as an argument in its constructor. In our case we delegate the various visit methods to a ClassWriter, with the exception of the visitMethod method (see Listing 9).

Listing 9: ChangeMethodCallClassAdapter, the class visitor
package bytecodeTransformer;

import org.objectweb.asm.ClassAdapter;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;

public class ChangeMethodCallClassAdapter extends ClassAdapter {

  public ChangeMethodCallClassAdapter(ClassVisitor cv) {
    super(cv);
  }

  public MethodVisitor visitMethod(int access, String name, String desc,
                                   String signature, String[] exceptions) {
    MethodVisitor mv = cv.visitMethod(access, name, desc, signature, exceptions);
    if (mv != null) {
      mv = new ChangeMethodCallAdapter(mv);
    }
    return mv;
  }
}

We are only interested in the methods of a class because methods can only be called from within other methods. Notice that static initializers are grouped together in the generated <clinit> method which will also be visited by visitMethod. In the overridden method we get the MethodVisitor of the delegate (which is a vanilla ClassWriter) and wrap it into a new ChangeMethodCallAdapter which we return instead.

The ChangeMethodCallAdapter is finally the place where the bytecode rewriting takes place. Again, ChangeMethodCallAdapter extends the generic MethodAdapter which by default passes all bytecodes on to its delegate. The only exception here is the visitMethodInsn method which will be called for every bytecode instruction that invokes a method.

Listing 10: ChangeMethodCallAdapter, the method visitor
package bytecodeTransformer;

import org.objectweb.asm.MethodAdapter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class ChangeMethodCallAdapter extends MethodAdapter {

  public ChangeMethodCallAdapter(MethodVisitor mv) {
    super(mv);
  }

  public void visitMethodInsn(int opcode, String owner, String name, String desc) {
    if ("api/API".equals(owner) && "foo".equals(name) && "(Ljava/lang/String;)V".equals(desc)) {
      mv.visitMethodInsn(opcode, owner, name, "(Ljava/lang/String;)Ljava/lang/Object;");
      // Discard the Object return value which the old call site doesn't expect
      mv.visitInsn(Opcodes.POP);
    } else {
      mv.visitMethodInsn(opcode, owner, name, desc);
    }
  }
}

In visitMethodInsn (see Listing 10), we look for methods named foo with a receiver object of type API and a signature equal to (Ljava/lang/String;)V (i.e. a String argument and a void return value). These are exactly the calls to the old version of foo which we want to patch. To patch such a call, we invoke our delegate with the same receiver and method name, but with the changed signature. We also have to insert a new POP bytecode after the call, because the new version of foo returns an Object which wouldn't be handled otherwise (the following code doesn't expect foo to return a value, because it was compiled against the old API; see Listing 1). That's it - all the other calls and bytecode instructions will be copied verbatim by the class writer to the output byte array!
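To see how a source-level signature maps onto descriptor strings like (Ljava/lang/String;)V, here is a small standalone sketch (a hypothetical helper, not part of the article's code) that derives JVM method descriptors via reflection:

```java
import java.lang.reflect.Method;

public class DescriptorDemo {
  // Build a JVM method descriptor like "(Ljava/lang/String;)V" for the
  // given method (primitives other than int and void omitted for brevity)
  public static String descriptorOf(Class<?> cls, String name, Class<?>... params) {
    try {
      Method m = cls.getMethod(name, params);
      StringBuilder sb = new StringBuilder("(");
      for (Class<?> p : m.getParameterTypes()) sb.append(typeName(p));
      return sb.append(')').append(typeName(m.getReturnType())).toString();
    } catch (NoSuchMethodException e) {
      throw new RuntimeException(e);
    }
  }

  static String typeName(Class<?> c) {
    if (c == void.class) return "V";
    if (c == int.class) return "I";
    // Array classes already report their descriptor form via getName()
    if (c.isArray()) return c.getName().replace('.', '/');
    return "L" + c.getName().replace('.', '/') + ";";
  }

  public static void main(String[] args) {
    // String.indexOf(String) takes a String and returns an int
    System.out.println(descriptorOf(String.class, "indexOf", String.class));
  }
}
```

The example prints (Ljava/lang/String;)I for String.indexOf(String): same parameter list as foo, but an int return value, hence a different descriptor.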


This article should by no means encourage you to be lazy with your API design and specification. It's always better to prevent problems like the one described in this article through good design and even better testing (e.g. signature tests of all publicly exposed methods). I also don't claim that the "hack" presented here is a good solution for the above problem - it was just fun to see what's possible today in Java with very little effort!

You can download the complete source code of this example together with a self-explanatory Ant file from here:


I want to thank Jari Aarniala for his very interesting, helpful and concise article "Instrumenting Java bytecode" which helped me a lot to get started with this topic!


[1] Instrumenting Java bytecode by Jari Aarniala
[2] Internals of Java Class Loading by Binildas Christudas
[3] Inside Class Loaders by Andreas Schaefer
[4] Managing Component Dependencies Using ClassLoaders by Don Schwarz
[5] ASM homepage
[6] ASM 3.0 A Java bytecode engineering library (tutorial in pdf format)
[7] BCEL The Byte Code Engineering Library
[8] SERP framework for manipulating Java bytecode
[9] the source code from this article

Here comes the second part of "HotSpot development on Linux with NetBeans". While the first part focused on building and running the different flavors (opt/debug, client/server JIT compiler, template/C++ interpreter) of the HotSpot VM on Linux/x86, this second part concludes with a short evaluation of NetBeans 6.0 as a development environment for HotSpot hacking.

Developing with NetBeans

Now that we've successfully built the OpenJDK and know how we can efficiently build and run the HotSpot, we are ready to start using NetBeans (NB for short) for HotSpot development. Be sure to use at least version 6.0 if you want to do C/C++ development, because starting with 6.0, NetBeans comes with built-in and highly improved support for C/C++ projects that supersedes the old "cpp" and "cpplite" modules which formed the NetBeans C/C++ Developer Pack (CND) in the past. There's now a special C/C++ Bundle available for download that's only 11 MB in size and contains all the needed plugins for C/C++ development.

After installation, be sure to install the Mercurial plugin by selecting Tools -> Plugins -> Available Plugins -> Mercurial. You need to have Mercurial installed and available in your PATH for the Mercurial plugin to work (see Prerequisites - Mercurial). If your Mercurial executable isn't installed in a standard path or if you want to use a special Mercurial version with the NB Mercurial plugin, you can configure this under Tools -> Options -> Versioning -> Mercurial -> Mercurial Executable Path.

You should note that if you are using NetBeans remotely (starting it on a remote host and setting the DISPLAY variable to your local machine), you'll probably want to set "-Dsun.java2d.pmoffscreen=false" in the NetBeans configuration file <NB_directory>/etc/netbeans.conf. Without this setting, NB was so slow for me that it was effectively unusable (e.g. opening the "File" menu took about 20 seconds - every time!).

Cloning with NetBeans

The nice thing about using NB for HotSpot development is that it has built-in Mercurial support, so we don't need to use additional command line tools for version control. Although we could start right away with the available HotSpot sources from the previously cloned OpenJDK (see Cloning the OpenJDK sources) and have Mercurial support for them, we will instead clone a brand new HotSpot repository from within NB to work with.

To achieve this we select Versioning -> Mercurial -> Clone Other.... In the appearing "Clone External Repository" wizard, we enter the "Repository URL" and a local "Parent Directory" and "Clone Name" (usually hotspot) for the new clone (don't forget to adjust the proxy settings if necessary!). After clicking "Finish", NB will clone the HotSpot sources to <Parent Directory>/<Clone Name>. For the rest of this blog I'll assume that the HotSpot sources have been cloned to /OpenJDK/jdk7/hotspot. Here's the content of the Mercurial Output window after a successful clone:

Mercurial Clone
adding changesets
adding manifests
adding file changes
2895 files updated, 0 files merged, 0 files removed, 0 files unresolved
The number of output lines is greater than 500; see message log for complete output
INFO Clone From:
INFO To:         /OpenJDK/jdk7/hotspot

INFO: End of Clone

Now that we have cloned a fresh repository, we should not forget to patch the sources with the patch file that has been previously described in order to work around some build problems.

Creating a HotSpot project

With the newly cloned HotSpot sources, we are ready to create a NetBeans project for HotSpot. We more or less follow the description of "Creating a C/C++ Project From Existing Code" in the NB Quick Start tutorial, but as you'll see, there are quite a few intricate issues you need to consider here in order to get a working HotSpot project.

First we start the "New Project" wizard by going to File -> New Project. In the "New Project" wizard we select "C/C++ Project From Existing Code" as the project type. In the next step, we select "Using an existing makefile" and enter /OpenJDK/jdk7/hotspot/make/Makefile as the project makefile. In the following "Build Actions" step, we have to specify the project's "Working Directory". The presetting /OpenJDK/jdk7/hotspot/make (i.e. the directory that contains the makefile we've just specified in the previous step) will not work here because of a NB bug (see Issue 125214). We have to use the HotSpot root directory (i.e. /OpenJDK/jdk7/hotspot in our case) in order to work around this problem.

Next we have to define the right "Build Command" which is a little bit tricky, because we've just selected the HotSpot root directory as our working directory. So the first thing we have to do in the "Build Command", will be to change into the make/ directory. As real build command we can use a command line similar to the ones we used in the Building the HotSpot section. And although the build output will be captured in the NB "Build Output Window", it may be still a good idea to additionally store it in a file. So here's a first version of the "Build Command":

cd make && LANG=C ALT_BOOTDIR=/share/software/jse/1.6.0/ ALT_OUTPUTDIR=../../hotspot_c2_debug \
make jvmg 2>&1 | tee ../../hotspot_c2_debug.log

But wait, there's another problem in NB with VERY long lines in the "Build Output Window" (see Issue 124796): if a line exceeds 32Kb, the output will stop and the build will freeze! Unfortunately, the HotSpot Makefile indeed produces a VERY HUGE line: the command line which is created by the makefiles to compile the Java sources of the Serviceability Agent may exceed 100Kb (depending on the absolute base path of your HotSpot sources). With the additional filter part which is needed to strip this huge line from the output, our build command looks as follows:

cd make && LANG=C ALT_BOOTDIR=/share/software/jse/1.6.0/ ALT_OUTPUTDIR=../../hotspot_c2_debug \
make jvmg 2>&1 | grep -v "javac \-source 1\.4" | tee ../../hotspot_c2_debug.log

The clean command is easier - we just have to call the "clean" target. Nevertheless we have to specify ALT_OUTPUTDIR= on the command line such that make knows which directory we want to clean:

cd make && ALT_OUTPUTDIR=../../hotspot_c2_debug make clean

We leave the "Build Result" field empty for now, because the directories which will contain the build results don't exist yet anyway and NB will complain about this fact. Later on, after we have built the project for the first time, it will be possible to adjust this setting easily.

The next step in the "New Project" wizard requires the specification of the folders which contain files that should be added to the project. Again, the "Working Directory" specified in a previous step is the predefined default setting for this entry. Because we have already set the "Working Directory" to point to the HotSpot root directory, this presetting is ok for now. Later on we can (and we will) add additional source folders for the files which were automatically generated by the HotSpot make to the project. We can also restrict the "File types" to ".c .cpp .h .hpp .ad", because these file types are probably the only ones we will be interested in (.ad is the extension for so called "Architecture Description" files which are not real C/C++ files, but contain C/C++ code which is used to automatically generate some other source files, so it is probably a good idea to have them in the project).

We will skip the "Code Assistance Configuration" for now and come back to it later. In the final project configuration step we'll have to choose a name for our project (e.g. hotspot_NB) and a directory (the "Project Location") where NetBeans will save its project folder. Because I don't like to clutter my version controlled directories with external files, I usually place the project directory parallel to the project's root directory (i.e. into /OpenJDK/jdk7 in this example). But that's up to the user. It may for example be a good idea (once things have settled down) to place a generic NetBeans project directory (you can find such pre-configured projects for Solaris here) inside the HotSpot make directory. In such a case, NB could automatically detect and open the project during the cloning process (this is an option which can be activated with a check box in the Mercurial Cloning Wizard).

After we have created the new project, we can open the context menu of the project in the "Projects" view and select the "Build" action. This will start the HotSpot build and log all the build output to the "Build Output Window". The build command that was specified in the previous step will only build the server version of the HotSpot VM. If we want to build different versions of the VM (with C1 or C2 JIT compiler, with template or C++ interpreter, as debug, optimized or product versions), we can create a project configuration for each version by selecting Set Configuration -> Manage Configurations... from the project's context menu. In the appearing "Project Properties" window, we press the Manage Configurations... button to get the "Configurations" window. Here we can rename our current configuration to "C2 debug" (because it builds a debug version of the HotSpot with the C2 server compiler) and make a copy of "C2 debug" which we rename to "C1 debug" (this will be the configuration for building a debug VM with the C1 client compiler). After we return to the "Project Properties" window by pressing OK, we can select the "C1 debug" configuration and adapt the build and clean commands accordingly (initially they are just copies of the corresponding commands from the "C2 debug" configuration). For building the "C1 debug" configuration, we could set them as follows:

cd make && LANG=C ALT_BOOTDIR=/share/software/jse/1.6.0/ ALT_OUTPUTDIR=../../hotspot_c1_debug \
make jvmg1 2>&1 | grep -v "javac \-source 1\.4" | tee ../../hotspot_c1_debug.log
cd make && ALT_OUTPUTDIR=../../hotspot_c1_debug make clean

With the knowledge from the Building the HotSpot section, it should be easy to create all the required, additional configurations accordingly. Notice however, that it may be a good idea to first complete the Code Assistance configuration described in the next section before cloning a configuration. This will save you a lot of time because most of the Code Assistance settings can be shared across configurations.

NetBeans Code Assistance

After we have successfully built the HotSpot project for the first time, it is necessary to refine the project settings. This is because the HotSpot is a quite unusual project in the sense that it has a different source structure than most open source C/C++ projects and therefore doesn't fit right away into a NetBeans project.

As you probably realized already, the HotSpot source tree contains sources for different operating systems and processor architectures. These different files often contain classes and functions with the same names for different platforms. This isn't a problem during the build because the build process figures out the current platform and only considers the needed files. But it considerably confuses the NB Code Assistance to have more than one class or method definition with the same name. Therefore, my advice is to remove the unused platform files from the project. This has the additional benefit of speeding up the parsing of a project at startup, because the project will contain considerably fewer files. If we build on Linux/x86, the following files and directories can be safely removed from the project (right click the corresponding file/directory items in the "Project View" and choose "Remove"):


In the past, the JDK contained the two distinct subdirectories i486 and amd64 for 32-bit and 64-bit x86 architectures respectively. Now, these two architectures have been merged into the single subdirectory x86. However, because the x86 subdirectory still contains files which are relevant for only one of the two architectures, we can also remove the following files from the project on a 32-bit Linux system:


Another peculiarity of HotSpot is the fact that it creates source files during the build process. These files are placed into the sub-folder <os>_<arch>_compiler<1|2>/generated of the specified output folder (i.e. /OpenJDK/jdk7/hotspot_c2_debug/linux_i486_compiler2/generated  in our example). Among others, these are mainly the files generated by the ADL compiler from the .ad file in the <os>_<arch>_compiler<1|2>/generated/adfiles subdirectory and the include files generated by MakeDeps from the "includeDB" files in the <os>_<arch>_compiler<1|2>/generated/incls  subdirectory. (Notice that the "includeDB" technique is another undocumented, HotSpot specific trickery that's beyond the scope of this introduction. You can find a minimalistic introduction at the top of hotspot/src/share/vm/includeDB_core)

In the next step, we will therefore add the two subdirectories generated/adfiles/ and generated/incls/ from within the output folder to the sources of our project by selecting Add Existing Items from Folders... from the project's context menu. Don't forget to use .c .cpp .h .hpp .incl as "File types" because the include files generated by MakeDeps all have .incl suffixes. We'll also have to tell NetBeans to treat files with a .incl suffix like usual C/C++ header files by going to Tools -> Options -> Advanced Options -> IDE Configuration -> System -> Object Types -> C and C++ Header Data Objects and adding incl as an extension in the Extension and MIME Types field.

Once we have added all the additional files to our project, we can start configuring the NetBeans Code Assistance. There are two main points that have to be done here: first we have to specify the directories where the Code Assistance parser should look for include files, and second we have to define the preprocessor directives which should be considered during code parsing. For the first step we have to go to Project Properties -> Code Assistance -> C++ Compiler -> Include Directories. Unfortunately it is only possible to add one directory at a time from the file selection box, and there are quite a few directories we have to add:


After we've added the required include directories, we have to tell the Code Assistance parser which preprocessor definitions it should use when parsing the project. For the time being, I just took a look at the compilation command line of some arbitrary HotSpot files, copied all the definitions they contained and inserted them into the Preprocessor Definitions field:


Because I was not sure if Code Assistance would parse .h as C or as C++ files, I doubled the values for Include Directories and Preprocessor Definitions in the C Compiler  section. If you prefer typing instead of clicking, you can also edit the project's configurations file <NB_project_dir>/nbproject/configurations.xml by hand and simply duplicate the entries inside the <ccCompilerTool> tag for the <cCompilerTool> tag. As you can see, the enclosing <confs> tag contains all the relevant configuration settings for a project:

<confs>
  <conf name="C2_debug" type="0">
    ...
    <buildCommand>cd make &&
      LANG=C ALT_BOOTDIR=/share/software/jse/1.6.0/ ALT_OUTPUTDIR=../../hotspot_c2_debug
      make jvmg 2>&1 |
      grep -v "javac \-source 1\.4" | tee ../../hotspot_c2_debug.log</buildCommand>
    <cleanCommand>cd make &&
      ALT_OUTPUTDIR=../../hotspot_c2_debug make clean</cleanCommand>
    ...
    <ccCompilerTool>
      ...
      <preprocessor>LINUX _GNU_SOURCE IA32 ASSERT DEBUG COMPILER2</preprocessor>
    </ccCompilerTool>
    <cCompilerTool>
      ...
      <preprocessor>LINUX _GNU_SOURCE IA32 ASSERT DEBUG COMPILER2</preprocessor>
    </cCompilerTool>
    ...
  </conf>
</confs>

Of course, manual changes to the project's configurations file should only be made if the project isn't open in a running NetBeans session! Notice that the parsing of a project like HotSpot may take a considerable amount of time - on my machine about 60 seconds. The progress of this operation is indicated by a small progress bar in the lower right corner of NB. Clicking on it will open a small window which logs the files that are currently processed by the Code Assistance parser. NetBeans parses a project every time the project is opened. Although this may take some time, I think this was the right decision, because this way there's no database file on disk that can get corrupted. If you encounter problems with Code Assistance (see below), you'll just have to close and reopen the affected project to hopefully get it working again.

During my first configuration attempts, I sometimes got null pointer exceptions during code parsing (see issue 125611) and once the Code Assistance configuration didn't complete at all. However, once I had figured out and set up the right Code Assistance settings (especially the right include paths and preprocessor directives), Code Assistance ran quite smoothly. Especially the possibility of defining different project configurations with different preprocessor directives (opt vs. debug, template vs. C++ interpreter) and include paths, and easily switching between these configurations while the source information always stays accurate, is quite nice.

Debugging with NetBeans

After I had finally finished the "Code Assistance" configuration, I was keen on trying the debugging support in NetBeans. I first went to Project Properties -> Make and changed the Build Results entry to point to the gamma executable in the output directory (../hotspot_c2_debug/linux_i486_compiler2/jvmg/gamma in this example). Notice that this will always be a relative path with respect to your project directory if you choose the executable with the file selection box. The only way to enter an absolute path name here is to type it manually into the text field (you'll have to do this if you choose to use another Run Directory than the default project directory in the next step).

In the Running category of the Project Properties, I entered "-XX:+TraceBytecodes -XX:StopInterpreterAt=1 -version" as the Arguments and added JAVA_HOME and LD_LIBRARY_PATH to the Environment with the values as described in Running the HotSpot. If you don't set the Run Directory, the NB project directory will be used as a default. If you choose another Run Directory, you need to set Build Results in the Make category such that it contains the absolute path to the executable in order to run it successfully.

With these settings, it was possible to run the HotSpot by executing the Run Main Project action. This will open a fresh xterm window, dump the executed bytecodes (because of the -XX:+TraceBytecodes option) and finally print out the version of the VM and the Java Runtime Environment:

[4506] virtual void, jint, jint)
[4506]   385353     0  fast_aload_0
[4506]   385354     1  aload_1
[4506]   385355     2  iload_2
[4506]   385356     3  iload_3
[4506]   385357     4  invokespecial 5376 <writeBytes> <([BII)V>
OpenJDK Runtime Environment (build 1.7.0-internal-debug-dXXXXXX_04_jan_2008_11_27-b00)
[4506] virtual void, jint, jint)
[4506]   390343     0  fast_aload_0
[4506]   390344     1  aload_1
[4506]   390345     2  iload_2
[4506]   390346     3  iload_3
[4506]   390347     4  invokespecial 5376 <writeBytes> <([BII)V>
OpenJDK Server VM (build 12.0-b01-internal-jvmg, mixed mode)[4506]   390348     7  return
392702 bytecodes executed in 229.6s (0.002MHz)

As you can see, printing the OpenJDK version requires the execution of nearly 400,000 bytecodes, so think twice before you heedlessly call java -version the next time:)
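The trailer line of the trace above can be double-checked with a one-liner: 392702 bytecodes in 229.6 seconds is roughly 1710 bytecodes per second, which the VM rounds and prints as 0.002MHz.

```shell
# Recompute the rate from the -XX:+TraceBytecodes trailer:
# 392702 bytecodes in 229.6 s is roughly 1710 bytecodes/s, i.e. about
# 0.0017 MHz - which matches the 0.002MHz the VM prints, given rounding.
awk 'BEGIN { rate = 392702 / 229.6; printf "%.0f bytecodes/s = %.4f MHz\n", rate, rate / 1e6 }'
```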

Now that we have successfully run the HotSpot within NetBeans, we can set a breakpoint and try the debugger. Please be sure to use at least gdb 6.6, otherwise the NetBeans gdb-plugin will abort the debugging session because of warnings which are emitted by older versions of gdb when debugging the HotSpot. Notice that you cannot specify which gdb version to use in NetBeans. Instead you have to set the executable path (under Tools ->Options ->C/C++ ->Build Tools ->Current Path) such that the desired version of gdb will be detected first by NB.

The -XX:StopInterpreterAt=<n> option can be used to stop the HotSpot interpreter before the execution of the specified bytecode number. It is implemented by continuously counting each executed bytecode until the n-th bytecode is reached, at which point the VM calls the global, empty breakpoint() function. This breakpoint() function is a hook for the debugger which can be used to intercept the execution of the specified bytecode by defining a debugger breakpoint for it. We choose Run ->New Breakpoint... and enter breakpoint as the Function Name. Then we execute Debug Main Project to debug the project. This time, only the first bytecode will be printed in the output window and the HotSpot will stop in the breakpoint() function:

VM option '+TraceBytecodes'
VM option 'StopInterpreterAt=1'

[5027] static void java.lang.Object.<clinit>()
[5027]        1     0  invokestatic 2304 <registerNatives> <()V> 

Unfortunately, that's basically all we can currently do with the debugger support in NetBeans. If we have a look at the stack in the Call Stack window, we can see something similar to:

#0  breakpoint ()
    at /OpenJDK/jdk7/hotspot/src/os/linux/vm/os_linux.cpp:394
#1  0x0665d5f5 in os::breakpoint ()
    at /OpenJDK/jdk7/hotspot/src/os/linux/vm/os_linux.cpp:389
#2  0x40246164 in ?? ()
#3  0xbfffb658 in ?? ()
#4  0x43316097 in ?? ()
#5  0xbfffb67c in ?? ()
#6  0x43375ab8 in ?? ()

The unknown frames are neither a debugger nor a NetBeans problem. They are simply a consequence of the fact that the HotSpot is more often than not executing generated code. This is true for both interpreters - the template interpreter as well as, to a certain degree, the C++ interpreter - and of course for compiled (JITed) code. Unfortunately, the NB debugger support cannot handle assembler (e.g. there is no stepi command, no register window and no assembler code view). Moreover, signal handling is not implemented (crucial for HotSpot debugging) and the thread support doesn't work either. Generally speaking, the debugging support in NetBeans is quite basic right now and only useful for debugging no deeper than the C/C++ level.

Probably I should have read the NetBeans GDB Debugger Milestones before starting. It lists all the missing features for future milestones. Through private communication with some NB developers (thanks to Leonid Lenyashin and Sergey Grinev) I also learned that the dbx-engine already has disassembler support, stepi, a register view, and works well with multiple threads. However, the dbx-engine is currently only available in Sun Studio which at the time of this writing is still NetBeans 5.5.1 based (although there should be a new NB 6 based express release in a couple of weeks). I did some quick experiments and the dbx-engine looks indeed promising, but dbx had problems with the debug information generated by g++, so I gave up. Perhaps I'll try again with the next Sun Studio release that can read my NetBeans 6 project files.

I also had the chance to ask Gordon Prieur, a Sun staff engineer and project lead for the Sun Studio IDE, some questions about the status of the NetBeans gdb debugging engine. His answers follow:

  • NetBeans has no gdb command line (how do I send a command (like for example "stepi" or "x /8i $pc") that is not supported from the Debugger Menu or Debugger Toolbar to gdb?)
    • There are 2 issues in this question. One has to do with the gdb module missing features, the other with adding command-line support. I'll answer these separately.
      • Missing features (specifically "stepi" and "x ..."): There is currently no support in the module for assembly level debugging. Some assembly support is being added in NB 6.1 (Leonid can give you more details about this than I can) but assembly level debugging won't make it into 6.1. Hopefully it will be part of 7.0. I would expect stepi and x type commands would be part of this.
      • GDB command-line: This is unlikely to ever be publicly available. The problem is that too many gdb command-line commands change state which the NetBeans gdb module wouldn't know about. To correctly support this we'd either need to process all typed commands before sending them to gdb, or to have gdb tell us any time a command-line change affects us.
  • The "Thread View" is not working (it shows no output at all)
    • This feature is being added in NB 6.1
  • There is no register window that displays the native registers (and I cannot print them manually either because of point 1.)
    • A register window has been considered but is not yet planned for a specific release. It's very possible it will be done when we add assembly debugging support.
  • If I get a warning from gdb (e.g. "Previous frame inner to this frame (corrupt stack?)") the NB debugging session is aborted (see issue 125932). This should be handled more gracefully.
    • In general, errors/warnings like this happen when gdb (the debugger, not the gdb module) dumps core or is no longer usable. I don't see any more graceful way of handling a gdb core dump. I'm definitely open to suggestions on how it can be handled...



Congratulations! If you really read this whole blog from the beginning up to here, please drop me a note. Hopefully you managed to successfully build and run the HotSpot on Linux!

Although this tutorial got quite lengthy in the end, building and running your own version of the OpenJDK and the HotSpot VM should not be too hard for any developer with average Linux experience. Depending on the Linux distribution, there will probably always be the need to install some newer and sometimes even some older versions of certain packages (e.g. gcc) to meet the build requirements of the OpenJDK - but that should be manageable.

Regarding NetBeans, there's no clear recommendation from my side. If you're already familiar with an IDE like NetBeans, it may be worthwhile going through all the project configuration hassle to get the integrated Mercurial support and a reasonably well working Code Assistance - but you will still have to use the command line for debugging. On the other hand, if you're already using Emacs with Cscope and DVC there's probably no killer argument for switching to NetBeans for HotSpot development.


[1] Kelly O'Hair's Build Cheat Sheet
[2] Kelly O'Hair's Glossary for JDK Builds 
[3] OpenJDK Build Readme
[4] NetBeans C/C++ Support Quick Start Tutorial
[5] Interview with Gordon Prieur about Sun Studio 12

Here comes yet another step-by-step tutorial which explains how to fetch the OpenJDK sources, compile them and work with them inside the NetBeans IDE. It focuses on building and running the different flavors (opt/debug, client/server JIT compiler, template/C++ interpreter) of the HotSpot VM on Linux/x86 and concludes with a short evaluation of NetBeans 6.0 as a development environment for HotSpot hacking.

Because it was too big for a single blog entry, it is split up into two parts: this first part explains how to build and run the HotSpot, while the NetBeans integration is described in the second part.

If you're interested in building on Windows you can consult Ted Neward's blog. A description of how to build the various Java-only parts of the JDK with NetBeans 6 can be found here (note that the current NetBeans "world" project under jdk/make/netbeans, which is supposed to build the entire JDK (including the HotSpot), doesn't work anymore after the Mercurial switch and the resulting directory restructuring).

Prerequisites - Mercurial, Forest extension, Freetype, Findbugs, CUPS

I started to develop on a Suse Enterprise Linux 9.3 server with 4 Intel Xeon CPUs at 3GHz with 4GB of memory. It had gcc 3.3.3 and gnumake 3.80 installed by default which both suffice for OpenJDK development.

The following subsections will detail how I installed various required software packages. I usually compile and install new software into /share/software and link the resulting executables to /usr/local/bin which comes first in my PATH environment variable.
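This install-then-symlink convention can be illustrated in isolation. The sketch below uses throwaway temporary directories: "prefix" stands in for a /share/software/<package>_bin install prefix and "bindir" for /usr/local/bin, and "mytool" is a hypothetical stand-in executable.

```shell
# Illustration of the install-then-symlink convention with throwaway
# directories; "prefix" stands in for /share/software/<package>_bin and
# "bindir" for /usr/local/bin.
prefix=$(mktemp -d)
bindir=$(mktemp -d)
mkdir -p "$prefix/bin"
printf '#!/bin/sh\necho hello\n' > "$prefix/bin/mytool"   # a stand-in executable
chmod +x "$prefix/bin/mytool"
ln -s "$prefix"/bin/* "$bindir"/    # link the executables into the bin directory
PATH="$bindir:$PATH"                # bindir comes first in the PATH
mytool                              # resolves through the symlink and prints "hello"
```

The advantage of this scheme is that removing a package is just a matter of deleting its prefix directory and the dangling links.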


Although my box had Python 2.4 installed, I decided to install a fresh Python 2.5. You probably don't have to repeat this step because Mercurial should work perfectly fine with Python 2.4.

> cd /share/software
> tar -xzf Python-2.5.1.tgz

> cd Python-2.5.1/
> mkdir /share/software/Python-2.5.1_bin
> ./configure --prefix=/share/software/Python-2.5.1_bin
> make
> make install
> ln -s /share/software/Python-2.5.1_bin/bin/* /usr/local/bin/



After this step I downloaded, compiled and installed Mercurial. As mentioned above, you'll probably be fine if you use your default Python installation for this step:

> cd /share/software
> tar -xzf mercurial-0.9.5.tar.gz
> cd mercurial-0.9.5/
> make install-bin PYTHON=/share/software/Python-2.5.1_bin/bin/python PREFIX=/share/software/hg

> ln -s /share/software/hg/lib/python2.5/site-packages/* /share/software/Python-2.5.1_bin/lib/python2.5/site-packages/
> ln -s /share/software/hg/bin/* /usr/local/bin/
> cat > ~/.hgrc
[ui]
username = Volker H. Simonis
> hg debuginstall

The last command, hg debuginstall, should complete without errors and produce the following output:

Checking encoding (UTF-8)...
Checking extensions...
Checking templates...
Checking patch...
Checking merge helper...
Checking commit editor...
Checking username...
No problems detected


The Forest Extension

After I had a working Mercurial, I could use it to get the Forest extension which isn't strictly needed but which will simplify the download of the OpenJDK sources. Note that you'll have to set the  http_proxy environment variable to point to your http proxy server if you're behind a firewall (e.g. http_proxy=http://proxy:8080).

cd /share/software
hg clone
ln -s /share/software/hgforest/ /share/software/hg/lib/python2.5/site-packages/hgext/


Cloning the OpenJDK sources

Now comes the big moment. I fired up a hg fclone command to clone the OpenJDK sources. If everything works fine, this should download about 28,000 files to your local machine. The output should look as follows (stripped down):

> cd /share/software
> mkdir OpenJDK
> cd OpenJDK

> hg fclone jdk7
requesting all changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 26 changes to 26 files
26 files updated, 0 files merged, 0 files removed, 0 files unresolved


... ... ...

requesting all changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 2974 changes to 2974 files
2974 files updated, 0 files merged, 0 files removed, 0 files unresolved

There may still be a problem here, if you're behind a firewall. Instead of getting the sources, you may get the following error:

> hg fclone jdk7
abort: error: Name or service not known

This is because of a bug in fclone which doesn't honor the setting of the http_proxy environment variable (although hg alone does use it). Fortunately, this can be fixed easily by setting the http proxy in the ~/.hgrc configuration file like this:
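A minimal version of such a ~/.hgrc entry, using Mercurial's standard [http_proxy] section and the example proxy address from above (adjust host and port to your site), looks like this:

```ini
[http_proxy]
host = proxy:8080
```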


That's it! Now we should have the sources. In order to build them, we still need to install some third party libraries like Freetype, Findbugs and Cups and of course the binary encumbrances which are bundled in the appropriate binary plugs file. You may skip any of the following installation steps if your Linux distribution already has the development packages installed for the corresponding library.


Let's start with Freetype:

> cd /share/software/OpenJDK
> tar -xzf freetype-2.3.5.tar.gz

> cd freetype-2.3.5/
> vi include/freetype/config/ftoption.h
> mkdir ../freetype-2.3.5_bin
> ./configure --prefix=/share/software/OpenJDK/freetype-2.3.5_bin/
> make
> make install

In order to make the Freetype library accessible for the OpenJDK build, we can set the following environment variables:

> export ALT_FREETYPE_LIB_PATH=/share/software/OpenJDK/freetype-2.3.5_bin/lib
> export ALT_FREETYPE_HEADERS_PATH=/share/software/OpenJDK/freetype-2.3.5_bin/include

Another possibility to make a library available during the OpenJDK build is to set the corresponding variable on the make command line. This is the approach that I'll use later on in the build section. Nevertheless, I'll also list the export statements after the installation of each library for the sake of completeness.


The same thing for Findbugs ..

> cd /share/software/OpenJDK
> tar -xzf findbugs-1.3.1.tar.gz
> export FINDBUGS_HOME=/share/software/OpenJDK/findbugs-1.3.1



.. and Cups:

> cd /share/software/OpenJDK
> tar -xzf cups-1.3.5-source.tar.gz

> cd cups-1.3.5/
> mkdir ../cups-1.3.5_bin
> ./configure --prefix=/share/software/OpenJDK/cups-1.3.5_bin 
> make
> make install
> export ALT_CUPS_HEADERS_PATH=/share/software/OpenJDK/cups-1.3.5_bin/include


Binary plugs, Boot JDK and Ant

Finally we need to get the Binary plugs and a boot JDK (at least Java 6). I already had Java 6 and Ant installed in /share/software/Java/1.6.0 and /share/software/Ant/1.6.4 respectively, so I only had to download and install the Binary plugs and make all of them known to the OpenJDK build:

> cd /share/software/OpenJDK
> java -jar jdk-7-ea-plug-b24-linux-i586-04_dec_2007.jar
> export ALT_BINARY_PLUGS_PATH=/share/software/OpenJDK/jdk-7-ea-plug-b24-linux-i586-04_dec_2007/openjdk-binary-plugs
> export ALT_BOOTDIR=/share/software/Java/1.6.0
> export ANT_HOME=/share/software/Ant/1.6.4/

With this step we ultimately finished the necessary preparations and can now happily proceed to build the OpenJDK!

Building the OpenJDK

Before we start the build, we should first run the sanity check to see if our settings and the available tools and libraries are sufficient for the build. Because I don't like to clutter my environment, I'll set all the needed environment variables on the command line, such that they only affect the current command as follows:

> cd /share/software/OpenJDK/jdk7
> LANG=C \
  FINDBUGS_HOME=/share/software/OpenJDK/findbugs-1.3.1 \
  ANT_HOME=/share/software/Ant/1.6.4/ \
  ALT_CUPS_HEADERS_PATH=/share/software/OpenJDK/cups-1.3.5_bin/include \
  ALT_BOOTDIR=/share/software/Java/1.6.0 \
  ALT_BINARY_PLUGS_PATH=/share/software/OpenJDK/jdk-7-ea-plug-b24-linux-i586-04_dec_2007/openjdk-binary-plugs \
  ALT_FREETYPE_LIB_PATH=/share/software/OpenJDK/freetype-2.3.5_bin/lib \
  ALT_FREETYPE_HEADERS_PATH=/share/software/OpenJDK/freetype-2.3.5_bin/include \
  make sanity

You shouldn't proceed until the sanity check completes without any errors or warnings. The checks performed by the sanity check are still quite weak, and a passed sanity check is no guarantee of a successful build. For example, the sanity check merely verifies that ALT_BINARY_PLUGS_PATH points to a valid directory, not that this directory really contains the binary plugs!
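The variable-prefix form used in the make sanity call above (setting LANG, ALT_BOOTDIR and friends only for the one command) is plain POSIX shell behaviour, which can be seen in isolation:

```shell
# A variable assignment prefixed to a command is passed only to that
# command's environment; the invoking shell is left untouched.
unset LANG
LANG=C sh -c 'echo "child: $LANG"'
echo "parent: ${LANG-unset}"
```

The child process sees LANG=C while the calling shell still has LANG unset, so repeated builds with different settings never clutter your environment.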

Internally, SUN apparently still uses gcc 3.2.2 to build the JDK, and with gcc 3.2.2 there seem to be no warnings during the build, so they decided to use the -Werror option on Linux, which instructs gcc to treat every compiler warning as an error. If, however, you want to build with a gcc version newer than 3.2.2 (and you'll probably want to do this on a newer Linux distribution), you'll either have to use precompiled headers (USE_PRECOMPILED_HEADER=true) or comment out the line WARNINGS_ARE_ERRORS = -Werror in the file hotspot/build/linux/makefiles/gcc.make (see Bug 6469784).

Using precompiled headers is probably the easier way to go for first time users and it has the additional benefit of speeding up the build considerably. But it only helps with warnings related to inlining and it has the disadvantage of hiding problems with the includeDB (see this mail thread for a discussion of the topic). If you want to use gcc 4.3 or higher, you'll probably have to disable the treatment of warnings as errors for a successful build (see for example here).

Building the corba subdirectory will fail if you haven't set ALT_JDK_IMPORT_PATH. This is because of a known bug in corba/make/common/shared/Defs.gmk (see   this mail thread). You can fix the problem by inserting the following lines into corba/make/common/shared/Defs.gmk, just before the line that includes Compiler.gmk:


After this last patch, we can finally start the build. I'll focus here on debug builds because you're probably a developer if you read this and as a developer you're probably interested in a debug build (after all you could download a product build, so it would not be worth the work).

> cd /share/software/OpenJDK/jdk7
> LANG=C \
  FINDBUGS_HOME=/share/software/OpenJDK/findbugs-1.3.1 \
  ANT_HOME=/share/software/Ant/1.6.4/ \
  ALT_CUPS_HEADERS_PATH=/share/software/OpenJDK/cups-1.3.5_bin/include \
  ALT_BOOTDIR=/share/software/Java/1.6.0/ \
  ALT_BINARY_PLUGS_PATH=/share/software/OpenJDK/jdk-7-ea-plug-b24-linux-i586-04_dec_2007/openjdk-binary-plugs \
  ALT_FREETYPE_LIB_PATH=/share/software/OpenJDK/freetype-2.3.5_bin/lib \
  ALT_FREETYPE_HEADERS_PATH=/share/software/OpenJDK/freetype-2.3.5_bin/include \
  DEBUG_NAME=debug \
  ALT_OUTPUTDIR=/share/software/OpenJDK/jdk7/build/openjdk_full_debug \
  make 2>&1 | tee /share/software/OpenJDK/jdk7/build/openjdk_full_debug.log

Note that we don't use the make_debug target because there's a bug in the top-level Makefile that ignores ALT_OUTPUTDIR if that target is used (see this mail thread). You should also be aware that the build will always create an empty directory named <ALT_OUTPUTDIR>-fastdebug, which can be ignored and removed.

It is also advisable to save the build output to a file. This can be achieved by piping the whole output to tee as shown in the call to make above. tee is a utility that duplicates its input to standard output and to an additional file. This file can be consulted later if there were build problems, or if we just want to know how a file was built and where the resulting object files were placed.
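The tee idiom can be tried out on its own with any command:

```shell
# tee duplicates its stdin to stdout and to the named file, so the build
# output stays visible on the terminal while also being archived.
log=$(mktemp)
echo "build output line" | tee "$log"
cat "$log"    # the same line was also written to the log file
```

Note that `2>&1` must come before the pipe (as in the make call above) so that compiler errors on stderr end up in the log as well.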

Following the above pattern it is also possible to build a product or a fastdebug build. You just have to set SKIP_DEBUG_BUILD=true SKIP_FASTDEBUG_BUILD=false DEBUG_NAME=fastdebug for a fastdebug build and SKIP_DEBUG_BUILD=true SKIP_FASTDEBUG_BUILD=true for a product build.

Notice that ALT_PARALLEL_COMPILE_JOBS is currently only honored by the corba and jdk subprojects, while the hotspot subproject uses HOTSPOT_BUILD_JOBS as the indicator for a parallel build. Unfortunately, due to another bug, neither ALT_PARALLEL_COMPILE_JOBS nor HOTSPOT_BUILD_JOBS is handed over from the top-level makefile to the hotspot makefile. However, this can be easily fixed by adding the following lines to /make/hotspot-rules.gmk, just before the hotspot-build target:


With this change, setting ALT_PARALLEL_COMPILE_JOBS on the make command line will be enough to trigger a parallel hotspot build, which should be considerably faster on a multi-processor machine (a good default setting for ALT_PARALLEL_COMPILE_JOBS is hard to predict, but 1.5 x NrOfCPUs should be a good starting point).
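The 1.5 x NrOfCPUs rule of thumb can also be computed instead of guessed. This sketch assumes the coreutils nproc tool is available, with a /proc/cpuinfo count as a fallback on older systems:

```shell
# Derive a parallel-job count of roughly 1.5 times the number of CPUs;
# nproc (coreutils) is assumed, /proc/cpuinfo serves as a fallback.
cpus=$(nproc 2>/dev/null || grep -c ^processor /proc/cpuinfo)
jobs=$(( cpus * 3 / 2 ))
echo "ALT_PARALLEL_COMPILE_JOBS=$jobs"
```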

Sooner or later (depending on your machine and the right setting of ALT_PARALLEL_COMPILE_JOBS:) the build should finish (hopefully without any errors). Among others, this will create the following subdirectories in the build location that was specified with ALT_OUTPUTDIR:


corba, hotspot and langtools contain the build results of the corresponding subprojects. bin, include and lib contain the binaries, include files and libraries that make up the corresponding directories of a Java SDK or JRE distribution. Finally, j2sdk-image and j2re-image contain complete images of a Java SDK or JRE distribution, assembled from the subdirectories of the build directory. If everything went fine, we can now call bin/java (or j2sdk-image/bin/java or j2re-image/bin/java, which are all the same) to verify our build:

> /share/software/OpenJDK/jdk7/build/openjdk_full_debug/bin/java -version
openjdk version "1.7.0-internal-debug"
OpenJDK Runtime Environment (build 1.7.0-internal-debug-dXXXXXX_04_jan_2008_11_27-b00)
OpenJDK Server VM (build 12.0-b01-jvmg, mixed mode)

That looks really nice, doesn't it?! We managed to build a complete debug version of the OpenJDK from scratch!

Building the HotSpot

Now that we've successfully built the OpenJDK, once we start hacking on the HotSpot VM we probably don't want to go through all this hassle just to verify that a small VM change compiles and works correctly. Luckily, the HotSpot developers at SUN didn't want to either, so they provide an elegant way to rebuild only the HotSpot part of the VM and test it.

And here is how it works. Go to the hotspot/make directory and execute the following make command:

> LANG=C \
  ALT_BOOTDIR=/share/software/Java/1.6.0/ \
  ALT_OUTPUTDIR=../../build/hotspot_debug \
  make jvmg jvmg1 2>&1 | tee ../../build/hotspot_debug.log

As you can see, considerably fewer variables are needed to build the VM (in fact the only real dependency is the boot JDK specified with ALT_BOOTDIR). By selecting the corresponding build target it is possible to create debug builds (jvmg and jvmg1 targets), fastdebug builds (fastdebug and fastdebug1 targets), optimized builds (optimized and optimized1 targets) and product builds (product and product1 targets). A "1"-suffix in the target name indicates that the client version (the one with the C1 JIT compiler) will be built, while a target name without suffix builds the server version (the one with the C2 JIT compiler) of the corresponding VM. A fastdebug build is an optimized build with assertions (C/C++ style asserts in the VM code) enabled. An optimized build has no assertions, while a product build is an optimized build without assertions and with -DPRODUCT defined (this may for example disable non-product switches in the resulting VM).

With this information in mind, you'll easily guess that the last make command builds the debug versions of the client and the server VM. The results will be placed in the corresponding <os>_<arch>_compiler1/ and <os>_<arch>_compiler2/ subdirectories of the output directory, with the little anomaly that debug builds are placed in a jvmg subdirectory (i.e. linux_i486_compiler1/jvmg/ and linux_i486_compiler2/jvmg/ in this example).

Running the HotSpot

These directories not only contain the HotSpot VM as a shared library (libjvm.so) but also a small executable called gamma. I don't really know the origin of this name (perhaps one of the geeks can comment on this?), but it is a really convenient way to test the newly created VM:

> build/hotspot_debug/linux_i486_compiler1/jvmg/gamma -version
build/hotspot_debug/linux_i486_compiler1/jvmg/gamma: \
 error while loading shared libraries: libjvm.so: \
 cannot open shared object file: No such file or directory

> LD_LIBRARY_PATH=build/hotspot_debug/linux_i486_compiler1/jvmg \
  build/hotspot_debug/linux_i486_compiler1/jvmg/gamma -version
JAVA_HOME must point to a valid JDK/JRE to run gamma
Error: could not find libjava.so
Error: could not find Java 2 Runtime Environment.

> LD_LIBRARY_PATH=build/hotspot_debug/linux_i486_compiler1/jvmg \
  JAVA_HOME=build/openjdk_full_debug/j2sdk-image \
  build/hotspot_debug/linux_i486_compiler1/jvmg/gamma -version
openjdk version "1.7.0-internal-debug"
OpenJDK Runtime Environment (build 1.7.0-internal-debug-dXXXXXX_04_jan_2008_11_27-b00)
OpenJDK Client VM (build 12.0-b01-internal-jvmg, mixed mode)

As you can see, we just have to put the directory which contains the desired VM (client or server) into the LD_LIBRARY_PATH and define JAVA_HOME to point to a valid JDK or JRE (e.g. the one we built in the first step). gamma is a simple launcher intended for internal engineering tests. It is built from hotspot/src/os/linux/launcher/java.c and hotspot/src/os/linux/launcher/java_md.c (search for launcher.c in the build log to see the details). These are stripped down versions of the real Java launcher sources jdk/src/share/bin/java.c and jdk/src/solaris/bin/java_md.c from the jdk workspace. The gamma launcher misses some of the logic of the default Java launcher which finds the corresponding VM and JDK automatically. Therefore it is necessary to signal their location by setting the LD_LIBRARY_PATH and JAVA_HOME environment variables. The big advantage however is that it builds within the hotspot project without any dependency on the jdk workspace, which comes in quite handy for fast development and testing.

Just as a side note: you probably wondered why I wrote jdk/src/solaris/bin/java_md.c in the previous paragraph for the location of java_md.c in the jdk workspace. The solaris part was not a typo. The jdk workspace isn't as well structured as the hotspot workspace, which divides platform and architecture dependent files into corresponding os, cpu and os_cpu subdirectories. Instead, in the jdk workspace there's just a windows directory for the Windows native code and a solaris directory which contains all the Unix native code (separated by ifdefs if necessary). (In fact there is a linux subdirectory, but it only contains the Linux man pages and no code.) This may prove a serious problem for the various porting projects which attempt to port the OpenJDK to other Unix-like operating systems.

Currently, the gamma launcher is somewhat outdated (hotspot/src/os/linux/launcher/java.c is based on the 1.6.0-b28 JDK version of jdk/src/share/bin/java.c as stated in the file) but it is still sufficient to test the HotSpot VM. Hopefully it will be updated, as the JDK7 development moves forward.

Building a HotSpot with C++-Interpreter

If you want to build the C++-Interpreter instead of the default template interpreter, you have to additionally set CC_INTERP=true on the build command line. Currently the C++-Interpreter only works for the 32-bit x86 debug build and for the 32-bit opt and debug builds on SPARC (see my previous blog entry for how to get the 32-bit x86 opt and the 64-bit SPARC versions running). Notice that you'll also have to disable the treatment of warnings as errors if you build the C++-Interpreter, because its sources generate some warnings.

For your convenience I created a patch file (in fact I ran hg diff > linux32.patch in the hotspot directory) which fixes all the problems mentioned so far. To apply it, just download it and call patch as follows:

> cd hotspot/
> patch -p1 < linux32.patch
patching file build/linux/makefiles/gcc.make
patching file src/cpu/x86/vm/cppInterpreter_x86.cpp
patching file src/share/vm/interpreter/bytecodeInterpreter.cpp

You should now be able to build a debug version of the C++-Interpreter enabled HotSpot with the following command:

> LANG=C \
  ALT_BOOTDIR=/share/software/Java/1.6.0/ \
  ALT_OUTPUTDIR=../../build/hotspot_CC_INTERP_debug \
  CC_INTERP=true \
  make jvmg jvmg1 2>&1 | tee ../../build/hotspot_CC_INTERP_debug.log

Notice that building an interpreter-only VM isn't currently supported out of the box by the current top-level HotSpot makefiles. Previously, the CORE targets (debugcore, jvmgcore, fastdebugcore, optimizedcore, profiledcore and productcore) could be used for this purpose, and building the HotSpot that way gave you a pure, interpreter-only VM. However now, as Tom Rodriguez explains, "..core is just a system without a compiler but still including the classes needed by the compiler like nmethod so it's slightly less minimal ... and the core makefile targets should still work". These core targets are defined only in the platform-dependent makefiles hotspot/build/<os>/Makefile, and not in hotspot/make/Makefile, so you'll probably have to hack hotspot/make/Makefile to build them.

Developing with NetBeans

See HotSpot development on Linux with NetBeans - Part 2



[1] Kelly O'Hair's Build Cheat Sheet
[2] Kelly O'Hair's Glossary for JDK Builds
[3] OpenJDK Build Readme
[4] NetBeans C/C++ Support Quick Start Tutorial
[5] Interview with Gordon Prieur about Sun Studio 12

The Template-Interpreter

The default interpreter that comes with the Hotspot VM is the so-called "Template Interpreter". It is called template interpreter because it is basically created at runtime (every time the Hotspot starts) from a set of assembler templates which are translated into real machine code. Notice that although this is code generation at runtime, it should not be confused with the ability of the Hotspot to do Just In Time (JIT) compilation of computationally expensive program parts.

While a JIT compiler compiles a whole method (or even several methods together if we consider inlining) into executable machine code, the template interpreter, although generated at runtime, is still just an interpreter. It interprets a Java program bytecode by bytecode. The advantage of the template interpreter approach is that most of the code that gets executed for every single bytecode is pure machine code, and the dispatching from one bytecode to the next is also done in native machine code. Moreover, this technique allows a very tight adaptation of the interpreter to the actual processor architecture, so the same binary will still run on an old 80486 while it may well use the latest and greatest features of the newest processor generation if available.

Besides the slightly increased startup time, the second drawback of the template interpreter approach is the fact that the interpreter itself is quite complicated. It requires, for example, a kind of built-in runtime assembler which translates the code templates into machine code. Therefore, porting the template interpreter to a new processor architecture is not an easy task and requires quite a profound knowledge of the underlying architecture.

The C++-Interpreter

In the earlier Java days (around JDK 1.4) a second interpreter existed beside the template interpreter - the so-called C++ Interpreter. It was probably named that way because the main interpreter loop is implemented as a huge switch statement in C++. Despite its name, however, even the C++ Interpreter isn't completely implemented in C++. It still contains large parts, like for example the frame manager, which are written in assembler. It doesn't rely on recursive C++ method invocations to realize function calls in Java, but instead uses the just mentioned frame manager, which maintains the stack manually. But despite these issues, the C++ interpreter is probably still easier to port to a new architecture than the template interpreter.

In Java 1.4 the C++ interpreter was used for the Itanium port of the HotSpot VM. But after SUN abandoned support for the Itanium architecture, things became rather quiet around the C++ interpreter, although it was still present in the HotSpot sources. With the advent of OpenJDK, the demand from the developer community for a working example of the C++ interpreter grew (see BugID: 6571248), and so the C++ interpreter was finally reactivated in build 20 of OpenJDK (at least for the i486 and the SPARC architecture).

The C++ interpreter basically worked out of the box for the 32-bit x86 debug build and for the 32-bit opt and debug builds on SPARC. If you would like to try the opt build on a 32-bit x86 platform, you currently have to apply this small patch: bytecodeInterpreter.patch. To make the C++ interpreter 64-bit clean on SPARC, a few more changes have to be made, but I succeeded in getting it running (at least for the JVM98 and DaCapo benchmark suites) by applying these patches: bytecodeInterpreter_sparc.hpp.patch, cppInterpreter_sparc.cpp.patch, parseHelper.cpp.patch. After applying the patches, you can build the HotSpot VM with the C++ interpreter instead of the usual template interpreter by setting the environment variable CC_INTERP in the shell where the build is started.

Template- vs. C++-Interpreter shootout

Besides the expected porting effort, performance will probably be one of the other main criteria in the decision for or against one of the two interpreters. I have therefore run the DaCapo benchmark suite with both interpreters, in interpreter-only mode (-Xint) and in mixed mode (-Xmixed) together with the C2 server JIT compiler. The tests were executed with a 32-bit VM on Linux/x86 and with a 32-bit and a 64-bit VM on Solaris/SPARC. The results can be seen in the following tables.

Table 1: Interpreted execution (-Xint) on Solaris/SPARC

          |               32-bit                 |               64-bit
Benchmark | Template   | C++ Int.   | C++/Tpl   | Template   | C++ Int.   | C++/Tpl
antlr     | 126516 ms  | 257359 ms  | 49.16%    | 131355 ms  | 289253 ms  | 45.41%
bloat     | 327444 ms  | 851316 ms  | 38.46%    | 352711 ms  | 956596 ms  | 36.87%
chart     | 250255 ms  | 600670 ms  | 41.66%    | 265860 ms  | 677299 ms  | 39.25%
eclipse   | 1003766 ms | 2180171 ms | 46.04%    | 1041304 ms | 2454685 ms | 42.42%
fop       | 19114 ms   | 44072 ms   | 43.37%    | 20614 ms   | 49592 ms   | 41.57%
hsqldb    | 67514 ms   | 159739 ms  | 42.27%    | 76838 ms   | 186426 ms  | 41.22%
jython    | 184255 ms  | 445747 ms  | 41.34%    | 197455 ms  | 504520 ms  | 39.14%
luindex   | 317580 ms  | 726604 ms  | 43.71%    | 325140 ms  | 809468 ms  | 40.17%
lusearch  | 57484 ms   | 139343 ms  | 41.25%    | 61858 ms   | 158497 ms  | 39.03%
pmd       | 153715 ms  | 376361 ms  | 40.84%    | 164771 ms  | 430127 ms  | 38.31%
xalan     | 69368 ms   | 171061 ms  | 40.55%    | 75989 ms   | 196171 ms  | 38.74%


Table 2: Mixed mode execution (-Xmixed) on Solaris/SPARC

          |               32-bit                 |               64-bit
Benchmark | Template   | C++ Int.   | C++/Tpl   | Template   | C++ Int.   | C++/Tpl
antlr     | 37962 ms   | 39326 ms   | 96.53%    | 37339 ms   | 45151 ms   | 82.70%
bloat     | 12018 ms   | 24324 ms   | 49.41%    | 13403 ms   | 29218 ms   | 45.87%
chart     | 14344 ms   | 17339 ms   | 82.73%    | 16610 ms   | 20054 ms   | 82.83%
eclipse   | 139999 ms  | 172798 ms  | 81.02%    | 154389 ms  | 195541 ms  | 78.95%
fop       | 3036 ms    | 3700 ms    | 82.05%    | 3382 ms    | 4018 ms    | 84.17%
hsqldb    | 11258 ms   | 15007 ms   | 75.02%    | 16359 ms   | 20612 ms   | 79.37%
jython    | 9792 ms    | 15659 ms   | 62.53%    | 11562 ms   | 18601 ms   | 62.16%
luindex   | 80190 ms   | 83652 ms   | 95.86%    | 82075 ms   | 86279 ms   | 95.13%
lusearch  | 6692 ms    | 8671 ms    | 77.18%    | 7731 ms    | 9742 ms    | 79.36%
pmd       | 11364 ms   | 16937 ms   | 67.10%    | 17218 ms   | 23836 ms   | 72.24%
xalan     | 7901 ms    | 9768 ms    | 80.89%    | 10517 ms   | 13019 ms   | 80.78%


Table 3: Interpreted and mixed mode execution on Linux/x86 (32-bit)

          |  Interpreted execution (-Xint)       |  Mixed mode execution (-Xmixed)
Benchmark | Template   | C++ Int.   | C++/Tpl   | Template   | C++ Int.   | C++/Tpl
antlr     | 58452 ms   | 107494 ms  | 54.38%    | 31660 ms   | 35035 ms   | 90.37%
bloat     | 136235 ms  | 335865 ms  | 40.56%    | 6201 ms    | 17728 ms   | 34.98%
chart     | 90805 ms   | 209499 ms  | 43.34%    | 7574 ms    | 11154 ms   | 67.90%
fop       | 8381 ms    | 19088 ms   | 43.91%    | 1489 ms    | 1956 ms    | 76.12%
hsqldb    | 32907 ms   | 68857 ms   | 47.79%    | 4629 ms    | 7192 ms    | 64.36%
jython    | 83621 ms   | 188785 ms  | 44.29%    | 4403 ms    | 8259 ms    | 53.31%
luindex   | 161362 ms  | 344860 ms  | 46.79%    | 67150 ms   | 73282 ms   | 91.63%
lusearch  | 33548 ms   | 86230 ms   | 38.91%    | 4425 ms    | 7198 ms    | 61.48%
pmd       | 69562 ms   | 161983 ms  | 42.94%    | 5574 ms    | 9899 ms    | 56.31%
xalan     | 49219 ms   | 115101 ms  | 42.76%    | 5335 ms    | 7449 ms    | 71.62%


Although the numbers should be treated with some caution because of possible measurement inaccuracies, the results can be summarized as follows. In interpreter-only mode (-Xint), the performance of the C++ interpreter varies between 35 and 50 percent of the performance of the template interpreter. In mixed mode (-Xmixed), a VM that runs with the C++ interpreter reaches from 45 up to 90 percent of the performance of a VM that runs with the template interpreter. The sometimes still huge differences between a VM with the template interpreter and one with the C++ interpreter in mixed mode, where most of the "hot" code should be compiled anyway, may be partly explained by the lack of interpreter profiling in the C++ interpreter (the C++ interpreter runs with -XX:-ProfileInterpreter). This may lead to less optimal code generation, but the details still have to be evaluated.
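The percentage columns above are simply the template interpreter's time divided by the C++ interpreter's time, i.e. the relative performance of the C++ interpreter. As a quick sanity check (class and method names here are my own, not part of the benchmark harness):

```java
public class RatioCheck {
    // Relative performance of the C++ interpreter:
    // (template interpreter time / C++ interpreter time) * 100
    public static double relativePerformance(double templateMs, double cppMs) {
        return 100.0 * templateMs / cppMs;
    }

    public static void main(String[] args) {
        // antlr, 32-bit -Xint run from Table 1: 126516 ms vs 257359 ms
        System.out.printf("%.2f%%%n", relativePerformance(126516, 257359)); // 49.16%
    }
}
```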

If you want more information about the current status of the C++ interpreter, you should probably follow the C++ interpreter threads on the OpenJDK HotSpot mailing list. You can also read Gary Benson's online diary, where he writes about his experience of porting the OpenJDK to PowerPC using the C++ interpreter.

Although the JCK tests are not a regression test suite, it is probably not uncommon that they (or at least a significant subset of them) are used as automated tests. To do this successfully, a number of JCK tests, such as interactive tests or tests which require a special setup, have to be excluded from the test suite. This can easily be achieved with the help of exclude lists.

However, even with such a setup, one still encounters infrequent, spurious failures in random tests. Usually these failures are not reproducible, but they still need a lot of attention in order to ensure that they do not signal a real problem with the tested VM.

After analyzing such failed tests over a longer period of time, I realized that most of them were caused by an InterruptedException that was not supposed to be caught within that test. Notice that while some tests explicitly report that the reason for a failure is a caught InterruptedException, other tests fail silently. In those cases, only a closer look at the test sources revealed that most of them call one of the Object.wait(), Thread.join() or Thread.sleep() methods, which can all be interrupted and throw an InterruptedException.

The only strange thing was that I couldn't identify a single call to Thread.interrupt() during the execution of these failed tests. This observation led to the assumption that the interrupts must originate from the test agent or from the harness. Because the failures were so rare and occurred in random tests, I assumed that they could be provoked by timeouts caused by a high machine load. But decreasing the JCK timeout factor just led to more tests being aborted. Such tests are flagged with an "Error" status, in contrast to failed tests, which return in time but don't return the expected result (they are flagged as "Failed"). So that obviously wasn't the explanation.

Finally, I decided to try it the hard way and instrumented Thread.interrupt() and the two constructors of the InterruptedException class to print a timestamp and a stack trace every time they were called. After running the JCK tests with the modified VM I could identify some interesting calls to Thread.interrupt():

java.lang.Throwable: Thread.interrupt() called for Thread[Agent0,3,main]
        at java.lang.Thread.interrupt(
        at com.sun.javatest.agent.SocketConnection$1.timeout(
        at com.sun.javatest.util.Timer$

They only occurred in JCK runs with failed tests, but interestingly enough, they sometimes happened long (up to 30 minutes) before the failing test. After understanding the semantics of Thread.interrupt(), which only sets the status of a thread to "interrupted" but doesn't actively interrupt the thread, it was clear why: the corresponding thread gets interrupted for a reason still to be determined, but the interrupted status of the thread is never cleared. Afterwards, the first test which calls one of the interruptible Object.wait(), Thread.join() or Thread.sleep() methods will instantly get an InterruptedException and fail badly.
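The effect is easy to reproduce in isolation. The following self-contained snippet (my own illustration, not JCK code) sets the interrupt status of the current thread and then calls Thread.sleep(), which throws an InterruptedException immediately instead of sleeping:

```java
public class StaleInterruptDemo {
    // Returns true if a pending interrupt status makes the next
    // interruptible call fail right away instead of blocking.
    public static boolean sleepFailsImmediately() {
        Thread.currentThread().interrupt(); // only sets the status flag
        long start = System.currentTimeMillis();
        try {
            Thread.sleep(10_000);           // would normally block for 10 s
        } catch (InterruptedException e) {
            // thrown immediately; sleep() also clears the status
            return System.currentTimeMillis() - start < 1_000;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(sleepFailsImmediately()); // prints "true"
    }
}
```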

So I just had to find out why somebody would want to interrupt the test thread, why this interruption happens so randomly and, probably most interesting, why this interrupt isn't handled by the code that provokes it. After downloading the JTHarness sources and digging into the code, I came up with the following explanation:

Below you can see a simplified view of how an agent handles requests from the harness (for the full story see com/sun/javatest/agent/ For every test, the agent establishes a new connection (an object of type SocketConnection) to the test harness. It receives the test name and parameters from the connection, executes the test and returns the result back through the connection.

Listing 1: Agent in pseudocode
Agent.handleRequestsUntilClosed() {
  while (!closing) {
    Connection connection = nextConnection();
    Task t = new Task(connection);
    t.handleRequest();
      // Task.handleRequest() in pseudocode
      status = execute();
      close();
        // Task.close() in pseudocode
        if (!connection.isClosed()) {
          connection.waitUntilClosed(timeout);
        }
        connection.close();
  }
}
The interesting part here is the call to connection.waitUntilClosed(). The method waitUntilClosed() is declared in the Connection interface in com/sun/javatest/agent/ to potentially throw an InterruptedException. However, the implementation of the interface in com/sun/javatest/agent/ doesn't throw any exceptions. It works as follows:

Listing 2: waitUntilClosed() in pseudocode
SocketConnection.waitUntilClosed(int timeout) {
  waitThread = Thread.currentThread();
  Timer.Timeable cb = new Timer.Timeable() {
    public void timeout() {
    }
  };
  Timer.Entry e = timer.requestDelayedCallback(cb, timeout);
  try {
    while (true) {
      int i =;  // read from the socket input stream
      if (i == -1) break;        // EOF - connection closed by the harness
    }
  } finally {
  }
}

It first creates a Timeable object that will interrupt the thread and close the in- and output streams associated with the current socket after a given timeout. Thereafter it reads from the socket input stream until it encounters an EOF condition. If the EOF arrives within the timeout period, the Timeable object is removed from the waiting timer thread and the method returns. Otherwise, if the stream was not closed by the harness within the timeout period, the timer thread calls the timeout() method of the Timeable object, which in turn calls interrupt() on the waiting thread and explicitly closes the socket streams. At this point, the thread waiting for the EOF condition on the socket stream will finally get it and return. Notice that in the latter case, the interrupt status of the thread is neither queried and transformed into an InterruptedException nor cleared.
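This watchdog pattern can be sketched with java.util.Timer in place of the harness' own Timer/Timeable classes (all names below are mine, not JTHarness code): a timer task interrupts the waiting thread after the timeout, and the interrupt status survives the method:

```java
import java.util.Timer;
import java.util.TimerTask;

public class WatchdogDemo {
    // Sketch of SocketConnection.waitUntilClosed(): wait for a condition,
    // let a timer interrupt us on timeout, and never clear the status.
    public static boolean waitWithWatchdog(long timeoutMs) {
        final Thread waiter = Thread.currentThread();
        Timer timer = new Timer(true);   // daemon timer thread
        TimerTask watchdog = new TimerTask() {
            public void run() {
                waiter.interrupt();      // like Timeable.timeout()
            }
        };
        timer.schedule(watchdog, timeoutMs);
        try {
            // stand-in for "read until EOF"; here the "EOF" never comes
            while (!Thread.currentThread().isInterrupted()) {
                Thread.onSpinWait();
            }
        } finally {
            watchdog.cancel();           // like timer.cancel(e)
            timer.cancel();
        }
        // the interrupt status survives - exactly the problem described above
        return Thread.currentThread().isInterrupted();
    }

    public static void main(String[] args) {
        System.out.println(waitWithWatchdog(100)); // prints "true"
        Thread.interrupted();                      // clean up the status
    }
}
```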

At the end, handleRequest(), the caller of waitUntilClosed(), will call the Task.close() method which will finally close the socket. Notice that handleRequest() doesn't handle the interrupt status of the thread either, so the thread will effectively stay marked as interrupted after a timeout has happened in waitUntilClosed(). This will ultimately lead to a failure in the next JCK test that calls one of the interruptible Object.wait(), Thread.join() or Thread.sleep() methods.

The solution to this problem is easy. We could either query and clear the interrupted status of the agent thread after the call to waitUntilClosed() in handleRequest(), like so:

Solution 1: clear thread interrupt status in handleRequest()
  if (Thread.interrupted() && tracing) {
    traceOut.println("Thread was interrupted - clearing interrupted status!");
  }

Notice that the call to Thread.interrupted() queries and clears the interrupted status of a thread in one step. A slightly more elegant solution would probably be to query and clear the interrupted status already in the finally block of the waitUntilClosed() method of SocketConnection. In case the thread was interrupted by the timer thread because of a timeout, the interrupted status of the thread should be cleared and an InterruptedException should be thrown. This exception will then be handled correctly in Task.handleRequest().
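The one-step query-and-clear semantics of Thread.interrupted() can be demonstrated in a few lines (my own example):

```java
public class InterruptedStatusDemo {
    // Thread.interrupted() reports the status AND resets it, so a
    // second call right afterwards returns false.
    public static boolean[] queryTwice() {
        Thread.currentThread().interrupt();
        boolean first  = Thread.interrupted(); // true, clears the status
        boolean second = Thread.interrupted(); // false, already cleared
        return new boolean[] { first, second };
    }

    public static void main(String[] args) {
        boolean[] r = queryTwice();
        System.out.println(r[0] + " " + r[1]); // prints "true false"
    }
}
```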

Solution 2: clear interrupt status and throw InterruptedException in waitUntilClosed()
  finally {
    if (Thread.interrupted()) {
      throw new InterruptedException();
    }
  }

With either of these two changes, the JCK tests run more stably on machines with a high load and don't produce any spurious test failures because of InterruptedExceptions any more.

Notice that there's one last caveat: for some (to me yet unknown) reason, JTHarness 3.2.2 executes the JCK tests within a folder in a different order than the harness that comes with the JCK test suite (which is version 3.2_2). For example, the tests in /api/java_lang/management/ThreadMXBean are executed in the following order (and succeed) with the original JCK harness:


With the JTHarness 3.2.2, there is one test - namely TrdMBean - that will fail because the ThreadMXBean tests are executed in a different order:


To avoid this problem, the test can be placed in the exclude file. This can of course only be done if the JCK run is not intended for certification! If any of the experts knows the reason why these tests are executed in a different order by the two versions of the test harness, please comment.

Although I'm referring to version 3.2.2 of JTHarness in this blog, the latest version 4.1.1 suffers from the same problem. I've therefore opened a bug report for this issue ("Bug in test agent causes random test failures (with InterruptedException)") which got the internal review ID 1109902. If you're interested in resolving this problem, you can vote for the bug once it appears in the bug database.

Update (2008-07-15): thanks to Brian Kurotsuchi this problem has been resolved in the latest version 4.1.4 of JTHarness (see jtharness issue 35).

Recently I did some benchmarking with the HotSpot and because my program was obviously too slow, I began to browse the HotSpot sources for some secret tuning parameters that could save my day. And indeed, after some digging, I found a real big fish: the "-Xintelligent_as_can_be_execution" option.

Well, that sounds extremely promising, I thought to myself, and started some experiments. For your convenience, I'll reproduce here the small factorial program that I used to measure performance:

import java.math.BigInteger;

public class Factorial {

  public static BigInteger factorial(long l) {
    BigInteger result = BigInteger.ONE;
    BigInteger fac = BigInteger.ONE;
    while (l-- > 0) {
      result = result.multiply(fac);
      fac = fac.add(BigInteger.ONE);
    }
    return result;
  }

  public static void main(String args[]) {
    long l = args.length > 0 ? Long.parseLong(args[0]) : 1;
    long t1 = System.currentTimeMillis();
    System.out.println("Factorial " + l + " = " + factorial(l));
    System.err.println("Elapsed time = " +
                       (System.currentTimeMillis() - t1) + "ms");
  }
}
I compiled the program and started Java on the console (I redirected the output of the resulting number to /dev/null to display only the elapsed time):

> java Factorial 10000 >/dev/null
Elapsed time = 2579ms

Now the same program with the secret "-Xintelligent_as_can_be_execution" option. Big expectations...

> java -Xintelligent_as_can_be_execution Factorial 10000 >/dev/null
Elapsed time = 14930ms

...big frustration. The execution time has increased by more than a factor of five! What's wrong here? If you can't believe it, just trust me and try the option with your preferred application. A performance degradation is guaranteed.

But what is this intelligence option good for, if it only slows down a program? So I started further investigations and found another very interesting, undocumented (and therefore probably extremely striking) option: "-Xcompletely_brain_damaged_execution". Seems counterintuitive, but if the "-Xintelligent_as_can_be_execution" option slows the program down, perhaps the "-Xcompletely_brain_damaged_execution" option will accelerate execution time up to infinity?

> java -Xintelligent_as_can_be_execution \
       -Xcompletely_brain_damaged_execution Factorial 10000 >/dev/null
Elapsed time = 2599ms

Well, not exactly brilliant, but at least the brain damage option seems to compensate for the intelligence option. Finally, I decided to abandon the search for secret high-performance options and back-doors intended only for use by the coalition of the willing and other registered secret services, and just went on improving my own program...


This is my first blog here and I thought I'd start with something funny...


If you didn't know the two secret options discussed above, you don't have to be upset. The funny thing is that they really work as described (I hope you tried them out), but as in the majority of cases, there's a really trivial explanation for this phenomenon.

As you probably know, the HotSpot VM supports some extended options, among them "-Xint" and "-Xcomp". The first one runs the VM in interpreter-only mode (without JIT compilation), while the latter advises the VM to compile all Java methods before executing them. For some reason (probably accidentally, or intentionally if you like to believe in conspiracies), the HotSpot programmers decided to match the original options against the beginning of a given command line argument (in fact they use strncmp() with the length argument set to the length of the original option). Therefore, the "-Xintelligent_as_can_be_execution" option is recognized as "-Xint" while the "-Xcompletely_brain_damaged_execution" option is identified as "-Xcomp".
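In Java terms, the effect of that strncmp() call corresponds to a simple prefix check (the following class is just my illustration of the matching logic, not actual HotSpot code):

```java
public class OptionMatchDemo {
    // strncmp(arg, option, strlen(option)) == 0 is equivalent to a
    // startsWith() test: only the option's own length is compared.
    public static boolean matchesLikeHotspot(String arg, String option) {
        return arg.startsWith(option);
    }

    public static void main(String[] args) {
        System.out.println(matchesLikeHotspot(
            "-Xintelligent_as_can_be_execution", "-Xint"));     // true
        System.out.println(matchesLikeHotspot(
            "-Xcompletely_brain_damaged_execution", "-Xcomp")); // true
    }
}
```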

Obviously, the execution of a program in interpreter-only mode is considerably slower than execution in mixed mode (the default). On the other hand, "-Xint" and "-Xcomp" are mutually exclusive options; if both are given on the command line, the last one wins. That's why in the last example we ended up in compile-all mode. Usually the execution time in this mode is slightly slower than in mixed mode, because each and every Java method will be JIT-compiled. In our small example, however, the execution time was similar to that in mixed mode.