Part II, in which we discuss the internal performance implications of said image type 

Let's dive briefly into some of the performance implications with BufferedImages (because I'm writing this and I tend to wind up in the performance arena no matter what I'm talking about).

There is currently only one image type that we guarantee acceleration on (if possible on the runtime platform): VolatileImage. When you create one of these images, we try to stash it in accelerated memory (such as VRAM on Windows, or an accelerated pixmap on Unix) and then perform rendering operations to and from that image using any available hardware acceleration.

VolatileImages work well for things like back buffers (ala the Swing back buffer, which is now a VolatileImage), where you obviously want to render to them frequently and copy from them as fast as possible. But for your average image, managing a VolatileImage can be tiresome (you have to make sure it's there before and after you use it), and you can't get all the flavors of images you want (currently only opaque volatiles exist).
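To make the "tiresome" bookkeeping concrete, here is a minimal sketch of the validate/contentsLost dance you have to do with a VolatileImage (class and method names here are my own illustration, not library code):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.image.VolatileImage;

public class VolatileDemo {
    // Renders one frame into a VolatileImage, recreating or
    // revalidating the image if its accelerated surface was lost
    // (for example, when the VRAM it lived in got reclaimed).
    static VolatileImage renderFrame(GraphicsConfiguration gc,
                                     VolatileImage img, int w, int h) {
        do {
            // (Re)create the image if it is missing or no longer
            // compatible with the current display configuration.
            if (img == null ||
                img.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                img = gc.createCompatibleVolatileImage(w, h);
            }
            Graphics2D g = img.createGraphics();
            try {
                g.setColor(Color.BLUE);
                g.fillRect(0, 0, w, h);
            } finally {
                g.dispose();
            }
            // If the surface was lost mid-render, do it all again.
        } while (img.contentsLost());
        return img;
    }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("headless: no VolatileImage demo");
            return;
        }
        GraphicsConfiguration gc = GraphicsEnvironment
            .getLocalGraphicsEnvironment()
            .getDefaultScreenDevice()
            .getDefaultConfiguration();
        VolatileImage img = renderFrame(gc, null, 64, 64);
        System.out.println("rendered: " + (img != null));
    }
}
```

Note that the loop has to surround the whole rendering pass: if the surface contents are lost partway through, everything you drew that frame is gone and must be redrawn.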

But think about your average application: there's the back buffer and screen that you write to often, so you want those rendering-to operations to go really really fast. But then there's a bunch of other images like icons, sprites, whatever, which you really only write to once or occasionally, but from which you would like to copy often.

This is where Managed Images come in (our new catch-phrase which, roughly translated, means "we will try our darnedest to accelerate this for you"). Here, you create an image however you need to, start working with it, and internally we will recognize that these copying operations can go much faster using an accelerated version, so we will just create that cached version for you. You don't need to manage the image and you don't need to know how these operations are happening; you just keep calling your rendering operations and let us take care of the pesky details.

Now, the fine print: as of 1.4.*, we hooked up only certain parts of the API to create these managed images. Specifically, you need to create an image either by calling one of the create*Image methods:

GraphicsConfiguration.createCompatibleImage(w, h)
GraphicsConfiguration.createCompatibleImage(w, h, transparency)
Component.createImage(w, h)
or by loading the image, ala:

new ImageIcon(...).getImage()

(because ImageIcon currently uses Toolkit.getImage() under the hood, with the old MediaTracker functionality thrown in for free). Images that you get through other means, such as ImageIO-created images or any image created explicitly through calling new BufferedImage(), are not managed, and thus will not benefit from under-the-hood acceleration possibilities.
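As a sketch of the image-loading route, here is the ImageIcon approach in context (normally you would point it at a real image file on disk; to keep this self-contained, the demo writes its own tiny PNG first):

```java
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.ImageIcon;

public class IconLoadDemo {
    public static void main(String[] args) throws Exception {
        // Write a tiny throwaway PNG so the demo can run anywhere.
        File f = File.createTempFile("sprite", ".png");
        f.deleteOnExit();
        ImageIO.write(new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB),
                      "png", f);

        // Loading through ImageIcon goes through Toolkit.getImage(),
        // so the resulting Image is eligible for management. The
        // ImageIcon constructor also blocks until loading completes
        // (the MediaTracker functionality mentioned above).
        Image img = new ImageIcon(f.getAbsolutePath()).getImage();
        System.out.println(img.getWidth(null) + "x" + img.getHeight(null));
    }
}
```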


So in the current implementation of Java2D, the advice posted by ajsutton in response to Part I of this BufferedImage article is well-taken: if you want to take advantage of possible acceleration for your image, use a compatible image (or one of the other means above). Then we will attempt to accelerate this for you. Another benefit of using a compatible image is that you will get an image that is "compatible" (thus the name of the method above) with the display device you are rendering to, which saves pixel format conversion during the copy loops.
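A small sketch of that advice (the helper name is mine; the headless fallback is just there so the snippet runs anywhere, and is not something a real GUI app would need):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.image.BufferedImage;

public class CompatibleImageDemo {
    // Creates an image in the display's own pixel format, so copies
    // to the screen need no per-pixel format conversion and the image
    // is a candidate for management/acceleration.
    static BufferedImage createSpriteImage(int w, int h) {
        if (GraphicsEnvironment.isHeadless()) {
            // No display to be compatible with; plain image fallback.
            return new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        }
        GraphicsConfiguration gc = GraphicsEnvironment
            .getLocalGraphicsEnvironment()
            .getDefaultScreenDevice()
            .getDefaultConfiguration();
        return gc.createCompatibleImage(w, h);
    }

    public static void main(String[] args) {
        BufferedImage sprite = createSpriteImage(32, 32);
        // Draw into it once...
        Graphics2D g = sprite.createGraphics();
        g.setColor(Color.GREEN);
        g.fillOval(0, 0, 32, 32);
        g.dispose();
        // ...then copy from it often; those copies are the operations
        // we try to accelerate for you.
        System.out.println(sprite.getWidth() + "x" + sprite.getHeight());
    }
}
```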

(A further caveat is that not all image types that you get from the above methods are acceleratable. For example, if you create an image with the flag Transparency.TRANSLUCENT then we do not currently accelerate that image and you end up going through software rendering loops regardless. Look for this to change as the library evolves and we try to accelerate more and more standard yet nifty features of the API).

Okay, so that's the state of things now: use BufferedImage for all of your condiment image needs, and if performance is particularly important to you, then using one of the variants mentioned above is the way to go. But what about the future?

Gosh, I'm glad you asked that. What a great question.

You may already be wondering, in reading the above explanations and caveats: "Why can't they just accelerate everything? Why are only portions of the API managed?" In fact, there is nothing preventing this from happening (other than the most obvious of reasons: time to implement, and lots of other stuff that we've been working on in the meantime). For example, let's say you have a 16-bit BufferedImage you created from scratch and you want to copy it to a 32-bit display. This means that we have to do a pixel-format conversion, so we can't cache the 16-bit version, right? But we can cache a new 32-bit version; we just copy the 16-bit version into our new 32-bit cached version once, and then simply use the cached version thereafter.
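That caching idea can be sketched with plain BufferedImages (this is my own toy illustration of the concept; the real pipeline does the equivalent internally, in accelerated memory):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CachedCopyDemo {
    // Cached 32-bit copy of the 16-bit source image.
    static BufferedImage cached;

    // Pay the 16-bit -> 32-bit pixel-format conversion once, then
    // hand back the converted copy on every subsequent request.
    static BufferedImage getCached(BufferedImage src16) {
        if (cached == null) {
            cached = new BufferedImage(src16.getWidth(),
                                       src16.getHeight(),
                                       BufferedImage.TYPE_INT_RGB);
            Graphics2D g = cached.createGraphics();
            // drawImage performs the per-pixel format conversion.
            g.drawImage(src16, 0, 0, null);
            g.dispose();
        }
        return cached;
    }

    public static void main(String[] args) {
        BufferedImage src16 =
            new BufferedImage(2, 2, BufferedImage.TYPE_USHORT_565_RGB);
        src16.setRGB(0, 0, 0xFF0000); // pure red survives 565 rounding
        BufferedImage c = getCached(src16);
        System.out.println(Integer.toHexString(c.getRGB(0, 0) & 0xFFFFFF));
    }
}
```

The one subtlety a real implementation has to handle (and ours does) is invalidating the cached copy whenever someone renders into the original image again.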

Starting in jdk1.5 (currently in the oven, baking for a while, available at some unspecified (by me) date in the future), we will manage a much wider array of images. In fact, most of the images you can create or load will be managed for you. The code is in there, I've seen it working with my own eyes: BufferedImage objects running as fast as compatible images. It's pretty sweet...

That's all for now. I've glossed over many of the details of images and acceleration, but hopefully I've given a taste of how accelerated images work today and in the future. And hopefully you will be able to use this information to get the fastest and tastiest BufferedImage applications possible.