Earth Animations for Education from NASA Blog



    The Scientific Visualization Studio
    SVS Animations in Your Software
       EarthFlicks, the Example Application
    Step 1: Determining What's Available
       Describing a Capabilities Entry
    Step 2: Forming Requests and Retrieving Images
       Threaded Retrieval
    Step 3: From Image to Texture Map
    Step 4: Displaying and Animating the Images
    Playing with the Globe

    You've seen the beautiful images of Earth from space taken by NASA astronauts and satellites. By viewing a series of images one after another like frames of a movie, we can watch changes to the Earth as they happened. You can follow hurricanes moving towards land and observe ice caps contracting. You can replay and study changes due to weather, natural events, and human activities. NASA has assembled dozens of such animations, all free. This article describes how to write Java software that uses the OpenGL graphics interface to display these images on a 3D globe.

    The Scientific Visualization Studio

    A team of expert scientists, engineers, and artists at NASA's Scientific Visualization Studio (SVS) carefully compose these animations. This group performs their magic at NASA's Goddard Space Flight Center in Maryland. They have been creating animated visualizations of Earth and space science data for over 15 years. They select images from a seemingly infinite collection of NASA imagery. They design each animation to illustrate a particular phenomenon or event. Here are two examples:

    Figure 1
    Figure 1. African fires during 2002

    Figure 2
    Figure 2. Atmospheric water vapor during 1998

    Recently, SVS established an image server to make some of the best earth science animations available programmatically over the internet to software applications. Currently about 80 are available; that will grow to over 130 by October, 2005. You can see the growing list at the SVS server site. The full SVS catalog contains over 2500 animations and is at the SVS home page.

    SVS animations dynamically illustrate the formation of what's shown in these static images: The fires in Africa progress southwards with the season. The vapor and rain (yellow) flow across the globe as the days progress. You can see more single shots like those shown above at the SVS website, but they're really meant to be seen "in motion," wrapped around a 3D Earth. The SVS website doesn't do that. You need a software program running on a local computer to get the full effect. You also need the ability to interact with the model by turning the world around and zooming in and out. In this article, I describe a bare-bones program, in order to keep the code easily understandable. The best program for viewing SVS animations is the free and open source program NASA World Wind.

    SVS Animations in Your Software

    SVS imagery is designed to be used in educational software. The web service responds to HTTP requests by returning an image. There's a strict protocol and request language, with the request parameters tacked on to what we'd consider a conventional URL.

    Four steps are required to retrieve and display SVS animations in 3D:

    1. Query the SVS server to determine the animations that are available from the server's table of contents.
    2. Form and send requests and retrieve the images of the animation. Each image requires a separate request.
    3. Convert the images to texture maps so they can be wrapped around a globe using OpenGL.
    4. Display the images in sequence to "play" the animation.
    EarthFlicks, the Example Application

    The code I'll use to demonstrate and explain all this is a small Java application called EarthFlicks, which is available for download at the end of this article. EarthFlicks creates two windows: one for selecting and retrieving animations from SVS, the other for showing and playing them. Figure 3 shows what it looks like:

    Figure 3
    Figure 3. EarthFlicks

    Selecting an animation in the table and clicking the Import button causes EarthFlicks to retrieve and cache locally all the images for that animation. When they are subsequently played, EarthFlicks recognizes them in the cache and draws the images from there, rather than from the SVS server.

    EarthFlicks has been kept as simple as possible in order to keep its code clear and useful in a tutorial. Even so, retrieving and playing SVS animations requires a lot of code, more than I would have guessed before I started writing EarthFlicks. Not all of that code can be listed in this article. The listings that do appear cover the important details unique to the task; the rest is documented in the source files themselves, which are available at the location mentioned above.

    Step 1: Determining What's Available

    As of July 2005, SVS holds 90 animations and 25 static images, such as the Blue Marble imagery of a cloudless Earth and really cool composites of the Earth at night. This collection is growing and is expected to contain 130 animations by October 2005.

    The server advertises the available animations in its capabilities document. This XML document includes the names of the images, the URLs for requesting them, the image formats the server can provide, and descriptions of their sizes, the regions of the Earth they cover, and other information that may or may not be useful, depending on how the images are to be used. The server speaks WMS, the Web Map Service protocol, an ISO standard for web services that serve maps and other geographic information. The specification is at the OpenGIS specification page. There's also a tutorial I wrote at The Code Project that's aimed at C# programmers but describes WMS in general. The standard is in its third revision, 1.3.0, which was finalized in 2004. Industry and government have been using WMS for about three years.

    WMS defines two primary requests and their parameters. One request, GetCapabilities, asks for a server's table of contents, as above. The other, GetMap, asks for a specific image. The first part of either request is the server's address and the path to its WMS web service. This is everything before the question mark in the link above. The second part, the URL's query string, contains the actual request and its parameters. An image request is similar to the capabilities request, but has many more parameters, as I describe below.

    The simplicity of this scheme really impresses me. It makes it possible to use the standard network-software infrastructure that a run-time platform already provides. The classes and libraries a programmer needs to issue the requests are merely the ones they'd use to create and send any HTTP request. In Java, that's simply the java.net.URL class.
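    To make that concrete, here is a minimal sketch of requesting a server's capabilities document with java.net.URL. The server address shown is only a placeholder; the real address of the SVS WMS service is given in the SVS documentation.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;

    public class CapabilitiesRequest
    {
        public static void main(String[] args) throws Exception
        {
            // Everything before the "?" is the server address and WMS path;
            // the query string carries the actual request.
            String server = "http://example.nasa.gov/wms"; // placeholder address
            URL url = new URL(server + "?SERVICE=WMS&REQUEST=GetCapabilities&VERSION=1.3.0");

            URLConnection connection = url.openConnection();
            BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()));

            // The response is the XML capabilities document.
            String line;
            while ((line = reader.readLine()) != null)
                System.out.println(line);
            reader.close();
        }
    }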

    Describing a Capabilities Entry

    An image request needs some of the information included in the capabilities document. Example 1 shows an excerpt from the SVS server's capabilities document. It describes just one of the available animations, a sequence of images showing the smoke flow from the large Southern California fires in the fall of 2003. One frame of that animation is shown in Figure 4.

    Figure 4
    Figure 4. Aerosols from California fires

    Example 1. The partial Layer entry for the California Fires animation

     <Layer opaque="0" noSubsets="1" fixedWidth="640" fixedHeight="384">
       <Name>3132_21145</Name>
       <Title>Aerosols from 2003 Southern California Fires (640x384 Animation)</Title>
       <EX_GeographicBoundingBox>
         <westBoundLongitude>-160</westBoundLongitude>
         <eastBoundLongitude>-60</eastBoundLongitude>
         <southBoundLatitude>10</southBoundLatitude>
         <northBoundLatitude>58</northBoundLatitude>
       </EX_GeographicBoundingBox>
       <Dimension name="time" units="ISO8601" default="2003-11-01">
         2003-10-23/2003-11-01/P1D</Dimension>
     </Layer>

    Animations and single images are called layers in WMS. The attributes of the Layer element indicate that its images contain transparent pixels (opaque=0), that images must be requested only at the latitudes and longitudes given in the EX_GeographicBoundingBox element (noSubsets=1), and that the server will always return an image that is 640 pixels wide and 384 pixels high (fixedWidth=640 and fixedHeight=384).

    The Name element uniquely identifies this sequence of images on the server. This is not a name you'd show an application user, however. That's why the layer definition also contains a Title element. Some image sequences span the entire globe, but others, like this one, focus on a particular region. That region is identified by latitude and longitude in the element EX_GeographicBoundingBox.

    So where is the sequence that forms the animation? That is implied in the Dimension element. Notice that the Dimension element's body is a string with three parts separated by slashes ("/"):

    2003-10-23/2003-11-01/P1D
    The first part indicates the first date and time of the sequence, the second part the final date and time, and the third part specifies the period (P) of the images between. In this case, the period is one day (P1D). What all this tells us is that the server holds separate images corresponding to every day from October 23, 2003 to November 1, 2003. We can request any or all of those.

    Other animations might have dimensions in days, hours, months, or minutes, or a combination of those. They may have dimensions other than time, too, but I don't go into that here. The full syntax for dimensions is described in the WMS specification.
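    To make the dimension syntax concrete, here is a small sketch (using the modern java.time classes rather than anything from the EarthFlicks source) that expands a dimension string such as 2003-10-23/2003-11-01/P1D into the individual dates that can be requested. It also handles the comma-separated lists of ranges described below.

    import java.time.LocalDate;
    import java.time.Period;
    import java.util.ArrayList;
    import java.util.List;

    public class TimeDimension
    {
        public static List<LocalDate> expand(String dimension)
        {
            List<LocalDate> dates = new ArrayList<LocalDate>();

            // A dimension may contain several comma-separated ranges.
            for (String range : dimension.split(","))
            {
                String[] parts = range.trim().split("/");
                LocalDate start = LocalDate.parse(parts[0]);
                LocalDate end = LocalDate.parse(parts[1]);
                Period period = Period.parse(parts[2]); // e.g., P1D, P8D

                for (LocalDate d = start; !d.isAfter(end); d = d.plus(period))
                    dates.add(d);
            }
            return dates;
        }

        public static void main(String[] args)
        {
            // Prints 2003-10-23 through 2003-11-01, one date per day.
            for (LocalDate d : expand("2003-10-23/2003-11-01/P1D"))
                System.out.println(d);
        }
    }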

    Example 1 shows only some of the information about layer 3132_21145. The full entry in the capabilities document also contains an Abstract element that gives a more elaborate textual description of the animation's subject. There are also keywords, attribution (to NASA), a URL for the logo to go with that, and a URL for retrieving a legend image identifying the meaning of the colors in the images. The SVS server documentation has more detailed examples and a description of layer definitions. That documentation is at the SVS document site. The EarthFlicks source code contains classes to download, parse, and make sense of the server's capabilities document.
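    The EarthFlicks parsing classes themselves aren't listed here, but the following sketch shows the general idea of extracting layer names and titles from a capabilities document with the standard DOM API. It is an illustration, not the EarthFlicks code.

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class CapabilitiesSketch
    {
        public static void listLayers(File capabilitiesFile) throws Exception
        {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(capabilitiesFile);

            // Walk every Layer element and print its Name and Title, if present.
            NodeList layers = doc.getElementsByTagName("Layer");
            for (int i = 0; i < layers.getLength(); i++)
            {
                Element layer = (Element) layers.item(i);
                NodeList names = layer.getElementsByTagName("Name");
                NodeList titles = layer.getElementsByTagName("Title");
                if (names.getLength() > 0 && titles.getLength() > 0)
                    System.out.println(names.item(0).getTextContent()
                        + "  " + titles.item(0).getTextContent());
            }
        }
    }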

    Step 2: Forming Requests and Retrieving Images

    The individual images of an SVS animation are retrieved one by one with a separate WMS GetMap request for each image in the sequence. In most cases, those requests vary only in the date and time of the image they ask for.

    Example 2 contains another excerpt from the SVS capabilities document. It describes an animation illustrating the changes in the global biosphere between August 1997 and July 2003. Notice the Dimension element, which is much more complex than the one described in the previous section. Here there are seven date ranges rather than one, separated by commas. Each range specifies contiguous dates, but the ranges are not contiguous with each other. They are in chronological order, however. Note, too, that no times are given in these dimensions, only dates.

    Example 2. The partial Layer entry for the Global Biosphere animation

     <Layer opaque="1" noSubsets="1" fixedWidth="2048" fixedHeight="1024">
       <Name>2914_17554</Name>
       <Title>Global Biosphere from August, 1997 to July, 2003 (2048x1024 Animation)</Title>
       <EX_GeographicBoundingBox>
         <westBoundLongitude>-180</westBoundLongitude>
         <eastBoundLongitude>180</eastBoundLongitude>
         <southBoundLatitude>-90</southBoundLatitude>
         <northBoundLatitude>90</northBoundLatitude>
       </EX_GeographicBoundingBox>
       <Dimension name="time" units="ISO8601" default="2003-07-20">
         1997-08-13/1997-12-27/P8D,
         1998-01-01/1998-12-27/P8D,
         1999-01-01/1999-12-27/P8D,
         2000-01-01/2000-12-26/P8D,
         2001-01-01/2001-12-27/P8D,
         2002-01-01/2002-12-27/P8D,
         2003-01-01/2003-07-20/P8D</Dimension>
     </Layer>

    The WMS request for the first image in the sequence of Example 2 is this:

    SERVICE=WMS&REQUEST=GetMap&LAYERS=2914_17554&FORMAT=image/png
        &WIDTH=2048&HEIGHT=1024&CRS=CRS:84&BBOX=-180,-90,180,90&TIME=1997-08-13

    If you send this request to the SVS server, you should eventually see a map of the world with some pretty colors on it, as shown in Figure 5. It's the first image we're going to wrap around a globe later. The image is three and a half megabytes in size, so it takes a while to download if you're on a slow connection.

    Figure 5
    Figure 5. The first image of the Global Biosphere animation

    The WMS request for this image is very similar to that for the California Fires sequence of the previous section. The parameter names are the same, but the values differ for LAYERS, WIDTH, HEIGHT, BBOX, and TIME. The layer name, image width, image height, and latitudes and longitudes are different, and the request asks for a time specific to this animation.

    The request to retrieve the next image in the sequence is this one:

    SERVICE=WMS&REQUEST=GetMap&LAYERS=2914_17554&FORMAT=image/png
        &WIDTH=2048&HEIGHT=1024&CRS=CRS:84&BBOX=-180,-90,180,90&TIME=1997-08-21

    The only difference from the request for the first image is the TIME, which is eight days later than the previous time. We know to ask for this date because the period specification in the first range of the layer's Dimension element is P8D. (See Example 2.) SVS will currently return images only in PNG format. Even so, that format must be included in the map request: FORMAT=image/png.

    EarthFlicks retrieves all of an animation's images by looping through its dimensions and issuing a request for each image. It uses an internal class named MapRequest, which is passed the individual parameter values and then creates the full URL for the request. Images are retrieved by creating a java.net.URL object and invoking its openConnection() method. See the run() method of the Retriever class in Example 3 for the code that does this.
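    EarthFlicks' MapRequest class isn't listed here, but the following sketch illustrates the idea: collect the GetMap parameters once, then build a new URL for each TIME value. The server address is a placeholder, and the class and field names are mine, not those of the EarthFlicks source.

    import java.net.URL;

    public class GetMapRequest
    {
        private final String server;   // e.g., "http://example.nasa.gov/wms" (placeholder)
        private final String layer;    // the layer's Name element, e.g., "2914_17554"
        private final int width, height;
        private final String bbox;     // "minLon,minLat,maxLon,maxLat"

        public GetMapRequest(String server, String layer, int width, int height, String bbox)
        {
            this.server = server;
            this.layer = layer;
            this.width = width;
            this.height = height;
            this.bbox = bbox;
        }

        // Each image in the sequence differs only in its TIME value.
        public URL toUrl(String time) throws java.net.MalformedURLException
        {
            return new URL(this.server
                + "?SERVICE=WMS&REQUEST=GetMap"
                + "&LAYERS=" + this.layer
                + "&FORMAT=image/png"
                + "&WIDTH=" + this.width + "&HEIGHT=" + this.height
                + "&CRS=CRS:84&BBOX=" + this.bbox
                + "&TIME=" + time);
        }
    }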

    Threaded Retrieval

    As you've seen, images can be multi-megabytes of data. Retrieving them is not something to do in an application's user-interface thread. EarthFlicks performs the retrieval in a separate thread. The Retriever class establishes the connection for each image's retrieval and reads the returned stream to capture and save the image bits. Example 3 shows the run() method of the Retriever class, where this is done.

    Example 3. The run() method of the EarthFlicks Retriever class

    package EarthFlicks;

    public abstract class Retriever implements Runnable
    {
        public class Status
        {
            public static final int NOT_STARTED = 0;
            public static final int CONNECTING = 1;
            public static final int RETRIEVING = 2;
            public static final int POSTPROCESSING = 3;
            public static final int DONE = 4;
            public static final int INTERRUPTED = 10;
            public static final int ERROR = 11;
        }

        // These change with each call to start(),
        // some change as retrieval progresses.
        private int status = Retriever.Status.NOT_STARTED;
        private int bytesRead;
        private int contentLength;
        private String contentType;
        private Exception error;
        private Thread thread;
        protected String destination;

        // These variables are set by the client.
        private java.net.URL url;
        private Object clientObject;
        protected RetrieverClient client;

        public void start()
        {
            this.reset();
            this.thread = new Thread(this);
            this.thread.start();
        }

        public void run()
        {
            try
            {
                this.status = Status.CONNECTING;
                this.begin();

                if (Controller.getCaches().containsUrl(this.getUrl()))
                {
                    // The image is already in the local cache; use it from there.
                    java.io.File destFile = Controller.getCaches().getFile(this.getUrl());
                    this.destination = destFile.getPath();
                    this.contentLength = (int) destFile.length();
                    this.bytesRead = this.contentLength;
                    this.status = Status.POSTPROCESSING;
                }
                else
                {
                    // Retrieve the contents.
                    int numBytesRead = 0;
                    java.net.URLConnection connection = this.url.openConnection();
                    connection.setAllowUserInteraction(true);
                    java.io.InputStream incoming = connection.getInputStream();
                    this.contentLength = connection.getContentLength();
                    this.contentType = connection.getContentType();

                    java.io.File destFile = java.io.File.createTempFile(
                        "EFL_", makeSuffix(this.contentType));
                    destFile.deleteOnExit(); // It's copied later.
                    this.destination = destFile.getPath();
                    java.io.OutputStream outgoing = new java.io.FileOutputStream(destFile);

                    this.status = Status.RETRIEVING;
                    byte[] buffer = new byte[4096];
                    try
                    {
                        while (!Thread.currentThread().isInterrupted() && numBytesRead >= 0)
                        {
                            numBytesRead = incoming.read(buffer);
                            if (numBytesRead > 0)
                            {
                                this.bytesRead += numBytesRead;
                                outgoing.write(buffer, 0, numBytesRead);
                                this.update();
                            }
                        }
                    }
                    finally
                    {
                        if (incoming != null)
                            try {incoming.close();} catch (java.io.IOException e) {}
                        if (outgoing != null)
                            try {outgoing.close();} catch (java.io.IOException e) {}
                    }
                    this.status = Status.POSTPROCESSING;
                }
            }
            catch (Exception e)
            {
                this.status = Status.ERROR;
                this.error = e;
            }
            finally
            {
                if (Thread.currentThread().isInterrupted())
                    this.status = Status.INTERRUPTED;
                this.update();
                this.inThreadEnd();
                if (this.status == Status.POSTPROCESSING)
                    this.status = Status.DONE;
                this.end();
            }
        }

    Before issuing an HTTP request to the server to retrieve an image, the run() method checks whether that image is already in a local image cache that EarthFlicks maintains. Many of the animations consist of tens or hundreds of images. To play an animation at any useful frame rate, those images must be local, probably on the computer's own disk rather than on even a fast local network. The code for the image cache is included in this article's code bundle.
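    The cache code isn't listed here either, but the idea is simple enough to sketch: map each request URL to a file in a cache directory. This is only an illustration of the concept, not the cache class from the code bundle.

    import java.io.File;
    import java.net.URL;

    public class SimpleImageCache
    {
        private final File cacheDir;

        public SimpleImageCache(File cacheDir)
        {
            this.cacheDir = cacheDir;
            this.cacheDir.mkdirs();
        }

        // Derive a file name from the URL; a hash keeps it filesystem-safe.
        private File fileFor(URL url)
        {
            return new File(this.cacheDir,
                Integer.toHexString(url.toString().hashCode()) + ".png");
        }

        public boolean containsUrl(URL url)
        {
            return this.fileFor(url).exists();
        }

        public File getFile(URL url)
        {
            return this.fileFor(url);
        }
    }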

    Notice that the run() method of Retriever includes calls to methods named begin(), update(), and end(). These methods cause invocation of callback methods in the retriever's client, the code that created and started the retriever, so that the client code can update progress bars and status readouts and otherwise monitor the retrieval. Example 4 below shows these method implementations in Retriever. Since the retrieval is running in a thread separate from the UI, the callbacks cannot be invoked from the retrieval thread; they must be invoked from the UI thread if the client is to safely update the user interface. Retriever therefore only schedules them to be invoked later in the UI thread. The methods that actually invoke the client's callbacks are doBegin(), doUpdate(), and doEnd(), also listed in Example 4.

    Example 4. The callback methods of the EarthFlicks Retriever class

    private void begin()
    {
        final RetrieverEvent event = this.makeEvent();
        java.awt.EventQueue.invokeLater(new Runnable() {
            public void run() {doBegin(event);}
        });
    }

    private void update()
    {
        final RetrieverEvent event = this.makeEvent();
        java.awt.EventQueue.invokeLater(new Runnable() {
            public void run() {doUpdate(event);}
        });
    }

    private void end()
    {
        final RetrieverEvent event = this.makeEvent();
        java.awt.EventQueue.invokeLater(new Runnable() {
            public void run() {doEnd(event);}
        });
    }

    protected void doBegin(RetrieverEvent event)
    {
        if (this.getClient() != null)
            this.getClient().begin(event);
    }

    protected void doUpdate(RetrieverEvent event)
    {
        if (this.getClient() != null)
            this.getClient().update(event);
    }

    protected void doEnd(RetrieverEvent event)
    {
        if (this.getClient() != null)
            this.getClient().end(event);
    }

    Step 3: From Image to Texture Map

    Once an image is fully retrieved, it could be read and translated by the javax.imageio.ImageIO class and displayed in a window as a java.awt.Image. But it would be only two-dimensional, like the pictures above. The SVS images are most compelling when wrapped around a 3D globe. That requires conversion of the image to a texture map that can be displayed with OpenGL using the Java Bindings for OpenGL (JOGL) API.

    Two things must be done to convert a PNG image to a texture map:

    1. Put the image into a format that OpenGL recognizes. Unfortunately, PNG is not one of those.
    2. Ensure that the image width and height are each a power of two.

    OpenGL wants RGB or RGBA images. They can be in any one of many possible bit depths and color precisions. In EarthFlicks I convert the PNG images to 3- or 4-component unsigned-byte format, using 3-component RGB for images without transparency and 4-component RGBA for images with transparency. Whether or not a PNG image contains transparency values is indicated within the file itself.

    EarthFlicks opens the image with the javax.imageio.ImageIO class, which copies the image into a java.awt.image.BufferedImage object, and then reads the buffered image one pixel at a time, writing to a new file an unsigned byte for each of the pixel's red, green, blue, and alpha components. The code is shown in Example 5, which lists a method of the MapRetriever class, derived from the Retriever class described in the previous section.

    Example 5. MapRetriever code converting a retrieved image to RGB or RGBA

    private void convertToRgb(RetrieverEvent event) throws Exception
    {
        // Read the image from the file written during retrieval.
        java.awt.image.BufferedImage image =
            javax.imageio.ImageIO.read(new java.io.File(this.destination));

        // Transform the image's orientation to that of OpenGL.
        java.awt.geom.AffineTransform xform =
            java.awt.geom.AffineTransform.getScaleInstance(1, -1);
        xform.translate(0, -image.getHeight(null));
        java.awt.image.AffineTransformOp op = new java.awt.image.AffineTransformOp(
            xform, java.awt.image.AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        image = op.filter(image, null);

        // Convert the image to RGB or RGBA.
        java.awt.image.Raster raster = image.getRaster();
        int width = image.getWidth();
        int height = image.getHeight();
        java.awt.image.ColorModel colorModel = image.getColorModel();
        byte[] convertedImage = null;
        String suffix = "." + Integer.toString(width) + "x" + Integer.toString(height);

        if (colorModel.hasAlpha())
        {
            suffix += ".rgba";
            int size = 4 * width * height;
            convertedImage = new byte[size];
            int index = 0;
            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    Object pixel = raster.getDataElements(x, y, null);
                    byte red = (byte) colorModel.getRed(pixel);
                    byte green = (byte) colorModel.getGreen(pixel);
                    byte blue = (byte) colorModel.getBlue(pixel);
                    byte alpha = (byte) colorModel.getAlpha(pixel);
                    convertedImage[index++] = red;
                    convertedImage[index++] = green;
                    convertedImage[index++] = blue;
                    convertedImage[index++] = alpha;
                }
            }
        }
        else
        {
            suffix += ".rgb";
            int size = 3 * width * height;
            convertedImage = new byte[size];
            int index = 0;
            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++)
                {
                    Object pixel = raster.getDataElements(x, y, null);
                    byte red = (byte) colorModel.getRed(pixel);
                    byte green = (byte) colorModel.getGreen(pixel);
                    byte blue = (byte) colorModel.getBlue(pixel);
                    convertedImage[index++] = red;
                    convertedImage[index++] = green;
                    convertedImage[index++] = blue;
                }
            }
        }

        // Save the converted image to a file.
        java.io.File newDest = java.io.File.createTempFile("EFL_", suffix);
        newDest.deleteOnExit(); // Cached later if needed.
        java.io.OutputStream outgoing = null;
        try
        {
            outgoing = new java.io.FileOutputStream(newDest);
            outgoing.write(convertedImage);
        }
        finally
        {
            if (outgoing != null)
                try {outgoing.close();} catch (java.io.IOException e) {}
        }
        this.destination = newDest.getPath();
    }

    Performing the conversion this way and forming unsigned-byte color components eliminates any headaches from dealing with format variations. The ImageIO class takes care of converting the colors from whatever form they're in to simple red, green, blue, and alpha eight-bit values. In fact, this scheme will work with any file format that ImageIO can read, not just PNG. The disadvantage to this scheme is that it's probably the slowest way to do it. Some format variants map one to one with RGB or RGBA when read by ImageIO, but EarthFlicks makes no attempt to detect or take advantage of those situations.

    In Example 5, the image is also "flipped" so that its origin is at the lower left rather than the original upper left. Most image formats use an upper-left origin, but OpenGL assumes a lower-left one.

    The result of this conversion is an RGB or RGBA image ready for its dimensions to be scaled to powers of two if necessary, using the OpenGL utility library's gluScaleImage() function. This function can scale an image to any size, but EarthFlicks has it scale each dimension to the next larger power of two (360 becomes 512, for example). The conversion and scaling steps produce an RGB or RGBA image suitable for OpenGL to use as a texture map. EarthFlicks saves these images as files on the local disk, with a filename suffix of .rgb or .rgba. Their widths and heights are encoded in their file names.
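    The next-larger-power-of-two computation is simple; here is a sketch of it, with the gluScaleImage() call indicated only in a comment because its exact JOGL binding depends on the JOGL version in use.

    public class TextureSizing
    {
        // Returns n if n is already a power of two, otherwise the next larger one.
        public static int nextPowerOfTwo(int n)
        {
            int p = 1;
            while (p < n)
                p <<= 1;
            return p;
        }

        public static void main(String[] args)
        {
            int width = 360, height = 180;
            int newWidth = nextPowerOfTwo(width);   // 512
            int newHeight = nextPowerOfTwo(height); // 256

            // With JOGL, the RGB/RGBA bytes would then be rescaled with something like:
            //   glu.gluScaleImage(GL.GL_RGBA, width, height, GL.GL_UNSIGNED_BYTE, srcBuffer,
            //                     newWidth, newHeight, GL.GL_UNSIGNED_BYTE, destBuffer);
            System.out.println(width + "x" + height + " -> " + newWidth + "x" + newHeight);
        }
    }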

    Step 4: Displaying and Animating the Images

    SVS images typically do not include an underlying image of the Earth's surface beneath the data they show. This is not a problem for images that completely cover the globe, but many SVS animations make no sense unless they're superimposed on a globe with base imagery of Earth terrain. EarthFlicks addresses this by implementing an Earth model covered with NASA's Blue Marble imagery, shown in Figure 6.

    Figure 6
    Figure 6. The Blue Marble imagery

    Several resolutions of the Blue Marble imagery are available. EarthFlicks uses three of those, automatically switching to the most appropriate and efficient as the user zooms in and out.

    The SVS images can be overlaid onto the Earth's surface. There are many ways to do this. I chose to create a solid sphere matching the base globe, and apply the SVS images as texture maps to that. The code for this is in the EarthFlicks Shell class, in the Globe package. That class computes the vertices of triangle strips to represent the surface, and assigns appropriate texture coordinates to fit each SVS image to the correct latitudes and longitudes.
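    The Shell class itself is too long to list, but the following sketch shows the heart of the geometry computation: points on a unit sphere covering a latitude/longitude region, each paired with a texture coordinate that is simply its fractional position within that region. The method and variable names are mine, not the Shell class's.

    public class ShellSketch
    {
        // Fills parallel arrays of (x,y,z) vertices and (s,t) texture coordinates
        // for an nLat-by-nLon grid covering the given region, on a unit sphere.
        // The caller allocates vertices[nLat*nLon*3] and texCoords[nLat*nLon*2].
        public static void tessellate(double minLat, double maxLat,
                                      double minLon, double maxLon,
                                      int nLat, int nLon,
                                      double[] vertices, double[] texCoords)
        {
            int v = 0, t = 0;
            for (int i = 0; i < nLat; i++)
            {
                double fLat = (double) i / (nLat - 1);
                double lat = Math.toRadians(minLat + fLat * (maxLat - minLat));
                for (int j = 0; j < nLon; j++)
                {
                    double fLon = (double) j / (nLon - 1);
                    double lon = Math.toRadians(minLon + fLon * (maxLon - minLon));

                    vertices[v++] = Math.cos(lat) * Math.cos(lon); // x
                    vertices[v++] = Math.cos(lat) * Math.sin(lon); // y
                    vertices[v++] = Math.sin(lat);                 // z

                    texCoords[t++] = fLon; // s runs west to east across the image
                    texCoords[t++] = fLat; // t runs south to north (lower-left origin)
                }
            }
        }
    }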

    Applying the texture to the shell is both simplified and complicated by the texture cache EarthFlicks uses. To best use the computer's graphics memory, EarthFlicks maintains a cache that keeps the most recently used textures in memory. This cache is responsible for opening texture files when they are first needed, managing them in memory, and binding them to OpenGL texture identifiers. The TextureCache class does all of this. The code for that class also shows how EarthFlicks specifies textures and makes them current in OpenGL.

    I've found that the fastest way to get an image from disk to memory is to memory-map the image file. Java's java.nio.MappedByteBuffer makes this very easy. EarthFlicks maintains a singleton mapped-file cache that maps and manages memory-mapped image files. The implementation is in the MappedFileCache class in the Globe package.
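    Here is a minimal sketch of the memory-mapping technique itself; the MappedFileCache class wraps this in cache bookkeeping.

    import java.io.File;
    import java.io.FileInputStream;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedImage
    {
        public static MappedByteBuffer map(File file) throws java.io.IOException
        {
            FileChannel channel = new FileInputStream(file).getChannel();
            try
            {
                // The returned buffer can be handed directly to OpenGL texture calls.
                return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            }
            finally
            {
                // The mapping remains valid after the channel is closed.
                channel.close();
            }
        }
    }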

    This mapped-file cache is the third cache I've described for EarthFlicks. To recap, there's one cache to hold the converted images to be used as texture maps permanently on disk. There's another cache to manage run-time texture loading and binding to OpenGL. Finally, there's a third cache to manage memory-mapped image files.

    Playing with the Globe

    At this point, there exist sequences of images from the SVS server converted from PNG to images that can be used by OpenGL as texture maps, all cached on a local disk. There's also code to map these images into memory, bind them to OpenGL, and display them on a globe with a pretty Blue Marble background surface. But these images do not "play" as animations yet.

    Playing the images is probably the simplest part of all. The ImagePlayer class in EarthFlicks accepts an image sequence and plays it by setting a timer and displaying consecutive images of the animation at each timer tick. Its start() method kicks off the animation. When all the images in the sequence have been displayed, ImagePlayer invokes a callback to EarthFlicks' Controller class to return the user interface to its pre-animation state, ready for the animation to be run again.

    If an image sequence is specific to a region of the globe, ImagePlayer positions the viewpoint over that region before playing the animation. It also has methods to stop, pause, and resume the animation, and to remove it from the globe.
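    ImagePlayer isn't listed here, but the timer-driven approach is easy to sketch. The class below is an illustration with made-up names, using a javax.swing.Timer so that each tick runs on the UI thread.

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.Timer;

    public class PlayerSketch
    {
        private final int frameCount;
        private final Timer timer;
        private int currentFrame = 0;

        public PlayerSketch(int frameCount, int millisPerFrame)
        {
            this.frameCount = frameCount;
            this.timer = new Timer(millisPerFrame, new ActionListener() {
                public void actionPerformed(ActionEvent e)
                {
                    showFrame(currentFrame); // make the frame's texture current and redraw
                    if (++currentFrame >= PlayerSketch.this.frameCount)
                    {
                        timer.stop();
                        playbackFinished();  // e.g., notify a controller to reset the UI
                    }
                }
            });
        }

        public void start() { this.timer.start(); }
        public void stop()  { this.timer.stop(); }

        protected void showFrame(int frame) { /* display the frame's texture */ }
        protected void playbackFinished()   { /* return the UI to its pre-animation state */ }
    }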

    One last thing to do is provide a means for the user to turn the globe and zoom in and out. EarthFlicks implements a very simple mechanism for this: a rectangular region of latitude and longitude serves as a viewing window onto the globe. Moving this region makes the globe appear to rotate. Contracting or expanding it has the effect of zooming in and out. This is not a full 3D manipulation model that lets the user "fly around" in any orientation. The user is always looking directly at the globe, and the viewing direction is always perpendicular to the Earth's surface. If EarthFlicks were to represent the Earth more realistically by applying surface elevation, this manipulation model would be inadequate.

    This manipulation scheme does have a couple of advantages: it allows for a very quick, high-level clipping scheme to avoid drawing regions of the globe that are not in view. And it keeps the user oriented to the globe so that "up" is always north.
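    Here is a sketch of that viewing-window idea, again with names of my own choosing rather than the EarthFlicks code: panning shifts a latitude/longitude rectangle, and zooming scales it.

    public class ViewWindow
    {
        private double centerLat = 0, centerLon = 0; // degrees
        private double width = 360, height = 180;    // degrees of longitude/latitude visible

        // Moving the window makes the globe appear to rotate.
        public void pan(double dLat, double dLon)
        {
            this.centerLat = Math.max(-90, Math.min(90, this.centerLat + dLat));
            this.centerLon = this.centerLon + dLon;
            if (this.centerLon > 180) this.centerLon -= 360;
            if (this.centerLon < -180) this.centerLon += 360;
        }

        // Contracting the window zooms in; expanding it zooms out.
        public void zoom(double factor)
        {
            this.width = Math.min(360, this.width / factor);
            this.height = Math.min(180, this.height / factor);
        }
    }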


    The SVS animations are designed to be used in visualization programs for education. I've described in this article how to retrieve the images composing those animations and display them on a 3D model of the Earth. There isn't room here to describe or even list all the code involved in doing that, so I've tried to identify, explain, and illustrate the tasks unique to this type of application and the SVS imagery. Much of this information, of course, can also apply to images and image sequences available from other WMS servers.


    The author can be reached at

    Creation of this article and the example application was funded by NASA.