


Reanimation Blog

Posted by chet May 23, 2009

Romain Guy and I will hit the stage again at JavaOne this year to talk about animation.

Our talk this year will be a bit different from past years. First of all, it will be less about nitty-gritty code and more about concepts as we explore higher-level ideas of animation and what we can learn and apply from traditional cartoon animation to animated GUIs. Secondly, Romain will be dressed in a gorilla suit.

Our session is on Wednesday at 11:05 and again on Friday at 2:30 (this repeat session may be given in Esperanto).

I haven't had time to see what else is on for the week, but I did notice that there's a "Filthy Rich Clients" session on Thursday morning after the keynote. I've never seen this session from the audience point of view; I might check it out. Otherwise, I'll be floating around sessions that I won't be allowed into because I haven't bothered to pre-register for them. I will also be speaking, demoing, and generally hanging out at the Adobe booth. Stop by, say hi, bring chocolate cookies.

Now, back to working on demos for our talk.


Not Dead Yet Blog

Posted by chet Apr 21, 2008

Romain and I will be returning to JavaOne this year to give another talk. We thought about presenting on horticulture, or the effect of air travel on the Amazon rain forest, but in the end we decided at random to talk about this topic: Filthy Rich Clients: Filthier, Richer, Clientier. And though it was tempting to just present stuff we'd already written (with 550 pages of material in the book, there's a lot to, well, steal for presentations), we thought it would be more fun to do new stuff. Who needs sleep? We hope you can make it to the session.

For anyone that signed up for the session already, we hope you got word that the conference has shuffled us out of our original slot on Tuesday and put us on Thursday afternoon instead (4:10 - 5:10 pm). Maybe they just realized that everyone enjoys pre-enrolling for sessions so much that they figured you'd want another chance to log in and see how that change affected your schedule. And if you didn't pre-register for the session yet, you might want to do so; I've heard that the room is filling up (It's not clear why - maybe it was the "free money!" mention in the abstract. Or maybe it's Romain's groupies. Again.).

There's other stuff happening as well. For one thing, there's an author signing for our book. Maybe someone can explain to me why you actually want us to deface your book. It's a nice book, clean and professional looking, and you want us to write in it? I understand having a famous author sign a book they wrote; heck, it'll be worth more when they're dead. But the authors of Filthy Rich Clients? The only thing worth more when I'm dead will be my life insurance policy. Anyway, we'll be there at the bookstore, signing anything you put in front of us except a blank check.

For other happenings that week, check out my other blog.


Crystal Methodology Blog

Posted by chet Jan 17, 2008

Not only am I a huge fan of software design patterns, I'm also strongly supportive of process in software. Process makes us strong. Process enables us to achieve highly metric-driven quality levels. Process allows us to attend meetings throughout every day, ensuring that any coding time we get will be that much more intense because it is necessarily so short and focused. And finally, process allows us to draw pretty charts and graphs in endless presentations.


Or, as I like to say every morning when I wake up, "Software Is Process." For without the process, where would software engineers be but in their offices, cranking away on code, pretending to be productive?


Now that people have had time to understand and incorporate the important patterns I discussed in my earlier Male Pattern Boldness article, it seems high time to tackle the larger topic of Software Processeseses. The field of Software Methodology is rife with theories, names, buzzwords, and descriptions that improve our tawdry geek lives constantly by letting us focus on that which makes us most productive: studying and then trying to implement completely new software processes to attempt the same job that we could have been actually doing in the meantime.




First, here is a historical perspective. Traditional software implementation was a rather simple and straightforward process, resembling something like the following:


                Figure 1: Comprehensible, and therefore wrong, software process.


But this process was flawed by its inherent simplicity; if everyone could understand and follow it, what hope did the industry have in creating more meetings and process documentation? Changes were suggested for this methodology, resulting in more comprehensive models like this one:


                Figure 2: More complicated, and therefore better, software process.


Eventually, some rogue elements of the community came up with a different process model, based on fundamental programming philosophies:


                Figure 3: Simplified Software Process (SSP) model. Pretty dumb. Incredibly popular.            


But the field has been somewhat quiet lately, leading to more coding than is really good for us, so I feel motivated to introduce some of my favorite new process models into the community. There are obviously more than I can cover in a simple article like this, probably deserving an entire bookcase of unread tomes, but these will have to do for now as I have to go brainstorm with my team in an offsite about how we can be more effective (this week's task is to come up with a mission statement).



Scum

In the Scrum development model, the focus is on short iterations and constant communication. The Scum model, however, focuses on the individual. In particular, each engineer works completely on his or her own, producing code at an alarming rate. Changes are integrated and merged willy-nilly, causing untold breakage due to the complete lack of communication. At each fault, the offending code, putback, and engineer are identified as scum and are tossed out of the project (this step is called "Hack-n-rack"). The resulting code and team are thereby better over time, having weeded out the weak members through natural selection.

As its inventor, Dr. Feen Bookle, PhD, Mrs, QED, JRE, said at its unveiling at the Conference On Terribly Important Academic Philosophies and Theories on Software Process Methodology Discoveries (CTEAPTSPMD), "Scum will always float to the top. Skim it off and you've got just the juicy bits left. Plus the bottom-feeders."

Fragile Programming

As I mentioned in my earlier article, Fragile Programming is an important element of the Delicate software pattern. It is related to Agile programming, which is typified by small development cycles that are designed to meet the reality of constant requirements churn. But in the newer, and therefore better, Fragile methodology, each iteration is started from scratch based on only the latest requirements. Instead of building upon the existing code base, which has presumably attained some amount of stability and robustness through its existence and evolution, Fragile projects rewrite the entire project anew in each cycle, thus generating brand spanking new products that adhere more closely to the latest whim of the client. This process results in software that is tuned exactly to what the client asked for, and the resulting instability of the code base can thus be blamed directly on the client, allowing a convenient excuse for the failing development team.

Conference-Driven Development (CDD)

This brave new methodology, suggested to me by Charles Nutter, looks like it has serious potential. As anyone who has ever developed software and demos for a conference knows too well, there's nothing like a looming keynote or session deadline to enforce good coding standards, carefully thought-out APIs, and integrated feedback from the larger community. Developers typically end up at conferences with completely spec'd out products that only lack a smattering of documentation before being released onto the masses like white cat hair on a black sweater. It's just that whole "Quality" part of the release process that drags it down for the next couple of years and keeps it from being an instant product reality.

With the increasing volume of conferences around the world, I see CDD as becoming more and more interesting. Products that hinge releases upon the mad rush of pre-conference development will be able to ship new versions every month, or even faster. Sure, they'll go out with bugs, no documentation, and a complete lack of testing, but the demos will rock!

Rabid Application Development

Rapid Application Development helped move developers from the more stodgy development processes of earlier decades when people were dumber onto quicker models of development, based on fast prototyping work. Rabid Application Development takes this a step further. Instead of using prototypes as ideas to help with future, more stable work, the prototypes are the product, and are checked in as soon as they are complete, or in many cases, sooner. The key to Rabid Development is to keep the engineering team going at such a frenetic pace in coding and checking in code that nobody, including the client, ever realizes what a complete load of crap they've produced. This model is used throughout most university CS courses and has become the default process basis for all homework assignments. It is also the mainstay of software startups everywhere.


The Cliff Model

Developed as a response to the Waterfall model, where software goes through various stages of development during its life cycle, the Cliff model leaps suddenly from the starting point (known as the Harebrained Idea) to the end (known as the Product, but informally referred to by Cliff teams as the Corpse). These projects are generally executed overnight with several pots of coffee by developers with no social lives. They start from an idea in a chat message during a World of Warcraft session and result in the engineers checking in the Corpse by the start of work the next morning.

No output of the Cliff model has ever been usable outside of negative case studies, but interest in this approach persists by those engineers still lacking in better things to do at 2 AM.


Oops

Related to Object Oriented Programming (OOP) methodologies, the Oops system seeks to develop reusable objects, but never quite makes it. Previous components are never exactly what they need to be, thus requiring that the functionality be rewritten from scratch, resulting in a general "Oops!" exclamation from the teams involved. But as all engineers know, and all managers hate, it's always more fun writing things from scratch anyway. The Oops methodology is the one most favored by all programmers.

                Clean Your Room (CYR)

This methodology comes as a response to the Cleanroom Software Engineering process, which strives to produce software whose quality can be certified. Clean Your Room, on the other hand, takes a different tack, basing its philosophy upon the teenage kid tenet: "Why should I clean my room when it's just going to get dirty again?" In this process, the focus is upon implementing cool, new features (called Wall Posters after the typical decorations in most teenager bedrooms) and not on tests, documentation, or bug-fixes. The belief with these other traditional elements of software products is that as the software changes, tests would be obsoleted, documentation would have to be rewritten, and new bugs would be introduced. So what's the point in doing the work twice? All CYR projects are run under the theory that, when the product is completely done, the other non-coding aspects of the product will eventually be seen to (hopefully by someone else). Since no CYR project has ever reached completion, this has yet to be proven in practice.

                Testosterone-Driven Development

Like its namesake Test-driven Development, which is known for the requirement of engineers writing tests before code, Testosterone-driven Development focuses on testing first. But it does so in an extremely aggressive manner, requiring every engineer to produce entire test suites, test frameworks, and test scripting languages for every line of product code written, including whitespace and comments. Engineers violating this contract are taken out back where they have the crap kicked out of them. Any code found to have bugs not caught by tests results in the offending engineer having to go three rounds with the project manager (with the engineer being handcuffed and blindfolded during the match). Finally, any bug found in tests will result in the engineer's body never being found.
                Expectations are high from this newcomer to the field, although to date none of                 the products using this process has left any survivors.

I realize that the above list is quite a small sampling of the many wonderful methodologies which are possible. But I hope you will try at least some, or maybe even all, of these out in your team, throwing the entire project into disarray every couple of years while you reinvent everything. If you find either success or complete abject failure in your attempts, I encourage you to write a paper, speak at conferences, and publish books on the topic. Then form a consulting company that helps other development teams try to adopt the same methodologies.


Software products are a journey. They aren't just about the code you see at the end; they're about the path taken to get there. And the paths not taken. And the signs on the road. And the maps used. And the gas station attendants asked for directions when you got lost. And the hikes through the wilderness when you ran out of gas. And the ceaseless talk radio that your parents played while you got carsick in the back all over your sister. Process is the car that gets you there; you just need to pick the right one, pay too much for it, and then build it from scratch first.


Scene Graph: Demo-licious Blog

Posted by chet Jan 16, 2008

In the dark ages, before the Scene Graph project was public, development on the library was coupled with development of demo applications. These demos were written for various reasons: to test new functionality, to get a feel for the API and development experience, to have benchmarks for performance tuning, and to have stuff to show when we talked about it at conferences.


When we made the Scene Graph library public, we also wanted to make our demo applications public. But there was one big problem: unlike the library itself, we hadn't written those applications with the public, or even anyone but ourselves, in mind. And publishing code that you haven't actually sanity-checked can be unwise. So we posted the library with just the Nodes demo (quickly cleaned up for the occasion, like putting a tuxedo on a homeless guy), and the intention of going back to get the other demos in a publishable state.


        Now, a month later, we think we're there; we've created a             new project on java.net to host all of our public demos, including the Nodes         demo already published as well as the jPhone demo discussed in             my previous blog entry. You can go to that project, run the demos, see the         code, sync up to the source base, and play around with all of them.


        Feedback (on the demos or the Scene Graph in general) should use the forums and         aliases on the Scene Graph project,         not the demos project (at least until we figure out how to disable those elements         on that project). It's helpful to funnel all of the feedback through one         channel.


        So what are you reading this for? Go to the             scenegraph-demos project and enjoy the new stuff. It's demo-licious.


        A few weeks ago, in a quest for more performance benchmarks, the scene graph team         asked for a demo that was representative of some of the graphics and animations that         might be typical in a consumer-oriented application.


I had recently run across an interesting video on the Apple site, iPhone: A guided tour. The device in the video showed exactly the kinds of things we were looking for: fades, moves, scales, transitions... all the whizzy animations that consumers love. So I took a whack at doing something similar with the scene graph.


        Introducing: JPhone:




        Okay, so it's not really the same thing. For one thing, I just used some icons I         had lying around which don't look as good at the large size required for this interface.         I'd love to use the iPhone icons instead, but I'm still waiting for the Apple lawyers         to call me back. And waiting. And waiting. (Steve?)


        Also, I guess I have to admit it: the jPhone demo is not a phone. Even if         you pick up your monitor and hold it next to your ear, all you'll hear is the sound         of your brain screaming in pain from the pixel radiation. (And the ocean. Isn't it         funny how you can always hear the ocean? Or maybe it's just sound waves.)         But mimicking a phone         wasn't the point; it was all about the user interface.


        Finally, it'll become obvious when you start to use it that, well, there are no         'applications' behind the icons; it's just the same dummy screen that comes up again         and again. But once more, my petty rationalization comes in handy; the demo was         supposed to be about GUI animations, not actual functionality.


        But hey, It's A Demo!


        Anyway, on with the article. Note that my discussion below is all about the jPhone         demo. I don't actually know how things operate under the hood on that phone thingie         from Apple; all I know I learned from watching that video. But I do know how the         jPhone demo works, so I'll stick to that.


        The GUI: Main menu, tray menu, and applications


        There are three different GUI areas in the display, used at different times. What         I call the "main menu" is the grid of icons arrayed out across the first screen,         starting at the top. Each of these icons accesses a different application (each         of which looks uncannily similar in JPhone) when clicked. The "tray menu" at the bottom has four additional icons, which are much the same as the         icons in the main menu, but for common functionality that the user might want to access more frequently.         Finally, the "application" screens are those displays that come         up after the user clicks an icon. For example, when the user clicks on the Calculator         button, they probably expect a calculator application screen to become active.


        Main Menu


        The main menu of the application consists of a grid of icons, four columns wide.         The menu is created in the cleverly-named createMenu() method.    
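As a rough sketch of what a method like that does (a hypothetical reconstruction, not the demo's actual code: GAP, CAPTION_H, iconNames, and createIcon() are illustrative names):

        // Hypothetical sketch: lay out the icons in a four-column grid
        for (int i = 0; i < iconNames.length; i++) {
            int col = i % 4;                  // four columns wide
            int row = i / 4;
            double xOffset = GAP + col * (ICON_SIZE + GAP);
            double yOffset = GAP + row * (ICON_SIZE + GAP + CAPTION_H);
            createIcon(iconNames[i], xOffset, yOffset);   // builds and attaches one icon
        }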


        Each "icon" consists of both an image and a text caption. So for each icon object         in the scene graph, we create an SGGroup to hold the children,         an SGImage to hold         the image of the icon, and an SGText object to hold the caption.


        The group is created in a single line as follows:

        SGGroup imageAndCaption = new SGGroup(); 

        The image node takes a few more lines, as we need to scale the image appropriately         and then set the image on the node:



        SGImage icon = new SGImage();
        try {
            BufferedImage originalImage = ImageIO.read(
                    JPhone.class.getResource(imageName));   // image source assumed
            BufferedImage iconImage = new BufferedImage(ICON_SIZE,
                            ICON_SIZE, BufferedImage.TYPE_INT_ARGB);
            Graphics2D gImage = iconImage.createGraphics();
            gImage.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            gImage.drawImage(originalImage, 0, 0, ICON_SIZE, ICON_SIZE, null);
            gImage.dispose();
            icon.setImage(iconImage);
        } catch (Exception e) {
            System.out.println("Error loading image: " + e);
        }

    (Note that our image scaling assumes that a one-step bilinear scale will give us sufficient quality, which it does in the case of up-scaling the smaller images we're using for icons. For a more general scaling solution that gives dependable quality and decent performance, check out Chris Campbell's article on The Perils of Image.getScaledInstance().)

            The text node takes a few lines to set up the text rendering and location attributes             appropriately:

        SGText textNode = new SGText();
        textNode.setText(caption);   // caption string assumed from the calling code
        Rectangle2D rect = textNode.getBounds();
        textNode.setLocation(new Point2D.Double(
                (ICON_SIZE - rect.getWidth())/2, ICON_SIZE + 10));
        imageAndCaption.add(icon);       // assemble the group's children (calls assumed)
        imageAndCaption.add(textNode);

    To position each icon in the menu, and to allow the icon to be moved later when it     animates, we create a transform node as the parent of the icon and add that node         to the scene graph:    

        SGTransform.Translate transformNode = SGTransform.createTranslation(xOffset, 
                yOffset, imageAndCaption);
        menuGroup.add(transformNode);   // attach to the menu's group ('menuGroup' name assumed)

        Tray Menu


        The tray menu, initialized in the createTray() method, consists of just four icons at the bottom of the screen. These icons         are mostly like the main menu icons, although they have a different background         and the animation they undergo during transitions is different, so there are differences in their setup.


        First, the tray needs to be positioned on the screen, so the tray group needs an         overall transform. Also, we will fade the tray in and out during transitions from         and to the application screen, so the tray group also needs a filter node         to handle fades. These nodes are set up as follows:

        final SGGroup trayGroup = new SGGroup();
        SGTransform trayTransform = SGTransform.createTranslation(0, 
                SCREEN_H - (1.5 * ICON_SIZE), trayGroup);
        SGComposite opacityNode = new SGComposite();
        opacityNode.setChild(trayTransform);   // wiring assumed: the fade filter wraps the positioned tray

        Next, we have an interesting background for the tray that consists of a basic gray         gradient with a solid darker gray underneath the captions. We set this up with a         couple of shape nodes and a transform node to position the darker area appropriately:

        // Set up the basic tray background
        SGShape trayBackground = new SGShape();
        trayBackground.setShape(new Rectangle(SCREEN_W, 
                (int)(ICON_SIZE * 1.5)));
        trayBackground.setFillPaint(new GradientPaint(0f, 0f, Color.DARK_GRAY,
                0f, (float)(ICON_SIZE * 1.5), Color.LIGHT_GRAY));
        // Set up the darker background for the captions
        SGShape captionBackground = new SGShape();
        captionBackground.setShape(new Rectangle(SCREEN_W, 20));
        captionBackground.setFillPaint(new GradientPaint(0f, 0f, Color.DARK_GRAY,
                0f, 10f, Color.GRAY));
        SGTransform captionBGTransform = SGTransform.createTranslation(0, 
                (1.5 * ICON_SIZE) - 22, captionBackground);

        The icons themselves are set up just like those for the main menu, adding themselves         to the trayGroup object created above; I'll skip the details since the code is similar         to what we saw earlier for the main menu icons.

Applications




        The application objects, created in the createApp() method, are simply images. The         only interesting part about them is the animation of scaling and fading in and out         as they become active and inactive. We load and scale the application image just         like we did for the icons in createMenu() above, so I won't show that code here.         We then add filter nodes for opacity and for scaling, so that we can fade and scale         the application screen during animations:

        // App screen is just an image, scaled/faded in when it becomes active
        final SGImage photo = new SGImage(); 
        // ... code to load/scale/set image removed for brevity ...
        SGComposite opacityNode = new SGComposite();
        opacityNode.setChild(photo);   // wiring assumed: the fade filter wraps the image
        AffineTransform fullScale = new AffineTransform();   // identity: full size
        AffineTransform smallScale = AffineTransform.getTranslateInstance(
                SCREEN_W/2, SCREEN_H/2);
        smallScale.scale(.1, .1);
        smallScale.translate(-SCREEN_W/2, -SCREEN_H/2);
        SGTransform scaleNode = SGTransform.createAffine(smallScale, opacityNode);

        Note that the application node starts out invisible (because it is hidden until         triggered by a mouse click on one of the menu icons), completely transparent (until         it is faded in), and scaled to 10% of its true size (until it is scaled in during         a later animation).

Animations




        The objects set up above were necessary, but the fun part is really the animations that drive the application. The animations are all triggered based on user clicks. A click on the main menu icons will run animations on the main menu, the tray menu, and the application screen simultaneously. A click on the application screen will run all of the same animations - in reverse. Let's see how we set up and run these animations. There are two Timelines created to run these animations (recall from an earlier blog entry that Timeline is a convenient grouping mechanism for animations):

        Timeline menuOutTimeline = new Timeline();
        Timeline menuInTimeline = new Timeline();

        Main Menu Animation


        The interesting part in the main menu animation is in trying to guess what's going         on in the Apple video (and the iPhone interface). As an engineer, I would expect the icons to move in a linear         fashion, sliding horizontally or vertically, perhaps the same every time, or maybe         with the direction set based on which icon was clicked. In fact, one of the engineers         on the team once rewrote my animation code to do just that, assuming that there         was a bug in my code and I must have meant to have this straight-line animation instead of the effect I had implemented.    


        But if you look closely at the video (or, heck, at that iPhone you have in your         pocket), you'll see that the icons move in diagonal trajectories off of the screen,         all shooting off in different directions. For example, here's a stop-action view         captured from the video:


    [Six stop-action frames captured from the video: iPhoneAnim1crop.jpg through iPhoneAnim6crop.jpg]


        Also, it looks the same every time, no         matter which icon is clicked. I would then assume (and have implemented the code         this way in jPhone) that the icons all shoot away from one central point on         the screen. But that doesn't appear to be the case.


        Anyway, I think the animation for jPhone's main menu looks pretty good, if not exactly         what they happen to do on that other device.


        The basic idea in jPhone is to calculate the movement vector based on some movement         "center" (xCenter, yCenter) and the location of each icon (xOffset, yOffset):

        double xDir = xOffset - xCenter;
        double yDir = yOffset - yCenter;

        We can then calculate the new offscreen position of the icon (xOffscreen, yOffscreen)         using this direction vector (I won't show that code here for brevity, but it basically         sets an offscreen value in one coordinate (x or y) and then calculates the other         coordinate based on the direction vector).
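For the curious, here is one plausible way to do that calculation (a sketch only, not the demo's actual code): push the icon past the nearest screen edge along its direction vector.

        // Sketch: extend (xDir, yDir) from the icon's position to an offscreen point
        // (assumes the icon is not exactly at the center, so the vector is non-zero)
        double xOffscreen, yOffscreen;
        if (Math.abs(xDir) > Math.abs(yDir)) {
            // Exit through the left or right edge
            xOffscreen = (xDir > 0) ? SCREEN_W : -ICON_SIZE;
            yOffscreen = yOffset + yDir * (xOffscreen - xOffset) / xDir;
        } else {
            // Exit through the top or bottom edge
            yOffscreen = (yDir > 0) ? SCREEN_H : -ICON_SIZE;
            xOffscreen = xOffset + xDir * (yOffscreen - yOffset) / yDir;
        }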


        Finally, we can create an animation that will move the icon from its position in         the main menu to this offscreen position as follows:

        Clip iconClip = Clip.create(MENU_OUT_TIME,
                transformNode, "translateX", xOffscreen);
        menuOutTimeline.schedule(iconClip, 0);   // scheduling call assumed
        iconClip = Clip.create(MENU_OUT_TIME,
                new BeanProperty(transformNode, "translateY"), yOffscreen);
        menuOutTimeline.schedule(iconClip, 0);

        The transformNode object being animated is the filter node in charge of the icon         location, so this animation will alter the translateX and translateY properties         of that object during the animation. Similarly, we create the opposite animation         to move the icon back to its original location from where it's residing offscreen:

        iconClip = Clip.create(MENU_IN_TIME,
                transformNode, "translateX", xOffset);
        menuInTimeline.schedule(iconClip, 0);   // scheduling call assumed
        iconClip = Clip.create(MENU_IN_TIME,
                new BeanProperty(transformNode, "translateY"), yOffset);
        menuInTimeline.schedule(iconClip, 0);

        Tray Menu Animation


        Tray menu animation is simpler; we just fade the tray out and back in when applications         become active or inactive. To fade the tray out, we create an animation on the opacity         property of the opacityNode object that we created earlier:

        Clip fader = Clip.create((int)MENU_OUT_TIME, opacityNode, 
                "opacity", 1f, 0f);
        fader.addTarget(new TimingTargetAdapter() {
            public void end() {
                trayGroup.setVisible(false);   // hide the faded-out tray (node choice assumed)
            }
        });

        Note that we're actually doing two things here; we're fading out the node from opaque         to completely transparent, and we're setting the visibility of the node to false         when the animation ends. The visibility property of nodes controls whether the nodes         process events or try to render themselves; since the node will be completely transparent         when it is faded out completely, it doesn't make sense for it to participate in         either events or rendering.


        When an application becomes inactive, we run the reverse animation on the tray menu         to make it visible and fade it in:

        fader = Clip.create((int)MENU_IN_TIME, opacityNode, 
                "opacity", 0f, 1f);
        fader.addTarget(new TimingTargetAdapter() {
            public void begin() {
                trayGroup.setVisible(true);   // make the tray visible again before fading in (assumed)
            }
        });

        Application Animation


        Finally, we need to animate the application screen. These animations are similar         to what we've seen before, although in this case we are both fading and scaling         the application screen:

        final Timeline appAnims = new Timeline();
        Clip fader = Clip.create((int)MENU_OUT_TIME, opacityNode, 
                "opacity", .1f, 1f);
        Clip scaler = Clip.create((int)MENU_OUT_TIME, scaleNode, 
                "affine", smallScale, fullScale);
        fader.addTarget(new TimingTargetAdapter() {
            public void begin() {
                photo.setVisible(true);   // reveal the app screen as it fades/scales in (assumed)
            }
        });
        appAnims.schedule(fader, 0);    // run both clips together (scheduling assumed)
        appAnims.schedule(scaler, 0);

        The application animation is also how we start tying the different animations and         events together. First of all, we kick off the overall menuOutTimeline animation         from the application animation by scheduling it as a dependent animation         of the application's fader Clip, as follows:
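(The code block appears to have been lost here; presumably something like the following, using the addBeginAnimation() scheduling method on Clip:)

        fader.addBeginAnimation(menuOutTimeline);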


        Next, we add an attribute to each icon that tells it which animation it is associated         with:

        appIcon.putAttribute("startAnim", appAnims);

        Finally, we add a mouse listener to each icon that will listen for clicks and start         the animation appropriate for that icon:
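(The registration code was missing here; a minimal sketch, assuming SGNode's addMouseListener() method:)

        appIcon.addMouseListener(appStartListener);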


    where the mouse listener is defined to start the animation that we stored as an attribute     on the icon in question:

        SGMouseListener appStartListener = new SGMouseAdapter() {
            public void mouseClicked(MouseEvent e, SGNode n) {
                Animation a = (Animation) n.getAttribute("startAnim");
                a.start();   // start the animation stored on the clicked icon (assumed)
            }
        };

        We do similarly for the reverse animation for the application (I'll skip that code         for brevity and added surprise and excitement).




        Chris might prefer that I not mention this detail, since the final API for the scene         graph will hopefully make this step irrelevant, but for now the only way to animate         transforms such as the moves and scales shown above is for the animation engine         to know how to interpolate AffineTransform objects. It does not know how to do this         by default (because, frankly, it's not typically the way you would want to interpolate         between positions and orientations; it can produce unexpected results, thus the         need for better functionality in the API eventually). So we need to add this capability         to the application. We do this by creating a TransformComposer object         that tells the system         how to interpolate between AffineTransform objects, and we register the composer         as follows:

        static {
            Composer.register(AffineTransform.class, TransformComposer.class);
        }

        I won't go into the details of TransformComposer here, but see             my earlier blog entry on the animation system and the Composer             JavaDocs on the Scene Graph project site             to understand more about Composer.         Basically, a custom Composer simply converts between an arbitrary type and an array         of double values, which the base Composer class then knows how to linearly interpolate         between.
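To make that contract concrete, here is a hypothetical sketch of the conversion idea for Point2D; the method names are illustrative only, not the actual Composer API (see the JavaDocs linked above for the real signatures):

        // Sketch: a Point2D is decomposed into a double[] that the engine can
        // linearly interpolate, then recomposed into a Point2D at each frame
        public class Point2DComposer {
            public double[] decompose(Point2D p) {
                return new double[] { p.getX(), p.getY() };
            }
            public Point2D compose(double[] v) {
                return new Point2D.Double(v[0], v[1]);
            }
        }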




        That's mostly it. If you run the application and click on the icons you can         see the fading, moving, and scaling animations all working together to show a nice,         smooth transition between the menu and application screens.


        Of course, in demos enough is never enough. So we decided to put in one more element         for fun.


        In the Apple video, you'll notice that many of the demos they show are run by this         disembodied hand. It could be the Hand of God, but I don't think that Steve was         in the video. Besides, the hand isn't wearing a black turtleneck.        




        It seemed like our demo needed an element like that, so we created         the handCursor node.


        Handy Cursor


        Custom cursors are fairly easy in Java, but they are also fairly limited. In particular,         your cursor image is limited to 32x32, which doesn't really give us the effect we         were looking for. I want a hand, not a hand-shaped wart. We need a friggin' huge cursor.


        In a traditional Swing application, we could manage this using the glass pane, displaying         an arbitrary image in that overlay on top of the application GUI. But the scene graph makes this         even easier; we just need a shape node.


        First things first: we need to manage the real Swing cursor. Since we cannot use         the actual cursor as our hand, we will instead make the real cursor invisible with         the following code, so that if we can't make the cursor do what we really want,         we can at least get it out of the way:

        BufferedImage emptyImage = new BufferedImage(32, 32,
                BufferedImage.TYPE_INT_ARGB);
        invisibleCursor = Toolkit.getDefaultToolkit().
                createCustomCursor(emptyImage, new Point(0, 0), "empty");

        Next, we will create our handCursor object, parent it to the root node, and add         it as a MouseMotionListener on the application as follows:

        handCursor = new HandCursor(rootNode);
        panel.addMouseMotionListener(handCursor);   // registration assumed; 'panel' is the app's Swing panel

        I won't show the entire HandCursor class (it's frankly not interesting enough), but         the basics are as follows: First, we load the hand image and scale it to an appropriate         size (using code similar to that shown earlier for the menu icons).         Next, we create an SGImage node and a transform node (for moving it), and parent the transform node to the root node. We also make the shape invisible         at first, since the hand cursor is not showing         by default:

        handNode = new SGImage();
        handNode.setImage(handImage);   // the scaled hand image from above ('handImage' name assumed)
        handNode.setVisible(false);     // hidden until the user toggles it on
        handTransform = SGTransform.createTranslation(0, 0, handNode);

        When the application frame detects that the key "h" has been typed, it makes the         default cursor invisible and the hand cursor visible with the following:
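(The code block appears to be missing here; presumably something along these lines:)

        setCursor(invisibleCursor);
        handNode.setVisible(true);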


        Now, all we have to do is track the mouse         position and display the hand node appropriately:

        public void mouseMoved(MouseEvent me) {
            mouseX = me.getX();
            mouseY = me.getY();
            handTransform.setTranslation(mouseX - 160, mouseY - 5);
        }

        (where the hard-coded numbers in setTranslation() position the "hotspot" of the hand         cursor at the tip of the index finger).




        That's pretty much it. There's more code in JPhone, but I think I've covered         all of the interesting scene graph-related pieces above. Play with it, check out         the code, and write a phony scene graph application of your own.


        [Five frames of the jPhone transition animation: jPhoneAnim1.jpg through jPhoneAnim5.jpg]


        Related Information


        JPhone Demo: a Java Web Start application


        JPhone.java: the source code for the main demo class


        TransformComposer.java: the source code for the custom composer helper class


        Scene Graph Demos: The project site for all of the Scene Graph demos posted by the         team (including JPhone)


(This is the conclusion of a two-parter that was begun last week and split in half for no particularly good reason. If you didn't read last week's entry yet, please do. I'll wait. ... Now, are you ready? Then let's get started.)


New Stuff


Most of the changes between the TimingFramework and the scene graph animation library are tweaks on existing functionality, as described in Part 1. But there is also new functionality in the scene graph animation engine. Some of it is functionality that we have wanted in the TimingFramework for some time - we just happened to get around to it in this animation engine first.

Timeline




The Timeline class is probably the biggest new thing since the Timing Framework. It adds a crucial piece of functionality missing from the original library: the ability to group animations, schedule them in a coordinated fashion, and use one single heartbeat for all animations.


"Nice Grouping!"


The previous approach of daisy-chaining animations one-by-one with TimingTrigger             was quite useful for individual sequences of animations. And the new             addBeginAnimation()/addEndAnimation() methods of Clip enhance this capability             significantly. But it's still not what you want for larger systems with entire             groups of animations that need to be coordinated. Instead, you want some             way to create groups of animations             that operate on a single timeline relative to             each other and then schedule that group appropriately             with other animations or other             animation groups. This functionality makes it much easier to build up             more complex and interdependent models of animations.


Timeline enables this capability by allowing you to schedule animations on a given Timeline relative to when the Timeline itself starts. So, for example, if you want to fire off animations a, b, and c 100, 200, and 300 ms after some event occurs, then you can schedule these animations on a single Timeline and start the Timeline when that event occurs:

            // Create the animation group
            Timeline timeline = new Timeline();
            timeline.schedule(a, 100);
            timeline.schedule(b, 200);
            timeline.schedule(c, 300);
            // Later, when the event occurs
            timeline.start();

"Two Hearts Beat as                 One"


One of the biggest recurring constraints that I ran up against with the             TimingFramework was the fact that each Animator started its own Swing Timer by             default. You could change this behavior, with the late-addition class             TimingSource, but it wasn't a convenient way to get what you really             wanted: a single timer for the whole system. You could also get a single-timer kind of behavior by adding multiple             TimingTargets to a single Animator, but this approach only works easily in             situations where you want similar timing and behavior characteristics from all             of these targets; for example, animations of different durations or interpolation             are difficult to             synchronize in this way.


What the system really needed was a single timing source that sent a heartbeat             pulse to all animations. Each animation could then turn that pulse into an             appropriate timing fraction, just as Animator does with its internal             Swing Timer events. But there are a couple of excellent reasons why the             single-pulse-generator model is superior to the multiple-Swing-Timer approach:

  •                 Synchronized animations: If you have several animations in the                 system that are affecting the GUI, or are otherwise related in some way,                 you probably want them all to receive their timing events at the same time                 instead of at slightly different times because their internal Timers are                 kicking off at different intervals. Coordinating                 rendering changes in the single GUI display is a Good Thing; each element animating                 separately from everything else around it could contribute to a more chaotic user                 experience.
  • Resolution and frame rate: Anyone who worked through the gory details in the Resolution section of chapter 12 of Filthy Rich Clients (or anyone that's just worked closely with the Swing Timer) knows that the performance of that Timer is often gated by constraints on the native platform. For example, on Windows XP, the Swing Timer typically has an inter-event rate of about 16 milliseconds. This is because that's the highest rate achievable by the native low-resolution timer upon which the Swing Timer depends (through its use of Object.wait()). This problem is compounded when there are several Timers firing off at the same time, because the timing events are all gated by that underlying wait() resolution, and the system cannot actually process the wait()s in parallel.
                    For example, say you have one Animator that you'd like to set up with a resolution of                 10 ms. On Windows XP, you'd actually get timing events at a resolution of 16 ms                 instead. Now, suppose you create a second Animator, also with a resolution of 10                 ms. Since the underlying Swing Timer processes the timing events one by one,                 and since the gating resolution of the timer is what it is, you'll actually                 end up with an effective resolution of 32 ms for each of these Animators.
                    Now consider the model of the new animation engine, where there is just one single                 underlying                 timer running, sending out timing pulses to all animations in the system.                 This is more like                 the multiple-TimingTargets-per-Animator model where the only gating factor in                 resolution is that of the core timer itself, not how many Animators are                 waiting for the timing events.

Both of these capabilities of Timeline, grouping and the system-wide heartbeat,             are managed through the various schedule() methods of Timeline. You schedule an             animation to run with some offset from the beginning of a Timeline, and the             Timeline ensures that that animation will start when it needs to and thereafter             receive             the master heartbeat events from the system. You can schedule other animations             all on that same Timeline or on other Timelines and then schedule             the Timelines themselves to start when appropriate.

            Timeline t1 = new Timeline();
            // ... schedule animations on Timeline t1 ...
            Timeline t2 = new Timeline();
            // ... schedule animations on Timeline t2 ...
            Timeline t3 = new Timeline();
            // schedule t1 to start 100 ms after t3 starts
            t3.schedule(t1, 100);
            // schedule t2 to start 200 ms after t3 starts
            t3.schedule(t2, 200);
            // start t3
            t3.start();

An important point to note here is that Timelines and Clips may both be             scheduled on a Timeline. The schedule() methods actually             take an Animation             parameter, where Animation is a superclass of both Clip and Timeline. So a             Timeline itself can be started relative to some point in another             Timeline. In this way, we can create hierarchical groups of animations             that are automatically triggered according to how we scheduled them together.
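As a quick illustration of that hierarchy (a sketch only; fadeClip is an illustrative Clip, and t3 is the Timeline from the previous example):

            // Mix a Clip and a whole Timeline group on one parent Timeline
            Timeline master = new Timeline();
            master.schedule(fadeClip, 0);    // a Clip, starting with the Timeline
            master.schedule(t3, 500);        // the t3 group, 500 ms in
            master.start();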




MotionPath
            One constraint of the TimingFramework is that while time could be interpolated             non-linearly (using a non-linear Interpolator object), space was always interpolated linearly.             For example, if you set up an animation between points (x0, y0) and (x1, y1), then             the system would interpolate intermediate (x, y) points linearly between             these points; all points calculated             by the system (by the old Evaluator class) would lie along a straight line drawn between             the two endpoints.


            The new MotionPath class makes it possible to create keyframes, and an Evaluator             to interpolate between them, for curved paths.


            What Now? Whither TimingFramework?


            A logical question to ask now is, what about the Timing Framework? Is there a future             in that project? Or should I start using the scene graph animation library instead?


            I think the answer to these questions is still being figured out (since the scene             graph library itself is still very much in-development), but here are a             couple of reasonable ways to think about it, depending on your timeframe:

Short Term




            The TimingFramework is in good shape in general. There was a reason that I declared             it 1.0 (I didn't just randomly decide to add .44 to the previous release numbered             0.56). So please continue to use it as you see fit in your work for now. There are             some minor issues that crop up occasionally that should probably be addressed in             that library (although I admit I have been a bit preoccupied on Scene Graph and             other things for a while and haven't been as responsive to issues as I'd like).


            Scene Graph animation, on the other hand, is very much in flux right now. We're             reasonably happy with the functionality of the library, but I wouldn't be shocked             to see some more refactoring take place as we continue working on the Scene Graph             project in general. So while I encourage you to take a look at it and play around             with it, I wouldn't bet on the current implementation of it quite yet.

Long Term




            I think (and this is where it gets fuzzy, because we're a bit busy focusing on the             short-term right now and just trying to finish up the Scene Graph library in general)             that the scene graph animation engine, or something very like it, will probably             be the single library for animation eventually. It just doesn't make sense to have             two such libraries, at least not             coming out of the same group at Sun and not when             one is essentially a subset of the other. When we get there and what the eventual,             single library looks like when we're there is still a mystery. But long term, I             see these libraries converging, and probably looking more like the Scene Graph version             than Timing Framework.


But in the meantime, please use the Timing Framework while you investigate and start playing with the Scene Graph animation engine. Send us feedback on what we could improve to make sure that the library we eventually end up with supports what you need from it.


            By the Way


With all of this power to do cool, whizzy animations in Desktop Java             applications, I'm thinking that "Scene Graph" isn't really a good enough name. Here's a possible             alternative that I'm proposing, as of now:




But it still feels like something's missing. Maybe it's just not graphic enough.


Been There, Scene That Blog

Posted by chet Jan 4, 2008

(This is Part 1 of a two-part blog. It's been broken in two for no particularly good reason other than it was getting a bit long for a single entry and I always like a bit of suspense and tension in my technical articles - don't you? Look for Part 2 in the next few days.)


You may have already heard about the Scene                 Graph project that we released last             month on java.net.        


In case you haven't heard about it yet, here's some information about it:


            We released the project last month on java.net.


Since the team was pretty busy at the time (I was at                 JavaPolis talking about the             project, among other things, and the team was cranking away on the actual             code), we haven't really gotten around to blogging about it yet to tell people             more about the project (hey, the code's out there - does anyone need anything             more?). Fortunately for us,                 Josh seems to blog in his sleep, so there's at least been some information             floating             out there in the blogosphere. But maybe now that the library is available, we should actually             talk about it to help everyone understand what it is, how it works, and what             you can do with it.


Hans Muller is working on an intro/overview on the subject. Chris Campbell has a blog in the works on scene graph image effects. And I was tasked with (surprise, surprise) a discussion of animation.




I think that the best way to describe the animation engine in the scene graph             project is that it is like the next major version of the                 Timing Framework. Maybe it's because I'm a graphics geek, but I always find             it easier to understand concepts through pictures. So here's a technical diagram             illustrating how the scene graph animation engine relates to the Timing Framework.             It's a bit technical, but hopefully you'll get the point.




The similarity between the two libraries is not, obviously, a coincidence. We started with the 1.0 version of the Timing Framework and changed it to suit our needs for the scene graph, where "suit our needs" means that we refactored the API to improve upon various things and added functionality that has so far been lacking in the Timing Framework.


Rather than explain how the Timing Framework works, I'd encourage you to check             out the project, the docs linked on the project site, the demos for that project, and             the copious other amounts of information on the library (including demos,             chapters in the Filthy Rich Clients             book, and so on). I'll assume that             anyone reading past here has some passing familiarity with the Timing             Framework.


I'll step through the major categories of differences between the             TimingFramework and this new animation library, to give a sense of what's new             in the scene graph. I'll show some sample code along the way, although I'd encourage you to check out             the Nodes example on the             scene graph site to see animation usage in action.


Refactoring: What's Changed?


Clip is the new Animator


After living for way too long with the bureaucratically dull name         TimingController             that came from the original version of the Timing Framework, we finally             renamed that class to the friendlier Animator name that the library enjoys             today in version 1.0. But now it's changed again. That             class, the most fundamental in the whole library, is now called Clip. This             was done to help people that might be familiar with the concepts covered by             that class in other toolkits and environments where the name Clip is common. It's also             closer to what the object is; yes, it animates things, but it's really just a             short, atomic animation which is meant to be strung together with other             animations in an application, much like short clips are edited together to make             an entire movie (or, in the case of the             Fantastic Four sequel, to make a travesty).


Besides, Clip saves 4 characters every time you type it. Pretty cool, huh?             That's like several milliseconds of coding time per day that we've saved you, our             user. All part of the             job, providing service and performance with a smiley.


Clip has a handful of factory methods for the common cases:

            create(int duration, double repeatCount, Object target, 
                   String property, T... keyValues)

            create(int duration, double repeatCount, Property property, T... keyValues)

            create(int duration, double repeatCount, TimingTarget target)

            create(int duration, Object target, String property, 
                   T... keyValues)

            create(int duration, Property property, T... keyValues)

            create(int duration, TimingTarget target)

            (The Property object will be explained later, but think of it as a replacement for             the previous PropertySetter object in the Timing Framework).


            Like Animator, Clips begin running when the start() method is called (although             Clip also has more involved and powerful scheduling capabilities, discussed later):

            // Fade in myObject over a half second
            Clip clip = Clip.create(500, myObject, "opacity", 0f, 1f);

PropertySetter is now BeanProperty


The old way of having the animation engine automatically set object/property             values was through the PropertySetter class, which was constructed with a given             object and property-name pair. We felt that this was a bit too constrictive in             a world where there might be other ways that one might want to set values on             objects; what if someone has an object without JavaBean-like getter/setters on             it?


So we defined a new interface, Property, that abstracts out the concepts and methods for getting and setting the value of some property:

    public interface Property<T> {
        public T getValue();
        public void setValue(T value);
    }

Then we refactored the old PropertySetter class as the new class BeanProperty, an implementation of that interface that specifically defines properties in terms of JavaBean object/name pairs:

    public class BeanProperty<T> implements Property<T> {
        public BeanProperty(Object object, String propertyName);

        public T getValue();
        public void setValue(T value);
    }
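To see why the extra abstraction helps, here's a minimal sketch of a custom Property implementation for an object that exposes a plain public field rather than bean-style accessors. (The ParticleState class and its alpha field are hypothetical, invented just for this illustration.)

    // Hypothetical ParticleState object: it has a public 'alpha' field
    // and no JavaBean getter/setter for it.
    public class ParticleAlphaProperty implements Property<Float> {
        private final ParticleState particle;

        public ParticleAlphaProperty(ParticleState particle) {
            this.particle = particle;
        }

        public Float getValue() {
            return particle.alpha;   // read straight from the public field
        }

        public void setValue(Float value) {
            particle.alpha = value;  // write straight to the public field
        }
    }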

Also, note that we separated the functionality of setting values on a Property object from animating those values. These concepts overlapped in the previous PropertySetter object, which was an implementation of TimingTarget and handled both animating the value in question as well as setting it on the specified object. Now, the new KeyFrames object handles animating a property, and the resulting value is then sent into the appropriate Property object.




Interpolator implementations are now Interpolators factory methods


One of my favorite refactorings from the Timing Framework was the change that Chris Campbell made to interpolators. The base interface, Interpolator, is the same:

    public interface Interpolator {
        public float interpolate(float fraction);
    }

The change is that there used to be several implementation classes of Interpolator (LinearInterpolator, DiscreteInterpolator, and SplineInterpolator), and now there is just one class, Interpolators, that provides five factory methods for the different types of interpolators:

    public class Interpolators {
        static Interpolator getLinearInstance();
        static Interpolator getDiscreteInstance();
        static Interpolator getSplineInstance(float x1, float y1, float x2, float y2);

        static Interpolator getEasingInstance();
        static Interpolator getEasingInstance(float acceleration, float deceleration);
    }

The linear, discrete, and spline interpolators are virtually the same as before, in both construction and operation. But there is now an additional "easing" variety that takes the place of the setAcceleration()/setDeceleration() behavior in the old Animator class. This simplifies the use of acceleration/deceleration, and makes it more consistent with the use of other interpolator functionality.


Also, note that Clip uses an easing interpolator by default (with default factors of .2f/.2f), as opposed to the old linearly-interpolated Animator object. Linear interpolation makes more sense as a default from an analytical standpoint (it seems less arbitrary than some particular choice of easing factors), but frankly, most of the animations you'll create should be non-linearly interpolated. With a linear default, you would either not get the behavior you should, or (as in all of my animation demos) you'd have to keep writing the same boilerplate code to set the easing factors to get the behavior right.
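To see what that default buys you, here's a quick sketch that samples a linear and an easing interpolator partway through an animation, using only the factory methods listed above (the .2f/.2f factors match the Clip default):

    // Sample both interpolators a quarter of the way through the animation.
    Interpolator linear = Interpolators.getLinearInstance();
    Interpolator easing = Interpolators.getEasingInstance(.2f, .2f);

    float fraction = .25f;
    linear.interpolate(fraction); // returns .25: the fraction passes through unchanged
    easing.interpolate(fraction); // returns less than .25: the animation is still ramping up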




KeyFrames is now a collection of KeyFrame objects


The previous structure of KeyFrames was closely modeled on the SMIL approach, where there was a list of n KeyTimes, a list of n KeyValues which corresponded to the times in KeyTimes, and an optional list of (n-1) Interpolators for the intervals between the times in KeyTimes. The basic functionality of that system is unchanged, but it has been reorganized in a way that we think makes more sense for the API.


Now there is a KeyFrame object which holds a time/value pair as well as an optional Interpolator. The Interpolator is used for the interval between the previous KeyFrame and this one (and is ignored if this is the first KeyFrame, since there is no preceding interval). A KeyFrames object in the new system is just a collection of these individual KeyFrame objects. There is also more latitude now for creating animations without start/end values. For example, if a KeyFrames object is defined without a KeyFrame at time t=0, then the animation derives the value at t=0 when the animation is started. Additionally, if there is no KeyFrame at time t=1, then the animation simply holds the value set by the preceding KeyFrame.


KeyFrames implements the TimingTarget interface, and KeyFrames objects are used as targets of an animation to animate a value and then set it on the Property object with which they were constructed. Note that KeyFrames objects are created either with an explicit Property object or with the object/property pair that is used to construct a BeanProperty object internally:

    static KeyFrames<T> create(Object target, String propertyName, KeyFrame<T>... keyFrames);
    static KeyFrames<T> create(Object target, String propertyName, T... keyValues);
    static KeyFrames<T> create(Property<T> property, KeyFrame<T>... keyFrames);
    static KeyFrames<T> create(Property<T> property, T... keyValues);
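Putting these pieces together, here's a sketch of animating a property through several key values (myObject is a stand-in for whatever bean you happen to be animating):

    // Animate myObject's "opacity" through three key values over one second.
    // KeyFrames implements TimingTarget, so it works directly as a Clip target.
    KeyFrames<Float> frames = KeyFrames.create(myObject, "opacity", 0f, 1f, .5f);
    Clip fade = Clip.create(1000, frames);
    fade.start();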

Evaluator is now Composer


The system in the Timing Framework that handles interpolation between values of various types uses the Evaluator class (or, rather, subclasses of Evaluator). Each Evaluator subclass is defined to take two values of a certain type and a fraction, and to interpolate linearly between those values. Chris abstracted this functionality another level and created the Composer class. It performs similar functionality, but requires from its subclasses only the work of breaking a type down into component pieces that are stored as double values. That is, the Composer machinery internally handles the simple parametric calculation that interpolates between two double values; all that Composer subclasses need to do is marshal values between their native representation and a representation as a series of doubles.


As in the Timing Framework, most types that you would care about animating are already handled by the system. But if you do need to animate a variable of a type that is unknown to the system, then you need to create your own Composer subclass and register it with the system. It will then be used whenever the system encounters that type and looks for an appropriate Composer. Instead of putting sample code for a subclass here, I'll just defer to the JavaDocs and the source code, which do a good job of showing what any new subclass would need to do in order to work within this system. But assuming you've defined some custom Composer, MytypeComposer, registration is easy:

    Composer.register(new MytypeComposer());

TimingTrigger is replaced by Clip dependency scheduling


One piece of functionality that is completely gone is the old TimingTrigger. This class was the mechanism for sequencing animations in the Timing Framework; you daisy-chained animations by triggering one animation to start when another ended. But now the Clip class has this capability built in, with more bells and whistles. Instead of going through a Trigger to sequence animations, you can tell the animations directly that you want to sequence them on each other. Also, there is more flexibility in how the animations are sequenced; you can schedule an animation to start on another animation's start or end, and can do either one with a specified offset value:

    addBeginAnimation(Animation anim);
    addBeginAnimation(Animation anim, long offset);

    addEndAnimation(Animation anim);
    addEndAnimation(Animation anim, long offset);

(where Animation is the superclass of both Clip and the as-yet-undiscussed Timeline class).


So, for example, you can have clipB start 100 ms after clipA ends with the following call:

    clipA.addEndAnimation(clipB, 100);
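And because the scheduling calls live on the clips themselves, longer chains read naturally. A sketch, assuming clipA, clipB, and clipC have already been created:

    // clipB starts as soon as clipA ends; clipC starts 250 ms after clipB begins.
    clipA.addEndAnimation(clipB);
    clipB.addBeginAnimation(clipC, 250);
    clipA.start(); // kicks off the whole sequence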

Minor Changes


There are various minor changes to the system that you will notice as you try it out. It's not worth going through all of them (because it'd take me a long time to comb the API for every change, for one thing), but they should be fairly self-explanatory when you encounter them. For example, the old Animator.INFINITE constant, which was used to define unending animations, has become Clip.INDEFINITE. Not a big change, but it seemed more semantically correct.
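So an unending animation now reads like this sketch (again, myObject is just a hypothetical bean with an opacity property):

    // Pulse myObject's opacity forever: INDEFINITE is the repeatCount argument.
    Clip pulse = Clip.create(500, Clip.INDEFINITE, myObject, "opacity", 0f, 1f);
    pulse.start();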


(That's it for this entry - check back next week for the gripping conclusion to this discussion, as we go over some of the new bits in the scene graph animation library that are not part of the current Timing Framework.)


My JavaPolis07 Slides Blog

Posted by chet Dec 19, 2007

Review Review Blog

Posted by chet Nov 14, 2007

I don't like to toot my own horn, but I'm happy if others happen to toot it for me.

Geertjan Wielenga has posted a review of Filthy Rich Clients on his blog.

If you're still wondering whether this book is for you, check out the review. Geertjan does the best job I've seen yet of describing the content and the reasons why any Swing developer should read it.

Or if you're still wondering what Swing is and what book I'm talking about, then I have no idea what you're doing here to begin with.  But I'd encourage you to get the book anyway.


Of Pros and QCons Blog

Posted by chet Nov 8, 2007

I spent Wednesday at QCon, in San Francisco, giving a presentation on Java's new consumer focus and participating on a panel with the auspicious title, What Will the Future of Java Development Be?.

I wrote a bit about one of the panel discussions for The Register, so I won't repeat that stuff here. Instead, I wanted to just repeat my favorite question that came up that evening. It was in my original article for The Register, but it got edited out (maybe it was too off-topic). Lucky for me I have this other outlet specifically designed for off-topic topics.  So here it is:

"A good language changes the way we think about programming. What language change would you like to see that would change the way we think?"

I found the question itself confusing. I mean, if I could imagine something that would change the way I thought, then wouldn't I already think that way? It seemed like I could easily fall into an infinite recursion loop, overrun my stack, and have a brain fault.

All of us had to answer that one, so I mumbled out something about improving the overall platform (not just the language). But mostly, I was busy trying to reboot my brain.


The presentation that Doug Felt, Phil Race, and I did in 2005 has the best description+diagrams of text measurement in Java 2D that I have seen. It's so useful, in fact, that I put a link to the talk in one of the footnotes of the book (footnote 15, p. 78). (By the way, the great text descriptions were all due to Phil and Doug - I was just blathering on about graphics and animation, which should come as a shock to absolutely nobody).

Imagine my surprise when I discovered that the link was dead and that the presentation is no longer available on the JavaOne site. Apparently, there is a 2-year archive policy, so all of the 2005 presentations are no longer stored. Maybe they figure that content that old can't possibly be relevant anymore, but for presentations on APIs that live longer than 2 years, that's not necessarily true.

Anyway, I am posting the talk here mostly so that I can post a hopefully more permanent link in a corrected footnote (for the third printing, which is apparently going to press soon!). But I encourage you to take a look at the presentation if you're interested in text measurement (or Graphics Effects or Text Rendering or Printing, which we also cover in the slides). People that follow my blog and book will see some earlier versions of Filthy Rich topics in my slides. For example, the original work in Animated Transitions came out of work related to this presentation - it just took a bit more time to settle down into the utility library that is discussed in the book and published on the project site. I was already thinking about filthy rich content; I just needed a catchy name for it...

Here's the presentation: Advanced Java 2D for Desktop Applications.



The JavaOne Call for Papers deadline always sneaks up on me. It's like jet lag; one minute you're productively cranking out code, then suddenly you're asleep on the keyboard, drooling on the spacebar.


Like previous years, we want to encourage as much external participation in the conference as possible. We know there are great Desktop Java applications being written and deployed out in the real world; we'd like you to come talk about them at the conference. Case studies, techniques, tricks, in-depth discussions of technologies, frameworks and architectures for productive development, whizzy cool effects - anything that others want to learn about is fair game.


You only have until November 16th (that's next Friday, for anyone currently time-confused by the Daylight Saving Time switch last weekend), so get your abstracts in now. Submitting a proposal isn't that much work - you just need to put enough information in the abstract and the outline for the track team to understand what you're covering, how you're going to do it, and why people would want to attend the talk.


So wake up, wipe the drool off the keyboard, and submit that proposal. Help us create and present a great Desktop track this year!


Male Pattern Boldness Blog

Posted by chet Oct 24, 2007

Move It! Blog

Posted by chet Oct 22, 2007

Introducing Animated Transitions, a new library for the easy creation of animated segues between application states.


It's been a long slog, from initial demos of the technology in a session on "Advanced 2D" at JavaOne 2005, to use of an early version of the library in the Aerith application, to finishing off the library and creating more demos exercising it for the book Filthy Rich Clients, to getting legal approval for pushing the actual source code (an exercise over the last several months that was not unlike slamming the refrigerator door on my head, over and over. Every day.).

But it's finally done, and the long-awaited day is finally here:


The Animated Transitions library is hereby released


The project is available on java.net at http://animatedtransitions.dev.java.net with a BSD license.

The library is fully described in Chapter 18 of Filthy Rich Clients. That chapter includes a complete description of the library's API, detailed explanations of two sample applications that use the library, and some nitty-gritty details on how the library internals work.

But because there are probably a couple of people left on the planet that do not yet have a copy of the book (no idea how this happened. Maybe it's because we have been so quiet about it. We should really talk more about it), and because I'm such a nice guy and all, I wrote up a short tutorial on the basics of using the library, along with a new demo that shows the basics in action. You can find that tutorial in the java.net article, "Create Moving Experiences with Animated Transitions".

In fact, here's a web-started version of the demo so that you can see it in action. Click on the handy image below and run it. Click on the More/Less buttons to see what it's all about. Note: There are some artifacts reported on the Mac, perhaps related to the way they treat layout and the panels that contain the buttons.



Play around with it. Check out the article and the accompanying demo. Check out the demos on the book's website. Write your own demos. Or, even better, use the library in your actual applications. Make those applications more dynamic and help your users actually understand the interfaces they're beset with.

Go on: Move it!


Filthy Download Blog

Posted by chet Oct 15, 2007
