Debugging Swing Blog


    Developing Swing applications is not a simple task. There are usually multiple ways to achieve a required result, and in most cases the correct solution requires some "unwritten knowledge." Unfortunately, it is very easy to write Swing applications that break various rules--rules that are not enforced by the runtime--which results in visual artifacts, unresponsive or frozen UIs, and unnecessary CPU consumption. This article shows a few simple techniques that can help debug these problems in even the most complicated Swing applications.

    The Swing Threading Model

    John Zukowski's article "Swing Threading and the Event-Dispatch Thread" provides an excellent overview of the Swing threading model. Put simply, there is a dedicated single thread called the event dispatch thread (EDT) that is responsible for processing the interaction between the UI, the operating system, and the user. To summarize in one sentence: if you want to interact with the UI, do it on the EDT and do it quickly (a minimal sketch of this pattern appears after the list below). Failing to comply with this simple rule can result in the following:

    • Visual artifacts
    • Unnecessary CPU consumption
    • An unresponsive UI
    • A permanently frozen UI
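
    As a quick illustration of the rule, here is a minimal sketch (not from the article) of a background thread handing a UI update back to the EDT; doLongComputation is a hypothetical placeholder for any time-consuming work:

    // The long computation runs on a background thread; only the final
    // UI update is posted back to the EDT via SwingUtilities.invokeLater.
    final JLabel status = new JLabel("Working...");
    new Thread(new Runnable() {
        public void run() {
            final String result = doLongComputation(); // hypothetical helper
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    status.setText(result); // UI state changed on the EDT
                }
            });
        }
    }).start();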

    Unfortunately, the rules that govern the interaction with the EDT are not enforced anywhere in Java, neither at compile time nor at runtime (at least as of Java SE 6). In addition, breaking these rules will not always result in buggy behavior--but might embarrass you when you're running a demo for your CEO. This is due to unpredictable thread context switching at the JVM and OS levels. To top it off, the multi-threaded nature of UI applications makes it very hard to apply the usual step-through debugging techniques in the IDE of your choice.

    Breaking the Rules

    In general, there are four rules that are very easy to break:

    • Don't change the UI state outside the EDT.
    • Don't execute long-running operations on the EDT.
    • Don't acquire locks on the EDT.
    • Don't change component state during painting.

    In the following sections we will see examples of violating these rules, along with simple tools that can detect and pinpoint the violating code.

    Changing the UI State Outside the EDT

    This violation has been discussed in great detail by Scott Delap in "Easily Find Swing Threading Mistakes" and Alexander Potochkin in "Debugging Swing, the Final Summary." The latter is a veritable treasure trove of techniques for tracing the interaction between the EDT and the application code, spanning repaint managers, JVMTI, class instrumentation, and aspect-oriented programming.
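
    To give a flavor of the repaint-manager approach mentioned in those articles, here is a minimal sketch (not the actual class from either article) that flags dirty-region requests originating off the EDT:

    // A minimal sketch: every repaint request funnels through addDirtyRegion,
    // so checking the current thread there exposes UI changes made off the EDT.
    // Note that JComponent.repaint() itself is documented as thread-safe, so a
    // production-grade checker (like those in the articles above) filters out
    // stack traces that only pass through repaint(); this sketch reports everything.
    public class ThreadCheckingRepaintManager extends RepaintManager {
        @Override
        public void addDirtyRegion(JComponent c, int x, int y, int w, int h) {
            if (!SwingUtilities.isEventDispatchThread()) {
                System.out.println("UI state possibly changed outside the EDT:");
                new Exception().printStackTrace();
            }
            super.addDirtyRegion(c, x, y, w, h);
        }
    }

    It is installed the same way as the tracing repaint manager shown later in this article, via RepaintManager.setCurrentManager().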

    Executing Long-Running Operations on the EDT

    This single violation has, by far, contributed the most to the perceived slowness of Java on the desktop. Exhibiting this behavior is as simple as the following:

    JButton hogEDT = new JButton("Hog EDT");
    hogEDT.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            // simulate load
            try {
                Thread.sleep(5000);
            } catch (InterruptedException ie) {
            }
        }
    });

    This is all you need to do to make the application unresponsive for the next five seconds to any user or OS interaction (even blocking dirty-region repaint when you switch to some other application and then back), and the default templates that ship with all major IDEs make it even easier. While there are plenty of tutorials (most of them focusing on the SwingWorker class) that show how to offload heavy tasks to worker threads (so that the event dispatch thread is "free" to continue processing subsequent events), there are quite a few Swing applications that continue "hogging" the EDT.
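
    For completeness, here is a minimal sketch (not from the article, assuming Java SE 6's javax.swing.SwingWorker) of offloading the same five-second task so that the EDT stays free:

    JButton niceButton = new JButton("Be nice to the EDT");
    niceButton.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            new SwingWorker<Void, Void>() {
                @Override
                protected Void doInBackground() throws Exception {
                    Thread.sleep(5000); // runs on a worker thread, not the EDT
                    return null;
                }

                @Override
                protected void done() {
                    // runs back on the EDT -- safe to update the UI here
                }
            }.execute();
        }
    });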

    How can this be automatically detected with minimal changes to the application code? Let's define the problem. We want to automatically detect any application code activity that takes more than X milliseconds to run on the EDT. We also want to know exactly what code is running when the EDT is hogged. Furthermore, we want to do this without changing all the event handlers in the application. Fortunately, there is quite a simple solution that combines a custom event queue with new JMX functionality that has been added in JDK 5.0.

    First, let's start with a custom event queue:

    public class TracingEventQueue extends EventQueue {

        private TracingEventQueueThread tracingThread;

        public TracingEventQueue() {
            this.tracingThread = new TracingEventQueueThread(500);
            this.tracingThread.start();
        }

        @Override
        protected void dispatchEvent(AWTEvent event) {
            this.tracingThread.eventDispatched(event);
            super.dispatchEvent(event);
            this.tracingThread.eventProcessed(event);
        }
    }

    And to install it, call the following line before creating your first frame or dialog:

    Toolkit.getDefaultToolkit().getSystemEventQueue().push(
        new TracingEventQueue());

    Yes, it is this simple. Once this event queue is installed, it starts tracing the execution time of all event processing (see the code of TracingEventQueueThread for more technical details). It creates an additional thread that wakes up every 100 milliseconds and checks how much time has passed since an event was dispatched. If the processing time exceeds 500 milliseconds (the constructor parameter), it prints the event information to the console. Here is what the console looks like for the example above (a button click resulting in five seconds of processing on the EDT):

    Event [4839036] java.awt.event.MouseEvent is taking too much time on EDT (531)
    Event [4839036] java.awt.event.MouseEvent is taking too much time on EDT (640)
    Event [4839036] java.awt.event.MouseEvent is taking too much time on EDT (749)
    Event [4839036] java.awt.event.MouseEvent is taking too much time on EDT (859)
    Event [4839036] java.awt.event.MouseEvent is taking too much time on EDT (1015)
    ...

    Note that this specific implementation doesn't "flag" an event as traced; it simply keeps reporting the same long-running event until its processing is done.
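
    For reference, here is a minimal sketch of what such a tracing thread might look like (the class accompanying the original article may differ in its details):

    import java.awt.AWTEvent;
    import java.util.HashMap;
    import java.util.Map;

    public class TracingEventQueueThread extends Thread {

        private long thresholdDelay;
        private Map<AWTEvent, Long> eventTimeMap;

        public TracingEventQueueThread(long thresholdDelay) {
            this.thresholdDelay = thresholdDelay;
            this.eventTimeMap = new HashMap<AWTEvent, Long>();
            this.setDaemon(true);
        }

        public synchronized void eventDispatched(AWTEvent event) {
            this.eventTimeMap.put(event, System.currentTimeMillis());
        }

        public synchronized void eventProcessed(AWTEvent event) {
            this.eventTimeMap.remove(event);
        }

        @Override
        public void run() {
            while (true) {
                long currTime = System.currentTimeMillis();
                synchronized (this) {
                    // report every event that has been on the EDT longer
                    // than the threshold
                    for (Map.Entry<AWTEvent, Long> entry
                            : this.eventTimeMap.entrySet()) {
                        long processingTime = currTime - entry.getValue();
                        if (processingTime >= this.thresholdDelay) {
                            System.out.println("Event ["
                                + entry.getKey().hashCode() + "] "
                                + entry.getKey().getClass().getName()
                                + " is taking too much time on EDT ("
                                + processingTime + ")");
                        }
                    }
                }
                try {
                    Thread.sleep(100);
                } catch (InterruptedException ie) {
                    return;
                }
            }
        }
    }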

    Is this information helpful? It is, since now we know that some listener that is processing the last mouse event is hogging the EDT. Is this information enough to pinpoint the problem? Not really, because all we have is the event information (with perhaps the source object). Wouldn't it be nice to show the exact stack trace for the EDT when the tracing message is printed? It would indeed.

    What are the options here? We can throw an exception and print its stack trace, but that would show the wrong thread, since the tracing is done on a separate thread that has nothing to do with the EDT. We can interrupt the EDT, but that would break the application logic without getting the actual stack trace. However, there are two additional techniques that can be used to get the stack trace of the EDT. The first is the JVM Tool Interface (JVMTI), which provides APIs for monitoring tools and is used extensively by Java profilers. The major disadvantage of capturing stack traces with JVMTI is that the code needs to be written in a native language, usually C or C++. In addition, JVMTI agents tend to be very verbose for even the simplest tasks, not to mention that you would have to compile them for every target platform. There is a better solution: JMX.

    JMX stands for Java Management Extensions and provides a set of tools to manage and monitor applications. An excellent introduction to using JMX for analyzing threads of a running JVM is available in the Attach API section of the August 2007 installment of Core Java Technologies Tech Tips. The code in that section shows how to connect to a running JVM and get the information on all its threads, all in pure Java.

    It should be noted that in order to analyze threads in a running JVM, you don't have to run an additional JVM. You can simply "connect" to yourself, at which point you can get to information not usually accessible at the Java level. The TracingEventQueueJMX class does just that, using the ManagementFactory.getPlatformMBeanServer() API. The ThreadMXBean interface provides a surprising amount of information on the VM threads, including their names and stack traces. When a long-running task is detected, the following code is called:

    long threadIds[] = threadBean.getAllThreadIds();
    for (long threadId : threadIds) {
        ThreadInfo threadInfo = threadBean.getThreadInfo(threadId,
            Integer.MAX_VALUE);
        if (threadInfo.getThreadName().startsWith("AWT-EventQueue")) {
            System.out.println(threadInfo.getThreadName() + " / "
                + threadInfo.getThreadState());
            StackTraceElement[] stack = threadInfo.getStackTrace();
            for (StackTraceElement stackEntry : stack) {
                System.out.println("\t" + stackEntry.getClassName() + "."
                    + stackEntry.getMethodName() + " ["
                    + stackEntry.getLineNumber() + "]");
            }
        }
    }

    Here, we iterate over all the threads in the target VM (which is the same VM we're running in), locate the EDT based on its name prefix, and get its stack trace. The output for our example code is:

    Event [21006994] java.awt.event.MouseEvent is taking too much time on EDT (531)
    AWT-EventQueue-1 / TIMED_WAITING
        java.lang.Thread.sleep [-2]
        org.pushingpixels.tracing.EDTHoggingJMX$1.actionPerformed [21]
        javax.swing.AbstractButton.fireActionPerformed [1995]
        javax.swing.AbstractButton$Handler.actionPerformed [2318]
        ...
        java.awt.Component.dispatchEvent [4240]
        java.awt.EventQueue.dispatchEvent [599]
        org.pushingpixels.tracing.TracingEventQueueJMX.dispatchEvent [19]
        java.awt.EventDispatchThread.pumpOneEventForFilters [273]
        ... [121]

    This is much better. Here, we can see the stack trace originating in our custom event queue (TracingEventQueueJMX) going to the action listener (EDTHoggingJMX$1.actionPerformed), providing the line numbers that help pinpoint the exact event listener and the "violating" code.

    A couple of minor points should be noted. First of all, an event can be processed by a number of listeners. Since we're "sampling" the processing time at discrete intervals, we might erroneously flag a listener as being an EDT hog, when in fact a listener before it was responsible for the processing time. This can be addressed by lowering the sampling time. In addition, using JMX to fetch the stack trace can be detrimental to performance. However, since we're doing it only when the EDT is hogged, it doesn't interfere with the listeners that respect the EDT and offload long tasks to worker threads.
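
    For reference, the threadBean used in the snippets above is the JVM's ThreadMXBean. A minimal sketch of obtaining it from within the same process (the article goes through the platform MBean server; asking the ManagementFactory directly is the simplest in-process equivalent):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // Obtain the thread MXBean of the JVM we are running in.
    ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();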

    Acquiring Locks on the EDT

    While there's nothing wrong with acquiring a lock on the EDT per se, a deadlock caused by incorrect synchronization will end up in a frozen UI. An example is shown in the EDTDeadlock class, where the event listener on the EDT acquires a lock, creates a worker thread, and then tries to acquire another lock. The worker thread acquires the second lock and then tries to acquire the first one. This results in a deadlock and a frozen UI. This scenario can easily happen with an incorrect usage of SwingUtilities.invokeAndWait as well.
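
    A minimal sketch of this scenario (hypothetical code; the actual EDTDeadlock class accompanying the article may differ):

    final Object lockA = new Object();
    final Object lockB = new Object();

    JButton freezeUI = new JButton("Freeze UI");
    freezeUI.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            synchronized (lockA) {
                // worker thread: takes lockB, then waits forever for lockA
                new Thread(new Runnable() {
                    public void run() {
                        synchronized (lockB) {
                            synchronized (lockA) {
                                // never reached
                            }
                        }
                    }
                }).start();
                try {
                    Thread.sleep(100); // let the worker grab lockB first
                } catch (InterruptedException ie) {
                }
                synchronized (lockB) {
                    // never reached -- the EDT is now deadlocked
                }
            }
        }
    });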

    The solution is the same as for the previous section. A custom event queue runs a separate thread that polls the processing time of the event dispatching. Once the execution time reaches a predefined threshold, we use the following JMX code to identify the deadlocked threads and print their stack traces:

    long[] deadlockedThreads = threadBean.findDeadlockedThreads();
    if ((deadlockedThreads != null) && (deadlockedThreads.length > 0)) {
        System.out.println("Deadlocked threads:");
        for (long threadId : deadlockedThreads) {
            ThreadInfo threadInfo = threadBean.getThreadInfo(threadId,
                Integer.MAX_VALUE);
            System.out.println(threadInfo.getThreadName() + " / "
                + threadInfo.getThreadState());
            StackTraceElement[] stack = threadInfo.getStackTrace();
            for (StackTraceElement stackEntry : stack) {
                System.out.println("\t" + stackEntry.getClassName() + "."
                    + stackEntry.getMethodName() + " ["
                    + stackEntry.getLineNumber() + "]");
            }
        }
    }

    This results in a nice tracing output that pinpoints the offending code causing the deadlock:

    Deadlocked threads:
    Thread-5 / BLOCKED
        org.pushingpixels.tracing.EDTDeadlockJMX$1$1.run [32]
    AWT-EventQueue-1 / BLOCKED
        org.pushingpixels.tracing.EDTDeadlockJMX$1.actionPerformed [44]
        javax.swing.AbstractButton.fireActionPerformed [1995]
        javax.swing.AbstractButton$Handler.actionPerformed [2318]
        ...
        java.awt.EventQueue.dispatchEvent [599]
        org.pushingpixels.tracing.TracingEventQueueJMX.dispatchEvent [19]
        ... [121]

    Changing Component State During Painting

    Let's take a look at the following code:

    JTextField tf = new JTextField("some text") {
        @Override
        public void paint(Graphics g) {
            this.setBorder(new LineBorder(Color.red));
            super.paint(g);
        }
    };

    The intent here is to ensure that the text field always has a red border, but the implementation results in 100 percent CPU usage once this text field is visible. While less severe than EDT hogging or deadlocks, this is certainly an undesirable behavior. What is happening?

    If you look at the implementation of JComponent.setBorder, you will see that in addition to setting the actual border attribute, it checks whether the old border and the new border are the same. However, this check only tests whether the two borders point to the same object; it does not compare their attributes (such as color). Since we're creating a new Border instance on every paint, an additional repaint event is scheduled (and not coalesced with the current paint event, since it's already being processed). The scheduled repaint event is processed immediately after the text field is done painting itself, which results in an infinite repaint loop.
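
    One way to break the loop is to keep component state changes out of the painting path altogether; a minimal sketch:

    // Set the red border once, outside paint: no state changes during
    // painting, so no extra repaint events are scheduled.
    JTextField tf = new JTextField("some text");
    tf.setBorder(new LineBorder(Color.red));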

    While this specific example is a little far-fetched, this is most certainly not a theoretical exercise. Recently, as I was working on the latest release of the Substance module for NetBeans, one of my users complained about 100 percent CPU usage when any editor is opened. Eventually, I tracked it down (using the technique described in the next paragraph) to a "fight" over the contentAreaFilled property of the editor tab scroll buttons. NetBeans attached a property change listener to force this value to false, while my code kept setting it to true just before painting those buttons. This resulted in an infinite repaint loop as the two parts of the application kept toggling the property.

    Here is a simple custom repaint manager that helped me in tracking this issue:

    public class TracingRepaintManager extends RepaintManager {
        @Override
        public void addDirtyRegion(JComponent c, int x, int y, int w, int h) {
            try {
                throw new Exception();
            } catch (Exception exc) {
                StringBuffer sb = new StringBuffer();
                StackTraceElement[] stack = exc.getStackTrace();
                int count = 0;
                for (StackTraceElement stackEntry : stack) {
                    if (count++ > 8)
                        break;
                    sb.append("\t");
                    sb.append(stackEntry.getClassName() + ".");
                    sb.append(stackEntry.getMethodName() + " [");
                    sb.append(stackEntry.getLineNumber() + "]");
                    sb.append("\n");
                }
                System.out.println("**** Repaint stack ****");
                System.out.println(sb.toString());
            }
            super.addDirtyRegion(c, x, y, w, h);
        }
    }

    And to install this repaint manager, call:

    RepaintManager.setCurrentManager(new TracingRepaintManager()); 

    For our sample text field, the following stack trace is printed continuously on the console:

    **** Repaint stack ****
        org.pushingpixels.tracing.TracingRepaintManager.addDirtyRegion [10]
        javax.swing.JComponent.repaint [4714]
        java.awt.Component.repaint [2924]
        javax.swing.JComponent.setBorder [1765]
        org.pushingpixels.tracing.InfiniteRepaint$1.paint [13]
        javax.swing.JComponent.paintToOffscreen [5104]
        javax.swing.RepaintManager$PaintManager.paintDoubleBuffered [1386]
        javax.swing.RepaintManager$PaintManager.paint [1317]
        javax.swing.RepaintManager.paint [1132]

    A few notes about this implementation are in order. First, it doesn't do much good to put the tracing in your paint method, even if you know that it is being called in an infinite loop. Since painting in Swing is asynchronous, you will not see the actual cause of the repaint (the InfiniteRepaint$1.paint line in the stack trace above). The repaint manager is a much better place, since it orchestrates all the repaints. In addition, you can use a custom event queue to see whether an abnormal number of events is being processed when there is no interaction with the application. In both cases, you can either throw an exception and get its stack trace, or use the JMX code discussed in the previous section. JMX isn't inherently better here, since we can put our tracing code on the thread of interest.
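
    As an illustration of the event-counting idea, here is a minimal sketch (a hypothetical class, not from the article's accompanying code) that reports how many events are dispatched per second:

    import java.awt.AWTEvent;
    import java.awt.EventQueue;
    import java.util.concurrent.atomic.AtomicInteger;

    public class CountingEventQueue extends EventQueue {

        private final AtomicInteger count = new AtomicInteger();

        public CountingEventQueue() {
            Thread reporter = new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        try {
                            Thread.sleep(1000);
                        } catch (InterruptedException ie) {
                            return;
                        }
                        // a consistently high number while the application is
                        // idle hints at an infinite repaint loop
                        System.out.println("Events in the last second: "
                            + count.getAndSet(0));
                    }
                }
            });
            reporter.setDaemon(true);
            reporter.start();
        }

        @Override
        protected void dispatchEvent(AWTEvent event) {
            count.incrementAndGet();
            super.dispatchEvent(event);
        }
    }

    It is installed with the same Toolkit.getDefaultToolkit().getSystemEventQueue().push() call shown earlier.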

    The last thing to note is that this technique generates a lot of output. It traces all repaint events, while only some are relevant. For example, some look-and-feels provide pulsating effects on default or focused buttons, which results in a trace that looks very similar to an infinite repaint. You might want to add some logic to the repaint manager to trace only prolonged, suspicious repaints that happen on the same control.
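
    One possible (hypothetical) way to add such a filter is to report only when the same component accumulates many consecutive repaint requests:

    // A minimal sketch: report only when the same component accumulates a
    // large number of consecutive dirty-region requests. The threshold and
    // bookkeeping are illustrative, not from the article.
    public class FilteringRepaintManager extends RepaintManager {

        private JComponent lastComponent;
        private int consecutiveCount;

        @Override
        public void addDirtyRegion(JComponent c, int x, int y, int w, int h) {
            if (c == this.lastComponent) {
                this.consecutiveCount++;
                if (this.consecutiveCount > 20) {
                    System.out.println("Suspicious repeated repaints of " + c);
                    new Exception().printStackTrace();
                    this.consecutiveCount = 0;
                }
            } else {
                this.lastComponent = c;
                this.consecutiveCount = 0;
            }
            super.addDirtyRegion(c, x, y, w, h);
        }
    }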

    Note that the Java 2D pipeline provides an option to trace the low-level painting primitives. Using the -Dsun.java2d.trace=log flag described in "System Properties for Java 2D Technology," you will see output such as the following:

    sun.java2d.loops.FillRect::FillRect(AnyColor, SrcNoEa, AnyInt)
    sun.java2d.loops.DrawGlyphList::DrawGlyphList(AnyColor, SrcNoEa, AnyInt)
    sun.java2d.loops.DrawRect::DrawRect(AnyColor, SrcNoEa, AnyInt)
    ...

    If you know what you're looking for, this output might give you a clue (FillRect clears the text field, DrawGlyphList paints the text, and DrawRect paints the border), but in general, it's too low-level to locate the source of the infinite repaints.


    Swing is a very powerful UI toolkit. You can do pretty much anything you want with it, but it's very easy to abuse this power. Using the non-intrusive techniques described in this article, you can easily locate event listeners and other related code pieces that violate the EDT-related rules, making your application more responsive and robust.


    I would like to thank Eamonn McManus for his help on JMX-related code.