Java Tech: The ABCs of Synchronization, Part 2

    " hrefaction="pub">

    Waiting and Notification
       Producer and Consumer
    Volatile Variables
    A Tour of J2SE 5.0's Synchronizers
       Countdown Latches
       Cyclic Barriers
       Exchangers
       Semaphores
    Answers to Previous Homework

    Last time, I began a two-part series that focuses on synchronization in a Java context. I introduced you to the monitor and lock synchronization concepts, for guarding critical sections of code from simultaneous execution by multiple threads. I then revealed Java's synchronized method and synchronized statement implementations of monitors and locks. Finally, I explained deadlock.

    An exploration of Java's synchronization ABCs is not complete without coverage of thread communication, volatile variables, and synchronizer tools. This month, you'll learn how Java's waiting and notification mechanism supports thread communication, and explore a classic example of two threads communicating. You'll then discover volatile variables and find out how they are related to synchronization. Finally, you'll tour J2SE 5.0's high-level synchronizer tools.

    Note: As with the previous article, this article assumes that you have a basic understanding of threads and the java.lang.Thread class. It also assumes you understand what is meant by multiple threads simultaneously executing code. (Refer to the Note near the beginning of last month's article, if you aren't sure.)

    Waiting and Notification

    Synchronized methods and synchronized statements make it possible for threads to avoid conflicts while sharing a common resource (e.g., two threads sending data packets over a single network connection). But when coupled with Java's waiting and notification mechanism, synchronized methods and/or synchronized statements also allow threads to actively communicate, to cooperate on common goals.

    The waiting and notification mechanism supports communication between threads, as follows: a thread voluntarily waits until some condition (a prerequisite for continued execution) occurs. At that time, another thread notifies the waiting thread, which then continues its execution. That communication is made possible by five methods implemented in the Object class:

    • public final void notify() wakes up a single, arbitrarily chosen thread that is waiting on the monitor associated with the current object (that is, a thread that previously called one of the wait() methods). The woken thread cannot proceed until the notifying thread exits the monitor and releases its lock; it then competes with any other threads trying to acquire that lock.
    • public final void notifyAll() wakes up all threads that are waiting on the monitor associated with the current object. Those threads cannot proceed until the notifying thread exits the monitor and releases its lock; they then compete with any other threads trying to acquire that lock.

    • public final void wait() throws InterruptedException causes the invoking thread to wait (and give up its lock) until another thread invokes a notification method that wakes up the thread (which grabs the lock). This method throws an InterruptedException object if another thread previously set the thread's interrupted status.

    • public final void wait(long timeout) throws InterruptedException causes the invoking thread to wait (and give up its lock) until another thread calls a notification method that wakes up the thread (which grabs the lock), or the invoking thread has waited for timeout milliseconds. This method throws an InterruptedException object if another thread previously set the thread's interrupted status.

    • public final void wait(long timeout, int nanos) throws InterruptedException causes the invoking thread to wait (and give up its lock) until another thread calls a notification method that wakes up the thread (which grabs the lock), or the invoking thread has waited for timeout milliseconds plus nanos nanoseconds. This method throws an InterruptedException object if another thread previously set the thread's interrupted status.

    Because the synchronization lock is integrated with waiting and notification, each method above must be called from a synchronized method or a synchronized statement. Otherwise, an IllegalMonitorStateException object is thrown from the method.
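    The rules above lead to a standard idiom: test the condition in a loop inside a synchronized method, waiting while it is false. A minimal sketch, using a hypothetical Message class (not from this article's code.zip file), looks like this:

```java
// Hypothetical sketch of the guarded-wait idiom: the condition is
// re-tested in a loop after every wakeup, and wait()/notifyAll() are
// called only while holding the object's lock.
public class Message {
    private String text; // null means "no message yet"

    public synchronized void put(String text) {
        this.text = text;
        notifyAll(); // wake any threads waiting in take()
    }

    public synchronized String take() throws InterruptedException {
        while (text == null) // re-test the condition after each wakeup
            wait();          // releases the lock while waiting
        String result = text;
        text = null;
        return result;
    }
}
```

    Note that wait() appears inside a while loop, not an if statement: a woken thread must re-test its condition, because another thread may have invalidated it again between notification and lock reacquisition.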

    Note: J2SE 5.0's java.util.concurrent.locks package includes Condition, an interface that serves as a replacement for Object's wait and notify methods. You can use Condition implementations to create multiple condition variables that associate with one Lock object, so that a thread can wait for multiple conditions to occur.
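    The advantage of multiple condition variables is that producers and consumers can wait on separate conditions tied to the same lock. The following is a condensed sketch of the bounded-buffer example from the Condition SDK documentation:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A bounded buffer with two Conditions on one Lock: producers wait on
// notFull, consumers wait on notEmpty (sketch based on the Condition
// SDK documentation's example).
public class BoundedBuffer {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Object[] items = new Object[16];
    private int putIndex, takeIndex, count;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();       // wait until there is room
            items[putIndex] = x;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();         // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();      // wait until an item exists
            Object x = items[takeIndex];
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();          // wake one waiting producer
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```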

    Producer and Consumer

    Now that you know how threads use the waiting and notification mechanism to communicate, let's examine how that mechanism is used in a practical way. The classic example of thread communication involves the relationship between two threads: a producer thread and a consumer thread.

    The producer thread produces data items to be consumed by the consumer thread. For example, the producer obtains data items from a network connection and passes them to the consumer for processing. Each produced data item is stored in one shared variable.

    Imagine a scenario where the threads do not communicate and run at mismatched speeds. It would be possible for the producer to produce a new data item and record it in the shared variable before the consumer has retrieved the previous data item for processing. It would also be possible for the consumer to retrieve the contents of the shared variable before a new data item is produced. The result is a chaotic mess. To overcome those problems, the producer must wait until it is notified that the previously produced data item has been consumed, and the consumer must wait until it is notified that a new data item has been produced. The waiting and notification mechanism supports this communication, as demonstrated in the following application source code.

    public class PC
    {
       public static void main (String [] args)
       {
          Shared s = new Shared ();
          new Producer (s).start ();
          new Consumer (s).start ();
       }
    }

    class Shared
    {
       private char c = '\u0000';
       private boolean writeable = true;

       synchronized void setSharedChar (char c)
       {
          while (!writeable)
             try
             {
                wait ();
             }
             catch (InterruptedException e) {}

          this.c = c;
          writeable = false;
          notify ();
       }

       synchronized char getSharedChar ()
       {
          while (writeable)
             try
             {
                wait ();
             }
             catch (InterruptedException e) {}

          writeable = true;
          notify ();
          return c;
       }
    }

    class Producer extends Thread
    {
       private Shared s;

       Producer (Shared s)
       {
          this.s = s;
       }

       public void run ()
       {
          for (char ch = 'A'; ch <= 'Z'; ch++)
          {
             s.setSharedChar (ch);
             System.out.println (ch + " produced by producer.");
          }
       }
    }

    class Consumer extends Thread
    {
       private Shared s;

       Consumer (Shared s)
       {
          this.s = s;
       }

       public void run ()
       {
          char ch;

          do
          {
             ch = s.getSharedChar ();
             System.out.println (ch + " consumed by consumer.");
          }
          while (ch != 'Z');
       }
    }

    The application creates a Shared object and two threads that get a copy of that object's reference. The producer invokes the object's setSharedChar() method to save each of 26 uppercase letters. The consumer invokes that object's getSharedChar() method to acquire each letter.

    The waiting and notification mechanism is incorporated into both methods. The introduction of a Boolean writeable instance field into the same class tracks two conditions (the producer waiting on the consumer to consume a data item and the consumer waiting on the producer to produce a new data item) and helps to coordinate the execution of the producer and consumer. The following scenario, where the consumer executes first, illustrates that coordination:

    1. The consumer executes s.getSharedChar() to retrieve a letter.
    2. Inside of that synchronized method, the consumer calls wait() because writeable contains true. The consumer now waits until it receives notification from the producer.

    3. The producer eventually executes s.setSharedChar(ch).

    4. When the producer enters that synchronized method (which is possible because the consumer released the lock inside of the wait() method prior to waiting), the producer discovers writeable's value as true and does not call wait().

    5. The producer saves the character, sets writeable to false (which will cause the producer to wait on the next setSharedChar() invocation if the consumer has not consumed the character by that time), and invokes notify() to awaken the consumer (assuming the consumer is waiting).

    6. The producer exits setSharedChar(char c).

    7. The consumer wakes up (and re-acquires the lock), sets writeable to true (which will cause the consumer to wait on the next getSharedChar() invocation if the producer has not produced a character by that time), notifies the producer to awaken that thread (assuming the producer is waiting), and returns the shared character.

    When run, the application produces the following output (which I've shortened, for brevity):

    A produced by producer.
    A consumed by consumer.
    B produced by producer.
    B consumed by consumer.
    C produced by producer.
    C consumed by consumer.
    D produced by producer.
    D consumed by consumer.
    E produced by producer.
    E consumed by consumer.
    F produced by producer.
    F consumed by consumer.

    As you can see, the producer always produces a data item before the consumer consumes that data item. Furthermore, each data item is produced exactly once and each data item is consumed exactly once.

    Volatile Variables

    The assignment of a value to a variable that is not of type long or double is treated by Java as an indivisible operation. Retrieving that variable's value is also treated as an indivisible operation. In other words, Java guarantees that a variable write operation results in all bits being written as a unit, and guarantees that a variable read operation results in all bits being read as a unit. There is no need for synchronization.

    Because synchronization isn't needed when reading/writing a non-long/non-double variable, one expects that two threads can communicate, via a single shared variable, without a problem. Thread A writes a value to the shared variable, and thread B retrieves the new value from that variable.

    But a problem exists. Java's memory model permits threads to store the values of variables in local memory (e.g., machine registers or a processor's cache on a multiprocessor machine). This is done for performance reasons: it is faster to access local memory than main memory. On Java VMs with memory-model implementations that support local memory, other threads do not "see" the change when a thread modifies a cached value. This lack of visibility across threads is a major problem in loop scenarios. For example, one thread continually iterates a loop until a certain variable is set to a specific value. Another thread assigns this value to the variable at some specific moment, so that the loop can end. But if the loop's thread cannot "see" the new value, because the value is stored in the other thread's local memory, an infinite loop occurs.

    Synchronization represses this caching behavior. According to the Java memory model, a thread's local memory is reset to the values stored in main memory when the thread acquires a lock. Furthermore, local memory values are written back to main memory when the thread releases that lock.

    Unfortunately, excessive synchronization can cause a program's performance to suffer. To balance the need to (occasionally) avoid local memory against the need to preserve performance, Java provides the volatile keyword. For each variable marked with that keyword, Java reads that variable's value from main memory, and writes that variable's value to main memory. Local memory is avoided.

    I have written an application that demonstrates a volatile variable. When you run that application, it starts two threads. One thread runs in a loop and continually outputs the value of an integer variable (which it also increments). The other thread sleeps for five seconds and then writes a value to another variable, to "tell" the loop thread to terminate. That application's source code, which is included in this article's attached file, appears below.

    public class Terminate
    {
       public static void main (String [] args)
       {
          ThreadB tb = new ThreadB ();
          ThreadA ta = new ThreadA (tb);
          tb.start ();
          ta.start ();
       }
    }

    class ThreadA extends Thread
    {
       private ThreadB tb;

       ThreadA (ThreadB tb)
       {
          this.tb = tb;
       }

       public void run ()
       {
          try
          {
             Thread.sleep (5000);
          }
          catch (InterruptedException e) {}

          tb.finished = true;
       }
    }

    class ThreadB extends Thread
    {
       public volatile boolean finished = false;
       private int sum = 0;

       public void run ()
       {
          while (!finished)
          {
             System.out.println ("sum = " + sum++);
             Thread.yield ();
          }
       }
    }

    Notice that I've marked ThreadB's finished variable with the volatile keyword. This guarantees that each thread will always access the value of finished from main memory. For Java VMs that are capable of caching variable values in local memory, threads will be required to access the value of finished from main memory; no infinite loop will occur when this program is run on those Java VMs, because both threads work with the same main memory copy of finished.

    Note: J2SE 5.0 introduces java.util.concurrent.atomic, a package of classes that extends the notion of volatile variables to include an atomic conditional update operation, which permits lock-free, thread-safe programming on single variables.

    A Tour of J2SE 5.0's Synchronizers

    J2SE 5.0 introduces four kinds of synchronizers, high-level tools that let your code achieve different kinds of synchronization: countdown latches, cyclic barriers, exchangers, and semaphores. These high-level synchronization tools are implemented by five classes in the java.util.concurrent package. This section identifies each class, as it briefly explores the first three synchronizers and extensively explores semaphores.

    Countdown Latches

    A countdown latch is a synchronizer that allows one or more threads to wait until some collection of operations being performed in other threads finishes. This synchronizer is implemented by the CountDownLatch class.
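    A typical use is a driver thread that starts several workers and waits in await() until each worker has called countDown() once. A minimal, hypothetical sketch (not from this article's code.zip file):

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch: the main thread blocks in await() until three
// worker threads have each decremented the latch once.
public class LatchDemo
{
   public static void main (String [] args) throws InterruptedException
   {
      final CountDownLatch done = new CountDownLatch (3);

      for (int i = 0; i < 3; i++)
           new Thread ()
           {
              public void run ()
              {
                 // ... perform this worker's share of the operations ...
                 done.countDown (); // decrement the latch's count
              }
           }.start ();

      done.await (); // blocks until the count reaches zero
      System.out.println ("All workers finished.");
   }
}
```

    Note that a countdown latch is one-shot: once the count reaches zero it cannot be reset, which is the key difference from a cyclic barrier.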

    Cyclic Barriers

    A cyclic barrier is a synchronizer that allows a collection of threads to wait for each other to reach a common barrier point. This synchronizer is implemented by the CyclicBarrier class and also makes use of the BrokenBarrierException class.
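    Unlike a countdown latch, a cyclic barrier can be reused after the waiting threads are released. A hypothetical sketch (not from this article's code.zip file), including an optional barrier action that runs once per trip:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Hypothetical sketch: three threads each finish a phase of work, then
// wait at a common barrier; the barrier action runs after all arrive.
public class BarrierDemo
{
   public static void main (String [] args)
   {
      final CyclicBarrier barrier =
         new CyclicBarrier (3, new Runnable ()
                               {
                                  public void run ()
                                  {
                                     System.out.println ("All threads reached the barrier.");
                                  }
                               });

      for (int i = 0; i < 3; i++)
           new Thread ()
           {
              public void run ()
              {
                 try
                 {
                    // ... compute this thread's portion of the phase ...
                    barrier.await (); // block until all three threads arrive
                 }
                 catch (InterruptedException e) {}
                 catch (BrokenBarrierException e) {}
              }
           }.start ();
   }
}
```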


    Exchangers

    An exchanger is a synchronizer that allows a pair of threads to exchange objects at a synchronization point. This synchronizer is implemented by the Exchanger<V> class, where V is a placeholder for the type of objects that may be exchanged.
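    Each thread presents an object to exchange() and blocks until its partner arrives; the two calls then return each other's objects. A hypothetical sketch (not from this article's code.zip file):

```java
import java.util.concurrent.Exchanger;

// Hypothetical sketch: two threads swap String objects at a
// synchronization point. Each exchange() call blocks until the
// partner thread also calls exchange().
public class ExchangerDemo
{
   public static void main (String [] args)
   {
      final Exchanger<String> exchanger = new Exchanger<String> ();

      new Thread ()
      {
         public void run ()
         {
            try
            {
               String fromB = exchanger.exchange ("data from A");
               System.out.println ("A received: " + fromB);
            }
            catch (InterruptedException e) {}
         }
      }.start ();

      new Thread ()
      {
         public void run ()
         {
            try
            {
               String fromA = exchanger.exchange ("data from B");
               System.out.println ("B received: " + fromA);
            }
            catch (InterruptedException e) {}
         }
      }.start ();
   }
}
```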


    Semaphores

    A semaphore, as implemented in J2SE 5.0's Semaphore class, is a synchronizer that uses a counter to limit the number of threads that can access limited resources. Ironically, this high-level synchronization tool is based on the same-named, low-level primitive that is often used to implement monitors. The discussion that follows first explores this low-level primitive, and then explores Semaphore.

    In 1965, E. W. Dijkstra invented the semaphore concurrency construct. He used that construct to support mutual exclusion, the act of restricting shared resource access to one process at a time. Nowadays, we don't think about processes. Instead, we think about threads. That's why I refer to thread instead of process in the following paragraph.

    Dijkstra's semaphore construct is a protected variable that initializes to a non-negative integer count, implements P (from the Dutch word "proberen," meaning "to test") and V (from the Dutch word "verhogen," meaning "to increment") operations, and has a wait queue. When a thread executes P on the semaphore, the count is tested to see if it is greater than 0. If that is the case, the semaphore decrements the count. But if the count is 0, the calling thread is made to wait in the queue. When a thread executes V on the semaphore, the semaphore determines if one or more threads are waiting in the queue. If so, the semaphore allows one of those threads to proceed. But if no threads are waiting, the count increments. Each one of the P and V operations is indivisible.
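    Dijkstra's P and V operations can be sketched in Java itself, using the waiting and notification mechanism you saw earlier. This is a hypothetical illustration of the concept, not how the Semaphore class is actually implemented:

```java
// Hypothetical sketch of Dijkstra's P and V operations, built on
// Java's own waiting and notification mechanism. The synchronized
// keyword makes each operation indivisible.
public class CountingSemaphore
{
   private int count;

   public CountingSemaphore (int initialCount)
   {
      if (initialCount < 0)
          throw new IllegalArgumentException ("count must be non-negative");

      count = initialCount;
   }

   public synchronized void P () throws InterruptedException
   {
      while (count == 0) // count exhausted: wait in the queue
         wait ();

      count--;           // claim one unit of the resource
   }

   public synchronized void V ()
   {
      count++;           // release one unit
      notify ();         // allow one waiting thread to proceed
   }

   public synchronized int count ()
   {
      return count;
   }
}
```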

    Semaphores classify as counting semaphores or binary semaphores. A semaphore is a counting semaphore if it can assume any non-negative integer value. The SDK documentation for J2SE 5.0's Semaphore class refers to that class as a counting semaphore. But if the semaphore can only assume 0 or 1, it is known as a binary semaphore. Binary semaphores are a special case of counting semaphores.

    Semaphore uses the concept of permits to explain how it works. A thread must acquire a permit, guaranteeing that an item is available for use, before the thread can obtain the item from a pool of items. When a thread finishes using the item, it returns that item back to the pool and the permit back to the semaphore.

    Use the public Semaphore(int permits, boolean fair) constructor to create a semaphore: permits specifies the number of available permits (hence resources that are available in a pool), and fair indicates whether or not the semaphore guarantees a first-in first-out (FIFO) granting of permits under contention (true, if this is the case). An example:

    Semaphore sem = new Semaphore (1, false);

    creates a binary semaphore that does not guarantee FIFO granting of permits under contention.

    The public void acquire() method acquires a permit from the semaphore. The calling thread blocks until one is available or the thread is interrupted. An example:

    sem.acquire ();

    The public void release() method returns a permit to the semaphore. If any threads are waiting for a permit, one of those threads is selected and given the just-released permit. The thread is then re-enabled for thread scheduling. An example:

    sem.release ();
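    Because release() must run even if the guarded code throws an exception, acquire() and release() are typically paired with a try/finally block. A hypothetical sketch (not from this article's code.zip file) that limits a resource to three concurrent users:

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: at most three threads may use the pooled
// resource at once. release() sits in finally so the permit is always
// returned, even if the work throws an exception.
public class SemaphoreDemo
{
   private static final Semaphore sem = new Semaphore (3, true);

   static void useResource () throws InterruptedException
   {
      sem.acquire ();    // blocks until a permit is available
      try
      {
         // ... use one of the three pooled resources ...
      }
      finally
      {
         sem.release (); // always return the permit
      }
   }
}
```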

    To demonstrate Semaphore, I've created an application that makes use of the Pool class, shown in the SDK's Semaphore documentation. Check out that application's source code, which is located in this article's attached file.


    An exploration of Java's synchronization fundamentals is not complete without an examination of thread communication, volatile variables, and synchronizers. You've learned how Java uses its waiting and notification mechanism to support thread communication, have seen that volatile variables serve as a special-purpose alternative to synchronization, and have acquired insight into J2SE 5.0's high-level synchronizer tools.

    I have some homework for you to accomplish:

    • Five philosophers sit around a circular table. Each philosopher alternates between thinking and eating rice. In front of each philosopher is a bowl of rice that is constantly replenished by a dedicated waiter. Exactly five chopsticks are on the table, with one chopstick between each adjacent pair of philosophers. Each philosopher must pick up both chopsticks adjacent to his/her plate simultaneously before that philosopher can eat.

      Create a Java application that simulates this behavior. Avoid deadlock and the problem of indefinite postponement, where one or more philosophers soon starve because philosophers adjacent to the starving philosophers are always eating. Make sure that mutual exclusion is enforced, so that two adjacent philosophers do not use the same chopstick at the same time.

    Next time, Java Tech explores SANE and TWAIN.


    Answers to Previous Homework

    The previous Java Tech article presented you with some challenging homework on locks, synchronized methods, and synchronized statements. Let's revisit that homework and investigate solutions.

    1. If a thread holds a lock, what happens when the thread needs to enter another critical section controlled by the same lock?

      The thread is allowed to enter the other critical section and execute its code without having to re-grab the same lock. Furthermore, the lock is not released when the thread leaves the inner critical section and returns to the outer critical section, because that lock must be held while the thread continues to execute within the outer critical section.
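      This property is called reentrancy, and a tiny hypothetical sketch makes it concrete: outer() holds this object's lock and calls inner(), which is guarded by the same lock, and the call succeeds without blocking.

```java
// Hypothetical sketch of lock reentrancy: outer() acquires this
// object's lock, then calls inner(), which is guarded by the same
// lock. The thread already owns the lock, so it enters immediately.
public class Reentrant
{
   public synchronized String outer ()
   {
      return "outer -> " + inner (); // re-enters the lock it already holds
   }

   public synchronized String inner ()
   {
      return "inner";
   }
}
```

      If the lock were not reentrant, the call to inner() would block forever, waiting for a lock held by the calling thread itself.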

    2. In the earlier example of a component's event registration and listener logic, I presented two synchronized methods and a synchronized statement. You might be tempted to remove the synchronized statement from notifyClients() and make the entire method synchronized. What is wrong with that idea?

      We run the risk of deadlock if we make the entire method synchronized. How is this possible? Assume two threads: an event-notification thread that invokes each registered listener's event-handling method, and a listener thread that registers that object's interest in receiving event notifications. Assume the event-handling method is synchronized, to protect itself from simultaneous invocation by multiple threads.

      Now consider this scenario: the event-notification thread has just entered its synchronized method and enters the loop to begin sending event notifications to registered listeners. At the same time, the listener thread is executing within code that synchronizes via the same lock as the listener's synchronized event-handling method. Each thread holds its own lock.

      Suppose the listener thread invokes a synchronized method in the event notification object to deregister its listener object. Because the deregistration method is synchronized using the same lock as the event-notification synchronized method, the listener thread must wait because the event-notification thread holds that lock. When the event-notification thread attempts to invoke the listener's event-handling method, it is forced to wait when it attempts to acquire the listener's lock, because the listener thread holds that lock. Neither thread can proceed because each thread holds the other thread's required lock, and deadlock occurs.

    3. Construct an example that illustrates a lack of synchronization due to a pair of threads acquiring different locks. Use the keywordthis and the synchronized statement in the example.

      Consult the source code in this article's attached code.zip file.