This discussion is archived
2 Replies · Latest reply: Nov 23, 2012 11:12 AM by DrClap

Is there anything wrong with ConcurrentHashMap in the method clear()?

815368 Newbie
Hi guys,
I'm working on a concurrency problem with many "write" operations: once the map grows past a certain size, I need to take all the objects in it, do something with them, then clear it and repeat.
So I'm confused about the method "clear" in class "ConcurrentHashMap". The code I got is like this:
    public void clear() {
        final Segment<K,V>[] segments = this.segments;
        // Each segment is cleared one at a time; the map as a whole is never
        // locked, so clear() is not atomic with respect to concurrent puts.
        for (int j = 0; j < segments.length; ++j) {
            Segment<K,V> s = segmentAt(segments, j);
            if (s != null)
                s.clear();
        }
    }
    
Let us assume there are two threads, A and B. A calls the method "clear" while B calls the method "put". It's possible that the object B put goes missing because A's clear is still in progress. How can I avoid this? Should I lock the whole map when I call "clear"? If I do that, the "put" calls may block each other. Or is there another way?
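To make the scenario concrete, here is a small sketch of the interleaving I mean (the class name, keys, and counts are made up for the demo):

    import java.util.concurrent.ConcurrentHashMap;

    public class ClearRaceDemo {
        public static void main(String[] args) throws InterruptedException {
            ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

            // Thread B: many puts
            Thread writer = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    map.put("key-" + i, i);
                }
            });
            // Thread A: clear while B is still writing
            Thread clearer = new Thread(map::clear);

            writer.start();
            clearer.start();
            writer.join();
            clearer.join();

            // Entries put into a segment before clear() reached it are gone;
            // entries put afterwards survive. The final size is unpredictable.
            System.out.println("size after concurrent clear + puts: " + map.size());
        }
    }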

Any help is greatly appreciated.
  • 1. Re: Is there anything wrong with ConcurrentHashMap in the method clear()?
    DrClap Expert
    It seems to me that if you "clear" a map in one thread and "put" an entry into the map at the same time that the "clear" is taking place, then whether or not you see the result of the "put" after both operations are complete would be undefined. And indeed, for the case of "clear" and "get", the API documentation says exactly that:
    For aggregate operations such as putAll and clear, concurrent retrievals may reflect insertion or removal of only some entries.
    However, if you want to wait until N puts have taken place before doing something with the contents of the map, I would suggest that something involving a CountDownLatch or a CyclicBarrier would be suitable. And I would consider not clearing the map when the countdown is complete, but just creating a new map for the next batch of puts to use.
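    For example, a rough sketch of the "new map per batch" idea. This one uses an AtomicReference swap instead of a latch, just to show the replace-instead-of-clear part; the class name, threshold handling, and process() method are only my illustration, not anything from the JDK:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicReference;

        public class BatchingMap<K, V> {
            private final int batchSize;
            private final AtomicReference<ConcurrentHashMap<K, V>> current =
                    new AtomicReference<>(new ConcurrentHashMap<K, V>());

            public BatchingMap(int batchSize) {
                this.batchSize = batchSize;
            }

            // Writers call put(); whichever writer tips the map over the
            // threshold swaps in a fresh map and processes the old one.
            // Nothing is ever cleared, so no entry is silently wiped out.
            public void put(K key, V value) {
                ConcurrentHashMap<K, V> map = current.get();
                map.put(key, value);
                if (map.size() >= batchSize
                        && current.compareAndSet(map, new ConcurrentHashMap<K, V>())) {
                    process(map);
                }
            }

            // A few late writers that fetched the old map just before the swap
            // may still add entries to it, so the batch boundary is approximate.
            private void process(Map<K, V> batch) {
                // do something with the full batch here
            }
        }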
  • 2. Re: Is there anything wrong with ConcurrentHashMap in the method clear()?
    DrClap Expert
    In fact, now that I've read up on CyclicBarrier a bit, it looks like it would be quite suitable for your requirement:
    A CyclicBarrier supports an optional Runnable command that is run once per barrier point, after the last thread in the party arrives, but before any threads are released. This barrier action is useful for updating shared-state before any of the parties continue.
    In other words, when N entries have been added to your map, the barrier point is reached and your Runnable creates a new empty map for subsequent parties to use and processes the old map containing the N entries.
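    A minimal sketch of that approach, assuming a fixed number of writer threads; the class name, thread count, loop bounds, and the println standing in for your batch processing are all placeholders:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.CyclicBarrier;

        public class BarrierBatchDemo {
            static final int PARTIES = 4;  // number of writer threads (arbitrary for the demo)
            static volatile ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

            public static void main(String[] args) {
                // The barrier action runs once per generation, after the last
                // writer arrives and before any writer is released: swap in a
                // fresh map and process the old one, so no entry is lost to a
                // concurrent clear().
                CyclicBarrier barrier = new CyclicBarrier(PARTIES, () -> {
                    ConcurrentHashMap<String, Integer> full = map;
                    map = new ConcurrentHashMap<>();
                    System.out.println("processing a batch of " + full.size() + " entries");
                });

                for (int t = 0; t < PARTIES; t++) {
                    final int id = t;
                    new Thread(() -> {
                        try {
                            for (int batch = 0; batch < 3; batch++) {
                                map.put("writer-" + id + "-batch-" + batch, batch);
                                barrier.await();  // wait until every writer has put
                            }
                        } catch (Exception e) {
                            Thread.currentThread().interrupt();
                        }
                    }).start();
                }
            }
        }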
