I have a server application in which I only want to allow a specific number of simultaneous requests.

You are going to find that extremely difficult to control. Are you aware of the TCP/IP backlog queue? It typically allows hundreds of connections to be fully completed by the stack before the application even knows about them. That means a client can connect and send data before your application ever gets around to accepting the connection.
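To illustrate the point about the backlog queue: in Java, `ServerSocket` exposes the backlog as a constructor hint. The port number and backlog value below are arbitrary choices for this sketch, not from the original discussion.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class BacklogDemo {
    public static void main(String[] args) throws IOException {
        // The second argument is a *hint* for the TCP backlog: the number of
        // connections the OS may complete before accept() is ever called.
        try (ServerSocket server = new ServerSocket(8080, 5, InetAddress.getLoopbackAddress())) {
            // Even while this code never calls server.accept(), clients can
            // still connect (and send data) up to roughly the backlog limit;
            // the stack queues those connections silently.
            System.out.println("listening, backlog hint = 5");
        }
    }
}
```

Note that the backlog argument is only a hint; the operating system may round it up or impose its own minimum, which is part of why limiting simultaneous requests at the socket level is unreliable.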
JimM wrote: My comments about race conditions are still relevant. Without any explicit coordination, you can't make any assumptions about which threads are available at any given time.
jtahlborn - The code provided here is a contrived example that emulates the bigger app as best I could. The actual program doesn't have any sleeps: the sleep in the secondary thread simulates the program doing some work and replying to a request, and the sleep in the primary thread simulates a small delay between 'requests' to the pool. I can set this anywhere from 1 second up to (at least) 5 seconds with the same results. I can also take out the sleep in the secondary thread and still see a rejection.
OK, I think I see what you are saying. I reviewed the ThreadPoolExecutor code (again) and think I had a flaw in how I thought the flow went.
I think what you are saying is that two requests can arrive close enough together that the first is put in the queue, and before it can be moved from the queue to an available thread, the second request tries to put a new job in the queue. At that moment the queue is 'full', so the submission fails. That makes sense.
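The failure mode described above can be reproduced deterministically by shrinking the pool. The sketch below uses a single worker thread and a queue capacity of 1 (illustrative numbers, not from the original program): once the worker is busy and the queue is full, the next `execute` is rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) throws InterruptedException {
        // One worker thread, bounded queue of capacity 1.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1));

        Runnable sleepy = () -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        };

        pool.execute(sleepy); // occupies the single worker thread
        pool.execute(sleepy); // fills the queue

        boolean rejected = false;
        try {
            pool.execute(sleepy); // queue full, no spare thread: rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println("third task rejected: " + rejected);

        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.SECONDS);
    }
}
```

With a larger queue the same race still exists, it is just harder to hit: whenever submissions arrive faster than workers drain the queue, the default `AbortPolicy` throws `RejectedExecutionException`. A different `RejectedExecutionHandler` (for example `CallerRunsPolicy`) is one way to absorb the burst instead of failing.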