I don't recall any known problems with long-term timers.
* Retry your testing with, say, 10-second timers instead of 1-day timers, to see if you can recreate the problem with a quick reproducer.
* Regardless, note that the redelivery count cannot be exact: there are conditions where a redelivery can occur even if the application never got the message, which would cause the message to appear to "disappear" for a day. Shutting down your server for long periods would also throw off your current algorithm. In addition, a large backlog could skew it: if a message becomes visible but your system still takes a long time to retrieve and process it (due to a long wait in the queue and/or a lengthy message-processing step), then the number of tries per day will necessarily be less than 1.
* I'm not sure, but I think it might help your design to use a more frequent retry plus some very simple application logic that checks the age of a message and reacts to 14-day-old messages, rather than introducing a dependency on the delivery count. You might also consider reducing the "MessagesMaximum" configuration setting on your connection factories to one (the default is 10) -- this reduces or eliminates the chance that a message already sitting in the pipeline of an asynchronous consumer/MDB, but never seen by the application, is silently redelivered on an error.
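The age check above can be sketched with the message's standard JMS send timestamp (`JMSTimestamp`, set by the provider at send time). A minimal sketch; the class name, method names, and the 14-day threshold wiring are hypothetical, and in a real consumer `jmsTimestampMs` would come from `Message.getJMSTimestamp()`:

```java
// Sketch of an age-based check that avoids depending on the delivery count.
// In a real MDB/listener, jmsTimestampMs would be message.getJMSTimestamp().
public class MessageAgeCheck {
    // 14 days expressed in milliseconds (JMSTimestamp is epoch millis).
    public static final long FOURTEEN_DAYS_MS = 14L * 24 * 60 * 60 * 1000;

    /** Returns true when the message is at least 14 days old. */
    public static boolean isExpired(long jmsTimestampMs, long nowMs) {
        return nowMs - jmsTimestampMs >= FOURTEEN_DAYS_MS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long day = 24L * 60 * 60 * 1000;
        System.out.println(isExpired(now - 15 * day, now)); // sent 15 days ago -> true
        System.out.println(isExpired(now - 1 * day, now));  // sent 1 day ago  -> false
    }
}
```

On each (frequent) retry the consumer would call the check first, route 14-day-old messages to its give-up logic, and process everything else normally.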
Another way to help eliminate the dependency on delivery count would be to set messages to expire in 14 days, and configure an "error destination" for the queue that receives expired messages...
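As a rough illustration of that setup, a WebLogic JMS module descriptor can override the time-to-live on a queue and redirect expired messages to an error destination. This is a hedged config sketch only -- the queue and error-queue names are invented, and you should check the element names against your WebLogic version's schema before relying on them:

```
<!-- Hypothetical fragment of a WebLogic JMS module descriptor -->
<queue name="MyQueue">
  <delivery-params-overrides>
    <!-- 14 days in milliseconds -->
    <time-to-live>1209600000</time-to-live>
  </delivery-params-overrides>
  <delivery-failure-params>
    <!-- Send expired messages to the error destination instead of discarding -->
    <expiration-policy>Redirect</expiration-policy>
    <error-destination>MyErrorQueue</error-destination>
  </delivery-failure-params>
  <jndi-name>jms/MyQueue</jndi-name>
</queue>
```

A consumer on the error destination (here, the invented "MyErrorQueue") then handles the 14-day give-up path, with no delivery-count logic anywhere.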