mkmunir wrote: And what happens when it deadlocks again? What happens when this approach results in a severe performance bottleneck due to serialisation?
How I tested it was: I had three Oracle sessions open in three different windows, and I called the function from all three windows with roughly half a second between each call. The first time around it gave me a deadlock, but ever since then it has been working correctly, as expected.
mkmunir wrote: I "attacked" the approach, not you personally. And in my opinion, that approach deserves to be "attacked", because it is wrong.
Well, I do not understand why you are getting so frustrated and making personal attacks on me.
I looked into the issue; the deadlock was because of a forgotten COMMIT command.

And what about a so-called forgotten COMMIT in the future? What about forcing serialisation?
The requirements are 'NO HOLES IN THE NUMBERS GATHERED FROM THE SEQUENCE BY ANY INDEPENDENT PROCESS', and I cannot change these requirements.

This is not a requirement. What you have written is an expected solution for some kind of problem, and that is where the failure is.
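To make the serialisation point concrete, here is a minimal sketch of the classic "gapless number" pattern being discussed. It assumes a counter table `my_sequences(seq_name, seq_value)` (the table name appears later in this thread; the column and function names are illustrative). The `SELECT ... FOR UPDATE` is exactly what forces serialisation: every concurrent caller queues on the counter row, and the lock must be held until the caller's transaction commits, because releasing it earlier would leave a gap if that transaction rolled back.

```sql
-- Sketch only: Oracle PL/SQL, assuming a table
--   my_sequences (seq_name VARCHAR2(30), seq_value NUMBER).
-- The function name get_gapless_number is hypothetical.
CREATE OR REPLACE FUNCTION get_gapless_number (p_seq_name IN VARCHAR2)
  RETURN NUMBER
IS
  l_value NUMBER;
BEGIN
  -- Lock the counter row; all concurrent callers serialise here.
  SELECT seq_value + 1
    INTO l_value
    FROM my_sequences
   WHERE seq_name = p_seq_name
     FOR UPDATE;

  UPDATE my_sequences
     SET seq_value = l_value
   WHERE seq_name = p_seq_name;

  -- The caller must COMMIT (or ROLLBACK) in the same transaction that
  -- uses the number; committing here, or in an autonomous transaction,
  -- would create a gap whenever the caller later rolls back.
  RETURN l_value;
END get_gapless_number;
/
```

This is why "no gaps" and "no serialisation" cannot both be had: the row lock is the mechanism that guarantees gaplessness, and it is also the bottleneck.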
Toon Koppelaars wrote: Nah, not a consensus. I've never considered autonomous transactions as a real cause of deadlocks, but rather a potential cause of data inconsistency.
- The deadlock occurred simply because the setup of the rows in the my_sequences table wasn't committed yet while the function was being tested inside the same session: this caused that session to deadlock with itself. What bothers me, though, is that there apparently seems to be some 'consensus' on this forum that autonomous transactions always cause deadlocks... This, of course, is not true.
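The self-deadlock described above can be sketched as follows. The assumption (consistent with the thread, but not spelled out in it) is that the number-drawing function is declared with `PRAGMA AUTONOMOUS_TRANSACTION` and locks a row in `my_sequences` that the calling session has already locked with an uncommitted change. The autonomous transaction waits for the parent's lock, the parent waits for the autonomous call to return, and Oracle detects this as ORA-00060. Names are illustrative.

```sql
-- Self-deadlock sketch (Oracle PL/SQL, names hypothetical).
-- Precondition: the row ('INVOICE_NO', ...) already exists, committed.
BEGIN
  -- "Setup" step: reset the counter row, but do NOT commit.
  -- This session now holds the row lock.
  UPDATE my_sequences
     SET seq_value = 0
   WHERE seq_name = 'INVOICE_NO';

  -- If get_gapless_number runs with PRAGMA AUTONOMOUS_TRANSACTION, its
  -- SELECT ... FOR UPDATE blocks on the lock this same session holds:
  -- the session deadlocks with itself and Oracle raises ORA-00060.
  dbms_output.put_line(get_gapless_number('INVOICE_NO'));
END;
/
```

Committing the setup rows before calling the function removes this particular deadlock, which matches what was reported earlier in the thread; it does not remove the serialisation inherent in the row lock, nor the gap risk if the autonomous transaction commits a number the parent later rolls back.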