So Damir - do you run into this same locking issue in your applications as well?
sorry I have to correct you :-)
If user B waits, he will wait at the UPDATE statement (because of the implicit lock), and in that case he has already passed the checksum check. So he will not get an error; he will overwrite the changes of user A.
User B will wait until User A's transaction completes. [...] User B will receive an error stating that the data has changed.
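To make the timing in this scenario concrete, here is a sketch of the flow - not the actual APEX internals, and the table and column names are made up:

```sql
-- 1. Page render: the row is read and a checksum of its values is computed.
select seat_no, status from seats where seat_no = 42;  -- checksum saved with the page

-- 2. Page submit: the checksum is re-computed and compared. If user A's
--    transaction has not committed yet, user B still reads the OLD values,
--    so the checksum check passes.

-- 3. The UPDATE then blocks on user A's row lock. When A commits, B's UPDATE
--    resumes and silently overwrites A's change - the checksum was already
--    checked before the wait began.
update seats set status = 'SOLD' where seat_no = 42;
```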
I think I have a compromise. If you implement the wait with 1, 2, or x seconds, we can keep the old behavior where the second user waits. In the case that user A changes nothing, user B's transaction goes through.
This would make sure that the behavior is very similar to the old behavior, but with the advantage that we know the connection is released back to the connection pool after x seconds. And the checksum check is also more consistent.
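As an illustration of this suggestion (hypothetical table and names; the row lock is taken explicitly with a bounded wait):

```sql
-- wait at most 2 seconds for the row lock, instead of waiting indefinitely
-- or failing immediately; raises ORA-30006 if the timeout expires
select seat_no, status
  from seats
 where seat_no = 42
   for update wait 2;
```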
Very good point - and you are correct. That is how Scenario 1 would occur.
I'll look into this. Thanks for making this point and suggestion.
Frankly, no - but not because it never happened; rather because we decided to use our own kind of DML statements.
Your example on one table proves just this - one-table locking is easy to understand and manage! In the real world, several tables are involved in one transaction (foreign keys)... so concentrating the whole thing on just one table is really not a good approach to illustrate the problem.
I do not want to lecture, but successfully locking one "important" table row may involve several hundred other rows in several different tables... which may not be easy to achieve without waiting and... waiting...
Some of us may say this is poor design... probably true... but many of us have to live with such DB designs (inherited from previous people) with little possibility of changing their important parts...
So exposing such designs to the web, without ensuring that transaction processing cannot last longer than it needs to, is simply not acceptable - it is riding a tiger.
So, as shown, it is very important for us to have 100% control over transactions, so we use our own DML processes (mostly implemented through PL/SQL autonomous transaction procedures) and not a single MRU/MRD process from APEX, simply because there is no NOWAIT handling at all.
I hope my position is now clear; please comment on my suggestion to have page-level DML-based transactions... it should be easy to implement and easy to control.
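A rough sketch of what such a hand-written DML process with NOWAIT might look like (all names are hypothetical; ORA-00054 is the "resource busy" error that NOWAIT raises when another session holds the lock):

```sql
declare
  row_locked exception;
  pragma exception_init(row_locked, -54);
  l_status seats.status%type;
begin
  -- fail immediately instead of waiting if another session holds the row
  select status
    into l_status
    from seats
   where seat_no = :p_seat_no
     for update nowait;

  update seats
     set status = 'SOLD'
   where seat_no = :p_seat_no;
exception
  when row_locked then
    raise_application_error(-20001,
      'Seat is being processed by another user - please try again.');
end;
```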
The bottom line of this thread, at least so far, seems very disturbing to a non-expert user like me.
Damir is saying that he avoids using the APEX automatic DML processes because (to put it simply) they are not good enough for his needs. For me, the situation is reversed: I try to use these built-in processes as much as possible, because I want to avoid all these very complicated concurrency-control issues, mainly lost updates. However, Patrick's scenario 1 demonstrates that this might not be enough. What disturbs me the most is that the flaw seems to be in the system, and not in the way we are using it within APEX.
I can't argue with Joel's vast experience, but Patrick's scenario seems not very rare, especially in the retail market. I'll give a specific case. We've been asked to look into migrating software for selling tickets to entertainment shows to the APEX environment. The software uses an algorithm that proposes the currently available "best seat" in the house. Because of that, it is entirely possible that two (or more) simultaneous callers are "competing" for the same seats. According to scenario 1, I can do everything right within APEX, using the built-in processes, and still sell two tickets for the same seat - using Patrick's description: "If user B waits," - while the seat is being sold to user A - "he will wait at the UPDATE statement (because of the implicit lock), in that case he has already passed the checksum check." - it "thinks" the seat is still open - "So he will not get an error, he will overwrite the changes of user A" - selling a second ticket for the same seat.
Am I missing the point here?
IMHO, the problem is not that Oracle will post some records twice - no!
This is Oracle - the best database in the world - and ORA-00001 has absolute priority; don't worry about that... it has been part of the database core for a long time!
The problem is how the user side will behave... what the end user will see and what the message from the application will be... you know how friendly the interface should be...
In your case, if user B tries to post the record (free seat) at the end of the page, he will face an error page with an ORA message telling him that this record (free seat) doesn't exist anymore. Of course, if your reservation tries to perform some pre-reservation tasks, their success will depend on how your code is written...
There is an interesting situation: if your client B has tried to reserve seats 4, 5, and 6, and 5 and 6 are gone... he will get an error, because the transaction is done in an "all or nothing" way... there was no partial success!
On the next try, maybe someone else will reserve seat 4, and so on... but this is life, and people should keep that in mind.
If I may say so: in simple relations (few tables involved) with not too many concurrent users... automatic DML is probably OK.
In all other cases, yes - but with some of the "securing" mechanisms we are talking about...
Real life and experience will prove these words...
Hope this helps you a little...
>> Your example on one table proves just this - one-table locking is easy to understand and manage! In the real world, several tables are involved in one transaction (foreign keys)... so concentrating the whole thing on just one table is really not a good approach to illustrate the problem.
No question. Real world transactions involve many tables and not just one. But in the case of Automatic DML on a page in APEX, by definition, this is DML against just one table or view.
Patrick's scenario is real and plausible, although I believe it to be quite rare.
Regardless, after some review, I think the correct solution is to issue the SELECT FOR UPDATE with NOWAIT at the time of checksum computation.
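In other words (a sketch of the revised flow, not the actual implementation; names are illustrative), the submit-time processing would become:

```sql
-- acquire the row lock FIRST; raises ORA-00054 immediately if another
-- session holds the row, instead of blocking on the later UPDATE
select seat_no, status
  from seats
 where seat_no = 42
   for update nowait;

-- only after the lock is held is the checksum re-computed and compared;
-- nobody can change the row between the check and the UPDATE
update seats set status = 'SOLD' where seat_no = 42;
```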
Joel, you are the man! :-)
I'm late to this thread, but I don't understand why the pre-select is needed. Couldn't the update verify whether the data has changed since the row was read?
The proposed solution will lock data for a longer period of time, and that could cause busy systems to run into more performance problems.
Why couldn't the row SCN be checked by the update instead? Wouldn't that achieve equally good overwrite protection while minimizing the time the lock is held?
Do we really want to fail this immediately? I'd think that most of the time this will cause an error when the user wouldn't even have noticed the slowdown. Most transactions would probably hold the lock for under half a second.
I agree that any risk of overwriting is 100% unacceptable. I would just not want to increase the performance impact more than necessary.
Thanks for your feedback and input. We welcome it.
ora_rowscn would be the ideal way to do this. It would obviate the current method employed for optimistic locking. However, ora_rowscn first became available in Database 10gR1, and Application Express 3.0 will continue to support 9iR2 and higher.
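For databases where it is available, the ora_rowscn approach would look roughly like this (note that row-level rather than block-level SCN granularity requires the table to be created with ROWDEPENDENCIES; names are illustrative):

```sql
-- at render time, remember the row's SCN along with its values
select seat_no, status, ora_rowscn
  from seats
 where seat_no = 42;

-- at submit time, update only if the row has not changed since;
-- zero rows updated means someone else modified it in the meantime
update seats
   set status = 'SOLD'
 where seat_no = 42
   and ora_rowscn = :saved_scn;
```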
Ultimately, I think the point, as Patrick pointed out, is that if you ever get into a state today where you are waiting on a locked row, you're waiting on the UPDATE statement and you're already past the checksum computation. So you do run the rare risk of lost updates.
I'm open to suggestions/comments from any customer, but I think we would want to fail this immediately with NOWAIT. Why? If you're ever in a state where you're waiting on a particular row at the time of update, the odds are very great (not 100%, but very high) that the data was changed for that particular row already. So even if you wait and you attempt to issue your UPDATE statement after you acquire the lock, it's probably going to fail anyway (due to the checksum comparison).
I agree that it could not be done that way in 9i. It would still be nice to use it on 10g databases. Couldn't the checksum test be done in SQL, so that the test is performed when the update is executed?
The problem I see is that this will occur mostly when we have to cope with systems that weren't designed for web availability. For example, say a system where FOR UPDATE is used to lock a parent row while dependent rows are updated, to ensure that only one process manipulates the data at any given point in time. Then we would fail updates even though the data wouldn't have changed by the time the lock is released.
I don't think locking before checksumming is necessarily the solution for 100% of all applications. Having the option to do it that way may help in some cases.
Just to throw out another idea for how to do it: in 9i, could the update statement be written such that all columns in the table are retrieved when the row is selected, so the WHERE clause can refer to the values they used to have? That would allow optimistic locking and still detect changes when the update is issued (maybe with NOWAIT).
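Something along these lines (a sketch only; the :old_* binds are the values read when the page was rendered, and the column names are made up):

```sql
update seats
   set status = 'SOLD'
 where seat_no = :old_seat_no
   and status  = :old_status     -- repeat for every column read at render time
   and price   = :old_price;
-- zero rows updated => the row was changed (or deleted) by another session,
-- so the application can report a lost-update conflict instead of overwriting
```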
I'm not challenging the Oracle database in any way, and you don't need to convince me of its merits, but still, this thread is about a real potential problem.
Well, the problem is that this seems not always to be the case. According to Patrick's first scenario, which was confirmed by Joel ("Very good point - and you are correct. That is how Scenario 1 would occur."), my situation can yield two tickets for the same seat, and not an error message.
"In your case, if user B tries to post the record (free seat) at the end of the page, he will face an error page with an ORA message telling him that this record (free seat) doesn't exist anymore."
I'm talking about a very simple case - only two users, one table, and updates using APEX automatic DML. It seems that in a specific situation - which is out of my control, it is all about timing - I can face a lost-update problem, although I'm using only built-in APEX resources. That constitutes a problem for me.
"If I may say so: in simple relations (few tables involved) with not too many concurrent users... automatic DML is probably OK."
Anyway, I started my reply to Damir without reading all the following posts, especially from Joel (had to leave, and just came back). Now it seems like Joel is aware of the situation, so I'm sure a proper solution will be devised and implemented.
Updating one table that is a child of another table, with an FK to it, requires locking the parent-table columns involved in the relation... so am I missing something, or in such cases do we really speak about more than one table for successful locking?
So in such cases, "select ... for update of ... NOWAIT" is really more than welcome.
A "lost update" can and will happen in a client-server environment under some circumstances... so the main thing is to show that to the user, as nicely as possible... but there is no way to prevent it entirely. Here is pseudo code for the update process (after validation and computation).
Think that in your case there is a process like:

    for i in 1 .. [checked fields] loop
      begin
        -- each reservation runs as an autonomous transaction and commits on its own
        reserve seat i;
      exception
        when others then
          remember records that are not updated;
      end;
    end loop;

In this way, some of the reservations will be placed, and in case of some problems it is more than easy to show the result of the update on the next page. If you need an all-or-nothing process, then do not use autonomous transactions but code like this:

    regular_exit := true;
    for i in 1 .. [checked fields] loop
      begin
        reserve seat i;
      exception
        when others then
          rollback; -- explicit
          regular_exit := false;
          exit;
      end;
    end loop;
    if regular_exit then
      commit;
    end if;

Hope this helps....