The normal way to avoid this issue is to have a unique index on the table, which prevents duplicate entries.
I wouldn't delete the entry; instead, insert the entry if it doesn't exist and update it if it does (an upsert).
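A minimal sketch of this unique-index-plus-upsert approach, using Python's sqlite3 for illustration; the `settings` table and its columns are hypothetical stand-ins for your schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE settings (
        user_id INTEGER NOT NULL,
        key     TEXT    NOT NULL,
        value   TEXT    NOT NULL,
        UNIQUE (user_id, key)      -- the unique index that prevents duplicates
    )
""")

def upsert(user_id, key, value):
    # Insert if the row doesn't exist, update it if it does;
    # the unique constraint is what makes the ON CONFLICT clause fire.
    conn.execute(
        """INSERT INTO settings (user_id, key, value) VALUES (?, ?, ?)
           ON CONFLICT (user_id, key) DO UPDATE SET value = excluded.value""",
        (user_id, key, value),
    )

upsert(1, "theme", "dark")
upsert(1, "theme", "light")   # second call updates rather than duplicating
rows = conn.execute("SELECT value FROM settings WHERE user_id = 1").fetchall()
print(rows)  # [('light',)]
```

The exact upsert syntax varies by database (`ON CONFLICT` in SQLite/PostgreSQL, `ON DUPLICATE KEY UPDATE` in MySQL, `MERGE` elsewhere), but the principle is the same: let the unique index enforce correctness and handle the conflict in one statement.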
If you lock on a key unique to the user/row (rather than the whole table), your overhead will be about 2 microseconds, which is negligible for an application driven by user interaction. You could instead lock the whole table, which might limit your application to about 100 updates per second but would be much simpler to manage. Frankly, if admins are performing more than 100 updates per second, there is something wrong with your model.
In short, I would just lock the whole table and only optimise if this proved to be an issue.
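A minimal sketch of the per-key locking idea, assuming a single process; the lock registry and `locked_update` helper are hypothetical names, not from any library:

```python
import threading
from collections import defaultdict

_locks = defaultdict(threading.Lock)   # one lock per user/row key
_registry_guard = threading.Lock()     # protects the registry itself

def locked_update(key, update_fn):
    with _registry_guard:
        lock = _locks[key]             # get or create the lock for this key
    with lock:                         # only updates on the SAME key serialize
        return update_fn()

# Usage: updates on different keys proceed concurrently, while two
# updates on key 42 run one after the other.
counter = {"n": 0}
def bump():
    counter["n"] += 1

locked_update(42, bump)
locked_update(42, bump)
print(counter["n"])  # 2
```

Note this only serializes threads within one process; if the application runs on several nodes, you need the lock in the database itself.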
I agree with Peter: use primary keys or secondary unique keys that identify the data in your tables (if you can't, you have a database design problem), and then lock the rows affected by the Admin operation at row level. Row locks only block operations on the same rows until a commit/rollback is issued, so application performance is only affected for the second Admin, and only for milliseconds unless the operations are long-running. This is thread safe and cluster safe.
You should probably consider redefining the Admin operations if they're critical to your application.
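A small sketch of the lock-until-commit behaviour described above. SQLite only locks at database granularity, so this stands in for the row-level `SELECT ... FOR UPDATE` you would use on PostgreSQL or Oracle; the `accounts` table and the two-connection setup are hypothetical:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
admin1 = sqlite3.connect(path, timeout=0.1)   # fail fast instead of waiting
admin2 = sqlite3.connect(path, timeout=0.1)
admin1.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
admin1.execute("INSERT INTO accounts VALUES (1, 100)")
admin1.commit()

# First admin takes a write lock and holds it until commit/rollback.
admin1.execute("BEGIN IMMEDIATE")
admin1.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")

# Second admin's write is now blocked (here it errors after the 0.1 s timeout).
try:
    admin2.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:   # "database is locked"
    blocked = True

admin1.commit()                    # commit releases the lock
admin2.execute("UPDATE accounts SET balance = balance + 5 WHERE id = 1")
admin2.commit()
balance = admin2.execute(
    "SELECT balance FROM accounts WHERE id = 1"
).fetchone()[0]
print(blocked, balance)  # True 95
```

With true row-level locks, the second Admin would only wait when touching the same rows, which is exactly why the answer above recommends them over coarser locking.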