One of the principles of relational theory is that you never store a value that can be derived. You will always know how many volunteers there are in a project by doing a count query. Storing this value may make it quicker to retrieve, but in the long run will be more expensive to maintain, and will always leave you open to integrity problems.
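For example, a sketch of deriving the count on demand (assuming the Bridge_Entity table and project_id column named later in this thread):

```sql
-- One row per project-volunteer signup; the count is always derivable
SELECT project_id, COUNT(*) AS volunteers_signed_up
  FROM bridge_entity
 GROUP BY project_id;
```

Because the count is computed from the signup rows themselves, it can never drift out of sync with them.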
Also, creating database objects with mixed-case (quoted) identifiers is something I would consider a sackable offence, e.g. CREATE OR REPLACE trigger "BI_Volunteers_currently_signed_up"
I agree with Andre.
If you want to have a "running counter" in the data, then any transaction affecting the count needs to update the counter.
This requires the counter to be locked in order for a transaction to make the change. And released only when that transaction is committed or rolled back.
The end result? Serialisation. Only ONE transaction at a time can happen as ALL transactions need to get into the queue waiting for a lock on the counter, in order to update it.
This kills performance and scalability. Annihilates it. Completely. You can have the most powerful computer on this planet, capable of doing a billion transactions every second. But with your approach and design, it will crawl along at the speed of only being able to process a single transaction at a time.
This approach is exactly NOT how one designs multi-user and multi-process systems. Serialisation has a very heavy penalty. And needs to be avoided.
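A sketch of the problem (assuming the counter lives in the Volunteers_currently_signed_up column of the volunteers table, as in the trigger approach discussed in this thread):

```sql
-- Session 1: increments the counter; this takes a row lock on the
-- project's volunteers row that is held until COMMIT or ROLLBACK
UPDATE volunteers
   SET volunteers_currently_signed_up = volunteers_currently_signed_up + 1
 WHERE project_id = 42;

-- Session 2: the same statement for project 42 now BLOCKS on session 1's
-- row lock. Every signup transaction for that project queues up behind
-- the counter row -- one transaction at a time, i.e. serialisation.
```

The lock is unavoidable: without it, two concurrent sessions would both read the same starting value and one increment would be lost.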
985553 wrote: Bad idea? A bit of an understatement. ;-)
So basically, you would deem it a bad idea, and the same information should just be communicated to the user by report based on an sql query?
If report performance is an issue (millions of rows to process to get count), then a fast refresh materialised view (aggregation that includes the counts) can be considered. The Oracle Data Warehouse guide covers this in detail.
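A rough sketch of what such a materialised view could look like (table and column names assumed from this thread; see the Data Warehousing Guide for the full fast-refresh requirements):

```sql
-- A materialised view log on the base table is required for fast refresh
CREATE MATERIALIZED VIEW LOG ON bridge_entity
  WITH ROWID (project_id) INCLUDING NEW VALUES;

-- Aggregate MV: Oracle maintains the counts incrementally at commit time
CREATE MATERIALIZED VIEW mv_volunteer_counts
  REFRESH FAST ON COMMIT
AS
SELECT project_id, COUNT(*) AS volunteers_signed_up
  FROM bridge_entity
 GROUP BY project_id;
```

This keeps the count pre-computed for fast reporting while leaving its maintenance to the database, rather than to hand-written trigger code.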
This is Sanjit here. I analyzed your thread. If I understand correctly, you want the Volunteers_currently_signed_up
column to be incremented by 1 every time a record is inserted into the Bridge_Entity table, right?
Based on that requirement, here is a trigger that should resolve your issue.
CREATE OR REPLACE TRIGGER bi_volunteers_signed_up
BEFORE INSERT ON bridge_entity
FOR EACH ROW
DECLARE
  v_signed_up NUMBER;
BEGIN
  -- read the current counter for this project (0 if still null)
  SELECT NVL(MAX(volunteers_currently_signed_up), 0) INTO v_signed_up
    FROM volunteers WHERE project_id = :new.project_id;
  v_signed_up := v_signed_up + 1;
  UPDATE volunteers SET volunteers_currently_signed_up = v_signed_up
    WHERE project_id = :new.project_id;
END;
/
Here, Bridge_entity and volunteers are two different tables; the primary key is on the
Projects table, and both tables reference it with foreign keys.
Initially, the Volunteers_currently_signed_up column will be null for every project_id inserted into the
volunteers table. Once you create this trigger, it will automatically update the Volunteers_currently_signed_up
column each time a new record is inserted into the Bridge_Entity table.
Hope this works for you.
You run this SQL:
select nvl(max(Volunteers_currently_signed_up),0) into v_signed_up from volunteers where project_id=:new.project_id;
This code is not thread-safe.
Between that select statement and the following statement:
v_signed_up:=v_signed_up+1;
..other transactions (same project) could have committed and changed the signed-up volunteer total. Which means your calculate-the-count statement is now using the wrong total.
To get the correct total means locking all rows in order to do the count and update the total, preventing any other transaction from changing the answer you are using in the update. This is called serialisation. Serialisation kills performance.
This is basic database stuff. Get it wrong and your approach and code corrupt valuable business data, and result in business decisions and reports based on data that is wrong.