
How do very big DBMSs, such as banking or rail-reservation systems, manage ACID?

831755 Member Posts: 21
edited May 28, 2013 7:09AM in General Database Discussions
With millions of operations and transactions being performed concurrently, how are locking, atomicity, concurrency, etc. managed? Is only a single transaction allowed to execute at a time, or many? Can someone please explain it in detail or give me a helpful link?


  • Billy Verreynne
    Billy Verreynne Software Engineer Member Posts: 28,799 Red Diamond
    A million banking transactions are not all on the same single account - so there is very little serialisation required.

    Of course a proper relational data model, and sane code, are also required.
  • 831755
    831755 Member Posts: 21
Suppose there's a shared account where 25 joint account holders are performing simultaneous transactions. Then what? Can you please explain in a little more detail how money gets transferred from one account to another? I'm not talking about the straightforward method. Suppose money is deducted from my account and not yet added to the other party's account: where does the money lie at that moment? In a somewhat detailed way, please.
  • 558383
    558383 Member Posts: 6,961 Silver Crown
    edited May 27, 2013 3:40PM
    A very old link but still helpful is Scaling Oracle 8i, Scale Abilities by James Morle (free ebook):

You can also try to read online the "Benchmarking Concepts" chapter in Pro Oracle Database 10g RAC on Linux: the section on the database configuration used by Oracle Corp. for the TPC-C benchmark is really interesting.
  • user4530562
    user4530562 Member Posts: 45 Blue Ribbon
    edited May 28, 2013 7:09AM
    I'll try to explain the basic concept, without going too much into details and also without considering any special cases (such as bitmap indexes, where the locking scheme may slightly differ). I will also ignore some optimizations (such as delayed commit cleanout) in order to keep things simple. You'll find more detailed information in various Oracle books such as "Oracle Database Architecture" by Tom Kyte or "Oracle Core" by Jonathan Lewis or in online resources such as (in particular chapters 4 and 25)

    In order to keep things simple (as I am only explaining the basic concept), I'll only talk about transactions modifying simple tables (i.e. "heap organized tables" - the most common type of tables) without any indexes. Please note that the table rows are stored in database blocks, which are units of configurable size, most often 8KByte. A block is stored contiguously on disk and database blocks are the minimal amount of data that Oracle will read from disk into memory (i.e. if you want to read a row of a table, Oracle will look for the block containing the row and read the entire block from disk). Also if more space is required to store the table data, this space will be allocated in multiples of blocks (i.e. Oracle will not allocate just the space required to insert the new row, but it will allocate a few blocks).

    As of your first question, many transactions may simultaneously operate on the database and even on the same database objects (e.g. tables), although there are limits to the concurrency (which are configurable to a large extent).

    When a transaction starts, it gets a slot in a system-wide transaction table* where all transactions are registered (technically this transaction table* is located in the undo segment header blocks). This imposes a first limit on concurrency - you may not have more concurrent transactions than there are transaction table* slots available. The maximum number of concurrent transactions can be configured using the Oracle initialization parameter TRANSACTIONS (see

    When your transaction wants to modify or delete a row, it will go to the database block containing that row and take an entry in the so called ITL ("interested transaction list"). The ITL is a table* at the beginning of each database block, where transactions need to get a slot before they may modify any rows in that database block. So this imposes another limit on concurrency - the maximum number of transactions concurrently modifying one database block is limited by the number of available slots in the ITL. The number of slots in the ITL may be configured when you create the table (up to an absolute maximum of 255).

Then there is one byte for each row stored in the database block (sometimes referred to as the "lock byte"). Once your transaction has got a slot in the transaction table* and a slot in the ITL, it will write its ITL slot number into the lock byte of every row it wants to modify in that database block (and if it wants to modify multiple database blocks, it will have to acquire a slot in each of these database blocks' ITLs and mark the rows accordingly). If one of the rows is already locked by another transaction T2, the transaction we are looking at will be blocked, i.e. it will have to wait until T2 either commits or rolls back its changes. This is a final limitation to concurrency - one row may only be modified/deleted by one transaction at a time.
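The slot-acquisition sequence described above (ITL slot first, then the row's lock byte) can be sketched as a toy Python simulation. The class and field names are my own invention for illustration, not Oracle internals, and the transaction-table step is omitted:

```python
class Block:
    """Toy model of a database block: ITL slots plus one lock byte per row."""
    def __init__(self, num_rows, itl_slots=2):
        self.itl = [None] * itl_slots          # each slot holds a transaction id or None
        self.lock_bytes = [0] * num_rows       # 0 = unlocked, else ITL slot number + 1

    def lock_row(self, txn_id, row):
        if self.lock_bytes[row]:               # row already locked: caller must wait
            return False                       # until that transaction commits/rolls back
        if txn_id in self.itl:                 # transaction already owns an ITL slot here
            slot = self.itl.index(txn_id)
        else:
            try:
                slot = self.itl.index(None)    # grab a free ITL slot
            except ValueError:
                return False                   # ITL full: another limit on concurrency
            self.itl[slot] = txn_id
        self.lock_bytes[row] = slot + 1        # mark the row with our ITL slot number
        return True

blk = Block(num_rows=4, itl_slots=2)
assert blk.lock_row("T1", 0)       # T1 locks row 0
assert blk.lock_row("T2", 1)       # T2 locks row 1 via the second ITL slot
assert not blk.lock_row("T3", 2)   # ITL full: T3 must wait
assert not blk.lock_row("T2", 0)   # row 0 already locked by T1
```

The two failure cases mirror the two limits described above: an exhausted ITL, and a lock byte already set by another transaction.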

    As mentioned in the first paragraph, this is only a sketch of the basic mechanisms. There are a lot of additional details and optimizations which make things more complicated. But if you are interested in that level of details, I think you'll be better served by one of the resources I mentioned at the beginning.

Don't get confused by the fact that I use the word "table" in two ways. The first is to refer to a database table (i.e. the thing which you SELECT FROM and that is created with CREATE TABLE XY). The other is to refer to a data structure which contains multiple elements of the same type (such as an array in a programming language). I marked the references to the data structure with an asterisk (*) to make the difference clear.

  • 831755
    831755 Member Posts: 21
Thanks a lot, dear.
This was what I expected and was looking for.
  • user4530562
    user4530562 Member Posts: 45 Blue Ribbon
    While I believe I answered the question about how transactions can operate concurrently, I notice I didn't answer your question concerning ACID properties yet. I believe it is mainly the Atomicity and Isolation properties that matter in this context.

    In Oracle, modifications by a transaction are handled as follows:
- While a transaction modifies database objects (tables), all modifications are applied immediately to the corresponding database blocks (provided the transaction was able to lock the corresponding rows; see my previous post). However, the modifications remain invisible to the rest of the world (i.e. to other sessions). Other sessions will see the effect of the modifications only once the transaction commits.
- Any SQL statement that started before the transaction commits will continue to see the database without the transaction's modifications, i.e. any such SQL statement will not see the modifications done by the transaction throughout the statement's entire duration.
- Despite the previous two claims, readers are never blocked by writers. I.e. a SELECT statement will never have to wait for a transaction to complete its modifications on a table - the SELECT statement can execute concurrently with any transactions, even if they are modifying the database objects (tables) being queried.

    The mechanism to ensure this is UNDO (previously known as ROLLBACK). UNDO is a large topic, and Oracle books often dedicate an entire chapter (or more) to this one topic.
    Conceptually, UNDO is storage, where the database records for any modification done by a transaction how this modification can be reversed (e.g. if you delete a row from a table, the UNDO will contain instructions to insert the row with the original column values).
Sessions operating concurrently use this UNDO to reverse modifications done by yet-uncommitted transactions, or to reverse modifications done by transactions that committed after the start of this session's currently executing SQL statement. This reversion is not done on the "official copy" of the respective data blocks (otherwise the transaction's modifications would be lost). It is done on a clone of this "official copy" instead.
    As one may imagine, the storage available to record UNDO information is limited. This is the source of the ORA-01555 "Snapshot too old" error. This error may occur if a session needs to reverse changes in a data block (e.g. because it is executing a long running SQL) and if during the reversion process the session notices that the required UNDO information is no longer available.
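The clone-and-reverse idea can be sketched in a few lines of Python. This is a deliberately tiny model, not Oracle's actual undo format: I assume a single block with one column, a simple commit-SCN numbering, and an undo log that is never reclaimed (so no ORA-01555 analog):

```python
# Toy sketch of undo-based read consistency (names are illustrative, not Oracle's).
current = {"balance": 80}                     # the "official" current block contents
undo_log = [                                  # newest first: (commit_scn, column, old_value)
    (105, "balance", 100),                    # txn committed at SCN 105 changed 100 -> 80
]

def consistent_read(query_scn):
    """Clone the current block and roll it back to how it looked at query_scn."""
    clone = dict(current)                     # never modify the official copy
    for commit_scn, column, old_value in undo_log:
        if commit_scn > query_scn:            # change committed after our query started
            clone[column] = old_value         # apply the undo record to the clone
    return clone

print(consistent_read(query_scn=110))  # {'balance': 80}  - sees the committed change
print(consistent_read(query_scn=100)) # {'balance': 100} - change reversed on the clone
```

A statement that started at SCN 100 reads the pre-commit value from the clone, while the official block keeps the committed value; that is the mechanism behind "readers are never blocked by writers".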

    Issues I didn't answer in this short conceptual overview include:
    - How is the UNDO storage managed / allocated to the transactions?
    - How is the UNDO information structured (how exactly is the information on modifications recorded - what does an UNDO record look like)?
- How can a session efficiently find the UNDO information that needs to be applied to a particular data block?
- How does a session notice that a particular data block is not suitable for its needs and that UNDO needs to be applied?

    You find detailed answers to these (and many other) questions in the excellent book "Oracle Core" written by Jonathan Lewis.
  • Billy Verreynne
    Billy Verreynne Software Engineer Member Posts: 28,799 Red Diamond
    828752 wrote:
Suppose there's a shared account where 25 joint account holders are performing simultaneous transactions. Then what? Can you please explain in a little more detail how money gets transferred from one account to another? I'm not talking about the straightforward method. Suppose money is deducted from my account and not yet added to the other party's account: where does the money lie at that moment?
That depends entirely on the data model. It will dictate how/what/where/when locking is needed to move money from one account to another.

It could require only 2 row locks: From Account and To Account. Subtract money from the first account (if sufficient funds), add money to the balance of the second account. Commit. This is a very short and simplistic transaction, for a very simplistic data model.
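The two-row-lock transfer above can be sketched with Python's sqlite3 module. The table and column names are invented for illustration, and SQLite's locking is coarser than Oracle's row-level locks, so this only demonstrates the atomic debit/credit/commit pattern, not the locking behavior itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500), (2, 100)])

def transfer(conn, from_id, to_id, amount):
    try:
        with conn:  # one transaction: both updates commit together or not at all
            cur = conn.execute(
                "UPDATE accounts SET balance = balance - ? "
                "WHERE id = ? AND balance >= ?", (amount, from_id, amount))
            if cur.rowcount == 0:             # insufficient funds: roll everything back
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, to_id))
        return True
    except ValueError:
        return False

assert transfer(conn, 1, 2, 200)              # succeeds: 500 -> 300, 100 -> 300
assert not transfer(conn, 1, 2, 1000)         # fails: both balances unchanged
print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# [(1, 300), (2, 300)]
```

The `with conn:` block is what answers the earlier "where does the money lie in between?" question: in-flight modifications exist only inside the uncommitted transaction, and a failure rolls both updates back so the money is never half-transferred.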

    A more complex data model may also have a Transaction table that is used to record the transaction. In which case, using the above scenario, an insert into the Transaction table (including debit, credit, bank fees, etc) is also required.

Some models could use batch processing, where the Transaction table says what needs to happen - i.e. read a transaction from the table, apply it, and commit. This was typical back in the '80s when batch processing was the norm. (I recall writing many a COBOL program that required processing invoices and accounts in this fashion, with data coming from sequential devices like magnetic tapes.)
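The read-apply-commit batch loop can be sketched as follows; again the schema (a `pending` table with an `applied` flag) is my own invention to illustrate the pattern, not any particular banking system:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("CREATE TABLE pending (id INTEGER PRIMARY KEY, acct INTEGER, "
             "amount INTEGER, applied INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.executemany("INSERT INTO pending (acct, amount) VALUES (?, ?)",
                 [(1, -30), (2, 30), (1, 10)])

# Batch run: read each pending transaction, apply it, and commit per row,
# in the style of the tape-driven batch jobs described above.
for txn_id, acct, amount in conn.execute(
        "SELECT id, acct, amount FROM pending WHERE applied = 0 ORDER BY id").fetchall():
    with conn:                                 # each applied row is its own commit
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, acct))
        conn.execute("UPDATE pending SET applied = 1 WHERE id = ?", (txn_id,))

print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# [(1, 80), (2, 80)]
```

Because each row is applied and flagged in the same small transaction, an interrupted batch run can simply be restarted and will pick up the remaining unapplied rows.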

    The primary "thing" that determines the locking that is required is the business transaction and the data model. Then this needs to be implemented into the RDBMS product, like Oracle. And as products differ, implementations will differ.

In many cases, the architecture used also has an impact on how locking is technically applied. For stateless clients, pessimistic locking is a poor choice, and optimistic locking is used instead.
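A common way to implement optimistic locking is a version column that the update both checks and increments. Here is a minimal sqlite3 sketch of that pattern (the schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
             "balance INTEGER, version INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100, 1)")

def optimistic_update(conn, acct_id, new_balance, expected_version):
    """Write back only if nobody else changed the row since we read it."""
    with conn:
        cur = conn.execute(
            "UPDATE accounts SET balance = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_balance, acct_id, expected_version))
    return cur.rowcount == 1                   # 0 rows means a concurrent update won

# A stateless client read (balance=100, version=1) and now tries to write back.
assert optimistic_update(conn, 1, 150, expected_version=1)      # succeeds, version -> 2
assert not optimistic_update(conn, 1, 175, expected_version=1)  # stale version: must re-read
```

No lock is held between the read and the write, which is exactly why this suits stateless clients: a failed version check just means the client re-reads the row and retries.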

    The bottom line is that locking is determined by the data model and what the business transaction does. Oracle is not going to solve a locking/ACID problem for you if your data model is poorly designed, and your app code poorly written for executing the business transaction.
This discussion has been closed.