5 Replies Latest reply: Jan 27, 2012 3:20 AM by Andreas S.

    How to save Timestamp of commit into a table

    Andreas S.
      Hi there,

      Is there any way to insert the timestamp of the commit into the table that is committed?

      I have multiple jobs that process messages and run in parallel. The processing time differs depending on the message type. All message descriptions are stored in a table that has an ascending ID as its primary key.

      When our client asks for new messages, it runs: select * from table where id > max_id_of_last_result;

      So, when running in one instance only, the client gets all messages (no duplicates and no losses).

      When running in parallel the following happens:

      1. Client asks for new messages. max_id of result is 100.
      2. Instance 1 is processing message 1. It gets ID 101 from sequence.
      3. Instance 2 is processing message 2. It gets ID 102 from sequence.
      4. Instance 2 is faster than instance 1. It commits its transaction.
      5. Client asks for new messages. It gets ID 102 and saves it for further requests.
      6. Instance 1 has finished. Commit.
      7. Client asks for new messages greater than 102. -> It will not get 101.

      It is not possible to fix that problem in the client because every customer is only allowed to view his own messages. So the client cannot tell whether a gap is caused by other users' messages that it is not allowed to view or by the situation above.

      We are looking for ways to solve this problem. Our aim is maximum scalability with regard to the number of message-processing instances.


      Is there any way to save the commit time into the table?
      If so, we could map the commit timestamp to the message_ids. Or is it possible to save the SCN into the table while committing the transaction that generates that SCN? (Probably not.)

      I am looking forward to hearing your solutions to this problem.

      Thanks a lot,
      Andreas
        • 1. Re: How to save Timestamp of commit into a table
          911532
          Hi,

           I think you could solve this using LOCAL_TRANSACTION_ID.

           It comes from the DBMS_TRANSACTION package and returns a unique id (local to the instance) for the current transaction. This way, you could filter on this id and map it to your message_ids. It's just something we used some time ago for parallel processes, but I do not know whether it is applicable to your case.
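
           A minimal sketch of that idea; the table messages, the sequence messages_seq and the column txn_id are just placeholders for your own objects:

             DECLARE
               l_txn_id VARCHAR2(200);
             BEGIN
               -- start (or join) the current transaction and remember its local id
               l_txn_id := DBMS_TRANSACTION.LOCAL_TRANSACTION_ID(create_transaction => TRUE);

               INSERT INTO messages (id, payload, txn_id)
               VALUES (messages_seq.NEXTVAL, 'some payload', l_txn_id);

               COMMIT;
             END;
             /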

          Cheers
          • 2. Re: How to save Timestamp of commit into a table
            TPD-Opitz
            create table test_tab (
            attr_1 number,
            test_ts timestamp default systimestamp not null);
            
            insert into test_tab(attr_1) select level from dual connect by level < 30;
            29 rows created.
            
            select * from  test_tab;
            
                ATTR_1 TEST_TS                        
            ---------- -------------------------------
                     1 26-JAN-12 01.31.41.380193 PM   
                     2 26-JAN-12 01.31.41.380193 PM   
                     3 26-JAN-12 01.31.41.380193 PM   
                     4 26-JAN-12 01.31.41.380193 PM   
                     5 26-JAN-12 01.31.41.380193 PM   
                     6 26-JAN-12 01.31.41.380193 PM   
                     7 26-JAN-12 01.31.41.380193 PM   
                     8 26-JAN-12 01.31.41.380193 PM   
                     9 26-JAN-12 01.31.41.380193 PM   
                    10 26-JAN-12 01.31.41.380193 PM   
                    11 26-JAN-12 01.31.41.380193 PM   
                    12 26-JAN-12 01.31.41.380193 PM   
                    13 26-JAN-12 01.31.41.380193 PM   
                    14 26-JAN-12 01.31.41.380193 PM   
                    15 26-JAN-12 01.31.41.380193 PM   
                    16 26-JAN-12 01.31.41.380193 PM   
                    17 26-JAN-12 01.31.41.380193 PM   
                    18 26-JAN-12 01.31.41.380193 PM   
                    19 26-JAN-12 01.31.41.380193 PM   
                    20 26-JAN-12 01.31.41.380193 PM   
                    21 26-JAN-12 01.31.41.380193 PM   
                    22 26-JAN-12 01.31.41.380193 PM   
                    23 26-JAN-12 01.31.41.380193 PM   
                    24 26-JAN-12 01.31.41.380193 PM   
                    25 26-JAN-12 01.31.41.380193 PM   
                    26 26-JAN-12 01.31.41.380193 PM   
                    27 26-JAN-12 01.31.41.380193 PM   
                    28 26-JAN-12 01.31.41.380193 PM   
                    29 26-JAN-12 01.31.41.380193 PM   
            
            29 rows selected.
            Oracle seems to be very fast... ;o)

            bye
            TPD
            • 3. Re: How to save Timestamp of commit into a table
              rp0428
              Oracle does not notify you that data has been committed until after it has at least written the data to the redo logs. At that point the contents of every record included in the transaction are, obviously, frozen; if they weren't, then it wouldn't be committed yet (DOH!).

              So the only way is to use the SCN from the commit that was issued, and you won't have access to it until after the data is safely tucked away.

              That suggests that you need to commit, retrieve the SCN associated with that commit, convert the SCN immediately to a timestamp, and then save the timestamp and a message-correlated ID value to a table.
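
              A rough sketch of that commit-then-record sequence; messages, messages_seq and message_commit_log are placeholder objects, you need EXECUTE on DBMS_FLASHBACK, and the SCN read just after the commit is only an upper bound on the actual commit SCN:

                DECLARE
                  l_id  NUMBER;
                  l_scn NUMBER;
                BEGIN
                  -- the transaction that creates the message
                  INSERT INTO messages (id, payload)
                  VALUES (messages_seq.NEXTVAL, 'some payload')
                  RETURNING id INTO l_id;
                  COMMIT;

                  -- grab an SCN just after the commit and persist it with its timestamp
                  l_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;

                  INSERT INTO message_commit_log (message_id, commit_scn, commit_ts)
                  VALUES (l_id, l_scn, SCN_TO_TIMESTAMP(l_scn));
                  COMMIT;
                END;
                /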

              The only way to do all of that before another user tries to query data that might fall in a gap is to lock the row other users need to get IDs, wrap all of the above in a transaction, and then release the lock.

              The issue you pointed out is why you can't rely on putting SYSDATE in your query and storing it in a date field as part of an INSERT/UPDATE. The actual commit may (or may not) happen some time in the future.

              Ah - the classic serialization problem!
              • 4. Re: How to save Timestamp of commit into a table
                894085
                Andreas S. wrote:
                Is there any way to insert the timestamp of the commit into the table that is committed?
                [...]
                Aside from the obvious logical concurrency issues you're mentioning, ascending sequences = OLTP performance problem, especially in parallel queue-based systems - it will ultimately limit your scalability, especially if you are already contending across commits. In fact, it may already be the index maintenance contention that is causing the symptom of some sessions appearing faster than others. I guarantee that if you plug a dozen concurrent consumers into your system you'll see buffer latch waits and session serialisation whenever they are concurrent.

                Have you considered using something like continuous query notification? It will give you what you want with a whole lot less hassle than rolling your own solution, by sending an explicit (optionally filtered) notification to your registered clients when the underlying data changes, from which you can acquire the rowids of the changed (new) records. (You don't even need to maintain a session to get the notifications; you just need to keep a TCP port open.) There is an upper limit on throughput beyond which this ceases to be a good idea, i.e. the time to process a callback and make another round-trip to fetch your rows could intrinsically lag your insert rate. If your message types/subscribers are bursty or intermittent, then it could be your ticket. It would work for something like registrations, reservations or bookings, but not for something with a high-frequency update rate like a market data bus or real-time data capture from mechanical process automation.
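
                A bare-bones sketch of the registration step with DBMS_CQ_NOTIFICATION; MSG_OWNER.MSG_CHANGE_CALLBACK is a callback procedure you would still have to write (it receives a SYS.CQ_NOTIFICATION$_DESCRIPTOR describing the change), messages is again a placeholder table, and the registering user needs the CHANGE NOTIFICATION privilege:

                  DECLARE
                    l_reg_id NUMBER;
                    l_reg    SYS.CQ_NOTIFICATION$_REG_INFO;
                    l_dummy  NUMBER;
                  BEGIN
                    -- rowid-level notifications, delivered to the callback procedure
                    l_reg := SYS.CQ_NOTIFICATION$_REG_INFO(
                               'MSG_OWNER.MSG_CHANGE_CALLBACK',
                               DBMS_CQ_NOTIFICATION.QOS_ROWIDS,
                               0, 0, 0);

                    l_reg_id := DBMS_CQ_NOTIFICATION.NEW_REG_START(l_reg);

                    -- any query executed between NEW_REG_START and REG_END joins the registration
                    SELECT COUNT(*) INTO l_dummy FROM messages;

                    DBMS_CQ_NOTIFICATION.REG_END;
                  END;
                  /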

                Otherwise you might be able to query by SCN rather than by a sequence ID, which could be a kind of proxy for your commit timestamp? I've never done this myself, and I'm aware that there are certain traps to getting involved with SCNs. (Already suggested while I was answering - apologies to the previous poster.)
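
                For reference, ORA_ROWSCN plus SCN_TO_TIMESTAMP is one way to approximate that: ORA_ROWSCN reports the (approximate) commit SCN of each row, tracked per row rather than per block if the table is created with ROWDEPENDENCIES. Among the traps: ORA_ROWSCN cannot be indexed, so the poll is a full scan, and SCN_TO_TIMESTAMP only maps reasonably recent SCNs. A sketch, again using a placeholder messages table:

                  -- ROWDEPENDENCIES keeps the commit SCN per row instead of per block
                  CREATE TABLE messages (
                    id      NUMBER PRIMARY KEY,
                    payload VARCHAR2(4000)
                  ) ROWDEPENDENCIES;

                  -- client poll: everything committed since the SCN seen last time
                  SELECT id, payload, ORA_ROWSCN, SCN_TO_TIMESTAMP(ORA_ROWSCN) AS commit_ts
                  FROM   messages
                  WHERE  ORA_ROWSCN > :last_seen_scn;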

                Another alternative: if your client is always just processing messages loaded since the last message it loaded, is this not just queue propagation? It sounds like a persistent queue with a subscription registration for each client to me (and probably another table consumer that consumes and saves all messages into a standard SQL table). AQ is pretty cool in this capacity, with the ability to dequeue in sequence, ensure delivery, be resilient across a failover, dequeue in bulk and align planets. OK, it can't align planets.
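
                A minimal AQ setup along those lines might look like the following; the queue, queue table and subscriber names are made up, and the payload type shown is Oracle's built-in JMS text message type:

                  BEGIN
                    -- multi-consumer queue table so each client can subscribe independently
                    DBMS_AQADM.CREATE_QUEUE_TABLE(
                      queue_table        => 'msg_qt',
                      queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE',
                      multiple_consumers => TRUE);

                    DBMS_AQADM.CREATE_QUEUE(queue_name => 'msg_q', queue_table => 'msg_qt');
                    DBMS_AQADM.START_QUEUE(queue_name => 'msg_q');

                    -- one subscriber per client; each dequeues its own copy in enqueue order
                    DBMS_AQADM.ADD_SUBSCRIBER(
                      queue_name => 'msg_q',
                      subscriber => SYS.AQ$_AGENT('client_1', NULL, NULL));
                  END;
                  /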
                • 5. Re: How to save Timestamp of commit into a table
                  Andreas S.
                  Thanks for your replies.

                  I will discuss your suggestions with our DBA next week.

                  Other ideas are still welcome ;-)