14 Replies Latest reply on Apr 6, 2018 12:37 PM by Gasimov Hafiz

    Timesten log file cleaning

    Gasimov Hafiz

      Hi everyone.

       

I installed TimesTen 11.2.2.8.0 and created a data store. Then I created a table under the orattadmin user and ran several PL/SQL commands to insert rows, so 2M rows were added to my table.

I did cd /logdir and ran rm -rf on all the TimesTen log files, such as tt_hafiz.log160 and so on.

Then I ran ./ttDaemonAdmin -stop and ./ttDaemonAdmin -start.

But when I tried ./ttIsql hafiz I got the error below:

       

      connect "DSN=hafiz";

        848: Recovery failed on 2 set(s) of data store files; the TimesTen user error log has more information

        703: Subdaemon connect to data store failed with error TT848

      The command failed.

      Done

       

How can I clean up the TimesTen log files? I need to clear them because my server has only 3% free disk space.

I recreated the TimesTen database and the table, added 2M rows, then ran truncate table table_name;

but the log files were not cleaned up. None of them changed.

        • 1. Re: Timesten log file cleaning
          ChrisJenkins-Oracle

          You must never remove database files manually (this applies not just to TimesTen but to any database). The log files are the transaction logs (undo/redo) and are essential for correct database operation and recoverability. If you remove them you essentially corrupt/destroy your database.

           

Log files are cleaned up automatically, by checkpoint operations, when they are no longer needed; after two checkpoints most log files will be removed. Any that remain are still required for database recovery, replication, cache propagation etc.

           

          Given that after just a little activity your server is down to 3% free disk space I would recommend you increase the available disk space significantly or use a server with more free space. Sufficient space is a necessity and it sounds like your server does not have sufficient for the size of database and workload that you are experimenting with.

           

          Chris

          • 2. Re: Timesten log file cleaning
            Gasimov Hafiz

Thanks for this answer.

 

But I don't understand: "Log files are cleaned up automatically when they are no longer needed, by checkpoint operations; after two checkpoints most log files will be removed."

How can I trigger this operation myself?

            • 3. Re: Timesten log file cleaning
              ChrisJenkins-Oracle

              Although TimesTen is an in-memory database it does provide full persistence and recoverability using local disk storage (it also supports replication for high availability but that is a different topic). The persistence mechanism uses a combination of checkpoint files (dbname.ds[01]), and transaction log files (dbname.logNNN) to provide persistence, recoverability etc.

               

The checkpoint files contain an 'image' of the in-memory database, and these on-disk images are updated periodically by automatic checkpoint operations. A checkpoint always updates the older of the two checkpoint images (so the target flip-flops from one checkpoint to the next); it flushes all changed data from the in-memory database to that checkpoint image, performing an in-place update on the disk file. After the checkpoint has completed, the file on disk is an up-to-date image of the in-memory database. Since checkpoints do not block queries or DML, the checkpoint images on disk are in fact 'fuzzy' (not transactionally consistent).

               

In addition to checkpointing, any change to persistent data that occurs in the database (for example as a result of DML operations) generates log records that describe the changes. These log records can be used for undo (rollback) and for redo (recovery). They are also used by various TimesTen features such as replication, AWT caching and XLA. These log records are staged in an in-memory buffer (the log buffer) before being written to the current log file on disk. Each log file has a fixed maximum size (the LogFileSize parameter) and when it reaches that size it is closed and a new log file is created. The log files have a sequence number as part of their name (dbname.logNNN).

               

So over time the size and number of log files continually increases, which would be a problem if there was no mechanism to clean them up. Luckily there is: checkpointing. When a log file is no longer required - i.e. the changes represented by the records in the file have been checkpointed into both checkpoint files and are no longer needed by replication, cache, XLA etc. - the file will be deleted when the next checkpoint operation occurs.
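The purge rule described above can be illustrated with a toy model. To be clear, this is not TimesTen code, just a sketch of the "deleted once both checkpoint images cover it" logic, ignoring replication, cache and XLA holds:

```python
class ToyStore:
    """Toy model (NOT TimesTen code) of log purging by checkpoints.

    Assumption: a log file may be deleted once BOTH checkpoint images
    contain the changes it describes (other log consumers are ignored).
    """

    def __init__(self):
        self.current_log = 0         # sequence number of the log being written
        self.log_files = [0]         # log files currently on disk
        self.ckpt_covers = [-1, -1]  # highest log fully captured by each image
        self.older = 0               # which checkpoint image is updated next

    def fill_log(self):
        """Write activity fills the current log; a new log file is opened."""
        self.current_log += 1
        self.log_files.append(self.current_log)

    def checkpoint(self):
        """Update the older checkpoint image, then purge fully covered logs."""
        self.ckpt_covers[self.older] = self.current_log - 1  # current log still open
        self.older = 1 - self.older                          # flip-flop
        covered = min(self.ckpt_covers)   # logs in BOTH images only up to here
        self.log_files = [n for n in self.log_files if n > covered]

store = ToyStore()
for _ in range(3):
    store.fill_log()
print(store.log_files)   # [0, 1, 2, 3]
store.checkpoint()
print(store.log_files)   # still [0, 1, 2, 3]: only one image covers them
store.checkpoint()
print(store.log_files)   # [3]: after two checkpoints, old logs are purged
```

This matches the "after two checkpoints most log files will be removed" behaviour: the first checkpoint brings only one of the two images up to date, so nothing can be purged yet; the second one covers the old logs in both images.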

               

              During normal operation, checkpoint operations occur automatically based on the parameters CkptFrequency (in seconds, default is 600) and CkptLogVolume (in MB, default 0).
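As a sketch, these checkpoint settings live as connection attributes in the DSN definition (e.g. in sys.odbc.ini). The attribute names below are real TimesTen connection attributes, but the DSN name, paths and values are purely illustrative examples, not recommendations:

```ini
# Illustrative DSN entry; paths and values are hypothetical
[hafiz]
Driver=/opt/TimesTen/tt1122/lib/libtten.so
DataStore=/data/timesten/hafiz
LogDir=/logdir
# 2 GB of permanent memory; each checkpoint file can grow to ~PermSize + 64 MB
PermSize=2048
# Each transaction log file (tt_hafiz.logNNN) is at most 64 MB
LogFileSize=64
# Checkpoint every 600 seconds (the default)...
CkptFrequency=600
# ...and also whenever 128 MB of new log data has accumulated
CkptLogVolume=128
```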

               

So, in a system in steady state, running a workload that includes some write operations, log files are continuously generated but also continuously purged by checkpointing.

               

              When configuring your system you need to consider:

               

1.    What checkpoint parameters should I use? The defaults are rarely 'correct' for any given scenario.

               

2.    How much disk space do I need for the checkpoint files? The answer is (PermSize + 64 MB) * 2.

               

3.    How much disk space do I need for the log files? This depends on (a) the workload and (b) the checkpoint parameters.
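The checkpoint-file sizing in point 2 can be sanity-checked with a quick calculation. This is just illustrative arithmetic; the 2048 MB PermSize is a made-up example value:

```python
def checkpoint_disk_mb(perm_size_mb: int) -> int:
    """Disk space for the two checkpoint files: each can grow to
    roughly PermSize + 64 MB, and there are two of them."""
    return (perm_size_mb + 64) * 2

# Example: a hypothetical database with PermSize=2048 (2 GB)
print(checkpoint_disk_mb(2048))  # 4224 MB for the checkpoint files alone
```

Remember this covers only the checkpoint files; the transaction logs need additional space on top of this, sized from your workload and checkpoint settings.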

               

              It is important to ensure the system has sufficient disk space for the checkpoint files and (especially) the transaction log files since running out of disk space is highly inadvisable.

               

              Note that you can force a manual checkpoint at any time by connecting to the database using ttIsql as an ADMIN user (or the instance administrator) and issuing the command:

               

              call ttCkpt;

               

              I hope that clarifies.

               

              Best regards,

               

              Chris

              • 4. Re: Timesten log file cleaning
                Gasimov Hafiz

                Thanks very much.

                 

I tried your suggestion and it worked well.

                "Note that you can force a manual checkpoint at any time by connecting to the database using ttIsql as an ADMIN user (or the instance administrator) and issuing the command:

                call ttCkpt;

                "

Note: I truncated the table, then ran call ttCkpt; and saw that many log files were cleaned up.

                 

                thanks again.

                • 5. Re: Timesten log file cleaning
                  Gasimov Hafiz

Dear ChrisJenkins-Oracle, I have learned important information from this topic.

I had deleted all the log files before I created this thread.

Now I want to try call ttCkpt; in our PRODUCTION environment.

But I am not sure: will I lose data, or will everything be fine?

                   

For example:

table1 = 120000 rows, table2 = 30000 rows, table3 = 50000000 rows

I deleted all the log files while running create table, insert into, etc.

Then I truncated table3, but did not truncate or delete from table1 and table2 (because that data is very important).

Now, if I run call ttCkpt; twice, will my data be as below?

table1 = 120000 rows

table2 = 30000 rows

table3 = 0 ?

OR

will all rows in all tables be lost?

                   

                  Thanks.

                  • 6. Re: Timesten log file cleaning
                    ChrisJenkins-Oracle

What exactly do you mean by 'I had deleted all the log files before I created this thread'?

                     

The *only* way you can safely 'delete' log files is by executing ttCkpt calls. If you ever manually delete log files you have corrupted your database irretrievably.

                     

                    Never, ever manually delete any database files. They belong to the database and are managed by the database and are essential for correct operation.

                     

                    If you did not manually remove the transaction log files then can you please clarify, in detail, the exact sequence of operations you performed.

                     

                    Thanks,

                     

                    Chris

                     

                    • 7. Re: Timesten log file cleaning
                      Gasimov Hafiz

I mean:

if I deleted the log files manually and then run call ttCkpt,

will I lose data in my tables?

                      • 8. Re: Timesten log file cleaning
                        Gasimov Hafiz

                        I will explain more.

Now, if I accidentally delete the log files manually, will running ttCkpt cause the table data to be lost?

                        • 9. Re: Timesten log file cleaning
                          ChrisJenkins-Oracle

                          Of course. It doesn't matter if the deletion was accidental or deliberate :-)

                           

                          If you manually change/delete *any* database files (checkpoint files, log files etc.) you will lose data. Very likely you will lose your entire database. This is normal for any database not just TimesTen.

                           

                          Chris

                          • 10. Re: Timesten log file cleaning
                            Gasimov Hafiz

                            Dear ChrisJenkins 

I am not talking about losing the log files.

                            I meant the rows on the table.

                             

Again, for example:

1. Create a DSN, create tables, insert many rows into the tables, then bulk update, insert more rows, etc.

2. Now my table row counts are as below:

table1 = 1000 rows

table2 = 2000 rows

3. My log files are as below:

                            tt_tenant.log0

                            tt_tenant.log1

                            ...

                            tt_tenant.log100

                             

4. rm -rf tt_tenant.log0 ... tt_tenant.log99

5. Only tt_tenant.log100 remains.

6. So I run call ttCkpt;

Is the result as below?

table1 = 1000 rows

table2 = 2000 rows

OR will all rows be lost?

                            • 11. Re: Timesten log file cleaning
                              ChrisJenkins-Oracle

The checkpoint files and log files together form the persistence mechanism for the database. The log files are also used by various other database functionality such as replication, caching, XLA and, most importantly, recovery after a failure or after a clean shutdown.

                               

In your test you are not seeing any immediate problems from removing the log files because the final SELECT operations in your example are, of course, running against the in-memory database. However, the manual removal of the log files has potentially compromised the database such that, if there were to be a failure of some kind, the database may well no longer be recoverable afterwards.

                               

                              The exact effects of manually deleting log files will vary a lot depending on the exact scenario, the level of workload on the database, the database features that are in use etc. etc. but the simple fact of the matter is that you must never ever manually delete log files (or checkpoint files). Doing so jeopardises your entire database and there are absolutely no guarantees that you will not lose some or all of your data. Manual removal of database files is of course completely unsupported.

                               

                              I'm not sure I understand *why* you are manually deleting log files but as I have already stated you should not be doing so. There is no reason to do this anyway as checkpointing will clean up the log files safely when they are no longer needed.

                               

                              Chris

                              • 12. Re: Timesten log file cleaning
                                Gasimov Hafiz

                                "I'm not sure I understand *why* you are manually deleting log files but as I have already stated you should not be doing so."

                                 

Because I have already deleted the log files.

                                • 13. Re: Timesten log file cleaning
                                  ChrisJenkins-Oracle

                                  Okay, so you made a mistake this one time. That happens. But you need to be sure to avoid doing this in the future.

                                   

Maybe your database is okay (and maybe it isn't). As it is a test database (I am guessing), I would recommend dropping the entire database and re-creating it from scratch, just to be on the safe side.

                                   

                                  Chris

                                  • 14. Re: Timesten log file cleaning
                                    Gasimov Hafiz

Thanks for all your replies, Dear ChrisJenkins.