6 Replies Latest reply on Jul 13, 2018 12:58 PM by Gasimov Hafiz

    sys.odbc.ini file not read during bulk insert.

    Gasimov Hafiz

      Hi everyone.

      My TimesTen sys.odbc.ini file is as below:

       

      [tt_isra]

      Driver=/home/orattadmin/TimesTen/tt_isra/lib/libtten.so

      DataStore=/home/orattadmin/TimesTen/tt_isra/info/DataStore/

      DatabaseCharacterSet=AL32UTF8

      PermSize=20000

      TempSize=2000

      AutoCreate=1

      Logging=1

      LogDir=/home/orattadmin/TimesTen/tt_isra/info/DataStore

      LogBufMB=64

      LogFileSize=64

      LogFlushMethod=2

      LogPurge=1

      Isolation=1

      LockLevel=0

      DurableCommits=1

      PLSQL=1

      Temporary=0

      PrivateCommands=1

      CkptFrequency=20

      CkptLogVolume=64

      Connections=64

      RecoveryThreads=8

       

      You can see the configuration includes:

      CkptFrequency=20

      CkptLogVolume=64

      but during a bulk insert (with a PL/SQL procedure) this configuration does not take effect: the old log files are not deleted while the insert is running; they are only cleaned up after the insert statement finishes.

      Why is it like this?

       

      What is the best practice for the sys.odbc.ini file?

      My load is very heavy, with many transactions per second.

      My server configuration is as below:

      250 GB RAM

      10 TB HDD, 250 GB SSD

      2.6 GHz CPU speed

      2 sockets, 2 CPUs, 2 x 8 cores

       

       

        • 1. Re: sys.odbc.ini file not read during bulk insert.
          ChrisJenkins-Oracle

          Hi Gasimov,

           

          Before I suggest why you are seeing this behaviour I'd like to make some comments on your sys.odbc.ini file, given that you say you have a very heavy transactional workload...

           

          1.    Never ever store any user files, especially database files, within the TimesTen install tree. You have both your checkpoint and log files stored in /home/orattadmin/TimesTen/tt_isra/info/DataStore. If you store the database files there they may get removed during a software uninstall or upgrade. You should relocate those files immediately to avoid unexpected loss of your database.

           

          2.   There is a serious problem here; you have DataStore set to /home/orattadmin/TimesTen/tt_isra/info/DataStore/, which appears to be a directory. The value for DataStore should be the pathname of a file without any extension. For example if you were to use /home/orattadmin/TimesTen/tt_isra/info/DataStore/tt_isra then you would see files such as /home/orattadmin/TimesTen/tt_isra/info/DataStore/tt_isra.ds0 and /home/orattadmin/TimesTen/tt_isra/info/DataStore/tt_isra.ds1. It is not valid to specify a directory for DataStore; I am surprised that this is working correctly. For LogDir, that value should be a directory within which we can create log files. (See the example after this list of points.)

           

          3.   If you have enough SSD/flash storage then you should place both checkpoints and logs onto SSD/flash for best performance. If you do not have enough then you should try to at least put the logs onto flash/SSD storage. If you have to put both onto spinning disk then you should locate them on separate disks from each other and from any other high activity files.

           

          4.   LogBufMB=64 and LogFileSize=64 are the defaults but are much too small for high throughput systems. You should increase both values to at least 512 and maybe even 1024 for much better performance.

           

          5.   DurableCommits=1 is a very low performance configuration since every single commit operation has to write synchronously to disk. Unless your logs are on flash/SSD this will be very slow indeed. Most customers run with DurableCommits=0, which is the default, though of course then there is some risk of data loss in the event of a crash. You can mitigate that by use of replication.

           

          6.   LogFlushMethod=2 gives good performance if the logs are on flash/SSD or high performance RAID but not if they are on plain spinning HDDs. In the latter case it is best to stick with the default value.

           

          7.   You should either configure the system for huge pages and then configure TimesTen to use them (best) or lock the TimesTen database into memory using MemoryLock=4 (second best). Running a large database without memory lock will result in lower performance.
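
          To illustrate point 2, a corrected pair of entries might look something like this (the /u01/ttdata paths below are only example locations, not anything from your system; the key points are that DataStore is a file path prefix rather than a directory, and that it lives outside the TimesTen install tree):

          [tt_isra]
          DataStore=/u01/ttdata/DataStore/tt_isra
          LogDir=/u01/ttdata/logs

          TimesTen would then create checkpoint files tt_isra.ds0 and tt_isra.ds1 under /u01/ttdata/DataStore and write the transaction log files under /u01/ttdata/logs.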

           

          Now, why do logs accumulate while your 'bulk insert' is running? Can you perhaps post the text of your PL/SQL procedure? Anyway, here are some possibilities:

           

          a)    Logs can only be purged when they do not contain records for open transactions. If your PL/SQL procedure inserts a lot of rows as a single transaction (i.e. without any commits) then no logs can be purged until it completes. You will get much better behaviour (and much better insert performance) if you commit every 'n' rows (a good value of 'n' is 1024); see the sketch after this list.

           

          b)   The checkpoint parameters you have are very aggressive; maybe checkpoints are taking much longer than 20 seconds to complete. You can check via 'call ttCkptHistory;' from ttIsql.
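
          As a minimal sketch of the batched-commit pattern from (a) (the table my_table and its single id column are placeholders, not your schema; adjust the batch size to suit your workload):

          declare
            batch_size constant pls_integer := 1024;
          begin
            for i in 1 .. 2000000 loop
              insert into my_table (id) values (i);
              if mod(i, batch_size) = 0 then
                commit;   -- release the log hold so old log files can be purged
              end if;
            end loop;
            commit;       -- commit any remaining rows
          end;
          /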

           

          Regards,

           

          Chris

           

          • 2. Re: sys.odbc.ini file not read during bulk insert.
            Gasimov Hafiz

            Dear Chris.

            My answers to your questions are below:

             

            "1.    Never ever store any user files, especially database files, within the TimesTen install tree. You have both your checkpoint and log files stored in /home/orattadmin/TimesTen/tt_isra/info/DataStore. If you store the database files there they may get removed during a software uninstall or upgrade. You should relocate those files immediately to avoid unexpected loss of your database."

             

            Loss of my DATABASE is not a problem, because I populate the TimesTen DB from reference data in the Oracle DB. If data is lost in the TimesTen DB I can quickly rebuild it from the Oracle DB, so there is no problem in this case.

             

             

            "2.   There is a serious problem here; you have DataStore set to /home/orattadmin/TimesTen/tt_isra/info/DataStore/, which appears to be a directory. The value for DataStore should be the pathname of a file without any extension. For example if you were to use /home/orattadmin/TimesTen/tt_isra/info/DataStore/tt_isra then you would see files such as /home/orattadmin/TimesTen/tt_isra/info/DataStore/tt_isra.ds0 and /home/orattadmin/TimesTen/tt_isra/info/DataStore//tt_isra.ds1. It is not valid to specify a directory for DataStore; I am surprised that this is working correctly. For LogDir, that value should be a directory within which we can create log files."

             

            I will fix this mistake as soon as possible.

             

            "3.   If you have enough SSD/flash storage the you should place both checkpoints and logs onto SSD/flash for best performance. If you do not have enough then you should try to at least put the logs onto flash/SSD storage. If you have to put both onto spinning disk then you should locate them on separate disks from each other and from any other high activity files."

             

            No, not enough; I will continue with HDD.

             

            "4.   LogBufMB=64 and LogFileSize=64 are the defaults but are much too small for high throughput systems. You should increase both values to at least 512 and maybe even 1024 for much better performance."

             

            OK, I will set both to 1024.

             

            "5.   DurableCommits=1 is a very low performance configuration since every single commit operation has to write synchronously to disk. Unless your logs are on flash/SSD this will be very slow indeed. Most customers run with DurableCommits=0, which is the default, though of course then there is some risk of data loss in the event of a crash. You can mitigate that by use of replication."

             

            You say that there is a risk of data loss in the event of a crash.

            In which situations?

             

            "Now, why do logs accumulate while your 'bulk insert' is running. Can you perhaps post the text of your PL/SQL procedure? Anyway, here are some possibilities:"

            begin
              for i in 1 .. 2000000
              loop
                insert into fcbsabm.abm_rsrv_hstr values
                  ( i, 'PGW01;3727101369;205;1855', null, TO_DATE('2018/01/03 21:02:44', 'yyyy/mm/dd hh24:mi:ss'), '1111111111111', '3333333333333', '444444444444', '555555555555555', '66666666666666', '0.180411', TO_DATE('2018/01/03 21:02:44', 'yyyy/mm/dd hh24:mi:ss'));
              end loop;
            end;
            /

             

            "b)   The checkpoint parameters you have are very aggressive; maybe checkpoints are taking much longer than 20 seconds to complete. You can check via 'call ttCkptHistroy;' from ttIsql."

             

            Please share with me the best configuration for the sys.odbc.ini file.

             

             

             

            • 3. Re: sys.odbc.ini file not read during bulk insert.
              ChrisJenkins-Oracle

              For (1), I still strongly recommend that you create a directory elsewhere outside of the TimesTen installation tree to store your database. You may of course ignore my advice at your own risk.

               

              A more general question: you say that you plan for a heavy transactional workload but you are choosing options (single spinning hard disk, DurableCommits=1) that will hugely reduce the potential performance. What is your actual objective here? I am a bit confused.

               

              The PL/SQL code snippet that you shared inserts 2M rows in a single transaction. That is not good from a performance perspective; you should add a check in the insert loop and issue a commit every 100 rows or so. Also, since this is one large transaction, none of the transaction's log files can get cleaned up until after the transaction is committed, regardless of what checkpoint parameters you use. Committing more frequently will allow checkpoints to purge log files more effectively.

               

              Checkpointing is a relatively expensive operation in terms of CPU and disk I/O, so there is a balance to be struck between how often you checkpoint (the more frequently you checkpoint, the faster recovery after a failure will be and the fewer log files may accumulate) and the system load arising from checkpointing and the effect that will have on performance. With your current configuration (single rotational HDD, synchronous commit) checkpoints will have a large impact on DML performance due to the heavy I/O contention that occurs every time there is a checkpoint. I cannot tell you what is correct for your case without a much deeper understanding of your application and your requirements.
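
              If you want to observe this on your system you can do so from ttIsql; ttCkptHistory shows how long checkpoints are taking and, on recent TimesTen releases that include the ttLogHolds built-in, ttLogHolds shows what is currently preventing log files from being purged:

              Command> call ttCkptHistory;
              Command> call ttLogHolds;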

               

              With DurableCommits=0, application commit operations are decoupled from the writing of the log data to disk (asynchronous flush). As a result the commit of transactions containing DML operations will be much, much faster (maybe 100x or more). The downside is that at any moment there will be some 'committed' changes in the log buffer that have not yet been written to disk, and if the system crashes those 'committed' operations will be undone during recovery. After recovery the database will be internally consistent (no partial transactions) but the most recent committed changes may be lost (it is not really possible to quantify exactly the amount of loss as there are many variables). This is the trade-off that you make between performance and data safety. Using replication can help protect data in this case but there is still the potential for some loss. Since you said for (1) that the loss of your database is unimportant, maybe your application can tolerate some data loss on failure in exchange for hugely improved performance.

               

              I would like to understand your overall objectives here. Currently I am trying to advise you without a clear understanding of the nature of your application, your performance requirements, your data safety requirements, etc. TimesTen is able to deliver very high levels of performance for a wide variety of workloads but it does need a hardware configuration appropriate to the workload and performance requirements.

               

              Chris

              • 4. Re: sys.odbc.ini file not read during bulk insert.
                Gasimov Hafiz

                Dear Chris,

                This PL/SQL statement is only one example;

                I wanted to get your suggestions for a system with a heavy load.

                 

                I shared the configuration of my TimesTen, but that TimesTen is PREPRODUCTION, a copy of PRODUCTION, and the PREPRODUCTION TimesTen has a few additional settings, for example DurableCommits=1 (I was only asking whether it is good or bad).

                Now I would like you to share with me the best TimesTen .ini file configuration for the following conditions:

                 

                1. Very heavy load

                2. Non-stop transactions, 500K per second and possibly more than 500K (uninterrupted inserts, updates, batch deletes, etc.); very heavy

                 

                 

                 

                My PRODUCTION TimesTen .ini file content is as below:

                [TT_FCBS]

                Driver=/home/timesten/TimesTen/ttfcbs/lib/libtten.so

                DataStore=/u02/timesten/DataStore/TT_FCBS

                DatabaseCharacterSet=AL32UTF8

                PermSize=35000

                TempSize=4096

                AutoCreate=1

                Logging=1

                DurableCommits=0

                LockLevel=0

                Temporary=0

                LogBufMB=32

                PrivateCommands=1

                CkptFrequency=0

                CkptLogVolume=0

                Isolation=1

                Connections=64

                 

                Which values are wrong? Which values need to be changed? Which values must be added?

                • 5. Re: sys.odbc.ini file not read during bulk insert.
                  ChrisJenkins-Oracle

                  The general rule is 'large transactions BAD, small transactions GOOD' regardless of whether we are talking about PL/SQL, application code or anything else.

                   

                  [TT_FCBS]

                  Driver=/home/timesten/TimesTen/ttfcbs/lib/libtten.so

                   

                  # I have omitted all 'default' values in order to focus on the important ones.

                  # All of the following recommendations are 'starting values' and you may need to tune them

                  # once you can observe the system in production.

                   

                  # The checkpoint files should be located somewhere with enough space (at least 75 GB)

                  # and separate from any other high activity files. Storage ideally would be flash/SSD

                  # but if not then high performance RAID-10 (multiple spindles) is recommended

                  DataStore=/u02/timesten/DataStore/TT_FCBS

                   

                  # If you are not using flash/fast SSD storage for checkpoint files then you should locate

                  # transaction log files somewhere else (different storage, not just a different filesystem).

                  # Again, if not flash/SSD then high performance RAID-10 with multiple spindles is recommended.

                  # Be sure you have *plenty* of space for the logs; running out of space in the log filesystem

                  # is not a good thing!

                  LogDir=/some/other/location/from/checkpoint/files

                   

                  # The following depend on your requirements

                  DatabaseCharacterSet=AL32UTF8

                  PermSize=35000

                  TempSize=4096

                   

                  # For heavy transactional workloads make log buffer at least 1 GB. You might need to increase this.

                  LogBufMB=1024

                  # Make LogFileSize same as LogBufMB

                  LogFileSize=1024

                  # Make LogBufParallelism equal to min(# CPU cores in system, 64)

                  LogBufParallelism=N
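                  # (For example: if your production server also has 2 sockets x 8 cores = 16 cores, as you
                  # described earlier, then min(16, 64) = 16, i.e. LogBufParallelism=16 as a starting value.)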

                   

                  # If you have flash/SSD or high performance RAID-10 for logs then set LogFlushMethod=2 otherwise

                  # leave it as the default

                  LogFlushMethod=2

                   

                  # There is no one 'correct' setting for the checkpoint parameters. It depends on your workload,

                  # hardware and filesystem layout. The following are probably a good starting point but you

                  # *will* need to tune these once you can observe the system under load.

                  CkptFrequency=300

                  CkptLogVolume=4096

                   

                  # Is 64 connections enough? If not then you need to increase this and also check OS semaphore settings.

                  Connections=64

                   

                  # Only use PrivateCommands=1 if you observe contention on compile commands in the command cache

                  PrivateCommands=0

                   

                  # You should always specify a default ConnectionCharacterSet. I have made it the same as the

                  # DatabaseCharacterSet since the default is US7ASCII which is probably not what you want.

                  ConnectionCharacterSet=AL32UTF8

                   

                  # Ideally you should configure the OS and TimesTen for huge page support (see documentation)

                  # but the second best option is just to lock memory. (Check memory lock limits in /etc/security/limits.conf.)

                  MemoryLock=4
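                  # (Example only: in /etc/security/limits.conf the TimesTen instance owner, whose user name
                  # below is just a placeholder, needs memlock limits at least as large as the database
                  # segment, roughly PermSize + TempSize + LogBufMB plus some overhead, e.g.
                  #   timesten  soft  memlock  unlimited
                  #   timesten  hard  memlock  unlimited
                  # )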

                   

                  Chris

                  • 6. Re: sys.odbc.ini file not read during bulk insert.
                    Gasimov Hafiz

                    Dear Chris, thanks for all the replies, and thank you for explaining to me patiently.

                    We will try that best configuration in my REAL TimesTen .ini file.