23 Replies · Latest reply: Dec 17, 2013 12:40 AM by user10366531

Insert query slows in Timesten

user10366531 Newbie

Hello DB Experts ,

 

I am loading bulk data with the ttBulkCp command. My PermSize is 20 GB. The inserts are getting slow. Can anyone help me maximize throughput with ttBulkCp?

 

Regards,

  • 1. Re: Insert query slows in Timesten
    user10366531 Newbie

    Can anyone please reply to this?

  • 2. Re: Insert query slows in Timesten
    ChrisJenkins Guru

    Please post:

     

    1.    Your full DSN definition for the datastore

     

    2.    The exact ttBulkCp command line that you are using to perform the load

     

    Thanks,

     

    Chris

  • 3. Re: Insert query slows in Timesten
    ChrisJenkins Guru

    And also:

     

    3.    Output of ttVersion command

     

    4.    The amount of RAM installed in your machine where you are running TimesTen

     

    Chris

  • 4. Re: Insert query slows in Timesten
    user10366531 Newbie

    1) DSN definition:

    Driver=/opt/TimesTen/tt1122/lib/libtten.so

    DataStore=/opt/TimesTen/tt1122/TT_1122

    DatabaseCharacterSet=US7ASCII

    PermSize=25600

    TempSize=3054

     

    2) ttBulkCp command:

    ./ttbulkcp -i -e Test.err -m 0 -Q 0 DSN=tttest TBLEDR /home/oracle/Newfolder/20131120/Test/EDR_AHMEDABAD_GOLIVE_pcrf-*

     

    3)

    TimesTen Release 11.2.2.2.0 (64 bit Linux/x86_64) (tt1122:53396) 2011-12-23T09:26:28Z

      Instance admin: root

      Instance home directory: /opt/TimesTen/tt1122

     

    4) Amount of RAM on my machine:

    MemTotal:     132023944 kB

    MemFree:      34666996 kB

    Buffers:        512012 kB

    Cached:       94024648 kB

  • 5. Re: Insert query slows in Timesten
    ChrisJenkins Guru

    Okay, here are things that you must do:

     

    1.    Download the latest release (11.2.2.6.0) from the Oracle TimesTen Software Downloads page and upgrade to that. The release that you are using (11.2.2.2.0) is extremely old and there have been 5 major maintenance releases since then, with many bugs fixed and many performance and other improvements. You really should not be using such an old release.

     

    2.    One of the most important factors in good write performance is sizing the log buffer properly. The default is 64 MB, which is very small and almost always inadequate. I would suggest increasing it, and the log file size, to at least 256 MB and maybe more (LogBufMB=256, LogFileSize=256). While you are running your load, use ttIsql to monitor the value of LOG_BUFFER_WAITS. If it is increasing then you need to increase the log buffer (and log file size) further.
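    The LOG_BUFFER_WAITS counter Chris mentions lives in the SYS.MONITOR system table. A minimal query to run repeatedly in ttIsql during the load (the DSN is whatever database you are loading into):

    ```sql
    -- Run in ttIsql while ttBulkCp is loading. A steadily increasing
    -- value indicates the log buffer is too small and LogBufMB /
    -- LogFileSize should be raised further.
    SELECT log_buffer_waits FROM sys.monitor;
    ```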

     

    3.    For a database of this size you really should be using MemoryLock=4. To implement this you may need to modify some values in /etc/security/limits.conf. Please see the TimesTen installation guide for details. Running without this parameter will result in reduced and variable performance due to paging of the database shared memory segment.
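    For MemoryLock=4 the instance administrator's locked-memory limit must cover the whole shared memory segment (roughly PermSize + TempSize + log buffer + overhead). A sketch of the /etc/security/limits.conf entries with an illustrative 32 GB (in KB) limit; adjust the value to your own segment size:

    ```
    # /etc/security/limits.conf -- illustrative values only.
    # memlock is in KB; it must exceed PermSize + TempSize + LogBufMB + overhead.
    *    soft    memlock    33554432
    *    hard    memlock    33554432
    ```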

     

    Things that you should do if possible/relevant:

     

    4.    If you have more than one disk then separate the checkpoint and log files to avoid I/O contention during periods of heavy write activity. You can do this by creating a directory on an alternate filesystem (located on separate disks) and then pointing the logs to it by adding the LogDir attribute, with a suitable value, to the DSN settings. To implement this you will need to drop and re-create the entire database, so be sure to make a backup first and restore it after the change.
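    Applied to the DSN posted earlier in the thread, the log separation might look like this (the /ttlogs path is hypothetical; it should sit on a filesystem backed by different physical disks than the checkpoint files):

    ```
    [tttest]
    Driver=/opt/TimesTen/tt1122/lib/libtten.so
    DataStore=/opt/TimesTen/tt1122/TT_1122
    DatabaseCharacterSet=US7ASCII
    PermSize=25600
    TempSize=3054
    LogBufMB=256
    LogFileSize=256
    # Hypothetical directory on a separate physical disk
    LogDir=/ttlogs/TT_1122
    ```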

     

    5.    If you have more than 4 CPU cores (or virtual CPUs) then increase LogBufParallelism accordingly (see the TimesTen database reference guide for more details).

     

     

    Please try these and see how things look then.

     

    Chris

  • 6. Re: Insert query slows in Timesten
    user10366531 Newbie

    Thanks Chris, I will try the above changes. I have observed that the datastore size is increasing even though I have set

    PermSize=40906

    TempSize=10240

    the physical file sizes show

    40G     TT_1122.ds0

    40G     TT_1122.ds1

     

    Can you please explain why the two file sizes are different, and why the actual size does not show 40 GB?

  • 7. Re: Insert query slows in Timesten
    ChrisJenkins Guru

    Firstly, only the 'Perm' region plus the DB header (~32 MB) is written to the checkpoint files so with those parameters the maximum size of a checkpoint file will be approximately 42926604288 bytes.

     

    Secondly, checkpoint files are sparse files by default. The size reported by 'ls' is the maximum offset that has ever been written to date; the block allocation may be less than this. Since we checkpoint to the 'oldest' file each time (which means checkpoints flip-flop between the two files), in general the files will not be the same size. You really don't need to care/worry about that as long as you have allowed enough space for the maximum possible size of both files.
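    The sparse-file behaviour described here is easy to demonstrate outside TimesTen: the apparent size ('ls') and the allocated blocks ('du') diverge for sparse files. A small shell sketch (the file name is arbitrary; on most Linux filesystems truncate creates a sparse file):

    ```shell
    # Create a 1 GB sparse file: large apparent size, almost no blocks allocated.
    truncate -s 1G sparse_demo.ds0
    ls -lh sparse_demo.ds0    # apparent size: 1.0G
    du -h  sparse_demo.ds0    # allocated space: close to zero
    rm -f sparse_demo.ds0
    ```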

     

    If you want the full space for each file to be allocated up front (often a good idea in a production setup) then you can specify the DSN attribute Preallocate=1 at the time you initially create the database. Note that doing this will make the initial creation slow, as we have to physically write 2 * 40 GB of data to disk. But once that is done the files never need to grow, should not become fragmented, and you should not be able to run out of disk space for the checkpoint files.
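    In the DSN, preallocation is just one extra attribute set before the database is first created (the DSN name is hypothetical; the sizes mirror those discussed above):

    ```
    [mydb_prealloc]
    # Hypothetical DSN fragment; Preallocate takes effect only at
    # initial database creation, writing both checkpoint files in full.
    PermSize=40906
    TempSize=10240
    Preallocate=1
    ```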

     

    Chris

  • 8. Re: Insert query slows in Timesten
    user10366531 Newbie

    Thanks Chris for sharing these details.

     

    Please guide me on my doubts below.

     

    1) Right now I have specified the datastore location and log file location in my sys.odbc.ini file, which generates the .ds0 datafile. I want all my data in RAM only. Is it possible to have no physical writes to disk, with all data residing in memory only? The purpose of this is only that I want to save disk space.

     

    2) As per your reply above, Preallocate=1 is a good suggestion and I will take care of it in future. But in my development environment, although I have given PermSize=40G, the disk shows me two datafiles of 40 GB each. This is again creating space issues. Can you please explain why two datafiles are generated at the same location?

     

    3) Earlier I had asked you how to make bulk inserts fast. I am not able to change the LogBufParallelism parameter. I have 32 CPU cores and 132 GB of RAM, so I would like to allocate 16 cores to LogBufParallelism. I have set LogBufMB=256.

     

    Chris, the purpose of this exercise is only to compare TPS on Oracle and TPS on TimesTen. Oracle claims that TimesTen is 9X faster than Oracle. Currently, for bulk insertion, I am only getting 15000 to 17000 TPS on TimesTen, so please guide me on further tuning in TimesTen.

  • 9. Re: Insert query slows in Timesten
    ChrisJenkins Guru

    1.   You can avoid having any checkpoint files by setting the attribute Temporary=1. This must be specified at the time you create the database (so you will need to drop your existing database using ttDestroy). Note that if you use this setting (a) you will still have transaction log files being written (to LogDir) and (b) whenever the database is unloaded from memory it is automatically dropped, so each time you start up the database it will be completely empty. It is not possible to have a persistent TimesTen database without also having the checkpoint files.

     

    2.   We maintain two copies of the database on disk and checkpoint operations write to them alternately. This is necessary in order to safeguard the data; if there is a failure in the middle of a checkpoint operation then we still have the previous image from which we can recover. This is how TimesTen is designed to work; you need to have both files (unless you use a temporary database as per my reply in (1) above).

     

    3.    I don't understand what you mean by 'I am not able to change LogBufParallelism parameter'?

     

    Have you adopted all my other recommendations? Until you do that and see what the effects are there is not much more I can help you with.

     

    Chris

  • 10. Re: Insert query slows in Timesten
    user10366531 Newbie

    Chris, I am not yet working with the 11.2.2.6.0 release. The rest of the changes you suggested I have made. When I set LogBufParallelism=16 and connect to the DSN, I get a warning that LogBufParallelism=4 is already in use. I tried to unload the database from RAM and then load it again, but this parameter is still not reflected.

  • 11. Re: Insert query slows in Timesten
    ChrisJenkins Guru

    You really, really should work with 11.2.2.6.0. 11.2.2.2.0 is very old and I do not recommend using it for any new projects. 11.2.2.6.0 will perform better as well. Is there some reason not to use it?

     

    The issue with changing LogBufParallelism is nothing to do with the software release. If you look at the error message in detail you will see that it says that only the instance administrator can change the value. You need to:

     

    1.   Ensure the database is not loaded in memory (check with ttStatus).

     

    2.   Change the value in the sys.odbc.ini file.

     

    3.   Load the database into memory as the instance administrator user (you can check who that is using ttVersion). Be sure not to specify any UID value when you connect or in the sys.odbc.ini.
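    The three steps above might look like this at the shell (DSN name illustrative; run as the instance administrator):

    ```
    ttStatus                # 1. verify the database is not loaded in memory
    vi sys.odbc.ini         # 2. set LogBufParallelism=16 in the DSN entry
    ttIsql tttest           # 3. the first connection as instance admin loads
                            #    the database with the new value
    ```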

     

    Chris

  • 12. Re: Insert query slows in Timesten
    user10366531 Newbie

    Hi chris,

    I have upgraded TimesTen to release 11.2.2.6.0 and changed all the parameters you suggested.

     

    connect "DSN=TT_1122;uid=test";
    712: Unable to lock data store memory segment, error 12
    The command failed.
    Done.

     

    The definition of my DSN is:

     

    Driver=/opt/TimesTen/y/lib/libtten.so

    DataStore=/opt/TimesTen/y/info/TT_1122

    DatabaseCharacterSet=US7ASCII

    LogBufMB=256

    LogFileSize=256

    LogBufParallelism=16

    PermSize=81920

    TempSize=10240

    Temporary=1

    #MemoryLock=4

     

    cat /etc/security/limits.conf

     

    *    soft    memlock    115343360
    *    hard    memlock    115343360

     

    I have tried to load the CSV file via ttBulkCp but it is still giving the same TPS. Can you provide your advice here?

  • 13. Re: Insert query slows in Timesten
    ChrisJenkins Guru

    After changing the 'memlock' values in /etc/security/limits.conf, did you log out and then back in again before re-starting the TimesTen main daemon? These values are only picked up when you first log in, and the TT daemon inherits them from the OS user who starts it. Please log out, log in as the instance admin user, then stop and start the main daemon:

     

    ttDaemonAdmin -stop

    ttDaemonAdmin -start

     

    Then uncomment the MemoryLock setting and try loading the database into memory. It really is important that you get this working. Once that is all working, can you please provide the following info:

     

    1.    The definition of the table you are loading data into, including indexes.

     

    2.    Whether the indexes (if any) are in place while you are loading the data.

     

    3.    The CPU type and speed.

     

    4.    The type of disk storage you are using for the filesystem containing the database.

     

    5.   The location of the CSV file that you are loading - is it on the same filesystem as the database files?

     

    6.   The number of rows of data in the CSV file.

     

    7.   Originally you said 'I am only getting 15000 to 17000 TPS'. How are you measuring this? Do you mean TPS (i.e. commits per second) or 'rows inserted per second'? Note that by default ttBulkCp commits every 1024 rows, so if you are measuring commits then the insert rate is 1024 x that.
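    The distinction matters numerically: with the default commit interval, a commits-per-second figure implies a row rate 1024 times higher. A quick sanity check (numbers illustrative, taken from the figure quoted in the thread):

    ```shell
    # If the measured figure is commits/sec, convert to rows/sec using
    # ttBulkCp's default commit interval of 1024 rows per commit.
    commits_per_sec=15000
    rows_per_commit=1024
    echo $(( commits_per_sec * rows_per_commit ))    # prints 15360000
    ```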

     

    I'm also not clear why you are using the 'system' DSN TT_1122. This is reserved for internal use - you should define your own DSN or use one of the sample DSNs (such as sampledb_1122) after adjusting the attributes appropriately.

     

    Chris

  • 14. Re: Insert query slows in Timesten
    user10366531 Newbie

    Hi Chris thanks for your reply.

     

    I have uncommented the MemoryLock parameter and it is working now. I will not use the system DSN from now on. Thanks for that suggestion.

     

     

    1.    The definition of the table you are loading data into, including indexes.

    My comments: Table definition below. The table does not have any primary key or any indexes.

     

    create table TBLEDR

    (snstarttime number,

    snendtime number,

    radiuscallingstationid number,

    ipserveripaddress varchar2(2000) DEFAULT '0',

    bearer3gppimsi varchar2(2000) DEFAULT '0',

    ipsubscriberipaddress  varchar2(2000),

    httpuseragent  varchar2(2000) DEFAULT '0',

    bearer3gppimei  varchar2(256) DEFAULT '0',

    httphost varchar2(2000) DEFAULT '0',

    ipprotocol  varchar2(256) DEFAULT '0',

    voipduration varchar2(256) DEFAULT '0',

    traffictype varchar2(256) DEFAULT '0',

    httpcontenttype varchar2(2000) DEFAULT '0',

    transactiondownlinkbytes number DEFAULT '0',

    transactionuplinkbytes number DEFAULT '0',

    transactiondownlinkpackets number  DEFAULT '0',

    transactionuplinkpackets number DEFAULT '0',

    radiuscalledstationid  varchar2(2000) DEFAULT '0',

    httpreferer varchar2(4000) DEFAULT '0',

    httpurl varchar2(4000) DEFAULT '0',

    p2pprotocol  varchar2(4000)  DEFAULT '0'

    );

     

    2.    Whether the indexes (if any) are in place while you are loading the data.

    My comments: There are no indexes.

     

    3.    The CPU type and speed.

    Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz, 32 cores.

     

    4.    The type of disk storage you are using for the filesystem containing the database.

    We are not using any external storage; we are using the Linux ext3 filesystem.

     

    5.   The location of the CSV file that you are loading - is it on the same filesystem as the database files?

    My comment: the database files reside on the /opt partition, and yes, the CSV files are also placed on the same filesystem; those files are in /opt/Files.

     

    6.   The number of rows of data in the CSV file.

    My comment: each CSV file contains around 50,000 records.

     

    7.   Originally you said 'I am only getting 15000 to 17000 TPS'. How are you measuring this? Do you mean TPS (i.e. commits per second) or 'rows inserted per second'? Note that by default ttBulkCp commits every 1024 rows, so if you are measuring commits then the insert rate is 1024 x that.

     

    My comment: I now time the run at the bash prompt. When I run the ./ttbulkcp command I note down the time, and when the command completes I note down the time again, and then I calculate the TPS. Further, I have one file with 50,000 records loaded via ttBulkCp, of which around 38,000 records succeed, and from that I calculate the TPS.
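    The measurement described here (rows loaded divided by wall-clock seconds) can be scripted; the row count and elapsed time below are illustrative stand-ins for real measurements taken around the ttBulkCp run:

    ```shell
    # Rows/sec from a timed load. In practice capture start/end with
    # `date +%s.%N` around the ./ttbulkcp invocation; fixed illustrative
    # values are used here.
    rows=38000
    elapsed=2.4
    awk -v r="$rows" -v t="$elapsed" 'BEGIN { printf "%.0f\n", r / t }'   # prints 15833
    ```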

