This discussion is archived
10 Replies Latest reply: Nov 28, 2012 12:21 AM by ChrisJenkins

Is this TPS expected?

914264 Newbie
My Java application needs about 3000 TPS of inserts, but against Oracle I only get roughly 800 TPS,
which is far from our target. So I am using TT as a cache layer in front of the Oracle database:
TT keeps the records in RAM and asynchronously writes them to the backend Oracle database automatically.
I created a dynamic AWT cache group that refers to just one root table and one child table.
Basically, each insert operation inserts one record into the root table and one into its child table.
I have set up two Java programs that both use the JDBC API to insert records into the root table and its child table.
The only difference is the underlying data source: one uses the Oracle JDBC driver data source, the other the TimesTen driver data source.
I ran them one at a time on the same server where the TimesTen server is deployed and found that TT averages only 1636 TPS while Oracle averages 568 TPS,
i.e. TT can insert 1636 records per second and Oracle JDBC can insert 568 records per second.
Both are below our target TPS.
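To make the test concrete, each program does roughly the following (a simplified sketch only; the table and column names are placeholders, not my real schema):
-----------------------------------------------------------------------------
// Sketch of one "full insert operation": parent row, child row, then commit.
// Assumes java.sql.* / javax.sql.DataSource imports; 'ds' is either an Oracle
// JDBC data source or a TimesTen one (direct-mode URL: jdbc:timesten:direct:dsn=my_ttdb).
void runInsertTest(DataSource ds, long rowCount) throws SQLException {
    try (Connection con = ds.getConnection()) {
        con.setAutoCommit(false);
        PreparedStatement insParent = con.prepareStatement(
            "INSERT INTO parent_table (id, name) VALUES (?, ?)");
        PreparedStatement insChild = con.prepareStatement(
            "INSERT INTO child_table (id, parent_id, status) VALUES (?, ?, ?)");
        for (long i = 1; i <= rowCount; i++) {
            insParent.setLong(1, i);
            insParent.setString(2, "name-" + i);
            insParent.executeUpdate();

            insChild.setLong(1, i);
            insChild.setLong(2, i);        // foreign key back to the parent row
            insChild.setString(3, "ok");
            insChild.executeUpdate();

            con.commit();                  // one commit per parent+child pair
        }
    }
}
-----------------------------------------------------------------------------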

My DSN for TT (TimesTen Release 11.2.2.4.1) is as follows:
-----------------------------------------------------------------------------
Driver=/home/timesten/TimesTen/tt1122/lib/libtten.so
DataStore=/home/timesten/data/database/my_ttdb
LogDir=/home/timesten/logs
PermSize=20000
TempSize=2000
DatabaseCharacterSet=AL32UTF8
OracleNetServiceName=amq_915
----------------------------------------------------------------------------
The output of the monitor command is:
Command> monitor

TIME_OF_1ST_CONNECT: Thu Nov 22 16:15:43 2012
DS_CONNECTS: 58
DS_DISCONNECTS: 29
DS_CHECKPOINTS: 18
DS_CHECKPOINTS_FUZZY: 18
DS_COMPACTS: 0
PERM_ALLOCATED_SIZE: 20480000
PERM_IN_USE_SIZE: 758821
PERM_IN_USE_HIGH_WATER: 758821
TEMP_ALLOCATED_SIZE: 2048000
TEMP_IN_USE_SIZE: 65328
TEMP_IN_USE_HIGH_WATER: 65708
SYS18: 0
TPL_FETCHES: 3
TPL_EXECS: 4
CACHE_HITS: 1
PASSTHROUGH_COUNT: 609
XACT_BEGINS: 2549666
XACT_COMMITS: 2387904
XACT_D_COMMITS: 4
XACT_ROLLBACKS: 161761
LOG_FORCES: 41170
DEADLOCKS: 0
LOCK_TIMEOUTS: 0
LOCK_GRANTS_IMMED: 12556680
LOCK_GRANTS_WAIT: 12241
SYS19: 0
CMD_PREPARES: 199
CMD_REPREPARES: 0
CMD_TEMP_INDEXES: 0
LAST_LOG_FILE: 30
REPHOLD_LOG_FILE: 30
REPHOLD_LOG_OFF: 61008136
REP_XACT_COUNT: 860707
REP_CONFLICT_COUNT: 0
REP_PEER_CONNECTIONS: 1
REP_PEER_RETRIES: 0
FIRST_LOG_FILE: 30
LOG_BYTES_TO_LOG_BUFFER: 1729829168
LOG_FS_READS: 606045
LOG_FS_WRITES: 50263
LOG_BUFFER_WAITS: 0
CHECKPOINT_BYTES_WRITTEN: 19988480
CURSOR_OPENS: 37606
CURSOR_CLOSES: 37606
SYS3: 0
SYS4: 0
SYS5: 0
SYS6: 0
CHECKPOINT_BLOCKS_WRITTEN: 31723
CHECKPOINT_WRITES: 11458
REQUIRED_RECOVERY: 0
SYS11: 0
SYS12: 1
TYPE_MODE: 0
SYS13: 0
SYS14: 0
SYS15: 71938
SYS16: 0
SYS17: 0
SYS9:
--------------------------------------------------------------------
I believed the TPS for TT would be much higher than with Oracle JDBC, but I cannot figure out what the problem is.
I have also tried a direct connection instead of a client/server connection; the TPS is around 2600, which is still below my expectation.
I really need your help, otherwise I will have to look for other solutions.


Thanks

SuoNayi
  • 1. Re: Is this TPS expected?
    ChrisJenkins Guru
    There are many areas that affect performance:

    1. TimesTen setup and tuning

    2. Hardware where TimesTen is running (versus hardware where Oracle is running)

    3. Application and how well optimised, or not, it is


    So, to try and get some more information on these I have a few questions:

    1. What hardware are you running TimesTen on? (number of cores, speed of cores, type of cores, RAM, type of disk storage)

    2. Same question for the Oracle database server machine

    3. Is your application a single thread/process or multiple threads/processes?

    4. How does your application process the inserts? Does it prepare a single parameterised INSERT statement (just once) and then for each insert just set the input values and execute? Or does it build a whole SQL statement each time including the values and execute that? Does it use JDBC batch operations or not? How often does it commit while it is inserting? (There is a sketch of the pattern I have in mind below.)

    5. What does the table look like that you are inserting into? How many indexes are there on it?

    Generally speaking 3000 inserts/second is trivial for TimesTen so clearly there is room for some optimisation.
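    For reference, the kind of pattern question 4 is getting at looks something like this: prepare the INSERT once, bind and batch the values, and commit every few hundred rows rather than after every row (just a sketch with placeholder table and column names, not your actual schema):
    -----------------------------------------------------------------------------
    // Sketch of a prepare-once, batched insert loop (placeholder names);
    // 'con' is an open java.sql.Connection and 'rowCount' is the number of rows to insert.
    con.setAutoCommit(false);
    PreparedStatement ins = con.prepareStatement(
        "INSERT INTO parent_table (id, name) VALUES (?, ?)");
    for (long i = 1; i <= rowCount; i++) {
        ins.setLong(1, i);
        ins.setString(2, "name-" + i);
        ins.addBatch();
        if (i % 256 == 0) {        // flush and commit every 256 rows
            ins.executeBatch();
            con.commit();
        }
    }
    ins.executeBatch();            // flush any remaining rows
    con.commit();
    -----------------------------------------------------------------------------
    Batching and less frequent commits usually make a very large difference to insert throughput, whichever database you are talking to.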

    Chris
  • 2. Re: Is this TPS expected?
    914264 Newbie
    Hi Chris,
    I will supply more tests and details next week.
  • 3. Re: Is this TPS expected?
    914264 Newbie
    Here are more details about the test:

    1. TimesTen database server:
    CPU: 24 cores, Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
    RAM: 32G

    2. Oracle database server:
    CPU: 16 cores, Intel(R) Xeon(R) CPU E5640 @ 2.67GHz
    RAM: 32G

    3. My Java program is single-threaded, and I have just added multi-thread capability.

    4. A parameterized insert statement is used, and every full insert operation inserts a record and its child record.
    Every full insert operation ends with a commit. By default TT does not use durable commits.

    5. There are only two tables, a parent table and a child table, related by a single foreign key.
    Each table has its own primary key ID and no other indexed columns.

    After the multi-threaded tests were done, I created a TPS line chart, but I cannot upload it here.
  • 4. Re: Is this TPS expected?
    ChrisJenkins Guru
    Hi,

    Thanks for the extra detail. A few clarifications/questions and some suggestions:

    1. These are regular tables, not AWT cache tables, right?

    2. In your sys.odbc.ini settings you should set LogBufMB=1024, LogFileSize=1024 and LogBufParallelism=24. You are currently using the defaults, which are very low for a write-intensive workload (there is an example DSN sketch after this list).

    3. In your sys.odbc.ini settings for this DSN you should set MemLock=4. Depending on your O/S platform (you didn't mention what that is) you may need to set some other things (such as modifying values in /etc/security/limits.conf on Linux) to allow this to work.

    4. Please can you provide a ttSchema output so we can see full details of the two tables (column definitions, key and index definitions, etc.).

    5. The JDBC URL that you are using to connect to TimesTen from the application.

    6. Let's stick to a single-threaded test for now. Once that is optimised we can look at maximising scalability.
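    To illustrate points 2 and 3, the DSN entry in sys.odbc.ini would end up looking something like this (a sketch based on the DSN you posted; adjust paths and sizes for your environment):
    -----------------------------------------------------------------------------
    [my_ttdb]
    Driver=/home/timesten/TimesTen/tt1122/lib/libtten.so
    DataStore=/home/timesten/data/database/my_ttdb
    LogDir=/home/timesten/logs
    PermSize=20000
    TempSize=2000
    DatabaseCharacterSet=AL32UTF8
    OracleNetServiceName=amq_915
    # additions for a write-intensive workload (point 2)
    LogBufMB=1024
    LogFileSize=1024
    LogBufParallelism=24
    # memory locking (point 3)
    MemLock=4
    -----------------------------------------------------------------------------
    On Linux this typically also means raising the memlock limits for the TimesTen instance user in /etc/security/limits.conf, as mentioned in point 3.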

    Thanks,

    Chris
  • 5. Re: Is this TPS expected?
    914264 Newbie
    Hi Chris, here are more details:

    1. There are only 2 AWT cache tables and 2 sequences.

    2. Yes, most of the parameters are the defaults.

    3. The OS is CentOS release 6.3 (Final), 64-bit, and the JVM is also 64-bit.


    4. Output of ttSchema:
    --------------------------------------------------------------------------------------------------------------------------------------
    -- Database is in Oracle type mode
    call ttGridCreate ('EPPGRID');
    call ttGridNameSet ('EPPGRID');

    create dynamic asynchronous writethrough global cache group CACHEUSER.EPP_DOMAIN_GROUP
    from
    GTLD_WHOIS_TEST.GTLD_DOMAIN (
    "ID" NUMBER(10) NOT NULL,
    ROID VARCHAR2(100 BYTE) INLINE NOT NULL,
    "NAME" VARCHAR2(255 BYTE) NOT INLINE NOT NULL,
    AUTHINFO VARCHAR2(16 BYTE) INLINE NOT NULL,
    TLD_ID NUMBER(10) NOT NULL,
    REGISTRANT VARCHAR2(32 BYTE) INLINE,
    CONTACT_ADMIN VARCHAR2(32 BYTE) INLINE,
    CONTACT_TECH VARCHAR2(32 BYTE) INLINE,
    CONTACT_BILLING VARCHAR2(32 BYTE) INLINE,
    CLID VARCHAR2(32 BYTE) INLINE NOT NULL,
    CRID VARCHAR2(32 BYTE) INLINE NOT NULL,
    CRDATE DATE NOT NULL,
    UPID VARCHAR2(32 BYTE) INLINE,
    UP_DATE DATE,
    TRDATE DATE,
    TRSTATUS VARCHAR2(255 BYTE) NOT INLINE,
    REID VARCHAR2(32 BYTE) INLINE,
    REDATE DATE,
    ACID VARCHAR2(32 BYTE) INLINE,
    ACDATE DATE,
    EXDATE DATE NOT NULL,
    ORIGINAL_PUNY VARCHAR2(255 BYTE) NOT INLINE,
    PREFER_PUNY VARCHAR2(255 BYTE) NOT INLINE,
    primary key ("ID"))
    AGING LRU ON,

    GTLD_WHOIS_TEST.GTLD_DOMAIN_STATUS (
    "ID" NUMBER(10) NOT NULL,
    DOMAIN_ID NUMBER(10) NOT NULL,
    DOMAIN_NAME VARCHAR2(255 BYTE) NOT INLINE NOT NULL,
    STATUS VARCHAR2(32 BYTE) INLINE NOT NULL,
    CREATE_TIME DATE NOT NULL,
    UPDATE_TIME DATE NOT NULL,
    primary key ("ID"),
    foreign key (DOMAIN_ID)
    references GTLD_WHOIS_TEST.GTLD_DOMAIN ("ID"));



    create sequence GTLD_WHOIS_TEST.SEQ_GTLD_DOMAIN
    increment by 1
    minvalue 1
    maxvalue 999999999999999
    start with 15660041
    cache 20;

    create sequence GTLD_WHOIS_TEST.SEQ_GTLD_DOMAIN_STATUS
    increment by 1
    minvalue 1
    maxvalue 999999999999999
    start with 15660041
    cache 20;
    --------------------------------------------------------------------------------------------------------------------------------------
    5. JDBC URL for direct mode: jdbc:timesten:direct:dsn=my_ttdb
    JDBC URL for client/server: jdbc:timesten:client:dsn=tt_dsn_9_6

    Thanks again.
  • 6. Re: Is this TPS expected?
    ChrisJenkins Guru
    Thanks for the extra detail. I think I see the issue...

    - Why are you using a GLOBAL cache group? Do you need cache grid functionality? Unless you need grid functionality and have two or more cache members connected together in a cache grid you should not use a global cache group. There is significant additional overhead with a global cache group even when there is only one grid member.

    - Do you need a DYNAMIC cache group? With a dynamic AWT cache group each INSERT to a child table has to perform an existence check against Oracle. This will severely limit the insert performance. If you do not need the DYNAMIC features then using a non-DYNAMIC cache group will avoid this overhead.

    Chris
  • 7. Re: Is this TPS expected?
    914264 Newbie
    Hi Chris, I am using the GLOBAL cache group and cache grid in order to set up the active standby pair to provide high availability for my application.
    After the DYNAMIC cache group is created I issue a LOAD CACHE GROUP statement to load the data from Oracle into TimesTen memory, so there is no more overhead to check for parent existence, right?
  • 8. Re: Is this TPS expected?
    ChrisJenkins Guru
    Hi,

    You do not need to use cache grid to have a replicated cache; they are different and (potentially) complementary capabilities. For a replicated cache you can just use regular (not global) cache groups and then define an active/standby pair replication scheme for your pair of cache datastores. I recommend changing to a non-global cache group. It will avoid some potential issues and reduce overhead.

    A DYNAMIC cache group is one where, for certain types of access (full key equality) in SELECT/UPDATE/DELETE, and for certain types of INSERT, a 'cache miss' (no matching row found in TimesTen) causes a dynamic fetch of the required records from Oracle, which are loaded into the cache and also returned to the application (in the SELECT case).

    If you are preloading all the data you need using LOAD CACHE GROUP then you likely do not need a DYNAMIC cache group either.

    Just to verify whether it is the GLOBAL/DYNAMIC modifiers that are impacting your performance, can you (temporarily) please change to a regular AWT cache group (recreate it without the GLOBAL and DYNAMIC keywords, as sketched below) and rerun your test.
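    Concretely, using the DDL from your ttSchema output, keep everything else the same and just change the first line from:

    create dynamic asynchronous writethrough global cache group CACHEUSER.EPP_DOMAIN_GROUP

    to:

    create asynchronous writethrough cache group CACHEUSER.EPP_DOMAIN_GROUP

    (The ttGridCreate / ttGridNameSet calls are only needed when a cache grid is involved, so they can be dropped as well.)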

    Thanks,

    Chris
  • 9. Re: Is this TPS expected?
    914264 Newbie
    Chris, I have switched to a regular AWT cache group by removing the keywords GLOBAL and DYNAMIC, and I have rebuilt the active standby pair.
    The new AWT cache group does not involve the cache grid anymore.

    My configuration for the dsn is the following:
    ------------------------------------------------------------
    [eppdb]
    Driver=/home/timesten/TimesTen/tt1122/lib/libtten.so
    DataStore=/home/timesten/data/database/eppdb
    #LogDir=/home/timesten/logs/eppdb
    PermSize=20000
    TempSize=2560
    DatabaseCharacterSet=AL32UTF8
    Connections=200
    ConnectionCharacterSet=AL32UTF8
    LogBufMB=256
    LogFileSize=1024
    #24 cores cpu
    LogBufParallelism=12
    ReplicationParallelism=3
    ReceiverThreads=2
    RecoveryThreads=10
    OracleNetServiceName=amq_915
    ------------------------------------------------------------
    With ReplicationParallelism set to 6 there is a TPS downgrade compared with the preceding tests (the GLOBAL, DYNAMIC cache group).
    With ReplicationParallelism set to 3 the TPS is better than in the preceding tests.
    Why is there a TPS downgrade when ReplicationParallelism is set to 6, given that this value seems reasonable (LogBufParallelism / 2)?
    Here is my statement for creating the active standby pair.
    -----------------------------------------------------------
    CREATE ACTIVE STANDBY PAIR eppdb ON "baal", eppdb ON "diablo"
    RETURN TWOSAFE
    STORE eppdb ON "baal" DISABLE RETURN ALL 5 RESUME RETURN 20
    STORE eppdb ON "diablo" DISABLE RETURN ALL 5 RESUME RETURN 20;
    -----------------------------------------------------------
    So as a newbie I'm not sure how to tune the value of ReplicationParallelism for the best performance.
  • 10. Re: Is this TPS expected?
    ChrisJenkins Guru
    You tune it by trying different values until you find one that is optimal for your workload on your hardware. So if 3 works best for your case then 3 is the best value :-)

    Chris
