
How to set DDLCommitBehavior

energyd Member Posts: 7
edited May 14, 2016 10:34AM in TimesTen In-Memory Database

TimesTen: Release: 11.2.2.6.12

We are writing scripts to automate schema changes (through DDL and DML files) during our software releases, using ttIsql. We want the behavior to be that whenever there is an error in any of the DDL/DML, TimesTen exits and rolls back, so that we can be sure that either all of the changes are committed or none are.

So, I guess we need to set Autocommit=0 and DDLCommitBehavior=1. (Please correct me if I'm wrong)

While we could set autocommit in the ttIsql connection string, we cannot find how to set DDLCommitBehavior correctly. Please advise on how we can set DDLCommitBehavior using ttIsql.
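
For illustration, this is roughly the kind of script we mean (the file, table and column names here are made-up examples):

-- release_change.sql: run with autocommit disabled, so nothing
-- persists unless the final commit is reached
autocommit 0;
ALTER TABLE app.orders ADD (status VARCHAR2(10));
UPDATE app.orders SET status = 'NEW';
commit;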

Thanks,

D

Answers

  • ChrisJenkins-Oracle (Senior Director, In-Memory Technology, United Kingdom) Member Posts: 3,412 Employee
    edited May 11, 2016 2:38AM

    Hi,

You cannot set autocommit in the connection string. Autocommit is not an ODBC attribute; it is an API setting. You must disable autocommit using the ttIsql command 'autocommit 0;' as the first line of every script. DDLCommitBehavior is an ODBC attribute, so it can be set in a connection string:

ttIsql -f myscript.sql -connstr "DSN=mydsn;DDLCommitBehavior=1"

However, some important caveats regarding DDLCommitBehavior:

    1.    This option will be deprecated in the next major release and removed in the release after that. We do not recommend that anyone start planning now to use DDLCommitBehavior=1.

    2.    If you have PLSQL=1 (which is the default if you have not explicitly set it to 0) then you cannot use DDLCommitBehavior=1.

    3.    If you really must use DDLCommitBehavior=1 then you should set it in the connection string just for those connections that really need this obsolete behaviour. Do not enable it globally in the DSN attributes.

    I would encourage you to find an alternative to using DDLCommitBehavior=1; perhaps running a script that explicitly undoes the DDL changes.
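
    For example (a sketch only, with made-up object names), each change script could be paired with a compensating script that is run if anything fails:

    -- undo_change.sql: explicitly reverses the DDL from the change script
    autocommit 1;
    ALTER TABLE app.orders DROP (status);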

    Chris

  • energyd Member Posts: 7
    edited May 12, 2016 10:38AM

Thanks for your reply, Chris. I have another question, related to uncommitted transactions and temp table size. If we turn autocommit off, the uncommitted transaction's changes will accumulate in some temp space before we issue a commit.

    How can we see, measure and set that temp space size? We are worried that a very large transaction will exceed the temp space limit and blow up the DB.

    Also, is this temp space also used to hold temporary data during replication? Will a very large transaction cause replication to fail? How do we monitor it?

In the log location I found the following files:

    1.1M Apr 17 00:01 DB.assert
    1.3K May 11 13:46 DB.trace
    2.7M May 11 13:46 DB.inval
    1.0G May 11 13:50 DB.res0
    1.0G May 11 13:50 DB.res1
    1.0G May 11 13:50 DB.res2
    6.1G May 11 14:41 DB.ds1
    1.0G May 11 14:43 DB.log1510
    6.1G May 11 14:51 DB.ds0
    199M May 11 14:54 DB.log1511

I'm using the DB.log files to monitor the temp data size when I test a large transaction. But sometimes I found the DB.log file size going up by 400 MB even though I only set < TempSize, 350 >, and the DB kept working fine. How come?

    Our config:

< CacheAwtMethod, 1 >
    < CacheAwtParallelism, 1 >
    < CacheGridEnable, 1 >
    < CacheGridMsgWait, 60 >
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < CommitBufferSizeMax, 0 >
    < Connections, 2000 >
    < DDLCommitBehavior, 0 >             -> could be temporarily set to 1
    < DDLReplicationAction, INCLUDE >
    < DDLReplicationLevel, 2 >
    < DataStore, xxxx >
    < DynamicLoadEnable, 1 >
    < DuplicateBindMode, 0 >
    < DurableCommits, 1 >
    < DynamicLoadErrorMode, 0 >
    < Isolation, 1 >
    < RangeIndexType, 1 >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogAutoTruncate, 1 >
    < LogBufMB, 1024 >
    < LogBufParallelism, 20 >
    < LogDir, xxx/xxx >
    < LogFileSize, 1024 >
    < LogFlushMethod, 1 >
    < LogPurge, 1 >
    < Logging, 1 >
    < MemoryLock, 4 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleNetServiceName, <NULL> >
    < PLSCOPE_SETTINGS, IDENTIFIERS:NONE >
    < PLSQL, 0 >
    < PassThrough, 0 >
    < PermSize, 7400 >
    < PermWarnThreshold, 90 >
    < Preallocate, 0 >
    < PrivateCommands, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 1 >
    < ReceiverThreads, 1 >
    < RecoveryThreads, 1 >
    < ReplicationApplyOrdering, 0 >
    < ReplicationParallelism, 1 >
    < ReplicationParallelismBufferMB, 64 >
    < ReplicationTrack, -1 >
    < SQLQueryTimeout, 300 >
    < TempSize, 350 >
    < TempWarnThreshold, 90 >
    < Temporary, 0 >
    < TypeMode, 0 >

    Thanks a lot,

  • ChrisJenkins-Oracle (Senior Director, In-Memory Technology, United Kingdom) Member Posts: 3,412 Employee
    edited May 12, 2016 10:35AM

When you have autocommit disabled, which is the normal and best practice for any application, it is up to the application (or your scripts) to issue commit operations at suitable points (e.g. to delimit database transactions). It is very, very important to keep transactions small. You should never update/insert/delete more than a few thousand rows in any one transaction.

    Very large transactions will perform very poorly and are also highly problematic for replication. This has nothing directly to do with Temp space but is just due to internal mechanisms of TimesTen. Large transactions do hold locks, and locks are allocated from Temp space, so there is an impact there, but the primary reason for keeping transactions small is performance and robustness of replication.

    Replication does not use significant amounts of temp space; uncommitted transactions that have been captured from the logs for replication are stored in the replication agent's private memory. Again, very large transactions are bad news in a replicated environment.
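
    For example (with a hypothetical table), a bulk purge can be broken into many small transactions using the FIRST clause:

    -- delete in batches of a few thousand rows, committing each batch
    autocommit 0;
    DELETE FIRST 5000 FROM app.orders WHERE status = 'DONE';
    commit;
    -- repeat the DELETE/commit pair until no rows are deleted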

    So, you need to ensure that your applications, scripts etc. issue commit appropriately and then you will be fine.

    Chris

  • energyd Member Posts: 7
    edited May 12, 2016 11:14AM

    Thanks Chris,

I understand the risk of very big transactions. My concern now is only the temporary table size: I just need a way to monitor the size of the temporary table used to hold uncommitted data. Let's first set aside the replication part (I would stop the replication agent before applying any schema change, then destroy the secondaries and duplicate from the primary). For each SQL file I want to apply, I would first run it with autocommit=0 and DDLCommitBehavior=1 and without any commit; if it succeeds without any error, I would then turn autocommit on and apply the script for real. Our DML changes are usually small; the bigger concern is DDL changes (like adding a column to a big table or creating a materialized view).

    I just want to be sure the primary is OK.

    So with autocommit=0 and DDLCommitBehavior=1 set, how do I know the exact size of the temp data during a transaction (either purely DDL or purely DML)?
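
    Concretely, the dry run I have in mind would be something like (file name made up):

    ttIsql -f change.sql -connstr "DSN=mydsn;DDLCommitBehavior=1"

    where change.sql starts with 'autocommit 0;' and contains no commit, so everything should roll back when ttIsql disconnects.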

I'm using the DB.log files (listed in my previous post) to monitor the temp data size when I test a large transaction. But sometimes I found the DB.log file size going up by 400 MB even though I only set < TempSize, 350 >, and the DB kept working fine. So I guess either the log files or TempSize is not a good indicator of the size?


Also, I noticed that dssize is not helpful either: TEMP_IN_USE_HIGH_WATER and TEMP_IN_USE did not change.
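
    (I believe the same counters can also be queried directly from SYS.MONITOR; this query is just my own sketch:)

    SELECT temp_in_use_size, temp_in_use_high_water FROM sys.monitor;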


    Please help~


    Thanks a lot!

  • ChrisJenkins-Oracle (Senior Director, In-Memory Technology, United Kingdom) Member Posts: 3,412 Employee
    edited May 12, 2016 11:12AM Answer ✓

As I said, uncommitted data is not stored in the Temp area, nor in a temp table. The only things relating to uncommitted transactions that are stored in Temp are locks, and unless the transaction is truly enormous or Temp space is set very small this should not be an issue. You should not need to worry about it. But...

    Very large transactions are problematic even without replication. As changes occur in the database as part of a transaction, TimesTen keeps track of freed-up space (old versions of modified rows, deleted rows etc.) via chaining of log records. This 'chain' information is held in a memory buffer in the database. As part of 'commit', TimesTen walks this chain to 'reclaim' the freed-up space and make it available for other uses. Provided that the number of changes in a transaction remains small, this chain is held entirely in memory and so commit is very fast.

    Once the number of changes in a transaction exceeds what will fit in the buffer, we fall back to reading the actual log records for the transaction from disk, in reverse order. As you can imagine, this is very, very slow compared to in-memory processing, and it gets worse the larger the transaction is. In addition, this reverse I/O introduces contention with regular log I/O, which is normally large-block sequential I/O, so it can impact other transactions, not just the one committing.

    TimesTen does allow you to tune the size of the 'commit buffer' (see the ODBC attribute CommitBufferSizeMax, the ALTER SESSION command and the builtin procedures ttCommitBufferStats and ttCommitBufferStatsReset in the TimesTen documentation) to allow larger transactions to commit without the need to read reclaim information from disk. But using these is an iterative approach: you cannot 'calculate' how much memory will be needed; you have to measure it (or just assign a very large maximum size).
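
    As a sketch of that iterative tuning from ttIsql (the 500 here is an arbitrary illustration; check the documentation for the attribute's units and for the exact output of the builtins):

    -- raise the commit-buffer limit for this session before re-testing
    ALTER SESSION SET COMMIT_BUFFER_SIZE_MAX = 500;
    -- ...run the large transaction and commit it...
    -- then check whether the commit buffer overflowed during the test
    call ttCommitBufferStats;
    -- and reset the counters before the next measurement
    call ttCommitBufferStatsReset;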


    The best/easiest thing is still to keep transactions to a reasonable size.


    Chris

  • energyd Member Posts: 7
    edited May 12, 2016 11:46AM

    Ok, Chris. Maybe what I really should ask about is the "commit buffer" size instead of temp size.

During the schema change process we would stop all other activity on the DB and stop all other Java processes on that host, so slowness of the DB at that time won't be an issue. Also, our DML changes are usually very small; the bigger concern is DDL changes (like adding a column to a big table or creating a materialized view).

    I noticed our CommitBufferSizeMax is set to 0 by default, which I took to mean that all the 'commit'-related data is stored on disk. Does that mean that as long as the disk has enough space, the transaction won't fail?

    "Once the number of changes in a transaction exceeds what will fit in the buffer then we fall back to reading the actual log records for the transaction from disk": by "actual log records", do you mean the DB.log files I posted above? I'm seeing a 400 MB increase in those files during my "big DDL" testing (there is no other activity on the DB instance during that time). So is that a good measure of the space we need for the commit?

  • ChrisJenkins-Oracle (Senior Director, In-Memory Technology, United Kingdom) Member Posts: 3,412 Employee
    edited May 13, 2016 5:18AM

    No, CommitBufferSizeMax=0 means 'use the default size' (128 KB) - see the documentation. TimesTen always uses an in-memory buffer.

With DDLCommitBehavior=0, each DDL statement is a separate transaction (there is an implicit commit before and after the DDL statement). So although your sequence of DDL+DML may result in 400 MB of log records, each individual DDL will be (much) less than that. Adding new columns to an existing table does not rebuild the entire table; it is done as an in-place ALTER, so the amount of log for this is pretty small. I'm not sure about creating a materialised view; you'd have to experiment with that to see. I imagine it will generate a fair bit of log, depending on how large the underlying tables are. Basically you need to look at each individual DDL operation, see which is the 'biggest' in terms of log generation, and then size for that.
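
    As a sketch of how to take that measurement (with a made-up table name): the ttBookmark builtin reports the current write position in the transaction log, so calling it immediately before and after a statement shows roughly how much log that statement generated.

    call ttBookmark;                                   -- note the write log file number and offset
    ALTER TABLE app.orders ADD (status VARCHAR2(10));  -- the DDL under test
    call ttBookmark;                                   -- the change from the first call is roughly the log generated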

    Chris

  • energyd Member Posts: 7
    edited May 13, 2016 2:09PM

Thanks, Chris. So the log files (in the directory specified by the 'LogDir' attribute) are the right place to look for how much space we need when test-executing DDLs (with autocommit=0 and DDLCommitBehavior=1)?

  • ChrisJenkins-Oracle (Senior Director, In-Memory Technology, United Kingdom) Member Posts: 3,412 Employee
    edited May 14, 2016 10:34AM

Yes, provided there isn't anything else going on at the time, the amount of log data generated should give you some idea (but it isn't exact). I will ask you one last time: please don't use DDLCommitBehavior=1. If you build a process based on it, it won't work in the next major release of TimesTen, since we are desupporting DDLCommitBehavior=1.

    Chris

This discussion has been closed.