Transaction logging. The log buffer is flushed continually, not just when it becomes full (flushing it only when full would expose the database to massive data loss in the event of a failure). Also, you cannot run a real system with such a large checkpoint interval. You need to checkpoint 'frequently enough', which in many cases means more often than the default of 600 seconds. In addition, some components, such as the replication agent, issue durable commits fairly frequently to ensure there are points of guaranteed recoverability.
Although TimesTen is an in-memory database it provides the same data-safety guarantees as a disk-based database. However, if everything is properly configured and the hardware is up to the job it can still deliver very high performance. So, some questions:
1. How many disks does this machine have and what type are they?
2. Have you placed the checkpoint files and transaction log files on disks separate from each other and from any other high-activity files?
3. During your test run, can you run the 'monitor' command from ttIsql several times at 60 second intervals and post the output?
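For reference, the command can be run from a ttIsql session like this (the DSN name 'mydsn' is just a placeholder for your actual DSN):

```
$ ttIsql mydsn
Command> monitor;
(wait ~60 seconds)
Command> monitor;
```

Taking several samples lets us compute deltas for the counters rather than looking only at absolute values.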
Chris, nice to see you again. I posted more information about my environment some days ago and I hope you remember it.
Can you tell me when, or under what conditions, TimesTen will flush the transaction log?
I worried that the checkpoint operation would have a big impact on performance, so I assigned a large value to the checkpoint interval.
If 2 minutes is more reasonable I will try that.
Because it's an in-memory database I expected the performance to be amazing, although I understand that stronger data-safety guarantees
cost more and degrade performance.
My boss has asked me twice why the in-memory database has such poor performance,
and I could not explain anything to him because it's only a black box to me. Thanks for your explanation, Chris.
Before I can provide the answers to your questions, can you tell me:
1. How do I retrieve the disk type, and what tool or command should I use?
2. The checkpoint files and transaction log files are placed on the same disk, in the same directory.
3. I will post the output tomorrow morning.
By 'disk type' I mean information such as: are they SATA, SCSI or SAS? Are they 5400 rpm, 7200 rpm, 10,000 rpm or 15,000 rpm? Is the controller a standard controller or some sort of enhanced RAID controller with a large cache, etc.? You need to check the hardware spec for the machine; I do not know if there is any tool that will give you this info.
TimesTen will flush the transaction log buffer to disk whenever there is any data in it that needs to be flushed. So under a continuous heavy write workload (such as inserts into an AWT cache table) the flushing will be continuous too.
Checkpointing can be a CPU- and I/O-intensive operation but it is essential for data safety and to prevent excessive accumulation of transaction log files. So you need to have it occur at a reasonable frequency. It is also essential that you separate the checkpoint files and the transaction log files onto separate physical disks or disk groups to avoid I/O contention. You can create the datastore with the LogDir attribute to make this separation.
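As a sketch, assuming a two-disk layout, the separation and a shorter checkpoint interval can be expressed as DSN attributes in sys.odbc.ini (all paths and values here are illustrative placeholders, not a recommendation for your specific system):

```
[mydsn]
# Checkpoint files live under the DataStore path
DataStore=/disk1/timesten/ds/mydsn
# Transaction logs on a separate physical disk from the checkpoint files
LogDir=/disk2/timesten/logs
# Checkpoint every 120 seconds instead of the 600 second default
CkptFrequency=120
```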
It is easy to make a non-transactional, non-persistent in-memory database, but the uses for such a thing are quite limited. TimesTen has much wider usage possibilities and, with the correct configuration and adequate hardware, it will still give very high performance. The 'monitor' output may reveal something as to why you are seeing much lower than expected performance, so when we have that we shall see what it shows.
Chris, my disk type is SAS and the disk speed is 10,000 rpm.
If TimesTen flushes the transaction log buffer continuously, are the disk type and speed key factors for performance?
I will try to give a smaller value for the checkpoint interval and separate the checkpoint files and the transaction log files onto separate physical disks.
You can download the output of the monitor command here (executed every few minutes, not exactly at 1-minute intervals).
Transaction log flushing is a background activity that is outside the application transaction path (unless you use DurableCommits=1 of course) so it does not have a direct impact on performance. However, it does have an indirect impact in several ways depending on your setup:
1. If the volume of log data being generated by the workload cannot be flushed to disk as quickly as it is being generated, this exerts a kind of 'back pressure' on the logging mechanism which can degrade throughput. This situation is evidenced by a non-zero and increasing value for SYS.MONITOR.LOG_BUFFER_WAITS. I do not see any evidence of this in your 'monitor' output, but of course you do not have checkpointing working properly at the moment. If the logs and checkpoint files are on the same disk then when a checkpoint occurs it can significantly impede logging due to I/O contention.
2. When replication and/or AWT are being used, the replication agent tries to capture changes from the in-memory log buffer. If the changes cannot be captured/propagated/applied/acknowledged fast enough compared to the workload then the repagent will 'fall behind' and a replication backlog will build up. Once this happens the repagent has to read the logs on disk to capture changes. Not only is this much slower than capturing from the in-memory log buffer, but it also changes the I/O pattern on the logs from large-block sequential writes to a mix of large-block sequential writes and large-block sequential reads. Effectively the disk now has to do random access instead of sequential access. This has a major impact on the disk I/O and will significantly impact performance. Your monitor output shows evidence of this being a serious issue in your setup. You can see that LOG_FS_READS (which should ideally be zero) is both large and increasing across the samples. Also, the delta between LAST_LOG_FILE (the number of the last used log file) and REPHOLD_LOG_FILE is increasing rapidly, showing that replication/AWT is falling further and further behind.
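If it helps, the relevant indicators can also be sampled directly with a query against SYS.MONITOR from ttIsql (this is just one way to read the same counters the 'monitor' command reports):

```sql
-- LOG_BUFFER_WAITS should stay flat, LOG_FS_READS should ideally be zero,
-- and a growing LAST_LOG_FILE - REPHOLD_LOG_FILE gap means replication/AWT
-- is falling further behind.
SELECT log_buffer_waits,
       log_fs_reads,
       last_log_file,
       rephold_log_file,
       last_log_file - rephold_log_file AS rep_backlog_files
  FROM sys.monitor;
```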
If you are using async replication then the issue could be either with replication or AWT. If you are using sync (RETURN TWOSAFE) replication then the issue is with AWT propagation to Oracle. I can't recall your exact setup; maybe you can remind me? Of course RETURN TWOSAFE replication will also ultimately limit the throughput that you can achieve.
At the end of the day there is no magic here and it all comes down to the laws of physics. If you need 100% data safety then you need either synchronous replication or synchronous commit to disk, and both will limit the maximum performance that can be achieved. If you can accept some possibility of data loss on failure then you can use async replication, but the performance of capture/propagate/apply will still set an upper limit on the throughput that can be sustained. If you are propagating all changes to Oracle then Oracle will become the limiting factor even with async propagation.
So, depending on your requirements you need to choose what is appropriate. And you may have to increase hardware configuration or even consider alternative overall architectures if you cannot achieve all your requirements with the current architecture.
Thanks for the detailed information.
I'm using a sync replication policy for the active standby pair.
Here is my DSN configuration:
Can you tell me which options are unreasonable?
This mostly looks reasonable. Personally I would increase LogFileSize, say to 1 GB. Also, you have not explicitly set CacheAWTParallelism so it should default to 2*ReplicationParallelism (i.e. 6). You should check with ttRepAdmin -showstatus to be sure you do have 6 AWT propagators.
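For example (the DSN name 'mydsn' is a placeholder for your actual DSN):

```
$ ttRepAdmin -showstatus mydsn
```

Then look through the replication agent status it prints and count the AWT propagators.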
How many concurrent application processes/threads are working on TimesTen when you measure the 4000 tps rate? What is the workload like? Is it just INSERTs or are there other operations too?
I suspect that the overall limit of 4000 TPS is due to the synchronous replication. You may need to use more concurrent application threads/processes to increase this number.
However, even at 4000 TPS the propagation to Oracle is completely unable to keep up. You need to look at tuning Oracle (and maybe adding more hardware) in order to enable Oracle to keep up with this ingest rate. Until this issue is resolved there is no point trying to increase the rate within TimesTen as it will just exacerbate this problem.
Chris, in fact LogFileSize was 1 GB at first and I tuned it to 512 MB for tests. I will confirm tomorrow whether there are 6 AWT propagators.
My application has a connection pool with a max size of 100, and 100 threads (a thread pool) process the requests sent from LoadRunner.
LoadRunner simulates 200 users sending requests to my application concurrently. The workload logic is written in a procedure
which first checks for the presence of the related records; if they are not present, it obtains 3 sequences and inserts 3 records into 3 cache tables in 2 different cache groups.
I can post the procedure to your inbox if you want to look into what's going on.
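In outline, the procedure's logic looks roughly like this (all table, sequence, column and parameter names here are invented placeholders, not my real schema):

```sql
CREATE OR REPLACE PROCEDURE process_request(p_key VARCHAR2) IS
  v_count NUMBER;
  v_id1 NUMBER;
  v_id2 NUMBER;
  v_id3 NUMBER;
BEGIN
  -- 1. Check for the presence of the related record
  SELECT COUNT(*) INTO v_count FROM cache_tab1 WHERE req_key = p_key;
  IF v_count = 0 THEN
    -- 2. Obtain 3 sequence values
    SELECT seq1.NEXTVAL INTO v_id1 FROM dual;
    SELECT seq2.NEXTVAL INTO v_id2 FROM dual;
    SELECT seq3.NEXTVAL INTO v_id3 FROM dual;
    -- 3. Insert 3 records into 3 cache tables (in 2 different cache groups)
    INSERT INTO cache_tab1 (id, req_key) VALUES (v_id1, p_key);
    INSERT INTO cache_tab2 (id, req_key) VALUES (v_id2, p_key);
    INSERT INTO cache_tab3 (id, req_key) VALUES (v_id3, p_key);
  END IF;
  COMMIT;
END;
```

So each successful request generates three inserts that must be propagated through AWT to Oracle.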
Only when the application is deployed on the same server as the active database can I obtain 4000+ TPS (connecting via a Unix domain socket),
while I get 3000+ TPS when the application is deployed on the other server, where the standby database resides.
At the beginning my application connected directly to the Oracle database; that was slow and the TPS was very low, but we have no plan or budget to purchase a new independent RAID array.
So the original motivation was to employ TimesTen as a cache layer in front of the Oracle database to provide an amazing TPS.
If the Oracle database is the biggest bottleneck, this will end in disappointment.
Thank you, Chris. I still hope the performance will get better with your help.
After our DBA tuned some settings for the Oracle database, my application connecting directly to Oracle obtains a higher TPS than before.
The TPS is 3000+ and satisfies the requirement. My boss told me that it's time to stop researching and testing the TimesTen database.
In any case, I still need to thank Chris for all the help.