7 Replies Latest reply: Sep 1, 2012 2:13 AM by ChrisJenkins

Best Hardware Specifications for TimesTen

951138 Newbie
What are the best hardware specifications for a TimesTen environment? In my use case I have 500 GB to 1 TB of data.
  • 1. Re: Best Hardware Specifications for TimesTen
    ChrisJenkins Guru
    Given the information you have provided, this is really a 'how long is a piece of string' question. The hardware spec depends on many factors:

    1. How much data you want to hold in TimesTen and how much memory that will need. You cannot easily determine this just from the raw data size. You need to create the relevant tables and indexes in TimesTen and then use the ttSize tool to estimate the memory requirements (see the sketch after this list). If the data occupies 500 GB in an Oracle DB then you will typically need at least 1.5 - 2 GB in TimesTen for the same data, maybe more.

    2. What workload you will run against TimesTen; is it query only or read/write? How many concurrent connections? How many SELECTs per second? Simple or complex SELECTs? How many DML operations per second? What persistence level?

    3. What TimesTen features you will use; cache from Oracle? Replication?

    4. TimesTen should be deployed on the same machine as the application, so what are the hardware requirements for the application excluding TimesTen?
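
    As a rough sketch of how you would use ttSize for point 1 (the table name myuser.mytable, the row count and the DSN mydsn are placeholders, not values from this thread):

        ttSize -tbl myuser.mytable -rows 10000000 mydsn

    The reported estimate includes per-row and index overhead, which is why it is usually much larger than the raw column data.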

    Without some reasonable quantitative information around these questions the only answer we can give is 'hardware that is adequate to the requirements'.

    Chris
  • 2. Re: Best Hardware Specifications for TimesTen
    Gennady Sigalaev Journeyer
    Hi Chris,
    "If the data occupies 500 GB in an Oracle DB then you will typically need at least 1.5 - 2 GB in TimesTen for the same data, maybe more."
    Are you sure :) ?

    Regards,
    Gena
  • 3. Re: Best Hardware Specifications for TimesTen
    ChrisJenkins Guru
    Whoops, lost a few zeros there! I should have said 1500 - 2000 GB.

    Thanks for spotting that Gennady!

    Chris
  • 4. Re: Best Hardware Specifications for TimesTen
    957803 Newbie
    Hi Chris,

    I would also be very interested in a good answer to this question! But I fully understand the need for input data :)

    The amount of raw data that our database is going to need to chew on is still quite uncertain, but what I'm more interested in is understanding the underlying processes, and finding the upper limit.

    It's a telco database where subscriber usage information is stored. Based on the data model I have calculated that the maximum amount of raw data is around 5 KB per subscriber. However, I saw (using the ttSize tool) that the total size of the database is a lot higher than that, and now I'm trying to figure out how much higher.

    How does the number of overhead bytes increase as the number of records in the database grows? Is there a linear (or exponential) guideline I can use for that? All tables have primary keys (of course); does that positively (or negatively) influence the amount of overhead?

    The ratio of reads to writes on the database is around 5 to 1, so for every 5 reads there is 1 write (this is a gross estimate, though). Some tables are only or mostly read; others have more like a 1:1 read/write ratio. How does the number of SELECT (or complex SELECT) statements influence the size of the database? In which portion of the database is that memory used (the temporary partition? the permanent partition? both? somewhere else?). All writes to the database happen through stored procedures (we're using TimesTen 11.2.2).

    We also have bidirectional replication to another site for redundancy / failover purposes. I assume that the memory required by the replication agents is mostly based on the load on the DB, not the size of the database. Is there a way I can make a meaningful prediction of the memory consumption by the replication processes? If, say, I know that I get 500 writes to a specific table per second, and one record in that table is 70 raw bytes, I can be pretty sure that at least 70*500 bytes of memory will be used by the transaction log buffer until the transactions are transmitted to the other site. Or do I only need to look at the 'size' of the transaction, say: the length of the SQL statement?

    Also, an important factor: due to licensing constraints the databases are installed on a central server, with multiple application servers talking to it. The servers are very close to each other from a network point of view, but still: there is networking overhead. I know what that does to the response time from the application server's perspective, but does that also influence the memory consumption? Of course there are some open connections, but perhaps there is more going on under the hood...

    My apologies, I'm asking a lot of questions at the same time! I hope you can help me with some of these? :)

    Thanks a million in advance!

    Kind regards,
    Pieter
  • 5. Re: Best Hardware Specifications for TimesTen
    ChrisJenkins Guru
    Hi Pieter,

    Sorry for the delay in replying; I have been travelling on business and on vacation.

    Firstly, data overhead. There is a (small) fixed overhead per table/index; beyond that, memory usage depends on the table schema (datatypes, column sizes etc.). Overhead is pretty much linear, not exponential. ttSize is the best tool for getting a reasonably accurate estimate of the memory requirements for persistent data. Primary keys are indexes and so occupy some memory for that reason, but there is no special overhead other than that, and for most setups (e.g. if replication is used) primary keys are essential.

    SELECT operations do not occupy any persistent memory (PermSize). Complex SELECTs may occupy some memory in the temporary region (TempSize) for purposes of materialising result sets, sort space etc. Also, TimesTen maintains an internal cache of prepared statements which likewise occupies memory in the temporary region. Locks are also allocated from the temporary region. The only way to size the temporary region is empirically: start with some 'guesstimate', run your workload, observe the memory usage metrics in SYS.MONITOR and then adjust the memory parameters accordingly.
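
    For example, a minimal check of the permanent and temporary region usage might look like this in ttIsql (the column names below are the standard SYS.MONITOR columns as I recall them; verify them against your version's documentation):

        -- sizes are reported in KB; compare the high-water marks
        -- against the allocated sizes under peak load
        SELECT perm_allocated_size, perm_in_use_size, perm_in_use_high_water,
               temp_allocated_size, temp_in_use_size, temp_in_use_high_water
          FROM sys.monitor;

    If TEMP_IN_USE_HIGH_WATER gets close to TEMP_ALLOCATED_SIZE under your peak workload, TempSize needs to grow.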

    Depending on the overall load on the database and the hardware used, you will likely need to increase the size of the log buffer and log files beyond the (very small) default of 64 MB. If you do not do so you may face a logging bottleneck which will impact overall database performance and also affect replication performance. The tuning objective here is to keep SYS.MONITOR.LOG_BUFFER_WAITS and SYS.MONITOR.LOG_FS_READS at zero under all workload conditions (if possible).
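
    As a sketch, the log buffer and log file size are first-connection attributes in the DSN definition in sys.odbc.ini; the values below are illustrative starting points for experimentation, not recommendations:

        [mydsn]
        # ... other attributes omitted ...
        LogBufMB=256
        LogFileSize=256

    After a change, re-run the workload and re-check LOG_BUFFER_WAITS and LOG_FS_READS in SYS.MONITOR.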

    TimesTen replication only propagates committed transactions. As the repagent mines the transaction log buffer/log files it maintains lists of 'to be replicated' transactions in memory. As commits are seen for these transactions they are packaged and added to the replication queue. The repagent memory requirement depends on the number and size of these in-progress, uncommitted transactions. However, the repagent uses memory for many things, so a simple formula such as the one you suggest will only give a very rough indication. Again, the only way to get a good handle on this is to observe it under real workload conditions. Note that if something occurs that causes the repagent to allocate a lot of memory (such as a single very large transaction - something to be avoided in a replicated environment!) then the repagent cannot release that memory back to the O/S (there is no mechanism available in any O/S to do this) and so that (virtual) memory will remain allocated to the repagent until the repagent is restarted.
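
    To make 'very rough indication' concrete with your own numbers (illustrative arithmetic only; the overhead is a guess, not a measured figure):

        500 writes/s x 70 bytes = ~35 KB/s of raw row data,
        but each log record also carries header bytes and each
        transaction adds a commit record, so the real log volume
        is typically several times that raw figure.

    So treat such a calculation as a lower bound and confirm the actual numbers by observation.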

    Using client/server connectivity to TimesTen will of course significantly impact overall performance due to network round trips. There are some additional server-side memory requirements for the server processes that handle client/server connections. You can reduce these by configuring the TimesTen server for multi-threaded operation rather than the default process-based mode.

    I hope this information is helpful, but of course getting to an actual hardware sizing still needs a lot more information and some actual experiments :-)

    Chris
  • 6. Re: Best Hardware Specifications for TimesTen
    957803 Newbie
    Hi Chris,

    Thanks a lot for the detailed information, very insightful! Good to know that overhead grows linearly with the raw data; that makes it easier to extrapolate. I am especially interested in one thing you said:

    "You can reduce these by configuring the TimesTen server for multi-threaded operation rather than the default process-based mode."

    Does this refer to the fact that TimesTen can accept multiple (client) threads on the same connection? Is this a big performance win? Currently I am hoping (though I haven't verified this) that the client processes have a connection pool large enough for each thread to have its own connection. I have configured the Connections attribute in sys.odbc.ini to support this (currently set to 1024 connections), and appropriately configured the kernel semaphores.

    We are currently having performance issues and I'm trying to find out where the bottleneck is (and whether it is even in the database, or whether our client application is giving us headaches). I have increased the transaction log buffer size and LogBufParallelism; TempSize and PermSize should be big enough (otherwise we'll notice), but I'll look at the other parameters as well. If I get really stuck I might contact you again, if that's OK :).

    One more question: I noticed that the replication agent process's memory map (using pmap) is around 5 GB, even under zero load. I assume it just reserves a big chunk of memory for itself to be used if needed. That amount probably grows if the process needs more memory, right? Can I configure it so that it doesn't allocate more than, say, 10 GB?

    Thanks again for your help & explanations!

    Kind regards,
    Pieter
  • 7. Re: Best Hardware Specifications for TimesTen
    ChrisJenkins Guru
    Hi Pieter,

    The 'Connections' attribute should in general be left at the default. A TimesTen datastore can support at most 2000 application connections; setting Connections reduces this number (in your case to 1024). The only benefit of this is that it reduces the number of O/S semaphores required; on modern O/S and hardware (memory) this saving is negligible.

    By default, the TimesTen server process (ttcserver, used only for client/server connections) works in a 'one process per connection' model whereby every client connection has a dedicated server process handling it on the server side. This can be quite inefficient at the O/S level if you have a lot of connections. You can instead specify that a server process should handle multiple client connections, using a separate thread for each connection rather than a dedicated process. You control the maximum number of connections a single process will handle and the maximum number of server processes per database using the attributes MaxConnsPerServer (default is 1, i.e. single threaded) and ServersPerDSN. Using the multi-threaded server is just a resource optimisation; there is no connection pooling going on and each client connection still maps to a dedicated local (direct mode) connection from a ttcserver process.
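
    As a hypothetical sketch (the DSN name and values are placeholders; check against your 11.2.2 documentation where these attributes may be set, per server DSN in sys.odbc.ini or globally for the daemon):

        [mydsn]
        # ... other attributes omitted ...
        MaxConnsPerServer=10
        ServersPerDSN=2

    With these illustrative values, each ttcserver process would handle up to 10 client connections on separate threads, with 2 such processes serving the DSN.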

    The replication agent does a lot of complex processing and may require large amounts of (virtual) memory under some circumstances. There is no way to directly control this, since if the memory were limited, replication/AWT could stop working.

    Please feel free to come back with more questions if you cannot get to the bottom of your performance problems.

    Regards,

    Chris
