4 Replies Latest reply: Feb 26, 2013 11:54 PM by ChrisJenkins

Memory Size for a 452 GB TT Datastore

user616555 Newbie
We are sizing a large TimesTen database to cache data from an Oracle RAC database. We are trying to estimate the total memory that TimesTen will require.
The total ttsize for the cached tables is 452 GB. It will have two read-only cache groups refreshed once daily. We expect to support 500 transactions per second with a response time of 2 seconds; select queries (with joins) only.
Is there a way to size the total memory TimesTen will occupy (temp size, log buffer and other overheads along with the table size we have estimated using ttsize)?

Edited by: user616555 on Feb 17, 2013 6:43 PM
  • 1. Re: Memory Size for a 452 GB TT Datastore
    user616555 Newbie
    Correction: Expected to support 5000 transactions per second (oops missed a zero there!!)
  • 2. Re: Memory Size for a 452 GB TT Datastore
    ChrisJenkins Guru
    The log buffer size primarily affects write workloads (it has no impact on query operations). If your database will contain only the two cache groups that you mention then the only write operations will be (a) initial loading of the cache groups (large volume, highly intensive writes), (b) incremental autorefresh (lower volume, medium intensity writes) and maybe (c) dynamic load operations, if you are using dynamic cache groups (low volume, low intensity writes).

    The default size for LogBufMB is 64 MB, which is really quite small. Given that we are talking here about a very large memory system (realistically at least 3/4 TB RAM) I would recommend a larger log buffer; 512 MB or even 1 GB. Also be sure to set LogBufParallelism suitably based on the number of CPU cores in the machine and the maximum number of concurrent application connections. The main indication that the log buffer is too small is an increasing value for SYS.MONITOR.LOG_BUFFER_WAITS; this should start at zero and remain at zero while cache group load and refresh operations are occurring.
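    As a quick way to keep an eye on this, something along the lines of the sketch below (run from a ttIsql session connected to the cache DSN; this is illustrative only, and the attribute values themselves are set in sys.odbc.ini, not shown here) will report the configured log buffer attributes and the wait counter:

        -- Report the configured log buffer attributes (ttConfiguration is
        -- a built-in procedure; the argument filters by attribute name).
        call ttConfiguration('LogBufMB');
        call ttConfiguration('LogBufParallelism');

        -- LOG_BUFFER_WAITS should start at zero and stay at zero while
        -- cache group load and autorefresh operations are running; a
        -- climbing value suggests LogBufMB is too small.
        SELECT log_buffer_waits FROM sys.monitor;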

    Sizing Temp is really an empirical exercise; there is no way to 'calculate' or even 'estimate' how much will be needed. It depends very heavily on the kinds of operations being run. For OLTP / simple query workloads the temp space requirements are usually fairly modest, but for analytical / complex query workloads they may be significant (sort space, materialised intermediate tables, etc.). Start by allocating a size of 10% of PermSize, run your workload and observe SYS.MONITOR.TEMP_IN_USE_HIGH_WATER. This value represents the highest usage of Temp space since the database was last loaded into memory (i.e. started up). If your workload consistently uses much less Temp space than you have allocated then you can consider reducing the allocation; if you are constantly at 90% utilisation then you should increase it. If Temp is grossly undersized then some queries may fail with an error indicating that they cannot allocate enough temp space; this obviously means you need to increase it (or tune the queries to use less temp space if possible).
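    For example, after running a representative query workload you could compare the high-water mark with the allocation with something like this (a sketch assuming ttIsql; in the releases I have worked with the SYS.MONITOR size columns are reported in KB):

        -- Compare Temp usage against the allocation; if the high-water
        -- mark sits far below TEMP_ALLOCATED_SIZE you can shrink TempSize,
        -- and if it is consistently near it you should grow TempSize.
        SELECT temp_allocated_size,
               temp_in_use_size,
               temp_in_use_high_water
          FROM sys.monitor;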

    The only other 'overhead' you need to consider is additional perm space for incremental refresh. When data in read-only cache groups is being modified by incremental autorefresh (not initial load) then the usual TimesTen versioning comes into effect. This means that while a refresh is in progress any rows that are updated will exist as two copies in memory, and any rows that are deleted will not be truly freed until the refresh completes. So, depending on the volume of data that might be modified in any one incremental refresh operation, you need to make some allowance for memory for that too.

    Chris
  • 3. Re: Memory Size for a 452 GB TT Datastore
    user616555 Newbie
    Chris,

    Thank you very much for your comments. This is very helpful.
    The size of 452 GB is the total 'ttsize' estimate, which I obtained by creating the tables and indexes in TimesTen and estimating for the projected number of rows using the 'ttsize' utility.
    So roughly the size would be:
    permSize (452 GB) + tempSize (10% of 452 ≈ 45 GB) + logBuffer (1 GB) + 20 MB = approx. 500 GB
    If we assume a worst case of a 100% refresh of all rows, then another 500 GB.

    So at any time we should have at least 2X (where X is permSize) RAM (1 TB in this case) available for TimesTen on the server.
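    (As a sanity check once the cache groups are loaded, something along the lines of the sketch below should show the actual figures against these estimates; as far as I know the SYS.MONITOR size columns are reported in KB.)

        -- Compare actual permanent-region usage with the 452 GB ttsize
        -- estimate after the initial cache group load.
        SELECT perm_allocated_size,
               perm_in_use_size,
               perm_in_use_high_water
          FROM sys.monitor;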

    I am not sure I understand the "(realistically at least 3/4 TB RAM)". Kindly elaborate.

    Regards,
  • 4. Re: Memory Size for a 452 GB TT Datastore
    ChrisJenkins Guru
    An incremental autorefresh that ended up refreshing 100% of all rows in all tables with this volume of data is not only highly unrealistic but would also likely cause many problems, not least imposing a very heavy load on the Oracle database. As such I had not considered it as a factor in sizing the memory. If you ever need to refresh the entire data set then you are much better off doing it in a planned way and performing an explicit full refresh (which works differently and will not require such a large memory overhead).
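    For reference, an explicit full refresh is just a single statement issued against the cache group, along the lines of the sketch below (the cache group name is a placeholder and the COMMIT EVERY interval is illustrative):

        -- Planned, explicit full refresh of a read-only cache group;
        -- committing in batches keeps the transaction and log overhead
        -- of the reload manageable.
        REFRESH CACHE GROUP my_readonly_cg COMMIT EVERY 10000 ROWS;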

    So, for all realistic/viable scenarios you should be fine with 3/4 TB RAM.

    Chris
