When I checked the disk space on our Exalytics machine (where TimesTen is installed), it showed the available space as 0.
I cleared some space on the disk (50 GB or so) and then tried to connect to the respective DSN, which triggered the following error in ttIsql:
Command> connect "dsn=tt_xyz";
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
The command failed.
Command>
Here are the relevant settings from /etc/sysctl.conf:

# Controls the maximum shared segment size, in bytes
#kernel.shmmax = 68719476736
kernel.shmmax = 1099511627799
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 2048 64000 256 64
net.ipv4.tcp_rmem = 16777216 16777216 16777216
net.ipv4.tcp_wmem = 16777216 16777216 16777216
net.ipv4.tcp_mem = 16777216 16777216 16777216
net.core.optmem_max = 16777216
net.ipv4.ip_local_port_range = 9000 65500
# Changed from 10000 to 55000 by ME on Aug 6 2012
# Changed by ME to 150000 on Aug 24, 2012
vm.nr_hugepages = 410000
#Added by ME on Aug 6 2012
vm.hugetlb_shm_group = 500
I'm unable to connect to this DSN anymore.
Will increasing LogFileSize help? Or is the problem related to shmmax and shmmni in sysctl.conf?
Please help me with this issue, as we are in the middle of a demo.
Can you please confirm whether you are using huge pages? Post the output of cat /proc/meminfo, cat /etc/security/limits.conf, ttVersion, and cat /u01/app/TimesTen/tt1122/info/ttendaemon.options. I suspect that you may have an old shared-memory segment 'left over' which is consuming your memory. You may be able to verify this using ttStatus and ipcs.
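As a quick aid, a leftover segment of that size is easy to spot by scanning `ipcs -m` output for anything unusually large. A minimal sketch (the function name and the 1 GB threshold are mine; it assumes the usual Linux `ipcs -m` column layout of key, shmid, owner, perms, bytes, nattch):

```python
def large_shm_segments(ipcs_output, min_bytes=1 << 30):
    """Return (shmid, owner, bytes) for shared memory segments over min_bytes.

    ipcs_output: the text produced by `ipcs -m` on Linux, whose data rows
    look like: key shmid owner perms bytes nattch [status]
    """
    found = []
    for line in ipcs_output.splitlines():
        fields = line.split()
        # Data rows start with a hex key such as 0x00000000.
        if len(fields) >= 6 and fields[0].startswith("0x"):
            shmid, owner, size = int(fields[1]), fields[2], int(fields[4])
            if size >= min_bytes:
                found.append((shmid, owner, size))
    return found

# Usage on a live system (requires util-linux):
#   import subprocess
#   out = subprocess.run(["ipcs", "-m"], capture_output=True, text=True).stdout
#   for shmid, owner, size in large_shm_segments(out):
#       print(shmid, owner, size)
```

A stale TimesTen segment would typically show up under the instance administrator's user with a size close to the database's PermSize plus TempSize; confirm with ttStatus before removing anything.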
Could you also confirm how you 'cleared some space on the disk'? Hopefully you did not delete any TimesTen-related files...
I really appreciate your help with this, but unfortunately we had to destroy the DSN, as it had been using up memory at peak times.
We have decided to re-create the DSN and reload the tables using Columnar Compression.
Would this help? Are there any other alternatives for reducing the amount of space taken up by tables in TimesTen?
I have another question.
After destroying the data store successfully, the entries for it in sys.odbc.ini were retained.
Can I simply edit the file and remove it?
If you are on an Exalytics system then Columnar Compression is a very good way to save memory (and potentially also boost performance in some cases). Also, data type mapping can yield useful savings too. We have a tool that can help with type mapping and deciding what to compress and this will likely be included (initially as a non-production utility) in an upcoming TimesTen release.
With regard to destroying the datastore; ttDestroy simply removes the physical database files from disk and updates the main daemon catalog. It does not remove the entry in sys.odbc.ini (no TimesTen tool ever modifies that file). Once the database is destroyed you can simply edit the sys.odbc.ini file to remove the DSN or, if you actually want to re-create the database, you can just edit the parameters and then connect to the DSN as the instance administrator to re-create the database with the new settings.
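For reference, a DSN stanza in sys.odbc.ini looks something like the sketch below (the attribute values here are illustrative, not your actual settings; the driver path is assumed from the tt1122 instance location mentioned earlier). After ttDestroy you can delete the whole stanza, or adjust attributes such as PermSize before reconnecting to re-create the database:

```ini
[tt_xyz]
Driver=/u01/app/TimesTen/tt1122/lib/libtten.so
DataStore=/u01/data/tt_xyz
DatabaseCharacterSet=AL32UTF8
PermSize=4096
TempSize=1024
```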
We don't (yet) have any formal documentation on this but there are three kinds of type mapping that you might consider.
'Standard' numeric type mapping. This consists of mapping certain types of NUMBER column to a compatible TimesTen native integer type. The native integer types are (a) smaller (in terms of memory requirements) and (b) faster (as they are hardware implemented native types as opposed to the software implemented NUMBER type). Types defined in Oracle as NUMBER(p,s) can be mapped as follows for the case where s=0.
p < 5 -> TT_SMALLINT
p >= 5 and p < 10 -> TT_INTEGER
p >= 10 and p < 19 -> TT_BIGINT
This mapping is safe and depending on the schema may result in useful savings.
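As a sketch, the s=0 rule above can be written as a small helper (the function name is mine; the precision thresholds are exactly those listed):

```python
def tt_type_for_number(p, s=0):
    """Suggest a TimesTen native integer type for an Oracle NUMBER(p,s) column.

    Applies only the safe s=0 mapping; anything else stays as NUMBER.
    """
    if s != 0:
        return f"NUMBER({p},{s})"   # fractional scale: no safe native mapping
    if p < 5:
        return "TT_SMALLINT"
    if p < 10:
        return "TT_INTEGER"
    if p < 19:
        return "TT_BIGINT"
    return f"NUMBER({p},0)"         # too wide for a 64-bit native integer
```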
The other two mappings are more aggressive and should be used with caution.
1. Analyse the data that you want to move to TimesTen in the source database and, for NUMBER columns, check (a) whether they really have any fractional values, (b) whether they have any negative values and (c) the maximum stored value, then apply the mapping I mentioned previously based on this analysis. You can also then consider additional mappings:
Value >= 0 and <= 255 -> TT_TINYINT
p >= 19 and p < 22 -> NUMBER(p,0)
2. Analyse the data in VARCHAR2 and/or NVARCHAR2 columns to determine the length of the longest stored value. If it is less than the defined maximum length of the column, reduce the size of the column in TimesTen.
These mappings are less safe in that if the data is subsequently modified in Oracle it may no longer meet the criteria used for the mapping and a subsequent load of the data into TimesTen may give errors. In that case you would have to regenerate the tables in TimesTen.
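A hedged sketch of that value-based analysis, applied to data already fetched from the source column (function names and structure are mine; the TT_TINYINT range is the 0..255 rule above, and the other ranges are standard signed 16/32/64-bit limits):

```python
def suggest_integer_type(values):
    """Pick an aggressive TimesTen integer type from the values actually stored.

    Returns None when no integer mapping is safe (fractional data, or values
    too wide for a 64-bit integer). Remember: if the Oracle data later grows
    beyond these ranges, a reload into TimesTen may fail.
    """
    if any(v != int(v) for v in values):
        return None                          # fractional values: keep NUMBER
    lo, hi = min(values), max(values)
    if 0 <= lo and hi <= 255:
        return "TT_TINYINT"
    if -(2**15) <= lo and hi < 2**15:
        return "TT_SMALLINT"
    if -(2**31) <= lo and hi < 2**31:
        return "TT_INTEGER"
    if -(2**63) <= lo and hi < 2**63:
        return "TT_BIGINT"
    return None

def suggest_varchar_length(strings):
    """Shrink a VARCHAR2 column to the length of the longest stored value."""
    return max((len(s) for s in strings), default=0)
```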
As I mentioned, we have a tool which can do all this for you and, in addition, recommend/generate compressed TimesTen table definitions (if the source data is in an Oracle database of version 10gR2 or newer). If you contact your local Oracle office and ask them to get in touch with me, we may be able to let you have this tool in advance of its official release. We are looking for real-world feedback on it to help improve it for future iterations.