Not sure why you would want to keep a little-used query in cache (is there a specific problem with that query?), but you could increase the amount of memory allocated to the cache (the default is 1 GB) via --cmem so the response should always remain cached. See the performance tuning guide for more information on adjusting this setting: http://docs.oracle.com/cd/E28910_01/MDEX.622/pdf/PerfTuningGuide.pdf .
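As a rough sketch, --cmem is passed on the Dgraph command line when the component starts. The port and index path below are placeholders, and per the tuning guide the value is given in MB:

```shell
# Sketch only: raising the Dgraph cache from the 1 GB default to 4 GB.
# The --cmem value is in MB; the port and index path are placeholder
# examples of a typical Dgraph startup line -- use your component's
# real settings from your EAC/baseline configuration.
dgraph --port 15000 --cmem 4096 /path/to/your/index
```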
Hi,
AFAIK we can only decide how much memory to allocate for the Dgraph cache. To a large extent, the contents of the Dgraph cache are self-adjusting: what information is saved there and how long it is kept is decided automatically.
How big is the index, and what is the specific query you want to keep cached? What is the use case for that query? Yes, the cache is self-adjusting (thankfully; I wouldn't want to have to write the underlying algorithms for each project), but provided you have the spare memory and the index doesn't exceed that amount, then theoretically at least the result should remain cached indefinitely, right?
If it is an absolute showstopper, you could take the underlying query from ./logs/dgraphs/DgraphX/DgraphX.reqlog and set up a scheduled task or cron job to execute it directly against the Dgraph via wget or curl every X minutes or hours. (On Windows, wget is installed as part of Platform Services as a GNU utility under %ENDECA_ROOT%\utilities.) Heavy-handed, but it is the only way I can think of that would guarantee the result stays in cache.
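A sketch of such a job on a Unix host, assuming a crontab entry. The host, port, and query string below are hypothetical placeholders; the real request line would come from the DgraphX.reqlog entry:

```shell
# Sketch: crontab entry that re-issues a captured Dgraph query every
# 15 minutes so its result stays warm in the cache. The host, port,
# and query path are hypothetical -- paste the real request captured
# from ./logs/dgraphs/DgraphX/DgraphX.reqlog.
*/15 * * * * curl -s -o /dev/null "http://dgraph-host:15000/graph?node=0&group=0"
```

On Windows, the same idea works as a Task Scheduler job invoking the bundled wget instead of curl.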
The index is quite large, so this is for a few scenarios where queries take ~20 seconds when not cached and ~1-2 seconds when cached.
Thank you all for the advice!
~20 seconds, ouch. Is it an analytics query?
No, it isn't an analytics query. It does return ~500,000 records though.