As you have correctly pointed out, the sizing of the temporary tablespace depends on your ontology's size and complexity, the rules chosen, and the amount of data already in the semantic network. It is very hard to recommend a fixed number. Our own experience with loading and inferencing the LUBM benchmarks is that a temporary tablespace of 300 GB is enough to handle a few billion triples.
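Since there is no fixed number, one practical approach is to watch actual temp usage while a representative load or inference job runs and size from what you observe. Here is a minimal sketch, assuming the python-oracledb driver and SELECT access to the V$TEMP_SPACE_HEADER view; the connection details are placeholders for your environment:

```python
import time
import oracledb

# Placeholder credentials -- substitute your own connection details.
conn = oracledb.connect(user="system", password="<password>", dsn="dbhost/orclpdb")

# V$TEMP_SPACE_HEADER reports used/free space per temp file;
# aggregate per tablespace and convert bytes to GB.
SQL = """
    SELECT tablespace_name,
           ROUND(SUM(bytes_used) / POWER(1024, 3), 2),
           ROUND(SUM(bytes_free) / POWER(1024, 3), 2)
      FROM v$temp_space_header
     GROUP BY tablespace_name
"""

with conn.cursor() as cur:
    # Poll once a minute while the load or inference runs;
    # stop with Ctrl-C once the job completes.
    while True:
        for name, used_gb, free_gb in cur.execute(SQL):
            print(f"{name}: {used_gb} GB used, {free_gb} GB free")
        time.sleep(60)
```

The peak "used" figure you see during the run, plus some headroom, is a better basis for sizing than any general rule of thumb.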
Does this help?
Thanks. More than anything, you helped confirm that with the complexity of the ontology, the existing data, and so on, there is no easy answer for sizing the temporary tablespace. Since we used 300 GB loading just 20 million triples, while your benchmarks found that size sufficient for a few billion triples, let's hope we've reached the upper limit of our size requirements for our complex set-up.
Thanks for taking the time to respond.