4 Replies Latest reply on Nov 19, 2009 12:13 AM by 696067

    incremental inferencing questions

      I am very confused about how to make incremental inferencing work. Here is what I tried in java code:

      Attachment attachment = Attachment.createInstance(
          new String[]{}, "OWLPRIME",
          // inference mode and query options restored here; UPDATE_WHEN_COMMIT
          // is what enables incremental maintenance of the entailments
          InferenceMaintenanceMode.UPDATE_WHEN_COMMIT, QueryOptions.DEFAULT);

      GraphOracleSem graph = new GraphOracleSem(oracle, "mc_model", attachment);

      The trouble is, when I try to do something like this:

      ModelOracleSem oracle_sem_model = new ModelOracleSem(graph);
      OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, oracle_sem_model);
      Individual d = model.createIndividual(NS + "New_Individual");
      parentIndividual.addProperty(transitiveProp, d);

      The trouble is, the graph.commit() call takes around 5 seconds, the same amount of time as if the entailments were created without incremental inferencing. Is there something I am doing wrong? Can I use the OntModel APIs together with incremental inferencing, or do I have to add the statements to the model manually? BTW, I checked the SEM_APIS change-tracking property and it seems enabled, with a timestamp.

      Edited by: alexi on Nov 13, 2009 7:18 AM
        • 1. Re: incremental inferencing questions

          It looks like incremental inference is indeed being used. Note that there is some fixed overhead with both incremental and non-incremental inference, so even with small models it might take a few seconds to finish. Some of the overhead is caused by the large number of rules in the OWLPRIME rulebase, so if you don't need all of them you can selectively disable some components using the GraphOracleSem.performInference(String components) method.
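          As a rough sketch (against a live Oracle connection, so not runnable standalone), restricting inference to a subset of components might look like this; the component names below are the ones mentioned later in this thread, and the comma-separated format is an assumption, so check the OWLPRIME documentation for the exact list:

          // Sketch: run inference with only a subset of OWLPRIME components
          // enabled. "oracle" and "attachment" are assumed to be set up as in
          // the original post; SCOH/SPOH/TRANS are illustrative component names.
          GraphOracleSem graph = new GraphOracleSem(oracle, "mc_model", attachment);
          graph.performInference("SCOH,SPOH,TRANS");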

          Also, it depends on your dataset. For instance, if you're adding only one triple, but that triple declares some heavily used property to be transitive, then that addition might trigger many additional inferences and updating the inferred graph will take more time.

          Regarding OntModel APIs and incremental inference, it depends on the loading method you use. Incremental inference works best with incremental loading. Please refer to Section 2.2.9 of the Semantic Technologies Developer's Guide for more details.
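          For instance, an incremental-style update through the OntModel interface might look like the following sketch (NS, oracle, graph, and transitiveProp are assumed to be defined as in the original post, and "Some_Parent" is a hypothetical existing node; the point is that small batches of added statements are committed so change tracking can pick them up):

          // Sketch: add a few statements through the Jena OntModel, then commit
          // so that UPDATE_WHEN_COMMIT refreshes the entailments incrementally.
          ModelOracleSem oracleModel = new ModelOracleSem(graph);
          OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, oracleModel);
          Individual child = model.createIndividual(NS + "New_Individual");
          Individual parent = model.getIndividual(NS + "Some_Parent"); // hypothetical existing node
          parent.addProperty(transitiveProp, child);
          graph.commit(); // entailments are updated here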

          • 2. Re: incremental inferencing questions
            That is very strange. I am basically inserting 'd' into a 5-level-deep tree of transitive links. The tree has around 5300 nodes, but 5000 of them are leaves.
            It takes 5 seconds for the inference to be done whether I insert one leaf like 'd' or 100 of them.
            By calling performInference with only SCOH, SPOH, and TRANS enabled, it goes down to 3 seconds (removing TRANS as well brings it down to 2 seconds, but I absolutely need TRANS).

            How come it takes the same 3-5 seconds whether it has to infer 5 transitive links or 500?
            • 3. Re: incremental inferencing questions

              Can you let us know:
              a) how large is the asserted dataset, and how many triples are generated with inference (from scratch)?
              b) what is the performance target for the incremental inference calls?

              Regarding b), if your inference performance target is on the order of milliseconds, you might want to try out PelletDB [1].


              [1] http://clarkparsia.com/pelletdb/
              • 4. Re: incremental inferencing questions
                Thanks Vlad, it looks like in-memory forward-chaining is more of what I am looking for in this regard.
                As to your questions:
                1) I basically have a 5-level-deep tree where every node is transitively linked to the root, with 5000 leaves and 200, 100, 30, and 1 nodes at the upper levels respectively, which to my understanding is 15530 transitive links.
                2) My performance target is indeed in the milliseconds, but that is for adding 1-50 leaves.
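                For what it's worth, a back-of-the-envelope count of the transitive closure for a tree with those level sizes (1, 30, 100, 200, 5000 from root to leaves, each node linked only to its parent) can be sketched as below; the topology is an assumption taken from the description above, and the resulting total comes out close to, but not exactly at, the 15530 figure, depending on how the intermediate levels are wired:

                public class ClosureCount {
                    public static void main(String[] args) {
                        // Level sizes from the root (depth 0) down to the leaves (depth 4).
                        int[] levelSizes = {1, 30, 100, 200, 5000};

                        int asserted = 0; // one parent link per non-root node
                        int closure = 0;  // each node is transitively linked to every ancestor
                        for (int depth = 1; depth < levelSizes.length; depth++) {
                            asserted += levelSizes[depth];
                            closure += levelSizes[depth] * depth; // depth == number of ancestors
                        }

                        System.out.println("asserted links: " + asserted);        // 5330
                        System.out.println("total closure links: " + closure);    // 20830
                        System.out.println("inferred links: " + (closure - asserted)); // 15500
                    }
                }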

                Thank you for your answers.