
32-bit 12c ODBC not working with Windows 10

user2618876 · Jun 8 2016, edited Jul 15 2016


Hi Everyone,

This is my first post, so please bear with me; thanks in advance for any help.

Here is the issue I am facing: my company is about to upgrade all computers to Windows 10. I tested the Oracle 12c 32-bit ODBC client on Windows 10 and, unfortunately, it does not work; it returns an error saying it is unable to find the DLL SQORA32.DLL, even though the DLL does exist. There is no such problem with the Oracle 11g 32-bit client in the same environment.

Our product only works with a 32-bit ODBC driver, which is why I need to mention 32-bit here.

Does anyone have a solution for this?

Thanks

Patrick

Comments

715399
Answer
Hi,

It looks like incremental inference is indeed being used. Note that there is some overhead involved with doing incremental and non-incremental inference in general, so even with small models it might take a few seconds to finish. Some of the overhead is caused by the large number of rules in the OWLPRIME rulebase, so if you don't need all of them you can selectively disable some components using the GraphOracleSem.performInference(String components) procedure.
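To make the suggestion concrete, here is a minimal sketch of calling performInference with a reduced component list. It assumes the Oracle Jena Adapter classes (oracle.spatial.rdf.client.jena) are on the classpath and that a semantic model named MY_MODEL already exists; the connection URL and credentials are placeholders, and the component string is only an example:

```java
// Sketch only: requires the Oracle Jena Adapter jars and a reachable
// Oracle database with semantic technologies enabled.
import oracle.spatial.rdf.client.jena.GraphOracleSem;
import oracle.spatial.rdf.client.jena.Oracle;

public class SelectiveInference {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details
        Oracle oracle = new Oracle("jdbc:oracle:thin:@localhost:1521:orcl",
                                   "scott", "tiger");
        GraphOracleSem graph = new GraphOracleSem(oracle, "MY_MODEL", null);

        // Run inference with only the components you actually need
        // instead of the full OWLPRIME rulebase; here subclass,
        // subproperty, and transitivity handling.
        graph.performInference("SCOH,SPOH,TRANS");

        graph.close();
        oracle.dispose();
    }
}
```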

Also, it depends on your dataset. For instance, if you're adding only one triple, but that triple declares some heavily used property to be transitive, then that addition might trigger many additional inferences and updating the inferred graph will take more time.
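To see why a single transitive triple can fan out like this, here is a small self-contained sketch (plain Java, no Oracle or Jena dependencies; all names are illustrative) that forward-chains a transitive property to a fixpoint:

```java
import java.util.*;

public class TransitiveDemo {
    // Forward-chain edges of a transitive property until no new pair
    // can be derived; returns asserted plus inferred pairs.
    static Set<List<String>> closure(Set<List<String>> edges) {
        Set<List<String>> result = new HashSet<>(edges);
        boolean changed = true;
        while (changed) {
            Set<List<String>> derived = new HashSet<>();
            for (List<String> a : result)
                for (List<String> b : result)
                    if (a.get(1).equals(b.get(0)))
                        derived.add(List.of(a.get(0), b.get(1)));
            changed = result.addAll(derived);
        }
        return result;
    }

    public static void main(String[] args) {
        // A chain a -> b -> c -> d: only 3 asserted edges ...
        Set<List<String>> edges = new HashSet<>(Set.of(
            List.of("a", "b"), List.of("b", "c"), List.of("c", "d")));
        // ... but the transitive closure holds 6 pairs, because each
        // edge combines with everything reachable from its object.
        System.out.println(closure(edges).size()); // prints 6
    }
}
```

The same effect, scaled up, is why asserting transitivity on a heavily used property can make one small addition expensive.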

Regarding OntModel APIs and incremental inference: it depends on the loading method you use. Incremental inference works best with incremental loading. Please refer to Section 2.2.9 of the Semantic Technologies Developer's Guide for more details.

Cheers,
Vladimir
Marked as Answer by 696067 · Sep 27 2020
696067
That is very strange. I am basically inserting 'd' into a 5-level-deep tree of Transitive links. The tree has around 5300 nodes but 5000 of them are leaves.
It takes 5 seconds for the inference to be done whether I insert one leaf like 'd' or 100 of them.
By doing a performInference with only SCOH, SPOH, and TRANS it goes down to 3 seconds (further removing TRANS goes down to 2 seconds but I absolutely need to use TRANS).

How come it takes the same 3-5 seconds to infer 5 transitive links and 500 of them?
715399
Hi,

Can you let us know:
a) how large is the asserted dataset, and how many triples are generated by inference (from scratch)?
b) what is the performance target for the incremental inference calls?

Regarding b), if your inference performance target is on the order of milliseconds, you might want to try out PelletDB [1].

Cheers,
Vlad

[1] http://clarkparsia.com/pelletdb/
696067
Thanks Vlad, it looks like in-memory forward-chaining is more of what I am looking for in this regard.
As to your questions:
1) I basically have a 5-level-deep tree where every node is transitive to the root, with 5000 leaves and 200, 100, 30, and 1 nodes at the levels above, which to my understanding is 15530 transitive links.
2) My performance target is indeed in the milliseconds but that's when it comes to adding 1-50 leaves.
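One way to sanity-check a link count like this is to sum depth times level size, under the assumption that every node is linked to each of its ancestors (which may differ from the actual asserted/inferred link structure here, so the resulting figure is only illustrative):

```java
public class AncestorCount {
    // Count node-to-ancestor pairs in a tree where levelSizes[i] nodes
    // sit at depth i (root at depth 0) and every node is linked to all
    // of its ancestors: a node at depth d contributes d pairs.
    static long ancestorPairs(int[] levelSizes) {
        long total = 0;
        for (int depth = 0; depth < levelSizes.length; depth++)
            total += (long) depth * levelSizes[depth];
        return total;
    }

    public static void main(String[] args) {
        // Level sizes as described: 1 root, then 30, 100, 200, 5000 leaves
        System.out.println(ancestorPairs(new int[] {1, 30, 100, 200, 5000}));
    }
}
```

Under that all-ancestors reading the total comes out differently from 15530, so the exact count depends on which links are asserted versus inferred in the actual model.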

Thank you for your answers.
Alexi

Post Details

Locked on Nov 11 2020
Added on Jun 8 2016
14 comments
54,869 views