Could you please cut & paste the exact missing class exception?
In my own setup, I have the following in my classpath (Jena 2.5.6 is assumed).
Note that JELIB points to the Jena lib directory and PLIB points to the Pellet lib directory.
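For reference, a classpath along these lines might look as follows. This is only an illustrative sketch: the directory paths are placeholders, and the jar names are the ones mentioned in this thread (your versions may differ).

```shell
# Illustrative setup only -- paths and jar versions are placeholders.
# JELIB points to the Jena lib directory, PLIB to the Pellet lib directory.
export JELIB=/opt/jena-2.5.6/lib
export PLIB=/opt/pellet/lib

export CLASSPATH=$JELIB/jena.jar:$JELIB/arq.jar:\
$JELIB/commons-logging-1.1.1.jar:$JELIB/icu4j_3_4.jar:\
$JELIB/log4j-1.2.12.jar:$JELIB/iri.jar:$JELIB/concurrent.jar:\
$JELIB/xercesImpl.jar:./sdordfclient.jar:./ojdbc5.jar:\
$PLIB/pellet.jar:$CLASSPATH
```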
Yes, I'm sorry I didn't include it in the first post...
Exception in thread "main" java.lang.NoClassDefFoundError: com/hp/hpl/jena/sparql/engine/main/StageBasic
The exception comes up from this line:
GraphOracleSem graph = new GraphOracleSem(oracle, modelName);
I've got jena.jar (2.5.6) on the classpath, along with arq.jar, commons-logging-1.1.1, icu4j_3_4, log4j-1.2.12, sdordfclient, concurrent, iri, xercesImpl, and ojdbc5.
I don't need Pellet.
OK, I've downloaded jena.jar again and now it seems to work...
Apparently the jar file was incomplete. Strange, but okay.
That is good news.
I got the following stack trace when I ran my test code with Oracle Jena adapter and Jena 2.5.7.
I took a look at arq.jar, and it looks like the StageBasic class has been removed in Jena 2.5.7 (ARQ 2.6.0). Is this a known issue? If so, is there any plan to upgrade the Oracle Jena adapter to fix it? Thanks,
Please use Jena 2.5.6.
Jena 2.5.7 is known to be incompatible with Jena Adapter 2.0.
How about the second question:
"Another question, that I have, is: what is the best way to store n3 data via jena adapter in an oracle database?"
I am trying to load a large amount of data (hundreds of megabytes).
Is the Jena approach the right way to go? Will all that data fit into an in-memory model? Are there any alternatives that would scale better for larger files?
Thanks in advance
Just to clarify: N3 or N-TRIPLE?
Assuming the data is too big to fit in memory (this depends on your machine configuration):
- If it is N-TRIPLE, you can load it directly with the 11g bulk loader. If there are long literals, you have to load those yourself.
- If it is N3, you first have to use a tool (Jena or something else) to convert it into N-TRIPLE, and then do the step above. The conversion step may be tricky because the data is huge.
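For the conversion step, one option is Jena's rdfcat command-line tool. A minimal sketch (file names are placeholders; the Jena jars are assumed to be on $CLASSPATH) might look like:

```shell
# Hypothetical invocation: read data.n3 as N3 and write N-TRIPLE to stdout.
# Note that rdfcat parses the input into an in-memory model, so for very
# large files it runs into the same memory limits discussed above.
java -cp "$CLASSPATH" jena.rdfcat -out N-TRIPLE -n data.n3 > data.nt
```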
Assuming the data can fit in memory (given that the data size is hundreds of MBs, not GBs):
- You should be able to load the file using the OracleBulkUpdateHandler.addInBulk API.
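A minimal sketch of that in-memory path is below. This is an illustration under stated assumptions, not a definitive recipe: the JDBC URL, credentials, model name, file name, and tablespace name ("SEM_TS") are all placeholders, and it assumes Jena 2.5.6 with Jena Adapter 2.0 on the classpath.

```java
import java.io.FileInputStream;

import com.hp.hpl.jena.graph.GraphUtil;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

import oracle.spatial.rdf.client.jena.GraphOracleSem;
import oracle.spatial.rdf.client.jena.Oracle;
import oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler;

public class BulkLoadN3 {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and model name.
        Oracle oracle = new Oracle(
            "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");
        GraphOracleSem graph = new GraphOracleSem(oracle, "MY_MODEL");

        // Parse the N3 file into an in-memory Jena model first;
        // this is where the "data fits in memory" assumption matters.
        Model inMem = ModelFactory.createDefaultModel();
        inMem.read(new FileInputStream("data.n3"), null, "N3");

        // Push all triples to Oracle in bulk. "SEM_TS" is a placeholder
        // tablespace name for the staging table.
        OracleBulkUpdateHandler handler =
            (OracleBulkUpdateHandler) graph.getBulkUpdateHandler();
        handler.addInBulk(GraphUtil.findAll(inMem.getGraph()), "SEM_TS");

        graph.close();
        oracle.dispose();
    }
}
```

The key point of this approach is that the file is parsed once into memory and then written to the database in a single bulk operation, which is far faster than adding triples one at a time through the model API.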