
    Does anybody know how Oracle loads large N-Triple files into Oracle 11g R1?

    649337
      Does anybody know how Oracle loads large N-Triple (NT) files into Oracle 11g R1 using SQL*Loader, as described in their benchmark results?

      Their benchmark results indicate they have overcome the large-data-set problem.

      http://www.oracle.com/technology/tech/semantic_technologies/htdocs/performance.html

      This means they have successfully loaded LUBM(8000) (1.068+ billion triples) into Oracle, but no detailed steps are provided. For instance, did they use a 32-bit or 64-bit platform, and was the dataset held in a single NT file or in several NT files?

      Does any exception occur during the loading process if the NT file exceeds 60 GB? When using Jena to generate an NT file for LUBM(8000), the file size will definitely exceed 60 GB.

      We are considering dividing such a large NT file into several smaller ones, i.e., splitting along line boundaries, as in the sketch below. Is that a good approach? I'm hesitant to do so!
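      What we have in mind is line-based splitting: N-Triples is a line-oriented format (one triple per line), so splitting on line boundaries never cuts a triple apart. A rough sketch, with illustrative file names and chunk size:

          # Split the big NT file into 10-million-line chunks
          # (lubm8000.nt and the chunk size are just examples).
          split -l 10000000 lubm8000.nt lubm8000_part_

          # Optionally gzip the chunks to save disk space.
          gzip lubm8000_part_*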
        • 1. Re: Does anybody know how Oracle loads large N-Triple files into Oracle 11g R1?
          Sdas-Oracle
          A 32-bit Linux platform was used for the bulk load of the LUBM-8000 dataset, 1.106 billion RDF triples (before duplicate elimination), into Oracle.

          Multiple gzipped N-Triple files were used to hold the LUBM-8000 data. zcat was used on all these files together to send the complete data into a named pipe. SQL*Loader used this named pipe as the input data file to load the data into a staging table in Oracle. Once the staging table was loaded, the sem_apis.bulk_load_from_staging_table API was used to load the data into Oracle Semantic Store.
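          In outline, the pipeline looks roughly like this. This is a sketch only: the file names, credentials, staging table STAGE_TABLE, model name LUBM8K, and control file load_nt.ctl are illustrative, and the control file must map the N-Triple fields to the staging table's columns.

              # Create a named pipe and stream all gzipped NT files into it.
              mkfifo /tmp/nt_pipe
              zcat lubm8000_*.nt.gz > /tmp/nt_pipe &

              # SQL*Loader reads the pipe as its input data file and fills
              # the staging table per the mappings in load_nt.ctl.
              sqlldr userid=scott/tiger control=load_nt.ctl \
                     data=/tmp/nt_pipe log=load_nt.log

              # Move the staged triples into the semantic store
              # (model name, owner, and table name are examples).
              echo "EXEC sem_apis.bulk_load_from_staging_table('LUBM8K', 'SCOTT', 'STAGE_TABLE')" \
                  | sqlplus -s scott/tiger

          The named pipe is what makes the 60+ GB file size a non-issue: the uncompressed data is never materialized on disk, since SQL*Loader consumes it as zcat produces it.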

          (Additional details at http://www.oracle.com/technology/tech/semantic_technologies/htdocs/performance.html)

          Thanks.