1. Re: Bulk loading in 11.1.0.6
Sdas-Oracle Oct 9, 2009 2:51 AM (in response to 630782)
Bulk-append is slower than bulk-load because of incremental index maintenance, and the index that enforces the uniqueness constraint cannot be disabled. I'd suggest moving to 11.1.0.7 and then installing patch 7600122 to make use of the enhanced bulk-append, which performs much better than in 11.1.0.6.
The best way to load 200 million rows in 11.1.0.6 is to load into an empty RDF model via a single bulk load. You can do it as follows (assuming the files are named f1.nt through f60.nt):
- [create a named pipe] mkfifo named_pipe.nt
- cat f*.nt > named_pipe.nt
In a different window:
- run sqlldr with named_pipe.nt as the data file to load all 200 million rows into a staging table (you could create the staging table with the COMPRESS option to keep its size down)
- next, run: exec sem_apis.bulk_load_from_staging_table(...);
(I'd also suggest using COMPRESS for the application table.)
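The named-pipe step above can be sketched as a small shell script. Note this is only a sketch of the pipe mechanics: the tiny f1.nt/f2.nt files and the wc -l reader are stand-ins (you would have the real f1.nt thru f60.nt, and sqlldr with a control file pointing at named_pipe.nt as the data file), since a FIFO lets the loader consume all chunks as one continuous stream without materializing a combined 200-million-row file on disk.

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-ins for the real f1.nt .. f60.nt chunk files.
printf '<s1> <p> <o1> .\n' > f1.nt
printf '<s2> <p> <o2> .\n<s3> <p> <o3> .\n' > f2.nt

# Create the named pipe.
mkfifo named_pipe.nt

# Writer: concatenate every chunk into the pipe. Run it in the
# background, because a write to a FIFO blocks until a reader attaches.
cat f*.nt > named_pipe.nt &

# Reader: wc -l stands in for the real loader invocation, e.g.
#   sqlldr user/pass control=stage.ctl data=named_pipe.nt
# (control file name here is hypothetical).
lines=$(wc -l < named_pipe.nt)
wait

# All three triples arrived through the pipe as one stream.
echo "$lines"
```

The same pattern scales to the 60-file case: the glob f*.nt feeds the chunks into the pipe in order, and sqlldr sees them as a single data file.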
2. Re: Bulk loading in 11.1.0.6
630782 Oct 12, 2009 2:54 PM (in response to Sdas-Oracle)
Thanks for the help. I combined all the files and loaded them in a single batch. It took significantly less time to complete the load.
Weihua