Bulk-append is slower than bulk-load because of incremental index maintenance; the index that enforces the uniqueness constraint cannot be disabled. I'd suggest moving to the newer release and then installing patch 7600122, which enables an enhanced bulk-append that performs much better than the one in the earlier release.
The best way to load 200 million rows would be to load them into an empty RDF model via a single bulk-load. You can do it as follows (assuming the filenames are f1.nt through f60.nt):
- Create a named pipe: mkfifo named_pipe.nt
- cat f*.nt > named_pipe.nt
In a different terminal window:
- Run sqlldr with named_pipe.nt as the data file to load all 200 million rows into a staging table (you could create the staging table with the COMPRESS option to keep its size down).
- Next, run: exec sem_apis.bulk_load_from_staging_table(...);
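The named-pipe trick above lets sqlldr start consuming rows while the files are still being concatenated, without ever materializing a combined 200-million-row file on disk. A minimal sketch of the plumbing (with `wc -l` standing in for the sqlldr invocation, which would name the pipe as its data file):

```shell
#!/bin/sh
# Create the pipe that will carry the concatenated N-Triples data.
mkfifo named_pipe.nt

# Writer: concatenate all the part files into the pipe in the background.
cat f*.nt > named_pipe.nt &

# Reader: in the real run this is sqlldr reading named_pipe.nt as its
# data file; here we just count the rows that flow through the pipe.
wc -l < named_pipe.nt

wait                # let the background cat finish
rm named_pipe.nt    # a FIFO is just a filesystem entry; clean it up
```

Both ends block until the other side opens the pipe, so the order of the two commands doesn't matter as long as they run concurrently.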
(I'd also suggest using COMPRESS for the application table.)
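For reference, a sketch of what the staging table and the bulk-load call could look like. The table, owner, and model names here are made up, and the RDF$STC_* column layout should be checked against the SEM_APIS documentation for your release:

```sql
-- Hypothetical staging table; COMPRESS keeps the segment small, as suggested.
CREATE TABLE stage_triples (
  RDF$STC_sub  VARCHAR2(4000) NOT NULL,
  RDF$STC_pred VARCHAR2(4000) NOT NULL,
  RDF$STC_obj  VARCHAR2(4000) NOT NULL
) COMPRESS;

-- Single bulk load from the staging table into the (hypothetical) model.
EXECUTE SEM_APIS.BULK_LOAD_FROM_STAGING_TABLE('MY_MODEL', 'SCOTT', 'STAGE_TRIPLES');
```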
Thanks for the help. I combined all the files and loaded them in a single batch. It took significantly less time to complete the load.