
CSV Upload of more than 50 columns in APEX

Doolius · Jun 12 2014 (edited Jun 23 2014)

I need a way to upload a CSV file that has more than 50 columns.

Everything I have been able to find allows only 50 columns, I think because it uses collections.

I have created a table to upload the CSV data into instead of a collection, thinking that will let me exceed the 50-column limit.

The table and file will have the exact same structure.

Can anyone help with this or point me in the right direction?

Thanks

Steven

APEX: 4.0.2

THEME: SAND

SERVER: Oracle-Application-Server-10g/10.1.3.1.0 Oracle-HTTP-Server

This post has been answered by Doolius on Jun 20 2014

Comments

TexasApexDeveloper

There are different routines to do this:

  1) Using a file upload to a blob/clob and then cutting the file up into the required pieces: http://avdeo.com/2008/05/21/uploading-excel-sheet-using-oracle-application-express-apex/

  2) Using an external table with your database, thus you can have a CSV file and query it like a table... External tables in Oracle 

      Oracle external tables – A few examples | Oracle Apps Notes
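As a rough sketch of the external-table idea (the directory object, table, and column names here are all placeholders, not from the original posts):

CREATE TABLE csv_ext (
  one   VARCHAR2(100),
  two   VARCHAR2(100),
  three VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir          -- a directory object the DB account can read
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1                            -- skip the header row
    FIELDS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY '"'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('book2.csv')
)
REJECT LIMIT UNLIMITED;

-- then the file can be queried like any table:
-- INSERT INTO target_table SELECT * FROM csv_ext;

Since there is no 50-column collection involved, the column count is limited only by the table definition.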

Thank you,

Tony Miller
LuvMuffin Software
Ruckersville, VA

Doolius

Thanks for the reply.

I've actually tried to implement the approach from the first URL in my application, but I get the error:

ORA-01006: bind variable does not exist

and I haven't figured out why that is coming up yet.

Here is the code if someone wants to take a look at it:

DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   NUMBER := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
  v_rows        NUMBER;
  v_sr_no       NUMBER := 1;
BEGIN
  delete from TEMP_UPDATE;
  -- Read data from wwv_flow_files
  select blob_content
    into v_blob_data
    from wwv_flow_files
   where name like '%Book2%' or name like '%book2';
  --last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
  --and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE(v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table(v_line);
      -- Insert data into target table
      EXECUTE IMMEDIATE 'insert into TEMP_UPLOAD (ONE,TWO,THREE,FOUR,FIVE) values (:1,:2,:3,:4,:5)'
      USING v_sr_no, v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5);
      -- Clear out
      v_line := NULL;
      v_sr_no := v_sr_no + 1;
    END IF;
  END LOOP;
END;

Thanks for the other URLs.

Steven

Anton Scheffer

In your EXECUTE IMMEDIATE statement you have 5 bind variables (:1 through :5).

In the USING clause you are supplying 6 values.
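For reference, a matched version of that statement would look like this (the SR_NO column name is only illustrative, it is not in the posted table definition):

EXECUTE IMMEDIATE
  'insert into TEMP_UPLOAD (SR_NO, ONE, TWO, THREE, FOUR, FIVE)
   values (:1, :2, :3, :4, :5, :6)'          -- six placeholders...
USING v_sr_no, v_data_array(1), v_data_array(2),
      v_data_array(3), v_data_array(4), v_data_array(5);  -- ...six values

Either add the sixth placeholder (and a column for it) or drop v_sr_no from the USING list; the two lists must match one-for-one.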

Doolius

Ahh duh! Thank you!

That fixed that issue, but now I'm getting an error that the table or view doesn't exist.

The only things I really changed were the DELETE on TEMP_UPDATE and the INSERT statement.

I'll have to look into that more tomorrow.

Thanks for the help so far!

Steven

Doolius

I fixed the other error and the spreadsheet gets uploaded to my table.

However, it gets loaded incorrectly.

Spreadsheet:

  ONE,   TWO,    THREE, FOUR,  FIVE
  dog,   yellow, sky,   grass, cat
  pig,   blue,   cloud, dirt,  lion
  llama, red,    wind,  water, panther

Table (as loaded, every row contains the accumulated header plus "dog"):

  ONE TWO THREE FOUR FIVE dog
  ONE TWO THREE FOUR FIVE dog
  ONE TWO THREE FOUR FIVE dog

Is there something wrong with the LOOP or the way the values are put into the arrays?

Working Code:

DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   NUMBER := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
  v_rows        NUMBER;
  --v_sr_no number := 1;
BEGIN
  delete from TEMP_UPDATE;
  -- Read data from wwv_flow_files
  select blob_content
    into v_blob_data
    from wwv_flow_files
   where name like '%Book2%' or name like '%book2';
  --last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
  --and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE(v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table(v_line);
      -- Insert data into target table
      EXECUTE IMMEDIATE 'insert into TEMP_UPDATE (PCFN,TCN,BOOKING_NUMBER,CUSTOMER,CARRIER_SCAC) values (:1,:2,:3,:4,:5)'
      USING v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5);
      -- Clear out
      --v_line := NULL; v_sr_no := v_sr_no + 1;
    END IF;
  END LOOP;
END;

Steven

Anton Scheffer

Add the "Clear out" again, at least the v_line := NULL;
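In other words, the end of the IF block should reset the line buffer before the next line starts accumulating (same variables as the posted code):

      -- Clear out: without this, every "line" still contains all previous ones
      v_line  := NULL;
      v_sr_no := v_sr_no + 1;
    END IF;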

Doolius

Ahh thank you! Works perfectly now.

I think the only other question I have right now is this: I added another column to my table called USERNAME.

I want to store the username of the user who uploaded the CSV file (i.e. the CREATED_BY column of the wwv_flow_files table).

Unless that column doesn't hold the real username but instead 'apex_public_user' (I'm not entirely sure how the wwv_flow_files table works).

I thought that adding:

EXECUTE IMMEDIATE 'insert into TEMP_UPDATE (PCFN,TCN,BOOKING_NUMBER,CUSTOMER,CARRIER_SCAC,USERNAME) values (:1,:2,:3,:4,:5,:6)'
USING v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5), CREATED_BY;

to the insert statement would do it, but it wouldn't accept it.

Then I added a variable:

v_sr_user VARCHAR2(20) := :APP_USER;

and changed the code to:

EXECUTE IMMEDIATE 'insert into TEMP_UPDATE (PCFN,TCN,BOOKING_NUMBER,CUSTOMER,CARRIER_SCAC,USERNAME) values (:1,:2,:3,:4,:5,:6)'
USING v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5), v_sr_user;

It set the value of USERNAME to 'nobody'.

How can I track which records were uploaded by which user when the data is extracted from the CSV file, so I can show each user a report of only the records they uploaded?

Thanks Steven

Doolius

Scratch that.

I made a new column on the report where the user downloads the spreadsheet to track the USERNAME.

Final Code for others:

DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   NUMBER := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
  v_rows        NUMBER;
  v_sr_no       NUMBER := 1;
BEGIN
  delete from TEMP_UPDATE;
  -- Read data from wwv_flow_files
  select blob_content
    into v_blob_data
    from wwv_flow_files
   where name like '%Book2%' or name like '%book2';
  --last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
  --and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE(v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table(v_line);
      -- Insert data into target table
      EXECUTE IMMEDIATE 'insert into TEMP_UPDATE (PCFN,TCN,BOOKING_NUMBER,CUSTOMER,CARRIER_SCAC,USERNAME) values (:1,:2,:3,:4,:5,:6)'
      USING v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5), v_data_array(6);
      -- Clear out
      v_line := NULL;
      v_sr_no := v_sr_no + 1;
    END IF;
  END LOOP;
  DELETE FROM WWV_FLOW_FILES WHERE NAME LIKE '%Book2%';
  DELETE FROM TEMP_UPDATE WHERE PCFN LIKE 'P%';
END;

I added in code at the end to remove the file from the WWV_FLOW_FILES table after the data is extracted.

I also added code to remove the row with the column headers in it, because I wasn't sure how to skip that row during the loop itself.

If anyone knows how please update the code accordingly.

Thanks

Steven

Mike Kutz

External Tables would be the fastest.

But, in order for them to work in a multi-user environment like APEX, you'd have to

  • copy the file to the server (new file name)
  • grab a lock
  • ALTER TABLE -- without losing the lock
  • parse, bind, and execute the desired SQL
  • remove file from server/wwv_flow_files (as desired)
  • release lock
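The "grab a lock" step could, for instance, use DBMS_LOCK (a sketch only; the lock name, timeout, and error code are illustrative):

DECLARE
  v_lockhandle VARCHAR2(128);
  v_result     INTEGER;
BEGIN
  -- serialise access to the shared external-table definition
  dbms_lock.allocate_unique('CSV_EXT_TABLE_LOCK', v_lockhandle);
  v_result := dbms_lock.request(v_lockhandle, dbms_lock.x_mode, timeout => 10);
  IF v_result <> 0 THEN
    raise_application_error(-20001, 'could not get CSV upload lock');
  END IF;
  -- ... ALTER TABLE / parse / execute / cleanup goes here,
  --     inside the same session so the lock is held throughout ...
  v_result := dbms_lock.release(v_lockhandle);
END;

Because DBMS_LOCK locks are session-scoped rather than transaction-scoped, the ALTER TABLE's implicit commit does not release them.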

I've got a Process Plugin that does the above that seems to be ready.

It could probably use some code-review before I post it on apex-plugins.

(Let me know if someone beat me to it.)

MK

Doolius

I wouldn't mind trying that process out.

Because I don't know what the file name of an uploaded file will be, what I was going to try is a process that runs every couple of hours and removes records from the wwv_flow_files table that have a USERNAME of apex_public_user and a created_on date more than 10 minutes before the run.

I figured that would empty the junk out of the wwv_flow_files table while still leaving records for users who are currently uploading files.

I'm still looking for a way to skip the first row on upload.

My way of deleting rows in my final table that started with a column name worked fine until I tried to add in a date column.

So if anyone has a solution for that I would greatly appreciate it.

Steven

Anton Scheffer

It isn't that hard to skip the first line in your CSV parsing.

And it isn't that hard either to add an on-submit process after the file upload, which can use the filename of the uploaded file.

Doolius

Any chance you could show me how to skip the first row?

In the comments it said to add code before the WHILE loop, so I did, but it's not working as expected.

Code below: (Added code has spaces before and after it)

DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   NUMBER := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
  v_rows        NUMBER;
  v_sr_no       NUMBER := 1;
BEGIN
  delete from TEMP_UPDATE;
  -- Read data from wwv_flow_files
  select blob_content
    into v_blob_data
    from wwv_flow_files
   where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
     and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char

  v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
  v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
  v_line := v_line || v_char;
  v_position := v_position + c_chunk_len;
  -- When a whole line is retrieved
  IF v_char = CHR(10) THEN
    -- Convert comma to : to use wwv_flow_utilities
    v_line := REPLACE(v_line, ',', ':');
    -- Convert each column separated by : into array of data
    v_data_array := wwv_flow_utilities.string_to_table(v_line);
  end if;

  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE(v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table(v_line);
      -- Insert data into target table
      EXECUTE IMMEDIATE 'insert into TEMP_UPDATE (PCFN,TCN,USERNAME,ORIGIN,BOOKING_NUMBER,CARRIER_SCAC,RDD) values (:1,:2,:3,:4,:5,:6,:7)'
      USING v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5), v_data_array(6), v_data_array(7);
      -- Clear out
      v_line := NULL;
      v_sr_no := v_sr_no + 1;
    END IF;
  END LOOP;
  DELETE FROM WWV_FLOW_FILES
   WHERE last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
     and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
  --DELETE FROM TEMP_UPDATE WHERE PCFN LIKE 'P%';
  --UPDATE TEMP_UPDATE SET USERNAME = REPLACE(USERNAME, ' ', '');
END;

As for the on-submit process: I have the username passed into a variable, so would it be possible to change the filename to whatever I want when the user uploads the file?

Anton Scheffer

Add a variable:

v_first_line_done boolean := false;

Right after the IF v_char = CHR(10) THEN, add:

if v_first_line_done then

And just before that block's END IF, add:

else

  v_first_line_done := true;

end if;
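Put together, the line-handling part of the loop would then look roughly like this (a sketch using the same variables as the posted code; note that v_line is cleared in both branches here, so the header text doesn't leak into the first data row):

IF v_char = CHR(10) THEN
  IF v_first_line_done THEN
    v_line := REPLACE(v_line, ',', ':');
    v_data_array := wwv_flow_utilities.string_to_table(v_line);
    -- ... EXECUTE IMMEDIATE insert as before ...
  ELSE
    v_first_line_done := true;   -- first (header) line: just mark it as seen
  END IF;
  v_line := NULL;                -- reset the buffer for the next line either way
END IF;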

What's the point in changing the filename when you're going to delete the file anyway?

Did you know that you can get the name of the uploaded file from your file browse item, something like:

file_name := apex_util.get_session_state( 'P1_BROWSE' );

Doolius

Thanks. I'll give that a try.

And the reason I wanted to change the name of the file to match the user that uploaded it is because the code looks like it just deletes the most recently uploaded/updated file.

I haven't been able to test the process yet to see what happens if multiple users upload files at the same time and/or a second after to see if it will still work correctly.

Mike Kutz

I just looked at the WHERE clause....

That implementation is sub-optimal.

Instead, use:

WHERE name = :P5_FILE_BROWSE_ITEM;

Yes, this works.  Even for multiple simultaneous people uploading the same file under the same account.

Observations have confirmed what someone else had said in a different thread:

Due to security, today's web browsers change the file name from:

C:\Users\username\Desktop\New Folder\blah\blah\blah\Actual File Name.csv

to:

{some random number/characters}/Actual File Name.csv
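If the bare file name is ever needed, one way (a sketch; v_name is an assumed variable holding the wwv_flow_files name) is to strip everything up to the last '/':

-- v_name might be 'F12345/Actual File Name.csv'; keep only the tail.
-- INSTR with a negative position searches backwards from the end;
-- if there is no '/', INSTR returns 0 and the whole string is kept.
v_bare_name := SUBSTR(v_name, INSTR(v_name, '/', -1) + 1);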

MK

Doolius

I'm getting the following error when trying to implement that code:

ORA-01858: a non-numeric character was found where a numeric was expected

DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   NUMBER := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
  v_rows        NUMBER;
  v_sr_no       NUMBER := 1;
  v_first_line_done boolean := false;
BEGIN
  delete from TEMP_UPDATE;
  -- Read data from wwv_flow_files
  select blob_content
    into v_blob_data
    from wwv_flow_files
   where name = :P2_FILE_UPLOAD;
  /* last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
     and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER); */
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char

  v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
  v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
  v_line := v_line || v_char;
  v_position := v_position + c_chunk_len;
  -- When a whole line is retrieved
  IF v_char = CHR(10) THEN
    -- Convert comma to : to use wwv_flow_utilities
    v_line := REPLACE(v_line, ',', ':');
    -- Convert each column separated by : into array of data
    v_data_array := wwv_flow_utilities.string_to_table(v_line);
  end if;

  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      if v_first_line_done then
        -- Convert comma to : to use wwv_flow_utilities
        v_line := REPLACE(v_line, ',', ':');
        -- Convert each column separated by : into array of data
        v_data_array := wwv_flow_utilities.string_to_table(v_line);
        -- Insert data into target table
        EXECUTE IMMEDIATE 'insert into TEMP_UPDATE (PCFN,TCN,USERNAME,ORIGIN,BOOKING_NUMBER,CARRIER_SCAC,RDD) values (:1,:2,:3,:4,:5,:6,:7)'
        USING v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5), v_data_array(6), v_data_array(7);
        -- Clear out
        v_line := NULL;
        v_sr_no := v_sr_no + 1;
      else
        v_first_line_done := true;
      end if;
    END IF;
  END LOOP;
  DELETE FROM WWV_FLOW_FILES where name = :P2_FILE_UPLOAD;
  --DELETE FROM TEMP_UPDATE WHERE PCFN LIKE 'P%';
  --UPDATE TEMP_UPDATE SET USERNAME = REPLACE(USERNAME, ' ', '');
END;

Doolius

If a user happens to swap columns around, is there a way to check the column order before the rows are inserted into the table, and/or to match the columns in the spreadsheet (the first value of each array) with the columns in the table?

Mike Kutz

Parse the first row into an associative array storing the "column name" as the KEY and the column number as the VALUE.

ORACLE-BASE - Oracle Associative Arrays

eg

v_column_hash( v_data_array(i) ) := i;

Then, on subsequent rows, extract out the value based on what you need.

eg

v_data_array( v_column_hash('PCFN') ), ....
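A minimal sketch of that idea (variable names assumed, with v_data_array as in the code above):

DECLARE
  TYPE t_col_map IS TABLE OF PLS_INTEGER INDEX BY VARCHAR2(4000);
  v_column_hash t_col_map;
  v_data_array  wwv_flow_global.vc_arr2;
BEGIN
  -- on the header row: remember which position each column name occupies
  FOR i IN 1 .. v_data_array.COUNT LOOP
    v_column_hash( UPPER(TRIM(v_data_array(i))) ) := i;
  END LOOP;
  -- on data rows: fetch values by name instead of by position, e.g.
  -- v_pcfn := v_data_array( v_column_hash('PCFN') );
END;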

On the other hand, you could always create your own generic Global Temporary Table and expand the code that I gave you earlier..

MK

Doolius

A little confused on that.

I know I need to make a new variable for the v_column_hash, but other than that I'm not sure what/where to add the code to keep track of the first value of each array.

I'm thinking that it needs to go after the line

v_data_array := wwv_flow_utilities.string_to_table (v_line);

And it probably needs to be in its own loop?

As for the second line of code, I believe that would be implemented like:

USING v_data_array(v_column_hash('PCFN')), v_data_array(v_column_hash('TCN')), v_data_array(v_column_hash('USERNAME'))....

Is any of that correct?

Doolius

Turns out I no longer have to match spreadsheet column to table column.

I just need to first check the order of the columns and if it is out of order then reject the spreadsheet.

The code below is my attempt to check the order of the columns, but of course it isn't correct and won't compile.

Hopefully I'm on the right track but can anyone offer any assistance with this?

DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   NUMBER := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
  v_rows        NUMBER;
  v_sr_no       NUMBER := 1;
  v_first_line_done boolean := false;
  v_error_cd    NUMBER := 0;
BEGIN
  delete from TEMP_UPDATE;
  -- Read data from wwv_flow_files
  select blob_content
    into v_blob_data
    from wwv_flow_files
   where name = :P2_FILE_UPLOAD;
  /* last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
     and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER); */
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE(v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table(v_line);
      -- Insert data into target table
      IF (v_first_line_done != true) THEN (
        v_first_line_done := true;
        IF (v_data_array(1) = 'PCFN' AND
            v_data_array(2) = 'TCN' AND
            v_data_array(3) = 'USERNAME' AND
            v_data_array(4) = 'ORIGIN' AND
            v_data_array(5) = 'BOOKING_NUMBER' AND
            v_data_array(6) = 'CARRIER_SCAC' AND
            v_data_array(7) = 'RDD') THEN (
          v_error_cd := 0;
        ELSE
          v_error_cd := 1;
        END IF;
      ELSIF (v_first_line_done = true AND v_error_cd = 0) THEN
        EXECUTE IMMEDIATE 'insert into TEMP_UPDATE (PCFN,TCN,USERNAME,ORIGIN,BOOKING_NUMBER,CARRIER_SCAC,RDD) values (:1,:2,:3,:4,:5,:6,:7)'
        USING v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5), v_data_array(6), v_data_array(7);
        -- Clear out
        v_line := NULL;
        v_sr_no := v_sr_no + 1;
      END IF;
    END IF;
  END LOOP;
  DELETE FROM WWV_FLOW_FILES where name = :P2_FILE_UPLOAD;
  IF (v_error_cd = 1) THEN
    INSERT INTO temp_update (USERNAME, ERROR_DESC)
    USING :P1_USER_ID, 'Spreadsheet column order incorrect.';
  END IF;
  --DELETE FROM TEMP_UPDATE WHERE PCFN LIKE 'P%';
  --UPDATE TEMP_UPDATE SET USERNAME = REPLACE(USERNAME, ' ', '');
END;

It doesn't seem to like my IF statement that checks the values of the first line of the spreadsheet. I'm sure that's not the only issue though.

Steven

Anton Scheffer

Any reason why you use a ( after then?

Doolius

Ahh duh.

Thank you thank you.

Compiles fine and seems to work correctly surprisingly.

Doolius
Answer

FINAL CODE:

DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   NUMBER := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
  v_rows        NUMBER;
  v_sr_no       NUMBER := 1;
  v_first_line_done boolean := false;
  v_error_cd    NUMBER := 0;
BEGIN
  delete from TEMP_UPDATE where username = :P1_USER_ID;
  -- Read data from wwv_flow_files
  select blob_content
    into v_blob_data
    from wwv_flow_files
   where name = :P2_FILE_UPLOAD;
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE(v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table(v_line);
      -- Insert data into target table
      IF (v_first_line_done != true) THEN
        v_first_line_done := true;
        IF (v_data_array(1) LIKE '%PCFN%' AND
            v_data_array(2) LIKE '%TCN%' AND
            v_data_array(3) LIKE '%USERNAME%' AND
            v_data_array(4) LIKE '%ORIGIN%' AND
            v_data_array(5) LIKE '%BOOKING_NUMBER%' AND
            v_data_array(6) LIKE '%RDD%' AND
            v_data_array(7) LIKE '%CARRIER_SCAC%') THEN
          v_error_cd := 0;
        ELSE
          v_error_cd := 1;
        END IF;
      ELSIF (v_first_line_done = true AND v_error_cd = 0) THEN
        EXECUTE IMMEDIATE 'insert into TEMP_UPDATE (PCFN,TCN,USERNAME,ORIGIN,BOOKING_NUMBER,RDD,CARRIER_SCAC) values (:1,:2,:3,:4,:5,:6,:7)'
        USING v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), v_data_array(5), v_data_array(6), v_data_array(7);
        -- Clear out
        v_line := NULL;
        v_sr_no := v_sr_no + 1;
      END IF;
    END IF;
  END LOOP;
  DELETE FROM WWV_FLOW_FILES where name = :P2_FILE_UPLOAD;
  IF (v_error_cd = 1) THEN
    INSERT INTO temp_update (USERNAME, ERROR_DESC)
    VALUES (:P1_USER_ID, 'Spreadsheet column order incorrect.');
  END IF;
END;

Thanks for all the help everyone!

Steven

Marked as Answer by Doolius · Sep 27 2020
Scott Wesley

I realise I'm late to this party, but successfully implementing a CSV-to-table PL/SQL process is always an accomplishment.

I can't go past this plugin for processing any form of spreadsheet, but yes it does use collections - obviously.

- Process Type Plugin - EXCEL2COLLECTIONS

You can reverse engineer the raw code and insert into your own tables instead of a collection. I did this successfully in an APEX 3.2 environment.


Post Details

Locked on Jul 21 2014
Added on Jun 12 2014
24 comments
11,947 views