Accessing files copied by UTL_FILE.FCOPY

662454
662454 Member Posts: 5
edited Oct 3, 2008 5:51AM in General Database Discussions
Hi folks,

I ran into a strange (at least to me) problem. I have implemented a small
document management system (DMS) in an existing Oracle 10g database
application on a W2K server machine. The DMS requires copying files from one
folder to another. The physical files (originals as well as copies) are
located on a mapped resource. The mapped drive points to a local drive on the
server, so the file path is identical on the server and on the client machines
that use this resource.
The drive is mapped by a group policy startup script, so the SYSTEM user that
the Oracle service runs under "sees" that resource.
Thus, copying files with utl_file.fcopy is no problem.
But I cannot access any copied file with its application. An MS Excel
file won't open at all; the error message says the network drive is not
available or the file is write-protected. A PDF file is opened by Adobe
Reader but shows empty pages (in the correct number, by the way).
The network drive is available and, as far as I can see, there is no lock
on any file. The original files open without problems.
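
For reference, the copy itself is done roughly like this (a minimal sketch; the directory object name DOC_DIR and the file names are just placeholders, not my real ones):

  BEGIN
    -- source directory object, source file, destination directory object, destination file
    UTL_FILE.FCOPY('DOC_DIR', 'original.xls', 'DOC_DIR', 'copy.xls');
  END;
  /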

Do you have any idea?

Regards
Claus.

Answers

  • 662454
    662454 Member Posts: 5
    The result of my own research is that utl_file.fcopy isn't meant for copying binary files such as XLS or PDF files. The copied file's length had changed slightly, which can be explained by fcopy appending CR+LF characters after each "line" (whatever a line means in a binary file). Thus, it's not suitable for my needs.

    The DBMS_FILE_TRANSFER package won't do either, because it requires directory objects as source and destination. Because the physical file structure keeps a dedicated folder for each file and its versions, I would have to dynamically (re-)create directory objects for every copy operation. This could be done with dynamic DDL, but the whole process is started by a trigger, which doesn't allow the implicit commit of DDL statements. Yes, I could wrap that in an autonomous transaction, but that seems to be bad practice. On top of that, the copy procedure of that package requires the file size to be an exact multiple of 512 bytes (I didn't quite get the reason for that).
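
    Just to illustrate (a sketch; the directory names and paths below are made up): for every copy I would need something like

    DECLARE
      v_src_path VARCHAR2(200) := 'g:\docs\4711';     -- made-up folder of the original
      v_dst_path VARCHAR2(200) := 'g:\docs\4711\v2';  -- made-up folder of the copy
    BEGIN
      -- the DDL part, which the trigger context does not allow
      EXECUTE IMMEDIATE 'CREATE OR REPLACE DIRECTORY src_dir AS ''' || v_src_path || '''';
      EXECUTE IMMEDIATE 'CREATE OR REPLACE DIRECTORY dst_dir AS ''' || v_dst_path || '''';
      -- source directory object, source file, destination directory object, destination file
      DBMS_FILE_TRANSFER.COPY_FILE('SRC_DIR', 'file.pdf', 'DST_DIR', 'file.pdf');
    END;
    /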

    It seems I will have to put all those files (more than 1,000 of them, more than 2 GB in total) into BLOBs. I wonder what problems that will cause me...
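
    If I go that way, loading a file would look roughly like this (a sketch; the documents table, the doc_seq sequence, and the DOC_DIR directory object are made up):

    DECLARE
      v_blob  BLOB;
      v_bfile BFILE   := BFILENAME('DOC_DIR', 'original.pdf');
      v_dest  INTEGER := 1;
      v_src   INTEGER := 1;
    BEGIN
      -- create the row with an empty LOB and get its locator back
      INSERT INTO documents (id, content)
      VALUES (doc_seq.NEXTVAL, EMPTY_BLOB())
      RETURNING content INTO v_blob;
      -- stream the OS file into the BLOB
      DBMS_LOB.OPEN(v_bfile, DBMS_LOB.LOB_READONLY);
      DBMS_LOB.LOADBLOBFROMFILE(v_blob, v_bfile, DBMS_LOB.LOBMAXSIZE, v_dest, v_src);
      DBMS_LOB.CLOSE(v_bfile);
      COMMIT;
    END;
    /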

    Still, any hints and opinions are welcome.

    Regards
    Claus.
  • 662454
    662454 Member Posts: 5
    Problem solved by using DBMS_SCHEDULER's abilities.

    Claus.
  • Rnr-Oracle
    Rnr-Oracle Member Posts: 1,269 Employee
    To expand on what I assume the solution was (for reference).

    The poster probably used the dbms_scheduler get_file and put_file procedures, which are new in 11g.

    They allow files to be copied to and from the operating system as a particular operating system user. That user's username and password must be stored in the database as a credential object using dbms_scheduler.create_credential.
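
    A rough sketch of how that looks (the credential, account, and file names are examples only, and the GET_FILE parameter names are from memory; please check the 11g DBMS_SCHEDULER documentation for the exact overloads):

    DECLARE
      v_contents BLOB;
    BEGIN
      -- store the operating system account once as a credential object
      DBMS_SCHEDULER.CREATE_CREDENTIAL(
        credential_name => 'copy_cred',
        username        => 'MYDOMAIN\copyuser',
        password        => 'secret');
      -- read a file from the operating system into the database as that user
      DBMS_SCHEDULER.GET_FILE(
        source_file     => 'c:\docs\original.pdf',
        credential_name => 'copy_cred',
        file_contents   => v_contents);
    END;
    /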

    Hope someone finds this useful,
    Ravi.
  • 662454
    662454 Member Posts: 5
    Hi there

    We are running a 10g version, so I wasn't able to use the get_file/put_file methods. Instead, I created a job of type 'executable' and provided a rather lengthy string as the job action.
    The action string calls cmd.exe, which in turn calls copy.bat.
    Like this:
    "c:\winnt\system32\cmd.exe /q c:\copy.bat path\to\sourcefile path\to\destinationfile > nul"

    No other parameters should be passed to cmd.exe, contrary to what other examples suggest. Copy.bat is a wrapper batch file around the native copy command.
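
    In PL/SQL terms the job is created roughly like this (a sketch; the job name is made up, and in reality the action string is built dynamically per file):

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name   => 'copy_doc_job',
        job_type   => 'EXECUTABLE',
        job_action => 'c:\winnt\system32\cmd.exe /q c:\copy.bat path\to\sourcefile path\to\destinationfile > nul',
        enabled    => TRUE,    -- no schedule, so it runs once immediately
        auto_drop  => TRUE);   -- and drops itself afterwards
    END;
    /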

    That's all.

    Claus.
This discussion has been closed.