Our application needs an automated process to generate export files, and Data Pump in 10g works great.
However, we ran into problems when we tried to move the exports to other servers. The export files cannot be read by other users, while the log file can. The generated files look like this:
test.dmp 111,111 rw-rw---- oracle
test.log 2,357 rw-rw-r-- oracle
The application can NOT log in as the "oracle" user. Is there a setting in Data Pump that we can use to make the export files readable?
Could you provide a bit more detail about what you are trying to achieve, what the Data Pump client command line you are using looks like, and what errors you are seeing?
Exactly what other "users" are you referring to that can read the log files but not the dump files? I assume you are trying to import these dump files on another server machine using Data Pump impdp. The only operating system user that can read them is the one the Oracle server runs as. To read them on behalf of a database user, you have to define a directory object that points to the path containing the dump files and then grant READ access on that directory object to those database users.
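For the import-on-another-server case described above, the server-side setup would be roughly the following. This is only a sketch; the directory path, directory object name, and user name are placeholders, not taken from this thread:

```sql
-- Run as a privileged user; names and the path are examples only.
CREATE DIRECTORY dpump_in AS '/u01/app/oracle/dpump';  -- path holding the dump files
GRANT READ ON DIRECTORY dpump_in TO app_user;          -- let this database user read them

-- app_user could then run something like:
--   impdp app_user/password DIRECTORY=dpump_in DUMPFILE=test.dmp
```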
The "users" I am referring to are operating system users, in this case, Linux users.
No, I'm not trying to import the files but to move them to another folder. This is a requirement from our application, but the application can't log in to the Linux server as the "oracle" user.
The export files are generated by the dbms_datapump package; a PL/SQL function calls it. There are no errors, but the export file generated is not readable by other Linux users.
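For context, a minimal dbms_datapump export of the kind described might look like the following. This is a sketch only; the schema name, file names, and directory object are assumptions, since the thread does not show the actual function:

```sql
-- Sketch of a schema-mode export via dbms_datapump (names are placeholders).
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.add_file(h, 'test.dmp', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.ku$_file_type_dump_file);
  DBMS_DATAPUMP.add_file(h, 'test.log', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
  DBMS_DATAPUMP.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''APP_SCHEMA'')');
  DBMS_DATAPUMP.start_job(h);
  DBMS_DATAPUMP.wait_for_job(h, job_state);
END;
/
```

The dump file this writes is created by the database server processes, which is why it ends up owned by the "oracle" OS user regardless of which database user ran the PL/SQL.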
Edited by: user11139073 on May 6, 2009 4:09 PM
Ok, then if you're asking whether there's a way for Data Pump to automatically change the permissions and/or ownership of the exported dump files, then the answer is no. Currently that cannot be done.
Either you'll have to do this manually or perhaps you could make use of the dbms_file_transfer.copy_file() procedure to move the dump files into a location on the server that is accessible to the linux users.
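A sketch of the copy_file() suggestion, assuming a second directory object has already been created for the destination path (both object names here are made up). Note that dbms_file_transfer requires the file size to be a multiple of 512 bytes:

```sql
-- Server-side copy between two directory objects (names are examples only).
BEGIN
  DBMS_FILE_TRANSFER.copy_file(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'test.dmp',
    destination_directory_object => 'EXPORT_STAGE_DIR',
    destination_file_name        => 'test.dmp');
END;
/
```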
Thanks John for your reply.
Our problem is more complex. Instead of moving the export file to a different folder, we need to move the file to a different server. We use the dbms_scheduler to call a shell script. The shell script will scp the file to the remote server.
The problem is that when dbms_scheduler calls the shell script, it runs as the "nobody" user. As you can see, "nobody" can't move the export file because the export file is not readable by other users. We are aware that we could change "nobody" to "oracle" by editing a config file, but that raises security concerns we are not willing to risk.
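The permission mismatch can be reproduced locally without Oracle. This sketch uses a temporary file standing in for the dump file; only the modes matter, the names are made up:

```shell
#!/bin/sh
# Sketch: why "nobody" cannot read the dump file, and what a chmod would fix.
TMP=$(mktemp -d)
DUMP="$TMP/test.dmp"        # stands in for the expdp output file

touch "$DUMP"
chmod 640 "$DUMP"           # rw-r----- : the shape expdp produces
stat -c '%A' "$DUMP"        # prints -rw-r----- ; "other" users get nothing

chmod o+r "$DUMP"           # what the poster wishes expdp could do itself
stat -c '%A' "$DUMP"        # prints -rw-r--r-- ; now any user can read it

rm -rf "$TMP"
```

Since only "oracle" (or a dba-group member) can issue that chmod, the script running as "nobody" is stuck, which is exactly the automation gap described above.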
We were hoping there was something we could change in the export settings to make the file readable. If not, we may have to move the file manually, since automation seems impossible.
I have run into the same problem - expdp writes dump files with UNIX owner=oracle, group=dba; in our case the owner has read/write, the group has read, and the world has no permissions:
-rw-r----- 1 oracle dba 176607232 May 21 16:40 testfile_schema.dmp
One suggestion, though it does involve a security risk: add the 'nobody' unix user to the dba group, so that 'nobody' can at least read the dump file. The security guys might not like that, though - it gives the 'nobody' user the ability to delete a lot of Oracle files which are group writable. But it does avoid using the oracle unix user.
Another, more out-there suggestion, depending on your script and system: have your UNIX system administrator use sudo to let the 'nobody' user run a specific 'chmod 644' command as the oracle user, so 'nobody' can change the permissions of the file itself. I'm not a UNIX sysadmin, but I know sudo can grant a specific user the ability to execute very specific commands as another user (say oracle, or another local unix user in the dba group, since it looks like your dumps are group writable).
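A sketch of what such a sudoers entry could look like. Both the path and the exact command are hypothetical and would need tailoring; the point is that sudo can be restricted to one command pattern, one target user:

```
# /etc/sudoers fragment (hypothetical - always edit with visudo).
# Lets 'nobody' run exactly one chmod pattern as oracle, nothing else:
nobody ALL = (oracle) NOPASSWD: /bin/chmod o+r /u01/app/oracle/export/*.dmp
```

The script running as 'nobody' would then call something like `sudo -u oracle /bin/chmod o+r /u01/app/oracle/export/test.dmp` before the scp.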
I realize this is an old posting, but I thought this additional information might be of use to anyone who finds it as I did while looking for workarounds for this issue.
If your dump directory is owned by an application user, you can issue a "chmod g+s <dump directory name>". This causes newly created files (including .dmp files created by expdp) to inherit the group from the dump directory rather than end up with the "dba" group, so then you are in control. Initially the owner of the file will still be "oracle", but once you compress or copy the file it will be owned by the user who executed that command. At that point the "oracle" user won't be able to read the file, so you will have to issue a "chmod o+r <file name>" to allow impdp to work on the file.
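The setgid trick above can be demonstrated without Oracle. A minimal sketch using a temporary directory standing in for the dump directory (names are made up; the group shown will be whatever group owns the directory on your system):

```shell
#!/bin/sh
# Sketch: the setgid bit makes new files inherit the directory's group.
STAGE=$(mktemp -d)              # stands in for the dump directory

chmod 2770 "$STAGE"             # rwx for owner/group plus the g+s bit
stat -c '%A' "$STAGE"           # prints drwxrws--- ; the 's' is the setgid bit

touch "$STAGE/test.dmp"         # a file expdp would create here...
stat -c '%G' "$STAGE/test.dmp"  # ...carries the directory's group,
stat -c '%G' "$STAGE"           # matching the directory, not the creator's default

rm -rf "$STAGE"
```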