What are they? Probably a mistake made by someone who didn't understand OFA.
My crystal ball says they are probably archive log files. Post the name of one of the files. And what is your SID name?
It would be nice to know the part in front of '.dbf'. $ORACLE_HOME/dbs is used as the default location by several Oracle processes.
What's the Oracle version?
It looks like someone created a tablespace or added a datafile without the full path. One of the 'bad' things is that if the relevant parameter
is not set, the default location is $ORACLE_HOME/dbs,
so when you create a datafile without a full path, Oracle will put it there.
You can check v$datafile to verify. Also look into your alert.log file to see when that happened.
I would look at the following to find out what's writing to your $ORACLE_HOME/dbs.
Check for archive logs:
SQL> show parameter log_archive_dest;
Check for datafiles:
SQL> select name from v$datafile;
I've also seen this happen before. It seems to me it had something to do with running RMAN and not having something set properly.
How big are the .dbf files? and what are the names of the files?
Also, you can do a strings on the file and then pipe it to more.
$ strings <filename> | more
You should be able to glean enough intelligible information out of the header of the file this way to determine what the file is.
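As a runnable sketch of the header-inspection idea (the file contents below are made up so the commands work anywhere; on your server you would point at a real file under $ORACLE_HOME/dbs):

```shell
# Create a stand-in file; real Oracle files usually carry identifying
# text (database name, file type, etc.) near the start, which strings
# will surface.
printf 'ORACLE oraprod archived redo log demo\n' > /tmp/mystery.dbf

# Only the header is interesting, so limit the read to the first 8 KB.
head -c 8192 /tmp/mystery.dbf | strings | head -20
```

If the output mentions your database name or anything redo/archive related, that is a strong hint about what the file is.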
Finally, compare the date of the file with those of the regular database datafiles. You can also check whether these are actual live database files by looking in dba_data_files, but my guess is that they are probably not actual data files.
Wow, I should have known better than to start a thread and then go to lunch. Sorry I’m taking so long getting back to all of you.
This is Oracle 11g. Our current production system is 10g.
Here’s a little background info. We have a flat file base ERP. Each night we copy .txt files from the ERP to the Oracle server. We have external tables, table_name_load, mapped over the .txt files. At midnight, we run a massive collection of inserts into regular tables from the load tables. We then use this day-old set of data as the basis for reports and several ApEx applications.
You’re right, I can’t claim to have a good understanding of Optimal Flexible Architecture (OFA). I’m a developer first and I’m working on DBA skills.
The files are all named something like:
There are over 300 of them and they are about 45M to 50M each. (The whole directory is about 14G.)
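The arithmetic is consistent: 300 files at roughly 45M to 50M each comes to about 14G. For reference, a quick way to count such files and total their size (the /tmp/dbs directory and file names below are stand-ins so the sketch is runnable anywhere; on the server you would use $ORACLE_HOME/dbs):

```shell
# Build a demo directory with three small stand-in files.
mkdir -p /tmp/dbs
for i in 1 2 3; do
  dd if=/dev/zero of=/tmp/dbs/arch_demo_$i.dbf bs=1024 count=4 2>/dev/null
done

ls /tmp/dbs/*.dbf | wc -l        # how many files
cat /tmp/dbs/*.dbf | wc -c       # total size in bytes
```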
SID is oraprod
I specified locations for each of the three datafiles for our main tablespace (on three different drives), a secondary “hot” tablespace with the most frequently accessed datafile, three index datafiles for our main index tablespace, the “hot” index tablespace, temp, undo, and the control files.
The only things that I did not specify a location for are: sysaux; system; and users.
SQL> select name from v$datafile;
13 rows selected.
Based on what I’ve read in the 2 Day DBA, I put my flash recovery area all alone on /d04 and I have USE_DB_RECOVERY_FILE_DEST set.
The files were created at two times: one during my initial data load yesterday; the other during the scheduled job this morning starting at midnight.
After checking our production system, I realize that we have a whole bunch of these same kinds of files in $ORACLE_HOME/dbs there also. However, I only have them from last night, not two separate times.
I’m concerned that the older files on the new system aren’t clearing out like the files on the production system are.
Most likely archive log files.
Typically, I set up my format as arch_SID_%t_%s_%r.ARC. With the SID in there, at least you can differentiate the files.
Archive log files are usually cleaned up when you take a backup, whether
RMAN or user-managed. If you do user-managed backups, you need to do the cleanup yourself. You mentioned 'the new system', which suggests you may be missing some cron jobs (or scheduled jobs) to do the backups.
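As a sketch of the RMAN-managed cleanup described above (run from the RMAN prompt; whether this fits depends on your backup strategy, so treat it as an example rather than a recommendation):

```
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
```

This backs up every archived log and deletes each one once it is safely in the backup set. An alternative is to delete by age, for example DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';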
%r is version dependent and is sensitive to the compatible parameter. I haven't experimented with some of the other variables, so they may be version dependent too.
Yes, I think you're right about them being archive log files. I scheduled a recurring backup while I was going through the Universal Installer but I don't remember it asking about deleting archive log files whenever it finished. I have deleted that backup job and created another one, via Enterprise Manager, that should backup and then delete the archive log files.
Thanks for all your input.
P.S. I like your suggestion about the name format. Something like .ARC makes much more sense than .dbf.
Yes, I'm pretty confident that the files are archive log files: not only do their names begin with arch, but the times when the files were generated coincide with your periods of increased activity (data loads, cron jobs).
You can verify your setting for the archive logs by logging into SQL*Plus as sys and then showing the parameter for 'dest'
SQL> sho parameter dest
The above will return several destination parameters and then you can see which one is set for the $ORACLE_HOME/dbs directory.
To change this (I would recommend setting it to the same drive as your flash recovery area), type in the following command (as an example):
alter system set log_archive_dest_1='LOCATION=/d04/arch/oraprod' scope=spfile;
Then you will need to bounce your database for the change to take effect.
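For completeness, the bounce-and-verify sequence might look like this from SQL*Plus (a sketch, assuming you are connected as sysdba):

```
SQL> shutdown immediate
SQL> startup
SQL> show parameter log_archive_dest_1
```

The final show parameter confirms that the new destination took hold after the restart.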
I also agree with 'dbtoo' regarding the naming convention for archive logs.
I typically set mine to <sid>_%t_%s_%r.arc. Putting arch as a prefix is redundant if you are using .arc as a suffix.
Hope this helps...