
SQL & PL/SQL


Getting ORA-01403 in dba_segments query during a trigger

JP KrepsAug 20 2009 — edited Aug 20 2009
Hi, Everyone --

I need to set up a process which will automatically start Oracle auditing on a table when it's created on a particular tablespace.

I've created an "after create on database" DDL trigger which will fire when (ORA_DICT_OBJ_TYPE = 'TABLE').

In this trigger, I use the DBMS_STANDARD package trigger attribute functions in a query against the dba_segments view to check whether the new table has been created in the targeted tablespace. Here's the query:

SELECT DISTINCT s.tablespace_name
INTO tablespace_name
FROM sys.dba_segments s
WHERE s.owner = ORA_DICT_OBJ_OWNER
AND s.segment_name = ORA_DICT_OBJ_NAME
AND s.segment_type IN ('TABLE', 'TABLE PARTITION')
AND s.tablespace_name = target_tablespace_name;

The only purpose of this query is to make sure that the table is contained in the targeted tablespace identified by the "target_tablespace_name" constant used in the WHERE clause. If the query runs without error, then my trigger creates a DBMS_SCHEDULER job that executes an AUDIT command to start Oracle auditing on the new table.
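For readers following along, the setup described above can be sketched roughly like this. This is a hypothetical reconstruction, not the poster's actual code: the trigger name, target tablespace name, job naming, and AUDIT options are all assumptions.

```sql
-- Hypothetical sketch: tab_audit_trg, AUDIT_TS, and the AUDIT options
-- are placeholders, not the poster's actual code.
CREATE OR REPLACE TRIGGER tab_audit_trg
AFTER CREATE ON DATABASE
WHEN (ORA_DICT_OBJ_TYPE = 'TABLE')
DECLARE
  target_tablespace_name CONSTANT VARCHAR2(30) := 'AUDIT_TS';
  tablespace_name        VARCHAR2(30);
BEGIN
  SELECT DISTINCT s.tablespace_name
    INTO tablespace_name
    FROM sys.dba_segments s
   WHERE s.owner = ORA_DICT_OBJ_OWNER
     AND s.segment_name = ORA_DICT_OBJ_NAME
     AND s.segment_type IN ('TABLE', 'TABLE PARTITION')
     AND s.tablespace_name = target_tablespace_name;

  -- AUDIT is DDL, so it cannot run directly inside a DDL trigger;
  -- submit it through a one-off DBMS_SCHEDULER job instead.
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'AUDIT_' || ORA_DICT_OBJ_NAME,
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN EXECUTE IMMEDIATE ''AUDIT SELECT, INSERT, UPDATE, DELETE ON '
                  || ORA_DICT_OBJ_OWNER || '.' || ORA_DICT_OBJ_NAME || '''; END;',
    enabled    => TRUE,
    auto_drop  => TRUE);
END;
/
```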

To test the trigger, within the trigger code, I put the query inside a PL/SQL block with a "WHEN OTHERS" exception clause that will send formatted error detail to the alert log. Then I successfully create a test table on the targeted tablespace. The trigger fires, but the output shows that the query in my trigger failed with an "ORA-01403: no data found" error, indicating that the dba_segments view contains no records showing that the new table is on the targeted tablespace.

But when, in an already opened PL/SQL session, I immediately run the same query (with the appropriate literals in place of the trigger attribute functions) then the query works with the name of the tablespace returned as expected!

I tried an experiment with the trigger running the same query against the dba_tables view instead of the dba_segments view. It worked without error. The only problem is that if you create a partitioned table, the "tablespace_name" column in the dba_tables view is NULL (which makes sense, since the various table partitions can be contained in different tablespaces). So, unfortunately, in the case of creating partitioned tables, the purpose of the query would be defeated.

Why does the trigger work when it queries the dba_tables view, but fails when it queries the dba_segments view? Is there a timing issue which causes a lag between the time the table is created and the time that the dba_segments view is updated? Would the trigger fire inside this time lag, thus causing the dba_segments query to fail? Or is there another explanation?

Thanks in advance for any advice you can give me!

Comments

onkar.nath

V$SQL should give you the query based on session id.

If the table is queried that heavily, you should consider creating an index on it. You should also consider tuning the query in question, but first capture the query itself. If you can enable tracing on the session for a given period of time, enable a 10046 trace, which should give you the exact query and its plan.
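One common way to enable the 10046-style trace mentioned above is through DBMS_MONITOR; the SID and serial# values here are placeholders you would look up in v$session first.

```sql
-- Level-12-equivalent trace: includes bind values and wait events.
-- 123/456 are placeholder SID and serial# values from v$session.
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);

-- ... let the workload run for the observation window, then:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
```

The trace file lands in the instance's trace directory and can be formatted with tkprof.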

Onkar

observer_83

Hi Smohib,

If you have already located the suspected query and want to extract the plan based on the partial query text, use the query below:

select sql_text from v$sqltext where hash_value='SQL_HASH_VALUE' order by piece;

sql_hash_value, you can query from v$session as below

select sql_hash_value,prev_hash_value /* prev_hash_value is historic value of the same sql */ from v$session where sid='&sid';

then, extract the plan of the query and share ..

Smohib
Thanks observer_83 and onkar.nath for the tips; I will look into them once I get a remote connection from the client. Meanwhile, I just want to know what the statement below is about.
DECLARE
  job BINARY_INTEGER := :job;
  next_date TIMESTAMP WITH TIME ZONE := :mydate;
  broken BOOLEAN := FALSE;
  job_name VARCHAR2(30) := :job_name;
  job_owner VARCHAR2(30) := :job_owner;
  job_start TIMESTAMP WITH TIME ZONE := :job_start;
  window_start TIMESTAMP WITH TIME ZONE := :window_start;
  window_end TIMESTAMP WITH TIME ZONE := :window_end;
BEGIN
  begin PROCMRRSTOCK; end;
  :mydate := next_date;
  IF broken THEN :b := 1; ELSE :b := 0; END IF;
END;
This is the query in the AWR report that takes 51% of database time for a single execution; I will have to tune this process.
Smohib

Hi,

I found out that I can't generate an explain plan for a "DECLARE" statement; it says the "SELECT" keyword is missing.

The second query I mentioned appears as "UPDATE MOHIBSTOCK SET MRRNO=:B2 WHERE MRRID=:B1" in the AWR report; I doubt I will get sql_fulltext for this one either.
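Since the statement is already captured in AWR, one way to pull its historical execution plan without running EXPLAIN PLAN at all is DBMS_XPLAN.DISPLAY_AWR; the sql_id shown here is a placeholder.

```sql
-- 'abcd1234efgh5' is a placeholder sql_id; take the real one from the
-- AWR report or from dba_hist_sqltext.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('abcd1234efgh5'));
```

This also sidesteps the "SELECT keyword missing" error, since it reads the stored plan rather than parsing the DECLARE block.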

Also, please let me know which sections in AWR I need to check that will point to the cause of the slow performance.

Thanks,

Mohib

Harmandeep Singh

To get the full query

select sql_text from dba_hist_sqltext where sql_id='<text>';

1. For a query that has very high executions, multiple times a day, create the necessary index so that you have less load on the buffer cache, which in turn improves overall application performance.

2. Generate the trace for the reports which are running slow to further analyze the issue.

It seems that at the start of the day, at 6:30 AM, the report takes time because there is no data in the cache yet.

In the evening, since most of the report data is cached, it runs fast.

At 10, it is slow for the same caching reasons, as well as the overall load on the box.

3. Check the buffer cache/SGA advisories too, to see if an increase in SGA is required.
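The buffer cache advisory mentioned in point 3 can be queried directly; for example:

```sql
-- Estimated physical reads at different candidate buffer cache sizes.
SELECT size_for_estimate, size_factor, estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = (SELECT value FROM v$parameter WHERE name = 'db_block_size')
 ORDER BY size_for_estimate;
```

A size_factor above 1 with sharply lower estd_physical_reads suggests the cache would benefit from more memory.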

Thanks,

Harman

Smohib

Hi,

@Harmandeep/ observer_83 / onkar.nath

Thanks for the tip, but unfortunately I got the same output from dba_hist_sqltext, i.e. "UPDATE MOHIBSTOCK SET MRRNO=:B2 WHERE MRRID=:B1".

Anyway, I successfully created an index on the table "MOHIBSTOCK" and things are better: the report is now being generated in about 10-30 minutes. I will have to collect more information from the client after a few more days of observation.

Yes, I will look into the buffer cache and SGA too, but there is an issue with RAM here: very little RAM is configured, and the client is not ready to upgrade it for the time being. Anyway, I will look into it.

I never wanted to bring the listener or anything related to the DB down, so I brought the application server down (restarted it) so that users who had not ended their sessions would be disconnected, leaving the table free for me to create the index. That didn't happen, so I checked v$session, killed the process using the "MOHIBSTOCK" table (run by SYS), and then successfully created the index on the table.

Although things are working fine, I just want to know whether what I did was correct: killing a process run by SYS?

Is there any link where I can find what happens if a SYS process is killed, and how it impacts other processes of the database?

Please help me on this.

Thanks all for ideas

Mohib

observer_83

Smohib,

Normally, index creation in production is done when the application server is down or when there is no activity on that table. If you are sure that no users are logged in and changing that table, then you may create the index online too.

Here, creating an index on "MRRID" will be helpful. Once the necessary RAM is added, you can alter the SGA, buffer cache, and shared pool accordingly.
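A sketch of the online index creation being discussed; the index name is a placeholder.

```sql
-- ONLINE allows concurrent DML on the table during the build, at the
-- cost of some extra overhead; it still needs brief locks at the start
-- and end of the operation.
CREATE INDEX mohibstock_mrrid_idx ON MOHIBSTOCK (MRRID) ONLINE;
```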

Thanks

GV

observer_83

Also, killing any known, user-defined process at the session level is safe. If the table belongs to an application user, then you should create the index as that user and not as "sys".
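For reference, identifying and killing the session that is blocking the index build looks roughly like this; the table name matches the thread, but the sid/serial# values are placeholders.

```sql
-- Find sessions holding locks on the table in question.
SELECT s.sid, s.serial#, s.username
  FROM v$session s
  JOIN v$locked_object lo ON lo.session_id = s.sid
  JOIN dba_objects    o  ON o.object_id   = lo.object_id
 WHERE o.object_name = 'MOHIBSTOCK';

-- Kill a specific session ('sid,serial#' from the query above).
ALTER SYSTEM KILL SESSION '123,456';
```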

Thanks

GV

Smohib
Answer

GV,

Yes, we brought down the application server, but I could still find only SYS using the table, so I killed that process and created the index.

I had tried creating the index online too, but got a "resource busy" error; that is why I opted for the steps above.

Yes, I had created the index logged in as the application user, not as sys, since the table was created by a developer; I will confirm again.

Mohib

Marked as Answer by Smohib · Sep 27 2020

Post Details

Locked on Sep 17 2009
Added on Aug 20 2009
4 comments
885 views