If there is a need for greater flexibility and/or better precision, one can create their own forecasting report(s) by directly querying the underlying OEM repository tables/views.
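As a minimal sketch of that approach: the repository view name sysman.mgmt$db_tablespaces and its size columns are assumptions here; the exact views and columns vary by EM version, so verify them against your repository before relying on this.

```sql
-- Hedged sketch: assumes the EM repository view sysman.mgmt$db_tablespaces
-- with tablespace_size / tablespace_used_size columns (bytes).
-- Verify the view and column names in your EM version first.
select target_name,
       tablespace_name,
       round(tablespace_size / 1024 / 1024)      size_mb,
       round(tablespace_used_size / 1024 / 1024) used_mb
from   sysman.mgmt$db_tablespaces
order  by target_name, tablespace_name;
```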
I had a presentation on this topic a couple of years ago:
Thanks for that great query, Fran....
Edit: except I realised it doesn't actually work. rtime is VARCHAR2, so taking MAX() of it is not valid and the results are incorrect.
Edited by: Dropbear67 on Feb 25, 2013 7:34 PM
Here is an edit of fran's script that should work across year boundaries and also takes into account datafile autoextensibility.
Assumption is that AWR snaps are hourly.
Edited by me to note that it also takes datafile autoextensibility into account.
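The hourly-snapshot assumption (and the current retention) can be checked against the AWR control settings before running the query; the 1/24/168/720-row window deltas below are only meaningful with hourly snapshots:

```sql
-- Check the AWR snapshot interval and retention.
select snap_interval, retention
from   dba_hist_wr_control;
```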
select tsname as "Tablespace",
       tsize  as "Curr Size(Mb)",
       tused  as "Used (Mb)",
       tfree  as "Free (Mb)",
       tmax   as "Max Size (Mb)",
       val1   as "Delta 1hr",
       val2   as "Delta 1dy",
       val3   as "Delta 1wk",
       val4   as "Delta 1mnt",
       round((tused / tmax) * 100, 2)          as "%Used of Max",
       round(((tused + val3) / tmax) * 100, 2) as "%UsedMax_+1wk",
       round(((tused + val4) / tmax) * 100, 2) as "%UsedMax_+1mnt",
       case when ((tused + val3) / tmax) * 100 < 80 then 'NORMAL'
            when ((tused + val3) / tmax) * 100 between 80 and 90 then 'WARNING'
            else 'CRITICAL'
       end "Status +1Wk",
       case when ((tused + val4) / tmax) * 100 < 80 then 'NORMAL'
            when ((tused + val4) / tmax) * 100 between 80 and 90 then 'WARNING'
            else 'CRITICAL'
       end "Status +1Mnth"
from (
  select tsname,
         round(tablespace_size * t2.block_size / 1024 / 1024, 2)     tsize,
         round(tablespace_usedsize * t2.block_size / 1024 / 1024, 2) tused,
         round((tablespace_size - tablespace_usedsize) * t2.block_size / 1024 / 1024, 2) tfree,
         -- For autoextensible datafiles use maxbytes, otherwise current bytes
         (select sum(decode(b.autoextensible, 'YES', b.maxbytes, b.bytes)) / 1024 / 1024
            from dba_data_files b
           where b.tablespace_name = t2.tablespace_name
           group by b.tablespace_name) tmax,
         round(val1 * t2.block_size / 1024 / 1024, 2) val1,
         round(val2 * t2.block_size / 1024 / 1024, 2) val2,
         round(val3 * t2.block_size / 1024 / 1024, 2) val3,
         round(val4 * t2.block_size / 1024 / 1024, 2) val4
  from (
    -- Window deltas: 1 snap = 1 hour, 24 = 1 day, 168 = 1 week, 720 = 1 month
    select distinct tsname, t1.snap_id, rtime, tablespace_size, tablespace_usedsize,
           tablespace_usedsize - first_value(tablespace_usedsize)
             over (partition by tablespace_id order by t1.snap_id rows 1 preceding)   val1,
           tablespace_usedsize - first_value(tablespace_usedsize)
             over (partition by tablespace_id order by t1.snap_id rows 24 preceding)  val2,
           tablespace_usedsize - first_value(tablespace_usedsize)
             over (partition by tablespace_id order by t1.snap_id rows 168 preceding) val3,
           tablespace_usedsize - first_value(tablespace_usedsize)
             over (partition by tablespace_id order by t1.snap_id rows 720 preceding) val4
    from (
      -- Subtract recycle-bin space from used size
      select t1.tablespace_size, t1.snap_id, t1.rtime, t1.tablespace_id,
             t1.tablespace_usedsize - nvl(t3.space, 0) tablespace_usedsize
      from dba_hist_tbspc_space_usage t1,
           dba_hist_tablespace_stat t2,
           (select ts_name, sum(space) space from recyclebin group by ts_name) t3
      where t1.tablespace_id = t2.ts#
        and t1.snap_id = t2.snap_id
        and t2.tsname = t3.ts_name (+)
    ) t1,
    dba_hist_tablespace_stat t2
    where t1.tablespace_id = t2.ts#
      and t1.snap_id = t2.snap_id
  ) t1,
  dba_tablespaces t2
  where t1.tsname = t2.tablespace_name
    and t1.snap_id = (select max(snap_id) from dba_hist_snapshot)
    and t2.contents = 'PERMANENT'
);
Can you explain how it will work across year boundaries, and how it takes datafile autoextensibility into account?
Kindly elaborate on how the newly edited script works, thanks.
In my opinion the answer is to use DBA scheduler jobs to keep track of database growth, and to build forecasts from that collected growth data. We can also increase the AWR retention period of our database to keep a record of the past few months instead of just one month.
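As a sketch of the retention change, AWR settings can be adjusted with DBMS_WORKLOAD_REPOSITORY; both parameters are given in minutes (129600 minutes is roughly 90 days):

```sql
-- Keep ~90 days of AWR history with hourly snapshots.
-- Note: longer retention grows the SYSAUX tablespace accordingly.
begin
  dbms_workload_repository.modify_snapshot_settings(
    retention => 129600,  -- minutes (~90 days)
    interval  => 60);     -- minutes between snapshots
end;
/
```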
select 'ONLINE_REDOLOG' as TYPE, sum(BYTES*MEMBERS)/1048576 as SIZE_MB from v$log union
select 'DATAFILES', sum(BYTES)/1048576 from v$datafile union
select 'TEMPFILES', sum(BYTES)/1048576 from v$tempfile union
select 'CONTROLFILES', sum(BLOCK_SIZE*FILE_SIZE_BLKS)/1048576 from v$controlfile union
select 'FLASH_RECOVERY_AREA', VALUE/1048576 from v$parameter where NAME='db_recovery_file_dest_size';
I like this bit of SQL, but if you are using a cluster, note the join to dba_hist_tablespace_stat: that view includes an instance_number column. It looks to me that you just have to filter on a fixed value for this column to get the correct result.
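A minimal sketch of that fix, assuming the current instance's statistics are wanted (any fixed instance number would serve equally well): add an instance_number predicate to each join against dba_hist_tablespace_stat, e.g.

```sql
-- On RAC, pin dba_hist_tablespace_stat rows to one instance;
-- without this the join can return multiple rows per snapshot.
and t2.instance_number = (select instance_number from v$instance)
```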