From a query? where rownum<100
Can you explain the purpose of it?
No, I need an ORA- to be raised when trying to insert new rows after a limit is reached
-- should have refreshed before posting.
But why? I mean, what's the whole purpose of such requirement?
The purpose is to limit the number of rows for a table.
There is no Oracle provided row limit for a table.
Under dictionary management you could have assigned a maxextents parameter value (dba_tables.max_extents) to the table, effectively limiting its size, but with locally managed tablespaces every table can extend to the limits of the tablespace space available, since you will run out of space before the table runs out of extents.
So potential techniques to limit table size are:
1- Limit the owner's total space usage via a tablespace quota. While this will not place a hard limit on a specific table, it can limit the total space used by a schema. If the schema has only one table, then along with the space needed by the index(es) you can sort of limit the size.
2- Add a not null numeric column to the table with a fixed number of digits (say 5), then place a unique constraint, or at least a unique index, on this column. Use a sequence to populate the column. This will limit the maximum number of rows to the range of the sequence, 99999 in my example. A before insert trigger can be used to populate the column from the sequence.
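A rough sketch of point 2 (the table, column, and sequence names below are invented for illustration; the direct nextval assignment syntax assumes an older Oracle release, hence the select from dual):

```sql
-- Hypothetical example: cap a table at 99999 rows via a 5-digit unique key.
create sequence capped_seq
  start with 1
  maxvalue 99999
  nocycle
  nocache;

create table capped_tab (
  row_key  number(5) not null,
  payload  varchar2(100),
  constraint capped_tab_uk unique (row_key)
);

create or replace trigger capped_tab_bir
before insert on capped_tab
for each row
begin
  select capped_seq.nextval into :new.row_key from dual;
end;
/
```

Once the sequence passes its maxvalue, the next nextval call should fail with ORA-8004 (sequence exceeds MAXVALUE), so the insert is rejected.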
HTH -- Mark D Powell --
Your answer at point 1 is clear and reflects my current situation: a locally managed tablespace containing many other tables.
It looks like point 2 is a good trick. I'm going to test it
thanks a lot mpowel01
You can limit the number of rows using a trigger. Below is a quick and dirty example that has not been fully tested.
test@ORCL> create table rowlimit (col1 number);
test@ORCL> create or replace trigger limit_rows
2 before insert
3 on rowlimit
4 declare
5 v_count number;
6 begin
7 select count(*)
8 into v_count
9 from rowlimit;
10
11 if v_count >= 4 then
12 raise_application_error(-20000, 'Table can have no more than 4 rows');
13 end if;
14 end;
15 /
Trigger created.
test@ORCL> insert into rowlimit values(1);
1 row created.
test@ORCL> insert into rowlimit values(2);
1 row created.
test@ORCL> insert into rowlimit values(3);
1 row created.
test@ORCL> insert into rowlimit values(4);
1 row created.
test@ORCL> insert into rowlimit values(5);
insert into rowlimit values(5)
ERROR at line 1:
ORA-20000: Table can have no more than 4 rows
ORA-06512: at "TEST.LIMIT_ROWS", line 9
ORA-04088: error during execution of trigger 'TEST.LIMIT_ROWS'
test@ORCL> select count(*) from rowlimit;
Sorry, Eric, you failed to consider multiuser concurrency.
If you insert 4 rows, don't commit, and connect via another session, you'll be able to insert another 4 rows. Then commit both sessions, and the table has 8 rows.
You are absolutely correct.
Don't forget that a sequence can and will have gaps, so when you reach number 99999, you can't be sure that you have 99999 rows in your table.
I think you have two options:
1) add an extra column to your table and populate this in a before insert/update/delete row level trigger.
2) create a materialized view of select count(*) total_number from <base table>, make it fast refresh on commit, and add a check constraint on it saying (total_number <= 100000)
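A sketch of option 2; the table and constraint names are made up, and fast refresh on commit of an aggregate requires a materialized view log on the base table:

```sql
-- Log needed for fast refresh of an aggregate materialized view.
create materialized view log on base_tab
  with rowid
  including new values;

-- Row count maintained at every commit.
create materialized view base_tab_cnt
  refresh fast on commit
  as select count(*) total_number from base_tab;

-- Check constraint on the materialized view's container table.
alter table base_tab_cnt
  add constraint base_tab_cnt_chk check (total_number <= 100000);
```

Note that with this approach the insert itself succeeds; it is the commit that fails (the on-commit refresh hits the check constraint), so the limit is enforced at commit time rather than at insert time.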
Hope this helps.
Yeah, multi-user is going to kill ya ... only way to pull it off is to lock the table first ..
That is, serialize the process. Lock everyone else out until you finish.
Yeah, I know ... by nature we don't really want to do that ... sigh
But I think the only sure-fire way is to:
a) Lock table in exclusive mode
b) count the rows ...
c) allow the insert or not ...
Serializing it is the only way I can see pulling this off ... (restricting the size of the table via extents is the only other way, but as mentioned, it's not foolproof either ..)
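The serialized check can't live in a row-level trigger on the same table (it would have to read the mutating table), so one way to implement steps a) through c) is to funnel every insert through a procedure; a rough, untested sketch using the rowlimit table from above:

```sql
create or replace procedure add_row (p_value in number) as
  v_count number;
begin
  -- a) Serialize: only one session at a time gets past this point;
  --    others block until this transaction commits or rolls back.
  lock table rowlimit in exclusive mode;

  -- b) Count the rows.
  select count(*) into v_count from rowlimit;

  -- c) Allow the insert or not.
  if v_count >= 4 then
    raise_application_error(-20000, 'Table can have no more than 4 rows');
  end if;

  insert into rowlimit values (p_value);
  -- The table lock is released when the caller commits or rolls back.
end;
/
```

This only closes the race if every insert goes through the procedure, e.g. by revoking direct INSERT privilege on the table.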
Someone pointed out that sequences can have gaps, but any table that is to be limited to a specific number of rows is unlikely to have a high insert rate, so a nocache sequence, or a home-grown sequence (an autonomous transaction updating a one-row counter table), would probably work instead of a cached sequence.
It is also possible that all that is needed is an upper limit so a few holes caused by a small cache size would be OK. The sequence could cycle to allow reuse of the keys if the data will be deleted before the sequence cycles.
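A cycling, uncached sequence along those lines might look like this (the name is illustrative; nocache reduces gaps from discarded cached values but rollbacks can still consume numbers):

```sql
create sequence limit_seq
  start with 1
  maxvalue 99999
  nocache   -- don't pre-allocate values that could be lost on shutdown
  cycle;    -- restart at 1 so keys can be reused after old rows are deleted
```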
Other ideas that someone may want to take apart or play with.
Range partitioned table on fixed size key. No partition below key range nor one above.
Pre-assigning the keys 1 - 99,999 to the rows and then having the application find and use the next available key. All columns except the key would have to be nullable or have defaults provided. A transaction using select for update would be required to handle tracking the keys, but this is easy to code, and from experience this kind of routine works fine under a moderate concurrent user load.
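A sketch of the pre-assigned key idea (table and column names are invented); "inserting" becomes claiming the next free slot:

```sql
-- Assumes slot_tab was pre-populated with slot_key 1..99999, in_use = 'N'.
declare
  v_key number;
begin
  -- Claim the first free slot; FOR UPDATE blocks concurrent claimers
  -- until this transaction commits or rolls back.
  select slot_key
    into v_key
    from slot_tab
   where in_use = 'N'
     and rownum = 1
  for update;

  update slot_tab
     set in_use = 'Y',
         payload = 'some data'
   where slot_key = v_key;
exception
  when no_data_found then
    raise_application_error(-20001, 'All 99999 slots are taken');
end;
/
```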
Adding to the fun
-- Mark D Powell --