I think Oracle counts the space allocated to the datafiles rather than the space actually used by objects. Why don't you create the objects in the abiec tablespace, as it has enough free space? Or you could try to resize the datafiles in the abiec tablespace by issuing the following command:
alter database datafile '<datafile name>' resize <new size>;
Of course, you can only shrink the files down to the HWM (high water mark).
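If you want to see how far each file can actually be shrunk, a query along these lines shows the highest allocated extent per file (a sketch using the standard DBA_DATA_FILES / DBA_EXTENTS views; the per-file block size is derived from bytes/blocks):

```sql
-- Sketch: minimum size each datafile in ABIEC could be resized to,
-- i.e. the position of its highest allocated extent (the HWM).
SELECT f.file_name,
       ROUND(f.bytes / 1024 / 1024) AS current_mb,
       CEIL(NVL(e.hwm_block, 1) * (f.bytes / f.blocks) / 1024 / 1024) AS min_mb
FROM   dba_data_files f
       LEFT JOIN (SELECT file_id, MAX(block_id + blocks - 1) AS hwm_block
                  FROM   dba_extents
                  GROUP  BY file_id) e
         ON e.file_id = f.file_id
WHERE  f.tablespace_name = 'ABIEC';
```

Anything between min_mb and current_mb is space you can give back with the resize command above.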
...and if the HWM is too high for you, and you can't lower it enough by shrinking, you could try to "defragment" the existing tablespace by moving objects near the HWM into a different tablespace so you can lower it. Obviously, you have to gain a bit of free space somewhere. My suggestion would be the indx tablespace, assuming you have only index data in there. Indexes can be recreated, so you could drop them and drop that tablespace to gain those 1900M for some "defragmentation" to lower the HWM of your abiec tablespace - which has really grown quite large for an XE tablespace...
To "defragment", you can use statements like

ALTER TABLE <tablename> MOVE TABLESPACE <new_tablespace>;
ALTER INDEX <indexname> REBUILD TABLESPACE <new_tablespace>;

And after resizing, do the same backwards from <new_tablespace> to abiec again, if you want to keep things together.
To find the best candidates (the segments closest to the end of the tablespace), use a statement like

SELECT owner, segment_type, segment_name, block_id
FROM dba_extents
WHERE tablespace_name = 'ABIEC'
ORDER BY block_id DESC;
Oracle 11g XE is completely stupid, as it counts allocated but unused space against the quota.
Here is what I did:
->drop tablespace abiec including contents and datafiles;
->shutdown and restart db.
->CREATE TABLESPACE abiec BLOCKSIZE 8192 datafile SIZE 3500M autoextend on next 10m EXTENT MANAGEMENT LOCAL AUTOALLOCATE ONLINE PERMANENT SEGMENT SPACE MANAGEMENT AUTO;
->create user borabiec identified by ...;
->alter user borabiec default tablespace abiec;
->grant connect, resource to borabiec;
->alter user borabiec quota unlimited on abiec;
->alter user borabiec quota unlimited on indx;
imp file=/disk2/ablast.dmp userid=superuser/... fromuser=borabiec touser=borabiec ignore=y indexes=n statistics=none
=> import OK, no warnings.
=> If I look at the abiec tablespace: 8830 MB allocated, 8343 MB used.
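Those allocated/used figures can be checked with a query like this one (a sketch against the standard DBA_DATA_FILES / DBA_SEGMENTS views):

```sql
-- Sketch: allocated vs. used space per tablespace, in MB.
SELECT d.tablespace_name,
       ROUND(d.alloc_mb)        AS allocated_mb,
       ROUND(NVL(s.used_mb, 0)) AS used_mb
FROM   (SELECT tablespace_name, SUM(bytes) / 1024 / 1024 AS alloc_mb
        FROM   dba_data_files
        GROUP  BY tablespace_name) d
       LEFT JOIN
       (SELECT tablespace_name, SUM(bytes) / 1024 / 1024 AS used_mb
        FROM   dba_segments
        GROUP  BY tablespace_name) s
         ON s.tablespace_name = d.tablespace_name
ORDER  BY d.tablespace_name;
```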
Then, for all tables in tablespace abiec: enable row movement, shrink space compact, shrink space cascade (there are indexes for primary keys, foreign keys, and constraints), and shrink space:
set pages 200
set head off
select 'alter table '||owner||'.'||table_name||' enable row movement;' from dba_tables where owner='BORABIEC';
select 'alter table '||owner||'.'||table_name||' shrink space compact;' from dba_tables where owner='BORABIEC';
select 'alter table '||owner||'.'||table_name||' shrink space cascade;' from dba_tables where owner='BORABIEC';
select 'alter table '||owner||'.'||table_name||' shrink space;' from dba_tables where owner='BORABIEC';
and execute results.
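To avoid copy-pasting the generated statements by hand, the selects above can be spooled into a script and run in one go (a sketch; the spool path /tmp/shrink_borabiec.sql is arbitrary):

```sql
SET PAGESIZE 0 HEADING OFF FEEDBACK OFF
SPOOL /tmp/shrink_borabiec.sql
SELECT 'alter table '||owner||'.'||table_name||' enable row movement;'
  FROM dba_tables WHERE owner = 'BORABIEC';
SELECT 'alter table '||owner||'.'||table_name||' shrink space cascade;'
  FROM dba_tables WHERE owner = 'BORABIEC';
SPOOL OFF
@/tmp/shrink_borabiec.sql
```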
At this point, if I look at the abiec tablespace: 8830 MB allocated, 3001 MB used. Oracle used more than twice the space needed to store the data (the tablespace didn't even need to be extended).
At Oracle training, I was told that when importing into an ASSM tablespace, the storage clause of the objects inside the dmp file is ignored. It seems that either this is not true (I will be able to check whether it causes the problem by re-exporting and then re-importing), or Oracle uses twice the space needed and you can't reclaim it afterwards: the file size cannot be reduced, and you cannot move all tables to another tablespace because you don't have 3 GB available. Real bullshit. I should be able to do it table by table; luckily I have fewer than 50.
If you re-export the tables after shrink space compact / shrink space cascade / shrink space, drop the tablespace, re-create it, and re-import, the tablespace size is 3.5 GB and not 9 GB.
So importing into an ASSM tablespace does take the storage clause of the .dmp file into account; you have to shrink, export, drop, create, and import to shrink the tablespace.
Furthermore, even if the space is NOT actually used (only 5 GB used after the shrinks), XE tells you that you exceed the 11 GB quota.
Well, that's not very surprising to me. Either there was a misunderstanding in your training, or somebody really made a mistake there.
But you could use another approach that you outlined: shrink before creating the initial dump. I'd recommend this anyway, at least if the dump is not just for regular backup purposes.
I understand your anger about that way of enforcing the 11 GB limit, and that you'll have to put in some effort to regain assignable tablespace capacity.
But after all, I personally wouldn't want the database to do a limit check on every row. Imagine how that would slow down performance! A file size limit, on the other hand, only needs to be checked when the file size changes, which happens comparatively seldom. Therefore, I think this quota approach is quite reasonable. And I think you can come quite close to actually having 11 GB if you keep some restrictions in mind.