
Oracle Database Express Edition (XE)


XE-18c: 12gig "user data" limitation

User_0RN30 (Apr 9 2020, edited Apr 15 2020)

XE-18c has a limitation on the amount of data you can put into it. Can somebody explain how this works, or point me to information about it?

We are trying to fit some data into XE-18c on a laptop, but are hitting the 12gig wall. We have been reducing our data footprint, and we now believe we are below the 12gig limit, but we are still running into the error.

Thanks for any help.

Comments

yoonas

Hi,

Here is the documentation on the limitations:

4 Licensing Restrictions

Regards,

Yoonas

Dude!

How does it work? Who knows. You'll have to ask the XE software developers. Deleting data does not reduce the allocated tablespace footprint; you also have to defragment and shrink the datafile. Your best option is probably to export the user tablespace, then create a new one of a smaller size, and import the data.
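A minimal sketch of the shrink route, with placeholder object and file names (SHRINK SPACE assumes the segment lives in an ASSM tablespace):

-- Compact the segment so its space can be released (placeholder names).
alter table my_big_table enable row movement;
alter table my_big_table shrink space cascade;

-- Then hand the space back at the datafile level; this raises ORA-03297
-- if any extent still sits above the new size.
alter database datafile '/path/to/users01.dbf' resize 2g;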

User_0RN30

Yea... I read that.

I am not sure what they mean by "user data"... or how it is measured.

yoonas
"how it is measured."

I am not aware of the exact calculation.

"but are hitting the 12gig wall."

How did you do your calculation?

My guess is that it counts the space used by user-created objects; I have not used XE in a production environment yet.

Updated: There is something here: Oracle XE Database Has Reached the 4GB Limitation Causing Login Failure Or Inability to Save Data (ORA-12952) (Doc ID 1347255.1)
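If the limit is in fact measured against user-created segments (an assumption, not something the docs spell out), a query along these lines would approximate the footprint:

-- Rough estimate of "user data": total segment size per
-- non-Oracle-maintained schema.
select owner, round(sum(bytes)/1024/1024/1024, 2) gb
from dba_segments
where owner not in (select username from all_users where oracle_maintained = 'Y')
group by owner
order by gb desc;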

User_0RN30

If you feed a large amount of data into the XE database using IMP or Data Pump, it will fail at some point with an ORA-xxxxx message. The ORA-xxxxx message is specific to the 12gig limit.

Mike Kutz

Your calculations are probably not including indexes (or the required free space).

I believe 18c XE includes the Advanced Compression feature.

You might want to try this idea (see the sketch below):

  • Import just the tables (and index DDLs... no data)
  • Alter the tables to compress for OLTP (or the appropriate syntax)
  • Import the data via SQL*Loader
    • I don't know, right off, if compression works with Data Pump. I'd try that first.
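A minimal sketch of the compression step, with a placeholder table name (ROW STORE COMPRESS ADVANCED is the 12c+ spelling of the older COMPRESS FOR OLTP, and it needs the Advanced Compression feature):

-- Compress future inserts/updates on a placeholder table (12c+ syntax).
alter table my_big_table row store compress advanced;

-- Existing rows are only compressed when the segment is rebuilt:
alter table my_big_table move row store compress advanced;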

My $0.02

MK

g.myers

This is what I use to judge where the space is used. Run it from the root container rather than one of the PDBs, as you might not see everything otherwise.

If you have filled up SYSTEM or SYSAUX, you are pretty much stuffed, as it is almost impossible to tidy either of those up once you've hit the space limit.

An EXPDP uses a chunk of SYSTEM space, so avoid doing that (if you want a full database snapshot, clone the PDB instead).

I'd also recommend disabling the audit trail (ALTER SYSTEM SET audit_trail=none SCOPE=SPFILE;) or at least switching it to write to the OS rather than into the database.
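Since audit_trail is a static parameter, either setting only takes effect after a restart; a quick sketch:

-- Stop writing audit records into the database (SYS.AUD$)...
alter system set audit_trail=none scope=spfile;
-- ...or keep auditing but send records to OS files instead:
-- alter system set audit_trail=os scope=spfile;

-- Restart for the change to take effect (SQL*Plus commands):
shutdown immediate
startup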

-- Lists every datafile in the CDB with its size and owning container,
-- and how far the total is over the 12 GB XE cap.
select d.con_id,
       case when instr(file_name,'/',1,6) > 0
            then substr(file_name, instr(file_name,'/',-1))
            else file_name
       end filename,
       d.file_id, d.tablespace_name,
       round(d.bytes/(1024*1024*1024),1) gb,
       round(d.bytes/(1024*1024)) mb,
       d.online_status, d.autoextensible,
       nvl(p.pdb_name,'CDB$ROOT') dbname,
       round(sum(d.bytes/(1024*1024*1024)) over (),1) - 12 tot_gb_over,
       file_name
from cdb_data_files d
     left outer join cdb_pdbs p on d.con_id = p.pdb_id
where tablespace_name != 'UNDOTBS1'
--and d.con_id = 1
order by d.con_id, gb desc;

