As the error message says, you can't recover EE to XE directly. XE has some "hardcoded" things in it that also concern database-level objects, so the XE software can only run a database that was created by XE.
"Another option I thought of: if I were to do a reinstall of XE, and then data pump the info from the EE to XE? Would that work?"

That's the way to do it. And you could have saved the time, especially what you've invested in your step 5). ;)
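A minimal sketch of that Data Pump transfer (the schema name MY_APP_SCHEMA, connect strings, and file names are placeholders for your environment; DATA_PUMP_DIR must be an existing Oracle directory object):

```shell
# On the EE side: export the application schema(s) to a dump file.
expdp system@EE schemas=MY_APP_SCHEMA directory=DATA_PUMP_DIR \
      dumpfile=my_app_schema.dmp logfile=my_app_schema_exp.log

# Copy my_app_schema.dmp into the directory path on the XE server, then import it.
impdp system@XE schemas=MY_APP_SCHEMA directory=DATA_PUMP_DIR \
      dumpfile=my_app_schema.dmp logfile=my_app_schema_imp.log
```

Check the log files afterwards; objects that depend on EE-only features won't come across cleanly.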
Please note that XE ships with APEX, and restoring an older APEX version (or even HTMLDB) into XE using a dump probably won't work, or will at least cause you some severe headaches. I'd suggest using the recommended (and documented) way to migrate an APEX instance:
- transfer the schema(s) used by your HTMLDB workspace(s) using a dump
- export and import your workspaces
- export and import your applications
Please consult the APEX documentation that corresponds to your source and target APEX releases. If you don't want to upgrade your APEX to a recent version (right now), uninstall the APEX shipped with XE and install the desired version from the [url http://www.oracle.com/technetwork/developer-tools/apex/application-express/all-archives-099381.html]Prior Release Archives.
Another side note for dumping: Your old EE database instance probably didn't use the UTF-8 charset that XE uses. Depending on the data stored in your application schemas, you might need to convert the (default) byte-based boundaries for text fields to character-based definitions.
(Example: A column defined as "my_text VARCHAR2(100)" can store 100 bytes under the default semantics. In UTF-8, a character may need more than one byte of storage, so data that fit into a single-byte charset might exceed the limit when stored with the UTF-8 charset. You need to redefine the column as "my_text VARCHAR2(100 CHAR)", so the limit is 100 characters, no matter whether a character is represented by one byte or several.)
This might also affect your applications if they use PL/SQL code with byte-based variable definitions.
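Switching a column to character semantics is a simple ALTER TABLE; table and column names here are made up for illustration:

```sql
-- Widens the limit from 100 bytes to 100 characters (existing data is kept).
ALTER TABLE my_table MODIFY (my_text VARCHAR2(100 CHAR));

-- Alternatively, set character semantics for the session before running
-- your DDL, so new VARCHAR2 lengths default to CHAR:
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
```
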
"Just wondering, if the DB files are about 1 GB, should my dump file be 157 MB?"

I don't know, that depends on the actual amount of data stored in your database.
The file size is not a reliable measure for this. A simple example: You have an "order entry" schema where you started storing data 10 years ago. You initialized your data files with just a few megabytes and let Oracle extend them automatically, so they grew and grew and are about 1 GB today. Now you decide you don't need data older than two years in that schema and delete everything outdated. After you've done that, your data files will still be that 1 GB, though you only use about 20 percent of the space. Oracle doesn't shrink your data files automatically...
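If you wanted to reclaim such space on disk, you'd have to resize the file yourself; the file name and target size below are just examples (the statement fails if used blocks still sit beyond the new size):

```sql
ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/XE/users01.dbf' RESIZE 300M;
```
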
To get a good estimate, you can run something like

SELECT f.file_name,
       CEIL( f.bytes / 1024 / 1024 ) "SIZE_MB",
       CEIL( f.bytes / 1024 / 1024 )
         - FLOOR( SUM( NVL( fs.bytes, 0 ) ) / 1024 / 1024 ) "USED_MB",
       FLOOR( SUM( NVL( fs.bytes, 0 ) ) / 1024 / 1024 ) "FREE_MB",
       CEIL( f.maxbytes / 1024 / 1024 ) "MAX_MB"
  FROM dba_free_space fs, dba_data_files f
 WHERE fs.file_id(+) = f.file_id
 GROUP BY f.file_name, f.bytes, f.maxbytes, f.blocks, f.maxblocks, f.autoextensible
 ORDER BY f.file_name;

to see how much space is actually used in your data files. Note that this doesn't necessarily mean your dump will have that size; a few aspects (like compression) can cause a difference...