About 1: I think you can use the FILESIZE parameter to split the dump into multiple files and speed up the network copy.
2: At the schema level, no special care is needed, AFAIK.
3: No, as you mentioned, the PARALLEL parameter is not supported in SE.
4: Can you use transportable tablespaces (TTS)? SE allows you to import TTS. So you can export the TTS metadata, copy the datafiles to the new environment, and then import the TTS metadata.
Hope it helps
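The TTS flow above could be sketched roughly as follows (the tablespace name TS_APP, the directory object DATA_PUMP_DIR, and the datafile path are placeholders for your own names):

```shell
# On the source: make the tablespace read-only, then export only its metadata
sqlplus / as sysdba <<'EOF'
ALTER TABLESPACE ts_app READ ONLY;
EOF
expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=ts_app_meta.dmp \
      TRANSPORT_TABLESPACES=ts_app

# Copy the datafiles to the target host (OS copy, scp, etc.), then plug them in
impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=ts_app_meta.dmp \
      TRANSPORT_DATAFILES='/u01/oradata/TARGET/ts_app01.dbf'
```

One caveat: TTS moves the datafiles over as-is, so AFAIK it requires the source and target databases to have compatible charactersets.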
Yes, you will have to create a single export file. The import will only be successful if the schema is only using features that are available in the SE version - if you are using features available in the EE version (e.g. table partitioning), the import will fail. 25G is a relatively small size and the export/import process should run quickly, assuming you have sufficient hardware resources.
Thank you, Srini & Rod, for the comments.
I have tried expdp/impdp; the import failed during row loading for 4 tables, with not all rows being imported.
ORA-02374: conversion error loading table TAB1 ...
ORA-12899: value too large for column COMMENTS (actual: 4015, maximum: 4000)
I checked NLS_CHARACTERSET and it differs between the two databases, so there are two problems:
1. the charactersets are different
2. the table column data is too large for the target column
I'm wondering what my options are to overcome this issue (after some research on OTN):
1. use traditional export/import and try again
2. use ALTER TABLE to modify the COMMENTS column, then redo the actual dump file (with export/import)
3. check which tables have the issue and modify the column for each of them (this will take a lot of time, since I'd need to figure out the max column length of the failed data)
Any other options/tips from experts?
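Not a full answer, but to narrow down which columns are at risk before retrying, you could query the dictionary on the source for VARCHAR2 columns defined with byte semantics near the 4000-byte limit (the schema name APPUSER and the 3000 threshold are placeholders):

```sql
SELECT table_name, column_name, data_length, char_used
  FROM dba_tab_columns
 WHERE owner = 'APPUSER'
   AND data_type = 'VARCHAR2'
   AND char_used = 'B'        -- 'B' = byte semantics, 'C' = char semantics
   AND data_length >= 3000;   -- arbitrary threshold, adjust as needed
```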
The problem is that in the AL32UTF8 characterset, any character which is not a standard 'English' (ASCII) character will take up 2 to 4 bytes rather than just 1, and that results in the problem you see.
For example, an Ü character is only one byte in WE8ISO8859P1 but two bytes in AL32UTF8 (and some characters, such as €, take three).
You could solve this in a couple of ways:
1) change the characterset of your destination database to WE8ISO8859P1; then the data will fit happily.
2) change the table definitions to VARCHAR2(4000 CHAR) rather than VARCHAR2(4000 BYTE), i.e. use character semantics rather than byte semantics. You are then saying the data can be 4000 characters as opposed to 4000 bytes; for AL32UTF8 this makes a big difference, whereas for a single-byte characterset the two are essentially the same.
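For option 2, the per-column change would look something like this (table and column names are taken from the error message above):

```sql
-- Switch the column from byte to character semantics.  Caveat: without
-- 12c extended data types, VARCHAR2 is still capped at 4000 bytes, so a
-- value that expands to 4015 bytes in AL32UTF8 can still be rejected.
ALTER TABLE tab1 MODIFY (comments VARCHAR2(4000 CHAR));
```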
I think the exported DDL will be hardcoded with explicit BYTE/CHAR semantics, though it may not be; you could try redoing the import with the database parameter NLS_LENGTH_SEMANTICS set to CHAR, which means the default becomes character rather than byte semantics.
Personally, if it's not too much trouble, I'd probably recreate the new database with a characterset that matches the original; you may get into some client issues otherwise...
Your comments really do make sense and are on the right track to solve my issue.
I have used expdp/impdp for a test move of this schema to a different database, so I'm not sure whether I can change NLS_LENGTH_SEMANTICS to CHAR database-wide.
Is there a parameter that tells impdp to use NLS_LENGTH_SEMANTICS=CHAR just temporarily?
I don't think there is a specific Data Pump parameter for this, but it can be changed via ALTER SESSION, so you could put this statement:
alter session set nls_length_semantics='CHAR';
into a logon trigger.
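A minimal sketch of such a trigger (the trigger name and the importing account IMPUSER are assumptions; drop the trigger once the import is done):

```sql
CREATE OR REPLACE TRIGGER set_char_semantics
AFTER LOGON ON DATABASE
BEGIN
  -- Only flip the default semantics for the account running the import
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'IMPUSER' THEN
    EXECUTE IMMEDIATE q'[ALTER SESSION SET nls_length_semantics = 'CHAR']';
  END IF;
END;
/
```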
The setting really only affects object creation, so you could just set it database-wide during the import and then change it back. After an object is created, it retains whatever semantics it was created with.
What I'm not sure of, though, is what DDL is actually in the dumpfile; if the DDL explicitly says CHAR/BYTE, that will override the database setting. You could try an import to a SQLFILE to confirm what the DDL text is.
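Extracting the DDL without executing anything could look like this (the dump and output file names are placeholders):

```shell
# Writes the CREATE statements from the dump into ddl_check.sql; no objects
# are created and no rows are loaded
impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=schema.dmp \
      SQLFILE=ddl_check.sql INCLUDE=TABLE

# Then check whether the column definitions carry explicit semantics
grep -i 'VARCHAR2' ddl_check.sql | grep -iE 'BYTE|CHAR'
```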
Is there an Oracle function to find out:
1. what the max length of the COMMENTS column is in characters rather than bytes?
I'm trying to see if I can convert byte lengths to character lengths and modify the table column on the source before the import:
1. Import metadata only
2. ALTER TABLE and modify the COMMENTS column (e.g. to VARCHAR2(4000 CHAR), since 4000 bytes is the ordinary VARCHAR2 upper limit)
3. Import the data
But for step 2 I need to know the max column length in characters.
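On the max-length question: LENGTH() counts characters while LENGTHB() counts bytes, so a quick check on the source (table and column names from the error above) could be:

```sql
-- In a single-byte characterset the two values are equal; conversion to
-- AL32UTF8 only grows the byte length, never the character length.
SELECT MAX(LENGTH(comments))  AS max_chars,
       MAX(LENGTHB(comments)) AS max_bytes
  FROM tab1;
```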
Interestingly, I already have both the datatype documentation and the Oracle Metalink note open in other tabs.
I'm also trying to use the simplest solution without changing much in the actual target database, since only one schema move is involved in this task.
Source characterset is WE8ISO8859P1 and target characterset is AL32UTF8.
So how exactly do you propose to import all the data?
Sorry for my lack of knowledge in this topic; I have faced this for the first time in my 5 years of DBA career.
The target database is already a working database with many users, so recreating the DB is not an option for me.
That's why I was hesitant to change anything directly at the DB level...
The user I'm moving doesn't exist on the target, so I can play around with dropping and recreating the user.
OK... I think we should be able to fix the problem rows.
Is there any way to identify which rows cause problems during the import, since the logfile doesn't provide rowids?
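One way to find them on the source is to convert each value to the target characterset and compare byte lengths (this assumes the target is AL32UTF8 and that TAB1 has a primary key column called ID; adjust names to your schema):

```sql
-- Rows whose COMMENTS value would exceed 4000 bytes after conversion
SELECT id,
       LENGTH(comments)                        AS chars,
       LENGTHB(CONVERT(comments, 'AL32UTF8'))  AS bytes_after_conversion
  FROM tab1
 WHERE LENGTHB(CONVERT(comments, 'AL32UTF8')) > 4000;
```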