Hi there,
we have a company that stores Unicode data in Oracle in the following way:
A plain VARCHAR2 column in a non-Unicode database (the character set is actually WE8MSWIN1252) receives UTF-8-encoded data.
Because client and server have the same NLS_LANG setting, no conversion takes place, and the application runs fine.
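(For illustration, the client-side setting looks something like the line below; the language and territory parts are just placeholders, the relevant bit is that the character set matches the database character set.)

NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252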
(In my eyes, a clean way to set this up would be to use NVARCHAR2 columns for this, but that is not an option.)
But: how can I run queries against these columns without getting garbage for every non-ASCII character?
I imagine setting up views for that purpose, but I need the syntax for re-interpreting the UTF-8 data coming out of a VARCHAR2 column.
I tried the following:
SELECT CONVERT(column, 'WE8MSWIN1252', 'AL32UTF8') FROM table where ...
This gives me the right data on a client set up with code page 1252, but with the restriction of 8-bit output.
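Wrapping that in a view is roughly what I have in mind. A rough, untested sketch (my_table and my_column are placeholder names):

CREATE OR REPLACE VIEW my_table_1252_v AS
  SELECT
    -- reinterpret the stored bytes as UTF-8 and convert them to WE8MSWIN1252
    CONVERT(my_column, 'WE8MSWIN1252', 'AL32UTF8') AS my_column_1252
  FROM my_table;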
Now I would like a Unicode-capable application such as SQL Developer to be fully able to deal with the Unicode data, but I guess that, for this to work, I would need the database to deliver NVARCHAR2 output from the above query?
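My untested idea for getting NVARCHAR2 out of the database is something along these lines, re-interpreting the stored bytes as UTF-8 and returning them in the national character set (again, my_table and my_column are placeholders, and I am not sure this is the right approach):

SELECT UTL_I18N.RAW_TO_NCHAR(            -- interpret the raw bytes as UTF-8, return NVARCHAR2
         UTL_RAW.CAST_TO_RAW(my_column), -- take the stored bytes as-is, ignoring the column's declared charset
         'AL32UTF8') AS my_column_n
FROM my_table
WHERE ...;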
Any help and comments appreciated.
Tom