I have a PJC that we wrote to allow us to read and write files to the file system. We are now trying to move our forms to support UTF-8 to allow characters from Asian countries to show up properly. The forms are working fine and we can read and write Asian characters to and from the database.
So now for the problem.
In our code we call set_custom_property on the bean to write a chunk of text to the file. This works fine, BUT we are losing the non-ASCII characters in the process.
The string abc汉语/漢語123 ends up as abc????123. (There should be Chinese characters in the first one in case you can't see them.)
I have verified the string is correct just before the set_custom_property call (by showing it in a message). In the bean, in the setProperty method, I used System.out.println to dump the value parameter to the console.
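To make that console dump unambiguous, a small diagnostic like the sketch below (class and method names are mine, not our actual PJC) prints each character's Unicode code point. If the conversion is lossy, every Chinese character shows up as U+003F, the literal question mark, which proves the damage happened before the bean saw the string rather than in the console's own encoding.

```java
// Hypothetical diagnostic helper for the bean's setProperty method.
public class EncodingProbe {

    // Returns a dump like "U+0061 U+0062 U+6C49" for the given string,
    // iterating by code point so surrogate pairs are reported correctly.
    public static String dumpCodePoints(String value) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < value.length(); ) {
            int cp = value.codePointAt(i);
            sb.append(String.format("U+%04X ", cp));
            i += Character.charCount(cp);
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // A mangled string would print U+003F for every non-ASCII character.
        System.out.println(dumpCodePoints("abc?123"));
    }
}
```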
Basically, something between the call to set_custom_property and the bean's setProperty method is messing up the encoding.
Before you ask: we cannot convert to something like WebUtil, as that would require recoding hundreds of forms. I believe WebUtil can handle UTF-8 encoded characters, so there must be something I'm doing wrong for it to not work for us.
Sorry, forgot the version. We are using Forms 11g 22.214.171.124. The character set is AL32UTF8 in both Forms and the database (so it should be able to handle anything, and it does, since we can read and write Chinese characters to the database). Also, the Chinese characters were just some random text I got off Google.
Did some more digging and figured out that if I set the Runtime Parameters to "-Dfile.encoding=utf8" in the Java control panel, the applet runs and outputs UTF-8 as expected. The problem is I cannot possibly expect our users to make this change themselves (too many countries and languages to deal with). I've been looking at how we can set this for the users, with no luck so far.
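As an aside, the file-writing half of the bean does not have to depend on -Dfile.encoding at all: a writer constructed with an explicit charset produces UTF-8 regardless of the JVM default. This is only a sketch (not our actual PJC code, and it obviously cannot repair a string that was mangled before reaching the bean); I've kept it Java 5/6 compatible since the applet JRE may be old.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class Utf8FileWriter {

    // Writes text to the target file as UTF-8, independent of the
    // JVM's default file.encoding setting.
    public static void write(File target, String text) throws IOException {
        Writer w = new OutputStreamWriter(new FileOutputStream(target), "UTF-8");
        try {
            w.write(text);
        } finally {
            w.close();
        }
    }
}
```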
The problem isn't actually writing the file in UTF-8. The problem is that between the call to SET_CUSTOM_PROPERTY in the form and the setProperty method being called in the PJC the encoding is getting lost. As a result I simply cannot get the characters from the original string into the Java bean. I have a feeling the fix for this will have to come from Oracle.
You read my mind. :) I tried it quickly and it does work (we already have Base64 file writing to handle non-text files, so it was easy to test). I just need to work on the code to split the VARCHAR2s into a couple of parts to handle the increase in size of the encoded string, in case someone sends in a string that hits the VARCHAR2 size limit.
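For anyone following along, the idea is: Base64 of the string's UTF-8 bytes is pure ASCII, so it survives the set_custom_property call unmangled, and the bean decodes it back. The round trip is sketched below entirely in Java for clarity (the encode half would really be done in PL/SQL, typically with utl_encode/utl_raw, but check your own wrapper). I'm using java.util.Base64, which needs Java 8+; an older applet JRE would need a different decoder. On the size question: Base64 grows the data by roughly 4/3, and each Chinese character is 3 bytes in UTF-8, hence the need to chunk the VARCHAR2s.

```java
import java.nio.charset.Charset;
import java.util.Base64;

public class Base64Bridge {

    private static final Charset UTF8 = Charset.forName("UTF-8");

    // Form side (illustrated in Java): encode the UTF-8 bytes as Base64.
    // The result contains only ASCII characters.
    public static String encode(String text) {
        return Base64.getEncoder().encodeToString(text.getBytes(UTF8));
    }

    // Bean side: setProperty decodes the incoming value back to the
    // original string, Chinese characters intact.
    public static String decode(String base64) {
        return new String(Base64.getDecoder().decode(base64), UTF8);
    }
}
```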