I was pleasantly surprised by my first real interaction with JavaFX Script. Of course, I had to test at least one of its i18n features, so I picked something simple: Unicode text in the script.

My goal was simple: find out whether I could successfully create a simple "Hello, world" using non-ASCII characters. I picked Japanese since I'm familiar with that language. The translation's accuracy doesn't matter much as long as it produces hiragana and kanji text, so I typed my first script into NetBeans after creating a JavaFX project:

import javafx.ui.*;

Frame {
    title: "Hello World JavaFX"
    width: 200
    height: 50
    content: Label {
        text: "こんにちは、世界！"
    }
    visible: true
}

Then I saved this script as HelloWorld.fx. I suspected that this wouldn't quite work, knowing in advance that NetBeans would save the file using my host system's native charset. In my case, that's ISO-8859-1, or the Mac OS X approximation called MacRoman or some such -- the silly deviations from ISO-8859-1 across different OSes are another topic entirely. In the editor the text looked OK so far -- I hadn't reloaded the file from the drive -- but I quietly anticipated that the on-screen version and the file-system copy were already slightly out of sync. Poor NetBeans...it just blindly tried to save those Japanese characters in a charset that can't represent them, and that's just not going to work.

Running the script confirmed my thoughts:


Fortunately, I know exactly what's wrong: the charset conversion when I save the file. Converting the Japanese text to ISO-8859-1 loses data; see all those '?' marks in the above image. That's the giveaway. I have to escape the non-ASCII Japanese characters in my script. It's a pain, a big pain, but I'm used to this sort of problem in both property files and source code files. So I just escape the characters as shown here:

    content: Label {
        text: "\u3053\u3093\u306b\u3061\u306f\u3001\u4e16\u754c\uff01"
    }
Now I try again. Save, recompile, run, and...nice.
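The data loss behind those '?' marks is easy to reproduce in plain Java, independent of JavaFX Script or NetBeans. This small sketch (the class name is mine, for illustration) encodes the Japanese greeting with ISO-8859-1 and decodes it right back, simulating a save-and-reload round trip through a charset that can't represent the characters:

```java
import java.nio.charset.StandardCharsets;

public class CharsetLoss {
    public static void main(String[] args) {
        String hello = "こんにちは、世界！";

        // getBytes(Charset) silently substitutes the charset's replacement
        // byte -- '?' for ISO-8859-1 -- for every unmappable character.
        byte[] saved = hello.getBytes(StandardCharsets.ISO_8859_1);

        // Decoding those bytes gives back only question marks; the
        // original characters are gone for good.
        String reloaded = new String(saved, StandardCharsets.ISO_8859_1);
        System.out.println(reloaded); // prints "?????????"
    }
}
```

No exception is thrown anywhere in that round trip, which is exactly why editors get away with it: the substitution happens silently at save time.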


This wasn't a complete test of JavaFX Script's ability to display Unicode characters, but it does clarify a few things for me. A script's character encoding has the same limitations as a Java resource file. For now at least, the easiest way to get non-ASCII characters into a JFX script is to escape them.

It's not easy to remember the Unicode codepoint value for every character; that's simply not the way people type characters into any editor. So you'll probably have to use the JDK tool native2ascii to convert your source file to an escape-encoded file. You need to use an editor that can save a file in a charset that supports the characters you're trying to enter. Sounds reasonable, right? You'd be surprised how many tools don't pay attention to charsets, and they rarely warn you when you save a file using a charset that doesn't support what you've tried to type. I suspect NetBeans saves to my host charset, not UTF-8 or Shift_JIS, either of which can represent these Japanese characters. I'll check more on this to find out if there's a way to tell NetBeans that a particular file should be in a specific charset...or better yet, maybe there's a way to have NetBeans save the file using \uXXXX encoding for all non-ASCII characters. Hmmm...maybe that's a plugin opportunity for someone. But it's definitely a problem for now, and if it's just a limitation of my NetBeans knowledge, I'll follow up on this in a subsequent blog.
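If you're curious what native2ascii actually does to your text, the core transformation is tiny. Here's a rough sketch in plain Java -- my own hypothetical helper, not the JDK tool itself, which also handles I/O, charset options, and a reverse mode:

```java
public class EscapeSketch {
    // Emit ASCII characters as-is and everything else as a \uXXXX escape,
    // the same convention native2ascii and property files use.
    static String toAsciiEscapes(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c < 128) {
                out.append(c);
            } else {
                out.append(String.format("\\u%04x", c));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(toAsciiEscapes("こんにちは、世界！"));
        // prints: \u3053\u3093\u306b\u3061\u306f\u3001\u4e16\u754c\uff01
    }
}
```

Run that on the greeting and you get exactly the escaped string I pasted into the Label above -- which is the whole trick: once the file is pure ASCII, no charset conversion can damage it.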

For now I'm just happy to report that you can embed Unicode characters in your JavaFX Script files. Just remember to encode them with the native2ascii tool first.

I hate to add this to my blog, but I feel compelled. I want everyone to know that I use NetBeans every single workday. I like it, and I'm not picking on it. This problem with encodings isn't specific to NetBeans, and I'm not blaming NetBeans. I hate having to explain that...but I'm trying to head off the inevitable backlash if I don't make clear that I'm not being negative.