I would think that this statement,

result = new String(strData.toString().getBytes("Shift-JIS"), "UTF-8");

says to convert "strData" into bytes using the Shift_JIS character encoding. Once in byte format, it's not as if UTF-8 will understand it, since it was encoded with something else. Thus when you create a new String object and say "use UTF-8 to decode the bytes," you end up getting garbage.
Could you tell me a solution for this issue?
Your solution would be to get rid of that stringFormat(...) method entirely. It has no meaning. Either you work with a Java String object, or you work with the encoded bytes. Encoding a string into bytes and then decoding it back into the same String is pointless.
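A minimal sketch of why that round trip produces garbage (the sample text and class name here are mine; "Shift_JIS" is the JDK's canonical name for the Shift-JIS charset):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingRoundTrip {
    public static void main(String[] args) {
        String strData = "日本語"; // sample Japanese text

        // Encode the characters as Shift_JIS bytes...
        byte[] sjisBytes = strData.getBytes(Charset.forName("Shift_JIS"));

        // ...then wrongly tell the decoder those bytes are UTF-8.
        String result = new String(sjisBytes, StandardCharsets.UTF_8);

        // The Shift_JIS byte sequences are invalid UTF-8, so the decoder
        // substitutes replacement characters and the text is destroyed.
        System.out.println(strData.equals(result)); // prints false
    }
}
```

The String itself was never the problem; only the pointless encode/decode cycle corrupts it.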
A Java String object is perfectly capable of representing every Unicode character in existence. This includes the complete set of Japanese characters. That is, instead of doing

Phrase colm_2 = new Phrase(report.stringFormat(strCatgdes),font);

what happens if you do

Phrase colm_2 = new Phrase(strCatgdes,font);

Edited by: Maxideon on Dec 30, 2008 10:41 PM
Wait a minute... Are you trying to identify the Japanese characters in "strCatgdes"?
I'm getting it from the table using the result set below.
strCatgdes = rs.getString("CC_DES");
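If the string is already corrupted at this point, the fix belongs at the JDBC connection rather than in iText. As a hypothetical illustration only (these URL parameters are specific to MySQL Connector/J; other drivers use different settings, and the host, database, and credentials here are placeholders since the original post does not say which database is used):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionSketch {
    public static void main(String[] args) throws Exception {
        // useUnicode/characterEncoding are MySQL Connector/J properties;
        // they tell the driver how to decode bytes coming from the server.
        String url = "jdbc:mysql://localhost/mydb"
                   + "?useUnicode=true&characterEncoding=UTF-8";
        Connection conn = DriverManager.getConnection(url, "user", "password");
        // With a correctly configured connection, rs.getString("CC_DES")
        // returns a proper Unicode String with no further conversion needed.
        conn.close();
    }
}
```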
While viewing the data, I'm seeing the correct Japanese characters. But when displayed in the PDF, it gives "?????" characters.
As the others said, there is no need to convert the String into another encoding. Strings in Java are already Unicode. You have to use a Font that has glyphs for the Japanese characters, and pass that Font to, for example,

new Paragraph(String string, Font font)
new Phrase(String string, Font font)

See [http://itextdocs.lowagie.com/tutorial/fonts/getting/index.php] for examples.
At present I'm using only iText-2.0.1.jar. Do I need to use both iTextAsian.jar and iTextAsianCmap.jar?
Thanks & Regards
dhilip wrote: Do I need to use both iTextAsian.jar and iTextAsianCmap.jar?

I don't know. I don't use CJK but other fonts. But I've googled for you and found a concrete example: it seems only iText.jar and iTextAsian.jar are needed. The iTextAsianCmaps.jar appears to contain additional character maps.
I still have the problem of displaying the Japanese characters in the PDF.

This is my code:
BaseFont bf = BaseFont.createFont("HeiseiKakuGo-W5", "UniJIS-UCS2-H",BaseFont.EMBEDDED);
Font font1 = new Font(bf, 9);
Phrase colm_2 = new Phrase(strCatgdes,font1);
I tried different Japanese fonts (HeiseiMin-W3, HeiseiKakuGo-W5, and KozMinPro-Regular) with the encodings UniJIS-UCS2-H and UniJIS-UCS2-HW-H.
Actual "strCatgdes" value is 日本語. But its displaying like this "æ��¥æ��¬èª��" in pdf.
Could you please tell me the suggestion, if you know?
Thanks in Advance
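The "æ..." garbage quoted above is itself a diagnostic clue: it is what the UTF-8 bytes of 日本語 look like when decoded as a single-byte Latin charset, which suggests the string was damaged before iText ever saw it. A minimal sketch reproducing, and reversing, that kind of damage, assuming windows-1252 as the wrong charset (the class name and the charset choice are my assumptions, not from the original thread):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class MojibakeDiagnosis {
    public static void main(String[] args) {
        String original = "日本語";
        Charset latin = Charset.forName("windows-1252");

        // UTF-8 bytes misread as windows-1252: the classic "æ..." garbage.
        String mojibake = new String(original.getBytes(StandardCharsets.UTF_8), latin);
        System.out.println(mojibake);

        // If no bytes were dropped along the way, undoing the mistake
        // (re-encode as windows-1252, re-decode as UTF-8) recovers the text.
        String repaired = new String(mojibake.getBytes(latin), StandardCharsets.UTF_8);
        System.out.println(original.equals(repaired)); // prints true
    }
}
```

If you see this pattern in your output, the fix is usually not in the font setup at all but in how the string was read into Java in the first place (for example, the JDBC driver's connection encoding).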