Unicode

156844
Member Posts: 18
We have two versions of the same database: one uses a Latin character set, the other AL32UTF8.
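For context, here is how I am confirming the character set on each side, using the standard NLS_DATABASE_PARAMETERS view (the Latin set named in the comment, WE8ISO8859P1, is just an example of the kind of value ours reports):

    -- Confirm each database's character set; we expect a single-byte
    -- Latin set (e.g. WE8ISO8859P1) on one and AL32UTF8 on the other.
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');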
Our client uses Microsoft's ADO stack, and we see a difference in payload size when retrieving data through the Oracle OLE DB provider.
My assumption is that this is due to the documented behavior of the OLE DB provider, which states:
OraOLEDB works in two modes, Unicode mode and non-Unicode mode. When the client character set is not a superset of the server character set, OraOLEDB automatically enables the Unicode mode. In this mode, OraOLEDB stores the data in its cache in the UCS2 encoding scheme. The user should ensure that the database's character set is UTF8 in order to prevent any data loss.
If the client character set is a superset of the server's, the provider operates in the non-Unicode mode. This mode provides slightly better performance as it does not have to deal with larger character buffers required by the UCS2 encoding.
Does anyone know a way to work around this? I am looking for consistent sizing of single-byte characters in both environments; my observation is that the UTF-16 representation doubles my payload.
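For what it is worth, the mode appears to be driven by the character-set portion of the client's NLS_LANG, since that is what the provider compares against the database character set. A sketch of the two situations, with illustrative values (I have not verified that matching the server character set on the client actually keeps the provider out of Unicode mode):

    REM Windows client environment (could also be set in the registry
    REM under the Oracle home); values below are illustrative.

    REM Latin database: WE8MSWIN1252 is a superset of WE8ISO8859P1, so
    REM OraOLEDB stays in non-Unicode mode and buffers stay 1 byte/char.
    set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252

    REM AL32UTF8 database: no single-byte client set is a superset, so
    REM the provider switches to Unicode (UCS2) mode and buffers double.
    REM Untested idea: setting AL32UTF8 on the client may satisfy the
    REM superset check without forcing the UCS2 cache.
    set NLS_LANG=AMERICAN_AMERICA.AL32UTF8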
Any insight appreciated - thanks.
This discussion has been closed.