I assume you are using DG4ODBC. This converts the remote columns to an Oracle datatype depending on the information it gets from the ODBC driver being used. The ODBC driver will convert the datatype in the non-Oracle database to an ODBC datatype which it then passes to DG4ODBC.
So, for example, if the non-Oracle datatype is 'timestamp' and the ODBC driver converts this to the ODBC datatype SQL_TIMESTAMP, then DG4ODBC will convert this to the Oracle DATE datatype.
Similarly, if the DOUBLE datatype is converted to the ODBC datatype SQL_DOUBLE then DG4ODBC will convert this to the Oracle datatype FLOAT(53).
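As a rough illustration of that two-step mapping, here is a minimal sketch. The type codes are the standard ODBC ones from sql.h/sqlext.h, and the Oracle names follow the gateway documentation's mapping table; the function itself is just illustrative, not actual gateway code:

```c
#include <assert.h>
#include <string.h>

/* Standard ODBC SQL type codes (values as defined in sql.h / sqlext.h) */
#define SQL_DOUBLE          8
#define SQL_VARCHAR         12
#define SQL_TYPE_TIMESTAMP  93

/* Sketch of the gateway-side mapping described in
 * "Mapping ODBC Data Types to Oracle Data Types". */
const char *oracle_type_for(int odbc_type) {
    switch (odbc_type) {
    case SQL_TYPE_TIMESTAMP: return "DATE";
    case SQL_DOUBLE:         return "FLOAT(53)";
    case SQL_VARCHAR:        return "VARCHAR2";
    default:                 return "UNKNOWN";
    }
}
```

So whatever Oracle datatype you end up with depends entirely on what ODBC type code the driver reports, which is why the driver's answers matter so much here.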
This is detailed in the DG4ODBC documentation -
Oracle® Database Gateway for ODBC
11g Release 2 (11.2)
in the section -
Mapping ODBC Data Types to Oracle Data Types
Could you post the following information -
1. An ODBC trace and a DG4ODBC trace from the select of both columns, so we can see how the conversion is being handled?
2. What is the full name and version of the non-Oracle database being accessed, and of the ODBC driver being used?
3. What is the non-Oracle create statement for the table giving the problem?
4. What is displayed in SQLPLUS for -
- do not put a semi-colon at the end as it will give an error.
Edited by: mkirtley on Apr 19, 2013 2:17 PM
Yes, I am using DG4ODBC in this instance.
As to the non-Oracle DB information: this ODBC driver acts as a proprietary data access tool for aggregated data on an industrial backend, so there's actually no RDBMS here to work with. That's one of my challenges. But as I noted, this driver is being used by other client applications without issue. That certainly doesn't mean it's perfect, but I'm trying to track down exactly what dg4odbc is tripping up on so that I can feed it back to my Dev group and they can troubleshoot the driver.
Here's the table description that Oracle provides:
Name Null? Type
For some reason, Oracle thinks the columns are VARCHAR2 - which is probably a big part of the problem.
The trace file can be found at the link below.
I'm getting nothing for an ODBC trace, for some reason, for any of my ODBC connections. I'll keep digging on that issue, but I thought I'd share what I do have for now.
Edited by: crispy8888 on 19-Apr-2013 07:11
Thanks for the update and trace file.
The Dg4ODBC trace shows the columns as -
Entered hgopcda at 2013/04/19-10:37:51
Column:1(Timestamp): dtype:93 (TIMESTAMP), prc/scl:23/3, nullbl:1, octet:0, sign:1, radix:0
Exiting hgopcda, rc=0 at 2013/04/19-10:37:51
Entered hgopcda at 2013/04/19-10:37:51
Column:2(Value): dtype:8 (DOUBLE), prc/scl:15/0, nullbl:1, octet:0, sign:1, radix:0
DTY NULL-OK LEN MAXBUFLEN PR/SC CST IND MOD NAME
91 DATE Y 16 16 0/ 0 0 0 0 Timestamp
8 DOUBLE Y 8 8 53/ 0 0 0 0 Value
so it isn't clear why Oracle then displays these as VARCHAR2 columns.
According to the gateway documentation the DOUBLE datatype should be converted to FLOAT(53). Also, datatype 93 is SQL_TYPE_TIMESTAMP, not SQL_TIMESTAMP.
We need to see the ODBC trace also to see what datatypes it is passing.
If you are using the ODBC Administrator screen to start the ODBC tracing then make sure that you tick the box that says -
"Machine-wide tracing for all user identities"
Could you also change the Oracle trace level to 255 instead of DEBUG?
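For reference, that is set in the gateway init file, which by default is ORACLE_HOME/hs/admin/initdg4odbc.ora (the DSN name below is just a placeholder for your own):

```
# initdg4odbc.ora excerpt -- <your_odbc_dsn> is a placeholder
HS_FDS_CONNECT_INFO = <your_odbc_dsn>
HS_FDS_TRACE_LEVEL = 255
```

Restart the listener (or at least establish a fresh session over the database link) so the new trace level takes effect.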
Still working on the ODBC trace; I may have to check on Monday with the key developer on the ODBC driver as to why it's not logging when others are.
However, I've pasted the HS_FDS_TRACE_LEVEL = 255 trace file below in case it offers up any more clues. But like you, I don't think we'll know anything until we can get a decent ODBC driver trace.
FINALLY managed to get the ODBC trace. I was not aware of the bug/feature of Windows 7 that dumps your ODBC log to C:\Windows\Temp\SQL.log despite what you enter for a destination path.
ODBC trace for the issue above is here:
Here's the latest response from my internal Dev group on this issue:
This is dg4odbc using SQLDescribeCol to ask us about the Value column and we are telling it that it is a SQL_DOUBLE (8):
dg4odbcGATEWAY 2274-30dc EXIT SQLDescribeCol with return code 0 (SQL_SUCCESS)
UCHAR * 0x000000000012EA20 [ 5] "Value"
SWORD * 0x000000000012EB14 (5)
SWORD * 0x000000000012EB18 (8)
SQLULEN * 0x000000000012EA9C (15)
SWORD * 0x000000000012EB1C (0)
SWORD * 0x000000000012EB20 (1)
It clearly recognizes that it is retrieving a double column because it then binds the column to a SQL_C_DOUBLE pointer (at memory location 0x05537320):
dg4odbcGATEWAY 2274-30dc ENTER SQLBindCol
SWORD 8 <SQL_C_DOUBLE>
SQLLEN * 0x000000000553CC68
It then uses SQLFetch to repeatedly retrieve each row of data. This trace unfortunately does not show what we are copying into the bound location at each fetch, but there is no reason to think that it is not the 8-byte double value that it should be.
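For anyone unfamiliar with the call pattern that the trace shows, this is roughly what the gateway is doing. The sketch below is self-contained so the fetch is stubbed out: a real program would use sql.h/sqlext.h and a live statement handle with SQLBindCol/SQLFetch, and `fake_fetch` here merely stands in for the driver copying the 8-byte double into the bound buffer on each fetch:

```c
#include <assert.h>

/* Stand-in typedefs; real code gets these from sql.h */
typedef long  SQLLEN;
typedef short SQLRETURN;
#define SQL_SUCCESS 0
#define SQL_NO_DATA 100

/* Stand-in for the driver side of SQLFetch: writes the next
 * 8-byte double into the buffer previously bound via SQLBindCol. */
static const double rows[] = { 1.5, 2.25, 3.75 };
static int cursor = 0;

SQLRETURN fake_fetch(double *bound_value, SQLLEN *indicator) {
    if (cursor >= 3) return SQL_NO_DATA;
    *bound_value = rows[cursor++];   /* what the real driver copies at each fetch */
    *indicator   = sizeof(double);
    return SQL_SUCCESS;
}

/* Consumer loop, mirroring dg4odbc's bind-once, fetch-many sequence:
 * the buffer is bound as SQL_C_DOUBLE, then each SQLFetch refills it. */
double sum_values(void) {
    double value;   /* buffer bound as SQL_C_DOUBLE */
    SQLLEN ind;
    double total = 0.0;
    while (fake_fetch(&value, &ind) == SQL_SUCCESS)
        total += value;
    return total;
}
```

The point is that after SQLBindCol, the trace will never show the fetched values themselves; they are written silently into the bound buffer, which is why the trace "does not show what we are copying into the bound location at each fetch."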
I'm really scratching my head on this one. Are there any other lower-level diagnostics beyond the ODBC trace and the HS trace that could help here?
As a follow up to anyone experiencing the same issue, here's how we worked around it in our driver:
We observed from ODBC Tracing that when the Oracle Gateway for ODBC connects to the Microsoft Access ODBC driver it makes a series of calls to SQLGetTypeInfo before attempting to do any query.
The ODBC Trace from Oracle connecting to our ODBC Driver did not show any such calls. This was apparently caused by the fact that our ODBC Driver returned SQL_ERROR to a SQLGetInfo(SQL_MAX_PROCEDURE_NAME_LEN) call, indicating that it does not support procedures, which apparently caused Oracle to abort its initialization sequence altogether.
We adjusted the driver to “pretend” that it supports procedures and to return 256 to this call. After that adjustment was made, Oracle did in fact make the SQLGetTypeInfo calls that it apparently needs to do to learn type information about the driver. Subsequent queries successfully returned data with the correct types.
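Here is a sketch of the driver-side change. The names mirror the ODBC API (SQL_MAX_PROCEDURE_NAME_LEN is info type 33 in the ODBC headers and returns a SQLUSMALLINT), but this is a simplified stand-in for our driver's actual SQLGetInfo dispatcher, not its real code:

```c
#include <assert.h>

/* Stand-in typedefs; real code gets these from sql.h / sqlext.h */
typedef short          SQLRETURN;
typedef unsigned short SQLUSMALLINT;
#define SQL_SUCCESS 0
#define SQL_ERROR   (-1)
/* ODBC info type code, value 33 per the ODBC headers */
#define SQL_MAX_PROCEDURE_NAME_LEN 33

/* Before the fix we returned SQL_ERROR for this info type, and
 * dg4odbc aborted its initialization without ever calling
 * SQLGetTypeInfo.  After the fix we report 256, as if procedures
 * were supported. */
SQLRETURN my_SQLGetInfo(SQLUSMALLINT info_type, void *value_ptr) {
    switch (info_type) {
    case SQL_MAX_PROCEDURE_NAME_LEN:
        *(SQLUSMALLINT *)value_ptr = 256;  /* "pretend" procedures are supported */
        return SQL_SUCCESS;
    default:
        return SQL_ERROR;  /* other info types elided in this sketch */
    }
}
```

In other words, a single SQL_ERROR from SQLGetInfo early in the handshake was enough to make the gateway skip type discovery entirely, which is why every column came back as VARCHAR2.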