
NoSQL DBAs on the Rise

unknown-1040115, Oct 7 2015 — edited Oct 7 2015


by Przemyslaw Piotrowski

NoSQL and Big Data solutions are taking the world by storm and are increasingly being pushed into corporate environments to improve time to market and increase development agility.

With the advent of new breeds of data management systems built on principles other than relational algebra, doubt has grown around the necessity and role of the Database Administrator. Even though most of these new systems are fully dependent on development teams and maintenance efforts look redundant at first, once production demands are considered it becomes crystal clear that Database Administrators need to remain a vital part of the enterprise development chain. 24/7 availability, full transactional consistency and a reliable recovery strategy cannot be dismissed when even remotely considering stepping away from a relational database management system, whether transactional or warehouse-type. Read on to discover the challenges and considerations around core database tasks, data model design and the data itself, and to stay up to date on the ongoing technology drift toward NoSQL and Big Data solutions.

Leading market research companies anticipate exponential growth of NoSQL and Big Data systems, with an increasing number of successful pilots in corporate environments achieving extreme capacity and massive throughput. While the market share of these systems is still largely uncharted, it should be expected that they will eventually begin to supplement, and sometimes supersede, traditional database technology. Table 1 depicts today's data management ecosystem, with the naming conventions used throughout this article.

Table 1. General classification of data management systems.

Although the terms "NoSQL" and "Big Data" are frequently used interchangeably, the distinction is evident in their areas of usage, their usefulness to business strategy and their ability to orchestrate highly concurrent traffic. Broadly speaking, NoSQL fits more into the OLTP space, while Big Data is more warehouse-like.

Data Dawn

The relational database and Structured Query Language (both invented in the early 1970s) were quickly picked up by business because they offered a unique ability to abstract the logical data layout from its physical location on disk. This made database development and usage much simpler and a lot more efficient. Donald D. Chamberlin and Raymond F. Boyce, the authors of SQL, considered Edgar F. Codd's relational model to be the simplest possible general-purpose data structure and were able to leverage that simplicity to design a flexible querying language that remains vital to the majority of today's IT systems. Relational algebra made even more sense in an epoch when memory was scarce, storage was barely measured in megabytes, and data had to be heavily normalized to keep storage costs acceptable.

Early NoSQL solutions were dubbed as having a near-zero deployment barrier. Just like databases, data stores are persistent, highly available, horizontally scalable and, most of the time, consistent – with some caveats. The need for such solutions originally came from mammoth Internet companies that require massive bandwidth, also – or perhaps especially – on the database tier.

Likewise, Big Data emerged from the needs of those very same Internet behemoths, originating from the idea of storing massive amounts of user data for tracking, personalizing content and offering a bespoke user experience. Such immense data volumes have long been familiar to research fields such as genome analysis, earthquake simulation and weather forecasting; however, the recent information explosion has made heavy data consumption more appealing and affordable to businesses.

Many Ways to Say No

With no central authority or standards body, NoSQL solutions were bound to fragment: each of these systems was initially designed as in-house technology and eventually released into the wild as an open source project. Implemented in different languages, with different objectives, different availability requirements and incompatible APIs, they can still be divided into the four major kinds depicted in Table 2.

Table 2. Major classifications of NoSQL data stores.

Simple as they seem at first, all these data stores rely on a close relationship with the application, with the API tightly integrated into the app. Although there have been attempts to come up with a query language that would set common ground for NoSQL solutions, so far these have received limited adoption from the community.

When running a relational database, a lot of time is invested in data model design, the architecture of supplemental structures such as constraints and indexes, and analysis of future data access patterns. That investment pays off later with the ability to run a number of different workloads, of either OLTP or DW character, against that data. Even if much of that thought also goes into the design of NoSQL systems, they are still deployed at more aggressive rates than traditional RDBMS solutions. What the RDBMS addressed with Data Manipulation Language (a subset of SQL), NoSQL systems must handle case by case, by implementing code or providing a query abstraction native to each feature or engine.

Typical use cases mapped from the RDBMS world to the NoSQL environment are table joins, indexes and partitioning, which usually require specific implementation within the application layer. For SQL developers this seems like overkill, as they are used to an almost infinitely flexible tool that in the worst case may run slowly but rarely lacks functionality.
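Purely as an illustration, the sketch below shows what a simple customer-to-orders join looks like when it has to be hand-written against a bare key-value API; plain HashMaps stand in for the data store here, and every name is invented for the example.

    import java.util.*;

    // Hypothetical key-value "tables": customer id -> name, customer id -> orders.
    public class ClientSideJoin {
        public static void main(String[] args) {
            Map<String, String> customers = new HashMap<>();
            customers.put("c1", "Acme Corp");
            customers.put("c2", "Globex");

            Map<String, List<String>> ordersByCustomer = new HashMap<>();
            ordersByCustomer.put("c1", Arrays.asList("order-100", "order-101"));
            ordersByCustomer.put("c2", Collections.singletonList("order-200"));

            // The join a SQL engine would perform for us now lives in application
            // code: iterate one side, look up the other by key, combine the rows.
            for (Map.Entry<String, String> customer : customers.entrySet()) {
                List<String> orders =
                    ordersByCustomer.getOrDefault(customer.getKey(), Collections.emptyList());
                for (String order : orders) {
                    System.out.println(customer.getValue() + " -> " + order);
                }
            }
        }
    }

In SQL the same result is a single SELECT with a JOIN; here, every new access pattern means more application code to write, test and maintain.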

Table 3. Structured Query Language subsets.

Due to the immense complexity these sub-languages hold, most NoSQL engines chose to skip DDL, DCL and TCL functionality altogether, leaving only basic set and get operations available to the user. So even if the initial software development process can seem more agile and lean, further enhancements and continuous maintenance will eventually drift towards 'relational' budgets. And because these emerging technologies grow very fast, at some point they will face the need to break backwards compatibility, leaving applications on unsupported releases.
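To make the "set and get only" point concrete, here is a minimal sketch of the read-modify-write cycle that replaces a declarative UPDATE when all the store offers is get and put; the names are invented and no concurrency control is shown.

    import java.util.HashMap;
    import java.util.Map;

    public class ReadModifyWrite {
        public static void main(String[] args) {
            // In SQL: UPDATE accounts SET balance = balance + 50 WHERE id = 'a1'
            Map<String, Integer> accounts = new HashMap<>();
            accounts.put("a1", 100);

            // With only get and set, the application does the work itself:
            Integer balance = accounts.get("a1");    // get
            if (balance != null) {
                accounts.put("a1", balance + 50);    // set
            }
            // Nothing here is transactional; concurrent writers, retries and
            // partial failures are also left for the application to handle.
            System.out.println(accounts.get("a1")); // prints 150
        }
    }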

Are We Big Yet?

The boundaries where “traditional” data ends and Big Data begins are constantly being pushed out by ever-growing storage capacity. Data that was considered big 10 years ago can now fit onto a portable hard drive. Many advisory and research companies have tried to come up with ballpark figures for how much information exists in digital form today and how much is generated each year, but the estimates oscillate from hundreds of exabytes to zettabytes.

Claims are made that such amounts no longer fit into relational databases and must be transitioned to different paradigms. One such substitute is map-reduce, a breakthrough yet not-so-new algorithm built on dividing data into independent chunks and processing them in parallel on commodity-hardware clusters. In its raw form, map-reduce is far from usable, especially for business, so shortly after its introduction it spawned a number of wrappers, including SQL-compatible ones. Years on, Big Data and relational databases get along very well, with two-way compatibility through adapters and extensions. A data analyst will often sit with a data architect or DBA to design ETL processes that span multiple data layers and diverse data management systems. End users are also more likely to connect to a database staging endpoint than to query datasets directly.
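The classic illustration of the map-reduce idea is a word count: the map step turns each chunk of input into key-value pairs and the reduce step folds the values for each key together. A single-process sketch follows; the payoff of the real thing comes only when the chunks are spread across a cluster of machines.

    import java.util.*;
    import java.util.stream.*;

    public class WordCountSketch {
        public static void main(String[] args) {
            // Each string stands in for an independent chunk of input data.
            List<String> chunks = Arrays.asList(
                "big data is big",
                "nosql and big data");

            // Map: split each chunk into words; Reduce: count occurrences per word.
            Map<String, Long> counts = chunks.stream()
                .flatMap(chunk -> Arrays.stream(chunk.split("\\s+")))
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));

            System.out.println(counts); // e.g. {and=1, big=3, data=2, is=1, nosql=1}
        }
    }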

Recent research makes it evident that corporate Big Data most often starts small: new data projects grow into datasets rather than convert existing databases. This usually attracts less attention from support groups and in turn eventually requires “post-engineering” to prove its value.

The Database Administrator

Beyond everything else that DBAs do, keeping data accessible to users remains a vital part of their job. If they fail to deliver on service level agreements, the business suffers and customers get hit. That availability is far more than just redundancy, because it encompasses the means of delivering certain service levels such as performance, throughput and capacity. Frequently, the DBA is the meeting point between the storage, networking and systems teams and needs to make the database resilient to degradation in any of those areas. A properly designed database system will bounce back not only from power surges or network outages but also from natural disasters or human error. It takes a combination of skills to effectively provision and maintain such an environment, and it took decades of professional engineering to reach this point.

On the other hand, NoSQL builds upon the principles of horizontal scaling and redundancy to make capacity a linear function. However, this means every new query hitting the data store requires new lines of code. More importantly, security is an essential aspect of Big Data and NoSQL systems that often goes unnoticed despite its tremendous significance to enterprise operations. Most data stores and datasets are protected only in the sense of being isolated on a network; vendors today uniformly recommend that users remain cautious and keep them tightly locked down, as a security layer has not really been built in.

In the rapidly changing landscape of data management solutions it will become increasingly important to carry as much relational skill and experience as possible into the architecture and engineering of data stores and datasets. How to handle a Sarbanes-Oxley audit, how to handle a rolling upgrade, how to perform a near-zero-downtime migration, how to recover from a failover – that knowledge is already part of today's DBA know-how. It just needs stretching out to the newly introduced data management disciplines.

Table 4 lists the different IT roles around database operations and support, and how they align in the enterprise.

Table 4. Enterprise responsibility matrix based on RDBMS architecture and common role separation pattern (● direct, ○ indirect, blank = none).

Data is King

Observing the changing landscape of data management systems since 2000, it's a fairly safe bet that individual technologies will lose importance while technology skills will thrive as the data management software portfolio expands and diversifies. Although NoSQL and Big Data offer entirely new techniques for dealing with information, their proposition is still far from complete, and this is where the DBA can fill the gap.

This in turn requires a major change in perspective and mindset for current relational technology experts. Techniques, algorithms and patterns common to NoSQL and Big Data can be transplanted into database systems to improve availability, capacity and performance. It has already started: map-reduce has been used in data warehouses for many years, just under different terminology. Similarly, the key-value access patterns of data stores are not at all new to databases, which have been able to work asynchronously for decades. What did change was the continuous breakthrough in hardware technology tied to evolving requirements for data availability and size. On the verge of Moore's Law breaking down and just ahead of the coming in-memory revolution, it may turn out that all data management systems and skills must be thrown at problems, and only the combination of them wins.

About the Author

Przemyslaw Piotrowski is a database expert with experience in the design, development and support of large-scale computer systems. Przemyslaw is an Oracle Database Certified Master specializing in High Availability, Performance Tuning and Emerging Technologies.

--------

1- Estimate based on job share as of May 2014. Source: Indeed, Monster, LinkedIn

Comments

Gary Graham-Oracle

At that spot in the code, SQL Developer is trying to get the connection type for that specific connection definition, as read from the connections.xml file under your user settings for 4.2 EA2, and the Java object for that entire definition is null.  Somehow that may have gotten corrupt.

Have you tried...

1. Deleting that particular connection definition, then defining it again?  All through the UI. Best never to edit connections.xml.

2. Deleting all connection definitions, then importing with the same export file you used for the import into 4.1.5?

The only other thing I can think of is some illegal character in the connection name that slipped in but gets caught upon reading the definition.  That seems very unlikely, though.

2621199

Hey Gary,

1. Did that with various combinations.

2. Deleted:

    SQL Developer 4.2 software

    C:\Users\<user>\AppData\Roaming\SQL Developer\system4.2.0.16.356.1154

    C:\Users\<user>\AppData\Roaming\sqldeveloper\4.2.0

   Unzipped (installed) a clean SQL Developer 4.2

   Configured SQL Developer 4.2 (copied msvcr100.dll, configured Oracle client & tnsnames paths in preferences)

   Imported the previous connections export.

  Got the exact same java.lang.NullPointerException error window, again.

There are no illegal or non-displayable characters in the connections export file or connection.xml.

The connections export file and

C:\Users\<users>\AppData\Roaming\SQL Developer\system4.2.0.16.356.1154\o.jdeveloper.db.connection.13.0.0.1.42.161121.1801\connection.xml

are the same except for export parameters and passwords (I compared them).

I'm open to suggestions.

Gary Graham-Oracle

Try these standard debugging methods...

1. Modify sqldeveloper.conf, removing the "non" from IncludeConfFile sqldeveloper-nondebug.conf

2. Open a Cmd window and launch ...\sqldeveloper\sqldeveloper\bin\sqldeveloper.exe from there

Perhaps you will get some additional error messages in the console window or in the View > Log tabs that precede that Java exception.
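In other words, the relevant line in sqldeveloper.conf changes roughly like this (exact paths depend on the install):

    Before:  IncludeConfFile  sqldeveloper-nondebug.conf
    After:   IncludeConfFile  sqldeveloper-debug.conf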

2621199

1. done

2. I have the 1002 line log.  I attached it to this thread entry.

SOL21D is the DB that I connected to successfully in this session.

PMIBASL10 is the problem DB in this session.

Gary Graham-Oracle

I am not familiar with Data Guard, but the difference between the SOL21D and PMIBASL10 connection attempts seems to be that Oracle is expecting a Data Guard Broker (or perhaps some feature of Data Guard?) to be enabled when connecting to PMIBASL10.  The log contains lines like...

FINE    1430    0    oracle.dbtools.raptor.utils.DatabaseFeatureRegistry$QueryFeature    ORA-00439: feature not enabled: Data Guard Broker at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)

FINE    1428    0    oracle.javatools.db.AbstractDBObjectProvider    fire provider pmibasl10   MiBAS LITE opened

FINE    1427    0    oracle.javatools.db.DBObjectProviderFactory    Provider created using key IdeConnections%23pmibasl10+++MiBAS+LITE: oracle.javatools.db.ora.Oracle12c

FINE    1426    0    oracle.javatools.db.AbstractDatabase    pmibasl10   MiBAS LITE: new oracle.javatools.db.ora.Oracle12c

FINEST    1425    61    oracle.javatools.db.DatabaseFactory    Opening connection for pmibasl10   MiBAS LITE took 63ms

Also, I had never noticed extra information referencing an application like "MiBAS LITE" in the Opening connection message before, but maybe that's just me.  So it appears you must pursue how to configure PMIBASL10 / Data Guard so that an application like SQL Developer using JDBC can access it.

unknown-7404

2. I have the 1002 line log. I attached it to this thread entry.

SOL21D is the DB that I connected to successfully in this session.

PMIBASL10 is the problem DB in this session.

I, for one, don't open unknown links from unknown sources.

Looks like it's time for you to SHOW US, rather than just tell us:

1. WHAT you do

2. HOW you do it

3. WHAT results you get

That way we can see for ourselves:

1. what connection parameters/settings you are using

2. that the 'test' of those was successful

3. that you can't open the connection

Probably unrelated but:

I copied msvcr100.dll from ...\sqldeveloper\jdk\jre\bin to newly created ...\sqldeveloper\jdk\bin

Why are you doing that? If you need that file on your machine then put it somewhere where Sql Dev can find it.

And why did you copy it to a different place?

Gary Graham-Oracle

By the way, I forgot to ask if that connection worked in 4.1.3.  The 4.2 EA does ship with a different JDBC driver.  If 4.1.3 connects, then maybe we can blame it on the driver...

Gary Graham-Oracle

Probably unrelated

Yes, the OP was just following advice for a workaround in 4.2.0 EA2 - fails on startup

2621199

Long weekend ...

Both PMIBASL10 and SOL21D (as well as other DBs) are on different servers on the same company network.  I created and administer these DBs.  Some of them I can connect to with SQL Developer 4.2.0.16.356.1154-x64 and some I cannot; however, they can all be connected to with SQL Developer 4.1.5.21.78-x64.  Both versions of SQL Developer use the same client software, TNS connect method and tnsnames.ora.

Regarding "Probably unrelated ..."

"I copied msvcr100.dll from ...\sqldeveloper\jdk\jre\bin to newly created ...\sqldeveloper\jdk\bin"

This is a workaround found in a few places and is also referred to in the web installation notes for SQL Developer:

(http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/sqldev-install-windows-1969674.html)

Without this workaround SQL Developer barely started.

Database PMIBASL10 is a 64-bit Oracle version 12.1.0.1.0 Standard Edition without Data Guard, without and other DBs on the same server.  I manage some Data Guard DBs and this is not one of them.  To be sure I double-checked my client tnsnames.ora and the DB parameters.  PMIBASL10 has nothing to do with Data Guard.  Why would SQL Developer presume that Data Guard is used/invoked/required here?  Is this a configured feature/option in SQL Developer?

2621199

"pmibasl10   MiBAS LITE" is the SQL Developer "Connection Name" in the DB connections tree.

It seems that in log line FINE 1427, the spaces were replaced by the "+"  sign, so we see "pmibasl10+++MiBAS+LITE".

And yes I defined a new DB connection with only pmibasl10 as the "Connection Name" but got the exact same error when attempting to connect.

2621199

TYPO / CORRECTION:

... without Data Guard, without and other DBs on the same server.

should read

without Data Guard, without other DBs on the same server.

Gary Graham-Oracle

Database PMIBASL10 is a 64-bit Oracle version 12.1.0.1.0 Standard Edition without Data Guard

OK, my mistake. SQL Developer was just checking for Data Guard but, of course, it is not an option for standard edition.

Both versions of SQL Developer use the same client software, TNS connect method and tnsnames.ora

Just to clarify, is each install (both 4.1.5 and 4.2) definitely configured via Tools > Preferences > Database > Advanced to use the local Oracle client?  That way, the JDBC driver that ships with SQL Developer is not used, so my prior comment

The 4.2 EA does ship with a different JDBC driver.  If 4.1.3 connects, then maybe we can blame it on the driver

would not be relevant.  But that was my prime suspect, other things being equal.  By the way, do you have Use OCI/Thick driver checked off in those Advanced options?  

2621199

1) Yes, each install (both 4.1.5 and 4.2) is definitely configured via Tools > Preferences > Database > Advanced to use the local Oracle client.  The "Use Oracle Client" check box is marked.

2) No, I do not use the OCI/Thick driver; it is not checked in the Advanced options.  I did try that in both 4.1.5 and 4.2 and got
   "C:\oracle\product\12.1.0\client_1\bin\ocijdbc12.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform" for any and all DBs in both SQL Developer versions.  (I think I used this on previous versions (on a different computer) that did not have JDK included.)

unknown-7404

You need to decide if you DO WANT to use an Oracle client or if you DO NOT WANT to use an Oracle client.

The 'basic' connection does NOT use an Oracle client - it uses the ojdbc.jar file.

Other connection types DO use/need an Oracle client and use the OCI/Thick driver.

The below says you want to use an Oracle client but do NOT want to use the thick driver. Please explain how that can work.

2621199 wrote:

1) Yes, each install (both 4.1.5 and 4.2) is definitely configured via Tools > Preferences > Database > Advanced to use the local Oracle client. The "Use Oracle Client" check box is marked.

2) No, I do not use the OCI/Thick driver; it is not checked in the Advanced options. I did try that in both 4.1.5 and 4.2 and got
"C:\oracle\product\12.1.0\client_1\bin\ocijdbc12.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform" for any and all DBs in both SQL Developer versions. (I think I used this on previous versions (on a different computer) that did not have JDK included.)

We can NOT see your screen. This is what I said previously:

Looks like it's time for you to SHOW US, rather than just tell us:

1. WHAT you do

2. HOW you do it

3. WHAT results you get

Gary Graham-Oracle

Can't load IA 32-bit .dll on a AMD 64-bit platform

This message is just telling you there is a mismatch between the Oracle client and Java version.  Both must be either 32-bit or 64-bit, not a mixture of the two.  In this case, the Oracle Instant Client is 32-bit, so you could instead get...

Instant Client downloads for Microsoft Windows (x64)

or install a 32-bit JDK, then modify the product.conf file's SetJavaHome line to point to it rather than use the default bundled JDK.
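That product.conf line would then look something like the following (the JDK path is only an example):

    SetJavaHome C:\Program Files (x86)\Java\jdk1.8.0_101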

My thought was that using the OCI/Thick driver might help (that is, there might be some option in the TNS that requires OCI), but just guessing.  As @"rp0428" says, you really have to show us details to get the most effective response

2621199

As I understand it, I do NOT use the OCI/Thick driver, because my installation of SQL Developer is 64-bit and my Oracle client is 32-bit. We need the 32-bit client for a different tool.

All the SQL Developer 4.1.5 and 4.2 definitions are identical. There is only one tnsnames.ora which I now understand is only parsed for the connect string but not used with the Oracle client.
The issue is only with one of the DBs in the SQL Developer tree. That same DB connection is defined identically in the 4.1.5 installation. That same DB can be connected to from my client with the sqlplus user/pwd@//host:port/service_name syntax.
See the two screen shots below.

SQLDev1.PNG

SQLDev2.PNG

2621199

What I do when I get the error:

Double click on the Connections tree on "pimbasl10 MiBAS LITE"

then I receive the first error screen:

SQLDev3.PNG

Clicking the "Details" button, I receive the next error screen (which I enlarged to see all the details):

SQLDev4.PNG

Clicking "OK" I receive:

SQLDev5.PNG

Then OK (twice for some reason) and we're done.

Gary Graham-Oracle

Thanks for clarifying how you connect in terms of the Oracle client and use of Thin vs Thick driver. Since you noted previously that Basic and TNS connection types behave the same, I suppose there is no need to test with OCI.

If you really wanted to test with OCI (while still using 64-bit Java), the fact that you already have a 32-bit Oracle Instant client installed is not an obstacle.  Just download and unzip the 64-bit version, point SQL Developer at it using Tools > Preferences > Database > Advanced, plus this extra step:  launch SQL Developer from a Cmd window (.../sqldeveloper/sqldeveloper/bin/sqldeveloper.exe) where you have reset the PATH variable to prepend the location of that 64-bit Oracle Instant client.
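For example, from a Cmd window (both directory names here are placeholders):

    set PATH=C:\oracle\instantclient_12_1_x64;%PATH%
    C:\sqldeveloper\sqldeveloper\bin\sqldeveloper.exe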

Returning to the original issue of the null pointer exception, it would help if we knew why the connection info object for pmibasl10 is not created as expected.  Apparently this situation is completely unexpected.  It would happen if ...

1) The database fails to report that its product name is Oracle.

2) Some SQLException occurs (which is currently ignored).
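Purely as an illustration of the kind of check involved (this is not SQL Developer's actual code, and the connect string and credentials are placeholders), a JDBC client can ask the driver for the product name like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class ProductNameCheck {
        public static void main(String[] args) {
            // Placeholder thin-driver URL; the Oracle JDBC driver must be on the classpath.
            String url = "jdbc:oracle:thin:@//host:1521/service_name";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                // If this does not come back as "Oracle", or an exception is swallowed
                // somewhere along the way, downstream code can be left holding a null object.
                String product = conn.getMetaData().getDatabaseProductName();
                System.out.println("Product: " + product);
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }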

2621199

I installed a 64-bit client and verified that it was in my PATH variable:

>echo %PATH%

C:\oracle\product\12.1.0\client_2\bin;C:\ProgramData\Oracle\Java\javapath;C:\oracle\product\12.1.0\client_1\bin;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;...

I then changed Tools > Preferences > Database > Advanced to reflect the new client:

SQLDev6.PNG

(Client_2 is the 64-bit client)

Of course I copied the tnsnames.ora file from the 32-bit to the 64-bit client TNS directory.

But unfortunately got the exact same java.lang.NullPointerException error.

I again deleted the pmibasl10 connections from the Connections tree and added them back with TNS.

I also tried again to play with the tnsnames.ora file, with no success.

Regarding:

  1. The database fails to report that its product name is Oracle.

  2. Some SQLException occurs (which is currently ignored).

Is there no way to debug this?

2621199

I FOUND THE PROBLEM - it is when attempting to connect to a 12.1.0.1 Standard Edition database.

I get the exact same java.lang.NullPointerException error when trying to connect to two other 12.1.0.1 Standard Edition databases, which are on a different server.  The error is identical whether using the 32-bit or the 64-bit client.  There was no problem connecting to a 11.2.0.3 SE database.

Gary Graham-Oracle

This is not something that I am able to test since 12.1.0.1 SE/SE1 got replaced by 12.1.0.2 SE2 in Sept 2015 with an additional 12 months of patching support through end of August 2016.  If you happen to upgrade to 12.1.0.2 SE2 and it suffers from the same issue, then you should log an SR with MOS. 

Normally Development does not concern itself with licensing issues, so I was not surprised to learn of my ignorance regarding SE/SE1.  It seems this is one of those cases where an upgrade is (ultimately) unavoidable...

https://blogs.oracle.com/UPGRADE/entry/oracle_database_12_1_0

https://support.oracle.com/epmos/faces/DocumentDisplay?id=2027072.1

Oracle SE2 - HUGE impact for Oracle Standard Edition Customers! - Oracle Application Express (APEX) Consultants, develop…

2621199

Hey Gary,

All three SE databases that I cannot connect to in SQL Developer 4.2 are 12.1.0.1 SE.

All the other DBs that I tried were 12.1.0.2 EE, 11.2 SE or 11.2 EE.

The three 12.1.0.1 SE DBs will not be upgraded any time soon.

Is there no workaround in order to use SQL Developer 4.2?

Is this an issue that is being investigated that will be resolved in a later SQL Developer version?

thatJeffSmith-Oracle

4.2 is beta

and 12.1.0.2 works, so nothing formal is being done today.

if 4.1 is also not working, you could open a Service Request with My Oracle Support

User_0BQCE

Hi,

I had the same problem connecting to 12.1.0.2 SE; SQL Developer was configured to use the Instant Client.

OS: Antergos

SqlDeveloper: 4.2

Oracle Client: InstantClient 12.1.0.2 (latest version)

To fix:

In Sqldeveloper -> Preferences -> Database -> Advanced -> Use Oracle Client - not checked (also the Use OCI/Thick Driver is  not checked)

I could then connect to the 12c database and the Database objects would display under the connection.

I do not know if this will work with Windows.

2621199

In Windows, though a full 12.1 client is installed,

Sqldeveloper -> Preferences -> Database -> Advanced -> Use Oracle Client - not checked

Use OCI/Thick Driver - not checked

Does allow connection to 12.1 SE DBs.  Interesting.  Other 12.1 EE and 11.2 DBs also connect.

However this is a SQL Developer high level change.

I'm not sure how not using the full Oracle client affects other things like SQL Developer performance and SQL functionality.  In my current and all previous shops we always used a full client for non-end users.

I think that for now I will stick with 4.1.5 and NOT recommend moving to 4.2 until this is resolved.

unknown-7404

However this is a SQL Developer high level change.

What difference does that make? It works.

I'm not sure how not using the full Oracle client affects other things like SQL Developer performance and SQL functionality.

It generally won't. But  the way you resolve 'not sure' is to test your use cases.

In my current and all previous shops we always used a full client for non-end users.

And when I was growing up we used phones with dials on them.

You do NOT need an Oracle client for most use cases. Unless you have very special needs use the thin driver.

I think that for now I will stick with 4.1.5 and NOT recommend moving to 4.2 until this is resolved.

The reason for not 'moving to 4.2' is because the product has not been released - it is still in development.

Every problem/issue you have mentioned in this thread is because of that Oracle client you insist on using. You mixed software versions and you mixed configuration settings.

You also are copying a DLL all over the place to try to fix problems.

Do things the SIMPLE way and all of those problems will go away.

