In Preferences, open the Advanced page and enable the option to use a thick connection. That makes SQL Developer use OCI for your connections.
I'm having an issue with my Oracle database that requires tracing my clients' connections when they connect to their schemas. I have added some parameters to their sqlnet.ora that should create a trace file every time a user connects, but I don't see a trace file generated in the trace directory.
When I use sqlplus.exe from the Oracle client, the trace file is generated, so I'm not sure whether SQL Developer sees the Oracle client that has the tnsnames.ora and sqlnet.ora files. The client platform is Windows 7.
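For context, client-side tracing in sqlnet.ora is typically enabled with parameters along these lines; the directory path and trace level shown are illustrative, not necessarily the original poster's exact settings:

```text
# Illustrative client-side tracing settings in sqlnet.ora
TRACE_LEVEL_CLIENT = SUPPORT              # OFF | USER | ADMIN | SUPPORT
TRACE_DIRECTORY_CLIENT = C:\oracle\network\trace
TRACE_UNIQUE_CLIENT = ON                  # one trace file per connection
```

With these in place, any connection made through the OCI library should write a trace file to the named directory; connections made through the JDBC thin driver will not, which is the crux of the discussion that follows.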
You will get the BEST help when you provide the MOST information about your actual use case and problem/issue. The information you provided omits what could be some very important details.
1. What 'issue' are you having with your Oracle database? This is curious, since if you are including SQL Developer and JDBC in the loop, why do you think your (unknown to us) problem is with the database?
2. Why have you concluded that the issue requires tracing your clients connections? What else have you tried?
3. Do ALL of your users have an Oracle client installed? Only the OCI library uses the sqlnet.ora and tnsnames.ora files. They need either the full client or the OCI Instant Client (Oracle 10g or later).
4. Have you defined, and set, the TNS_ADMIN environment variable? See the SQL Developer documentation:
In particular, see the User Guide's 'Database Connections' section:
When you start SQL Developer and whenever you display the database connections dialog box, SQL Developer automatically reads any connections defined in the tnsnames.ora file on your system, if that file exists. By default, tnsnames.ora is located in the $ORACLE_HOME/network/admin directory, but it can also be in the directory specified by the TNS_ADMIN environment variable or registry value or (on Linux systems) the global configuration directory. On Windows systems, if the tnsnames.ora file exists but its connections are not being used by SQL Developer, define TNS_ADMIN as a system environment variable.
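As a sketch of what that looks like (the paths are illustrative; use the directory that actually holds your tnsnames.ora and sqlnet.ora):

```shell
# Windows: define TNS_ADMIN so the OCI driver finds tnsnames.ora and
# sqlnet.ora (or set it under System Properties > Environment Variables):
setx TNS_ADMIN "C:\oracle\product\network\admin"

# Linux/macOS equivalent:
export TNS_ADMIN=/opt/oracle/network/admin
```

Restart SQL Developer after setting the variable so it is picked up.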
Jeff has already said that, for SQL Developer, you need to specify the OCI/thick advanced setting. The OCI driver is the only one that will use sqlnet.ora settings.
The most important thing here is that I need to limit the network bandwidth packet when the database server sends packets to the client, so this parameter has already been included in sqlnet.ora and I really need SQL Developer to launch with it. Users don't want to use the regular sqlplus from the Oracle client; they want to run their queries through SQL Developer only.
And this is one of the things causing confusion: first you talked about tracing, but now you are talking about a packet limitation.
That makes me suspect that your real reason for wanting to trace is that you are trying to 'limit the network bandwidth packet' by some means (unknown to us), it isn't working, and you want to use tracing to see why.
What 'packet' limit are you trying to adjust? Jeff or someone on the SQL Developer team may be able to tell you whether what you are trying to do will even be of use in the context of SQL Developer.
Are you trying to set the SDU (session data unit) size? Then you may also need to configure the Oracle server side properly, depending on whether you are using shared or dedicated servers. See the Net Services Administrator's Guide:
The SDU size can range from 512 bytes to 32767 bytes. If the DEFAULT_SDU_SIZE parameter is not configured in the sqlnet.ora file, then the default SDU for the client and a dedicated server is 2048 bytes, while for a shared server the default SDU is 32767 bytes.
The actual SDU size used is negotiated between the client and the server at connect time and will be the smaller of the client and server values. As such, configuring an SDU size different from the default requires configuring the SDU on both the client and server computers, unless you are using shared servers, in which case only the client needs to be changed because the shared server defaults to the maximum value.
For example, if the majority of the messages sent and received by the application are smaller than 8K in size, taking into account about 70 bytes for overhead, setting the SDU to 8K will likely produce good results. If sufficient memory is available, using the maximum value for the SDU will minimize the number of system calls and overhead for Oracle Net Services.
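A sketch of what that sqlnet.ora change looks like; the 8192-byte value is just an example, and for dedicated servers the same parameter would go in the server-side sqlnet.ora as well:

```text
# In the client's sqlnet.ora (and the server's, for dedicated servers):
DEFAULT_SDU_SIZE = 8192
```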
But, as the 'Client-Side Configuration' section that follows says, you don't have to configure that in sqlnet.ora; you can provide the SDU value in the connect descriptor instead. For Java this allows you to use the 'thin' driver rather than the OCI client.
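For illustration, a tnsnames.ora-style connect descriptor with a per-connection SDU might look like this (the alias, host, port, and service name are made up):

```text
ORCL_BIG_SDU =
  (DESCRIPTION =
    (SDU = 8192)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl.example.com))
  )
```

The same descriptor text can be pasted into a JDBC thin URL, which is why this route avoids needing the OCI client at all.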
Jeff or one of the team can tell you whether SQL Developer supports this or not.
The bigger issue is that if you are hoping to improve performance by micromanaging SDU your efforts may be doomed to failure anyway for two reasons:
1. SQL Developer acts as an intermediary, so you may not have the same degree of control over your queries that you would have in your own Java code
2. the SDU size is only one part of a much more complicated picture. That larger 'picture' includes the network protocol (usually TCP) frame size and the packet send/receive buffer sizes.
The network software stack has many layers and the Oracle/SDU part is near the top while the TCP part is near the bottom.
The most defining/restricting component is the network frame size being used. For many systems the maximum TCP packet size is around 1500 bytes and the actual usable size is around 1400. If you are one of those rare environments that uses jumbo frames throughout your network (which pretty much means EVERYWHERE a frame goes in getting from client to server: bridges, routers, switches, the client PC, the server), then you might have a larger frame size.
Since the default SDU size is about 2000 bytes and the usable frame size is about 1400, no matter what the upper Oracle-layer settings (e.g. SDU) are, anything larger than 1400 bytes will STILL get broken into multiple packets.
So an SDU of 2000 means Oracle might bundle up 2000 bytes and pass them to the lower TCP layer, but that lower layer will then split them into TWO packets, since a bundle of 2000 bytes is larger than one packet can hold.
The 'best' match is when the SDU is a multiple of the 'usable' packet size. So a value somewhere around 2800/2900 (a usable packet size of 1400+) would typically get split into TWO packets that are both nearly full, with few unused bytes.
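The packet arithmetic above can be sketched in a few lines; the 1400-byte usable payload is the assumed figure from this discussion, not a measured value:

```python
# Rough sketch: how an Oracle SDU buffer fragments into TCP packets,
# assuming ~1400 usable bytes per frame as discussed above.
import math

USABLE_PAYLOAD = 1400  # assumed usable bytes per TCP packet

def packets_for_sdu(sdu_bytes: int) -> int:
    """Number of TCP packets needed to carry one full SDU buffer."""
    return math.ceil(sdu_bytes / USABLE_PAYLOAD)

def wasted_bytes(sdu_bytes: int) -> int:
    """Unused bytes in the last packet when one full SDU is sent."""
    return packets_for_sdu(sdu_bytes) * USABLE_PAYLOAD - sdu_bytes

for sdu in (2048, 2800, 8192, 32767):
    print(f"SDU {sdu:>5}: {packets_for_sdu(sdu):>2} packets, "
          f"{wasted_bytes(sdu):>4} unused bytes in the last packet")
```

Note how the default-sized SDU of 2048 already spills into two packets with one mostly empty, while 2800 fills both packets exactly.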
In general you should not need to modify the SDU setting at all. In almost 30 years of doing ETL on a variety of systems and tools (including *nix, Windows, C, and Java), there have only been three projects where tuning involved SDU changes.
Those three projects ALL involved high-volume ETL data transfers where both the source and target of the data dealt with high-speed transports (file systems, DBs). None of them involved end-user operations such as ad-hoc queries.
To summarize: I am highly suspicious that your actual 'user' issue has ANYTHING to do with network latency, or needs either OCI clients or SDU tuning. If I were called in to consult on your 'tracing' issue, I would first want to review the documentation and research, if any, that justifies the platform-dependent, resource-intensive architecture you are using for what appear to be common ad-hoc user use cases.
In this day and age it just doesn't make sense to install ANY Oracle client software on end-user machines without a documented, specific need for OCI functionality. The goal should be to MINIMIZE the amount of custom software, especially OS-specific software, that needs to be installed, configured, and supported.
But if you are CERTAIN that you just want a simple answer then Jeff gave you that.