You are on a supported configuration when using the IBM JDK on AIX. JRockit is not supported on AIX.
For more information on supported configurations, refer to:
http://www.oracle.com/technology/software/products/ias/files/fusion_certification.html
Which WLS version are you using? Please also mention the turnaround time in the best and worst case. You can use tools to analyse the packets and determine the time taken for the response from the DB and the time taken for WLS to process the request and respond. In many cases the DB itself takes time to respond, depending on various factors. I hope you have already isolated the issue and confirmed that it's not the DB.
This can be easily reproduced on a standard Windows XP development machine. Though we first saw the issue under AIX/WebSphere (where we had to use the IBM JDK), a simple app-server-independent test can prove this. We took WLS and WebSphere out of the loop and used pure JDBC. Any IBM JVM in the company of ojdbc6.jar takes 3-4 times as long to open a JDBC connection as the other three combinations I mentioned earlier.
Have you contacted IBM for support? Your scenarios seem to have ruled out WebLogic and JRockit.
Yes, we are in touch with IBM.
Even if it is only the JVM that is causing the issue, you would need some statistics to correlate the findings with the theory that the JVM is causing the delay. You could try capturing the packet timestamps for the DB round trip; capture them on the machine where you are running the client program. There is a series of communications between the client and the DB: authentication, fetching the records, and so on.

JRockit is optimized by default. Verify whether you used the -Xnoopt JVM option while you used JRockit; it disables optimization. The garbage-collection strategy can also induce delays/pauses in the JVM threads that execute your program. Enabling GC logging will help you identify the GC pause times.
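For the JRockit checks suggested above, the relevant switches look like this (-Xnoopt is the JRockit flag already mentioned; YourApp is a placeholder for your actual client program):

```shell
# Rule out JIT optimization as a factor (JRockit only):
java -Xnoopt YourApp

# Log GC activity so pause times can be correlated with the
# observed connection delays (JRockit syntax; on HotSpot and
# IBM JDKs the equivalent switch is -verbose:gc):
java -Xverbose:gc YourApp
```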
A quick solution to test this would be to do the following:
mv /dev/random /dev/random.bk
ln /dev/urandom /dev/random
Note: this quick fix will disappear after a reboot. You could add it to rc.local to reapply the change automatically, but I wouldn't recommend it.
The following is from [http://www.usn-it.de/index.php/2009/02/20/oracle-11g-jdbc-driver-hangs-blocked-by-devrandom-entropy-pool-empty/]:
"Oracle 11g JDBC driver hangs blocked by /dev/random – entropy pool empty
On a headless (= without console) network server, the 11g JDBC driver used for (Java) application connects may cause trouble. In my case, it refused to connect to the DB without any error, trace or log entry. It simply hung. After several hours, it connected once, then froze again. Remote debugging by the development team clarified that it locks after calling SeedGenerator() and SecureRandom().
Reason: the 11g JDBC driver needs about 40 bytes of secure random numbers, gathered from /dev/random, to encrypt its connect string.
But the publicly available “man 4 random” says:
When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.
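The blocking behaviour the man page describes is easy to observe on a Linux machine (a generic sketch, not specific to Oracle; note that on kernels 5.6 and later /dev/random no longer blocks once the pool is initialized, so the second read may return immediately):

```shell
# Reading from /dev/urandom never blocks:
head -c 40 /dev/urandom | wc -c    # prints 40 immediately

# Reading the same 40 bytes (roughly what the 11g driver needs)
# from /dev/random can block on older kernels until the entropy
# pool refills; on kernels >= 5.6 it behaves like /dev/urandom
# once the pool is initialized:
head -c 40 /dev/random | wc -c
```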
So far so good; now the question arises: why does this mystic “entropy pool” run out of gas?
The answer is as simple as it is unsatisfying: because too little entropy “noise” was generated by the system. You can check the “filling level” (maybe zero?) of your pool and the overall size of the pool (usually 4096) by issuing
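For reference, the pool statistics mentioned here are exposed through procfs on Linux:

```shell
# Estimated entropy currently in the pool; on a headless server
# with an older kernel this can hover near zero
cat /proc/sys/kernel/random/entropy_avail

# Total pool size: usually 4096 on older kernels, fixed at 256
# on kernels 5.18 and later
cat /proc/sys/kernel/random/poolsize
```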
Hint: /dev/random will deliver one new random number as soon as the pool has reached more than 64 entropy units.
So why does my box not generate more entropy noise?
Because only a few drivers fill the entropy pool, first of all keyboard and mouse. Sounds very useful on a server in a datacenter, doesn’t it? Some block-device and network drivers seem to do so as well, and I have read of people on the net changing their network card and driver to enjoy this “feature”! But let’s stop ranting: /dev/random is simply made for high-security randomness, and if it can’t make sure that randomness is as good as possible in this deterministic world, it stops. Intelligent people have created /dev/urandom for that, as “man 4 random” clearly states:
A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current non-classified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.
Now let’s get back to our JDBC problem. Oracle JDBC 11g seems to use /dev/random by default, which usually causes no trouble on clients with console access by a user, because his/her unpredictable :) actions keep the entropy pool well-fed. But to make it usable on a headless server with a latently empty entropy pool, you can do several things, in descending security order (without warranty):
1. Involve an audio entropy daemon like AED to gather noise from your datacenter with an open microphone, maybe combined with a webcam noise collector like VED. Other sources talk about “Cryptographic Randomness from Air Turbulence in Disk devices”. :)
2. Use the Entropy Gathering Daemon to collect weaker entropy from randomness of userspace programs.
3. Talk your JDBC into using /dev/urandom instead:
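The quoted post breaks off at this point. The setting usually meant here is not quoted above, so verify it against your JDK and driver documentation; it is the java.security.egd system property, or the equivalent securerandom.source entry in the java.security file (YourApp.jar below is a placeholder):

```shell
# One-off, on the command line; the extra /./ is the historically
# required spelling so the JDK does not ignore the override:
java -Djava.security.egd=file:/dev/./urandom -jar YourApp.jar

# Or permanently, in $JAVA_HOME/jre/lib/security/java.security:
#   securerandom.source=file:/dev/./urandom
```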
The following is from IBM. The bad news is that the problem is here to stay.
1) The IBM java.sql.DriverManager code contained a synchronized block that effectively made concurrent calls to getConnection single-threaded. The team responsible for this code has a fix that removes this bottleneck, allowing multiple connections to be established in parallel.
2) As part of user authentication, the Oracle JDBC driver makes use of java.security.SecureRandom to generate a token. The IBM implementation of SecureRandom takes longer to initialise than the Sun implementation. The reason for this lies in architectural choices by IBM that guarantee a certain level of randomness and thereby allow the IBM JCE to be FIPS certified.
At this time the Java Security team do not see a way to improve the performance of this area and still meet the requirements for FIPS compliance.
This is known as the SecureRandom problem.
The fix for this problem is available for WebSphere 7.0 by installing
IBM Java 6 SR9 which is available here:
More information on this fix is here: