"what is the best value for the oracle.jdbc.implicitStatementCacheSize to be set to get the optimum value in the execute to parse%?"

How many different SQL statements does the application issue?
But this has not bumped up the 'Execute to Parse%' to the expected value of ~100%, but only to 45%.

Do you use bind variables or literal values in your SQL?
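The bind-variable question matters because the statement cache matches on exact SQL text. A minimal sketch in plain Java (the table and column names are made up for illustration) of why literals defeat any cache, whatever its size:

```java
import java.util.HashSet;
import java.util.Set;

public class BindVsLiteral {
    // Count how many distinct SQL texts n lookups generate.
    // A statement cache matches on the exact SQL text, so only a constant
    // text (bind variables) can ever be found in the cache and reused.
    static int distinctSqlTexts(boolean useBinds, int n) {
        Set<String> texts = new HashSet<>();
        for (int empId = 1; empId <= n; empId++) {
            texts.add(useBinds
                ? "SELECT ename FROM emp WHERE empno = ?"          // one text, parsed once
                : "SELECT ename FROM emp WHERE empno = " + empId); // a new text every time
        }
        return texts.size();
    }

    public static void main(String[] args) {
        System.out.println("literal texts: " + distinctSqlTexts(false, 1000)); // 1000
        System.out.println("bind texts:    " + distinctSqlTexts(true, 1000));  // 1
    }
}
```

With literals, 1000 lookups produce 1000 distinct statements, each hard-parsed once and never reused; with a bind variable there is a single statement to cache, parse once and execute 1000 times.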
963512 wrote:
The system here is basically a batch written in Java; it reads data, processes it and writes it to the database (as simple as that). The Java process takes its optimum amount of time and this part has already been tuned up.

So you do not process data using a system designed for data management, but using an interpreted language designed to operate televisions, and you wonder why it is slow? Just because you are processing data as fast as Java can does not mean the data is being processed as fast as it can be.
best value for the 'execute to parse%' in the AWR report (the ideal optimum value would be ~100%).

The number of times a statement is executed is determined by the client application. The number of times a statement is parsed is also determined by the client application. Therefore the ratio of the two is entirely determined by the client application. A value of 45% is terrible, and means that this particular client application likely does not use bind variables, over-parses, and is insecure.
When the statement caching was turned ON, we have seen the overall Execute to Parse % bumped up from -0.77% to 45%.

When fewer than 50% of SQL statements are being reused, increasing the statement cache just means it collects more never-reused SQL statements.
4. Nothing you wrote in response to this is believable, starting with the value "-0.77%", which absolutely isn't real. I so far see no supporting evidence that changing your execute to parse percentage is the root cause.

Search for the word "negative" in the link I posted. Such a value means more parsing than executing.
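Assuming the usual AWR definition of the ratio, the arithmetic makes the negative value plausible rather than impossible: with 10,077 parses against 10,000 executes the ratio lands at exactly -0.77%. A sketch (the parse and execute counts are illustrative, not taken from the poster's AWR report):

```java
public class ExecuteToParse {
    // AWR's instance-efficiency ratio: 100 * (1 - parses / executes).
    // More parses than executes drives it negative; parsing each
    // statement exactly once drives it toward 100%.
    static double executeToParsePct(long parses, long executes) {
        return 100.0 * (1.0 - (double) parses / executes);
    }

    public static void main(String[] args) {
        System.out.println(executeToParsePct(10_077, 10_000)); // -0.77: over-parsing
        System.out.println(executeToParsePct(5_500, 10_000));  // 45.0: the poster's value
        System.out.println(executeToParsePct(1, 10_000));      // 99.99: parse once, execute many
    }
}
```

So 45% simply says that for every 100 executions the application still performs 55 parses; only reusing statements (bind variables plus caching) moves the ratio toward 100%.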