First, the reason OCI prepare and define operations are fast is that they are client-local; they do not require a round trip to the server to execute. Second, if OCIStmtExecute returned OCI_STILL_EXECUTING, it was most likely still busy. Have you checked the session activity on the server to see whether it was truly idle?
Yes, I knew that OCIStmtExecute was the only function that could cause such a
delay, which is why I traced that call. And so far I have checked several times
what happens on the server at that exact moment, but everything was fine.
Actually, OCIStmtExecute becomes slower exactly when the crontab-driven log
rotation runs, so for now I think this delay must be a client-side problem.
This server is quite busy and has to respond fast, so it is important to
guarantee fast response times, while a small number of timeout losses is tolerable.
But since OCIStmtExecute's first OCI_STILL_EXECUTING return takes hundreds of
ms, it has become more like a blocking call, and currently I cannot find any way to do what I want.
So now, every time this happens, the thread waits quite a long time for the
first OCI_STILL_EXECUTING return; by then the elapsed time exceeds the timeout
limit, so the thread calls OCIBreak() and OCIReset() and returns.
Are you using OCI connection pooling? If not, you should look into it. It may help your situation.
No, I'm using my own session pool.
There are many threads (more than the number of open sessions) working, and
they share a fixed number of OCI sessions. When the process starts, it
establishes all the sessions using separate environment handles and then creates
the threads. At runtime, each thread picks up an idle session, executes, and
returns it to the pool by setting the session's flag back to idle. Of course,
acquiring and returning sessions is protected by a mutex.
Any ideas about possible errors or mistakes?
Any suggestion is welcome.
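For reference, the pool logic is roughly this (plain ints stand in for the OCI service-context handles so the snippet compiles on its own; the names are just for illustration):

```c
#include <pthread.h>

#define POOL_SIZE 8

static int busy[POOL_SIZE];   /* 0 = idle, 1 = in use; one flag per session */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Linear search for an idle session under the mutex.
 * Returns its index and marks it busy, or -1 if none is free. */
int acquire_session(void) {
    int i, got = -1;
    pthread_mutex_lock(&pool_lock);
    for (i = 0; i < POOL_SIZE; i++) {
        if (!busy[i]) { busy[i] = 1; got = i; break; }
    }
    pthread_mutex_unlock(&pool_lock);
    return got;
}

/* Return a session to the pool by clearing its flag. */
void release_session(int i) {
    pthread_mutex_lock(&pool_lock);
    busy[i] = 0;
    pthread_mutex_unlock(&pool_lock);
}
```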
The design sounds fine. As for the implementation, the only thing I can think of that could be problematic is the actual act of acquiring a free handle for execution.
Are you doing a linear search for a free OCI handle, or pushing free handles onto a stack and popping them when needed? Do you have a significant amount of instrumentation or tracing code to verify proper operation?
Another thing I'm wondering is why, if you already have a thread pool that handles multi-session management, you're using non-blocking OCI.
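Something like this is what I mean by the stack approach (plain ints stand in for the OCI handles so the snippet compiles on its own): acquiring becomes an O(1) pop instead of a scan.

```c
#include <pthread.h>

#define POOL_SIZE 8

static int free_stack[POOL_SIZE];   /* indices of idle sessions */
static int top = 0;                 /* number of free entries on the stack */
static pthread_mutex_t stack_lock = PTHREAD_MUTEX_INITIALIZER;

/* All sessions start idle, so push every index onto the stack. */
void pool_init(void) {
    for (top = 0; top < POOL_SIZE; top++)
        free_stack[top] = top;
}

/* Pop an idle session index in O(1); -1 when the pool is exhausted. */
int pop_session(void) {
    int s = -1;
    pthread_mutex_lock(&stack_lock);
    if (top > 0) s = free_stack[--top];
    pthread_mutex_unlock(&stack_lock);
    return s;
}

/* Push a session index back when the thread is done with it. */
void push_session(int s) {
    pthread_mutex_lock(&stack_lock);
    free_stack[top++] = s;
    pthread_mutex_unlock(&stack_lock);
}
```

With only a handful of sessions the linear scan is cheap either way; the stack mainly buys predictability under contention.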
I'm doing a simple linear search on the flag array to 'get' a session.
And I'm developing an Apache DSO module that uses the OCI interface,
so what I'm calling a thread here actually means a worker thread of the
Apache server. The reason I chose non-blocking I/O is that the server is quite busy
and should not be bottlenecked by OCI calls. Otherwise, the number of concurrently
active workers would climb to the maximum limit, and that would snowball the situation.
I'm having the same problem with OCIStmtExecute. I have three instances of the same program running together, each reading text files and inserting 2000 lines at a time using arrays. Each instance has its own file system with its own files to insert. When an instance has thousands of files (4,000), that instance becomes slow while the others keep the same performance: instead of the normal 2 to 3 ms per line, it takes 60 to 100 ms. It is strange, but it seems related to I/O and to something used by OCIStmtExecute.
How did you solve your problem?