How many concurrent requests are you able to manage before this starts to bottleneck throughput?
How many modules/services/handlers do you have in your schema(s)?
Hi Jeff - we have a very similar problem in our environment.
It is an Oracle 18c database running on ExaCC (X7-2).
We have 6 handlers (select count(*) from ords_metadata.ords_handlers). With 100 concurrent connections we see long concurrency waits for the SQL above, and it sometimes takes 5 seconds.
More details here: https://stackoverflow.com/questions/58675583/ords-performance-is-very-slow
We also opened an Oracle Support ticket for this - SR 3-21458874011
I read your support case details; it's lacking a vital piece of information.
What version of ORDS are you running?
Thanks for answering. This particular test uses a very small set of modules (only 3) and roughly 30 handlers (each one a large PL/SQL code block, including GETs). The test DB is also 18c; we're doing tests for a planned upgrade of our production server (Oracle-managed M7, currently on 12.1).
The services serve everything from very small JSON objects to large images, HTML documents, and very large XML procurement documents that include base64-encoded content.
This usually starts when we push concurrent sessions over 200, and it doesn't go away even after the load shrinks.
I seem to have found a workaround for this by pinning this particular SQL statement in the shared pool. At least it's been behaving for many hours now.
So I used dbms_shared_pool.keep with ADDRESS and HASH_VALUE to pin this particular statement.
Which of course won't be persistent through a server bounce.
select sql_id, executions, plan_hash_value, address, hash_value from v$sql where sql_id = :SQLID;
begin sys.dbms_shared_pool.keep(:ADDRESS || ',' || :HASH_VALUE, 'C'); end;
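One possible way to make the pin survive a bounce (a sketch only - the sql_id and interval here are placeholders, and a cursor can only be kept after it has been loaded into the shared pool, so this simply re-tries on a schedule rather than firing once at startup):

```sql
-- Hedged sketch: periodically re-pin the hot cursor after a restart.
-- Requires SELECT on V$SQL and EXECUTE on SYS.DBMS_SHARED_POOL.
-- 'your_sql_id_here' is a placeholder for the actual sql_id.
begin
  dbms_scheduler.create_job(
    job_name        => 'REPIN_ORDS_CURSOR',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[
      begin
        for c in (select address, hash_value
                    from v$sql
                   where sql_id = 'your_sql_id_here') loop
          sys.dbms_shared_pool.keep(c.address || ',' || c.hash_value, 'C');
        end loop;
      end;]',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=15',
    enabled         => true);
end;
/
```

Re-keeping an already-kept cursor is harmless, so the repeating job is just a cheap way to cover instance restarts.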
What version of ORDS?
We've re-written that view to optimize it, so your version of ORDS here is a critical piece of information.
Thanks Jeff - we are using the latest version of ORDS (19.2).
19.3 will see some performance improvements, but NOT around that query.
That query/view has been optimized, and the improvement is scheduled for version 19.4.
Thanks Jeff. Will there be any patch/fix (or a workaround, in case a patch/fix is not available) for us until 19.4 is released, so that we can continue our development and testing? Kindly advise.
We have the same issue (Rajesh has already explained it). Here are the more interesting details from our testing.
1. We tested with a 30-user load and get 50 ms per transaction end to end (DB level < 20 ms) and 500+ TPS.
2. We tested with a 50-user load and get 300+ ms per transaction end to end, while at the DB level everything is still < 25 ms.
** Same set of transactions.
** Hitting only one method, a POST handler with a JSON request; the response is a select query result restricted to 10 rows max. No updates.
** As said earlier, DB timing is no different. We took two sets of snapshot IDs and compared all the SQL IDs across the two snaps (20 users and 50 users); there is only a 1 to 5 ms increase for 50 users.
** ORDS parameters were increased to handle 100 concurrent connections.
ORDS debug does not provide enough info - we're not sure where the extra 200+ ms is spent. The other major issue we see is that transactions per second starts decreasing as we increase the load. The Tomcat server on which ORDS is deployed has 8 CPUs and 8 GB RAM; the max CPU it hit was 70%, there was no swapping, and the heap always had at least 500 MB free.
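For anyone checking the same settings: in our setup the connection-pool limits live in ORDS's defaults.xml. The entries below are illustrative values only, not recommendations:

```xml
<!-- Illustrative ORDS pool sizing in defaults.xml; tune for your workload -->
<entry key="jdbc.InitialLimit">20</entry>
<entry key="jdbc.MaxLimit">100</entry>
<entry key="jdbc.MaxStatementsLimit">50</entry>
```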
Any help resolving this issue would be highly appreciated. We are in production, but because of this performance issue we have not rolled out fully.
Thanks and regards
I've been doing loads of tests and trying different database parameters to see what affects this behaviour. The DB runs fine, but as I turn up the load that particular SQL becomes a huge bottleneck and effectively freezes the DB.
I did notice that before the DB hung on the (almost undocumented) wait event "enq: TG - IMCDT global resource" (I know it relates to In-Memory cursor duration temp tables, but searching for documentation on that wait event is fruitless), the server saw a spike of "row cache mutex" waits, and then the other one took over.
The rediscovery notes ("High 'row cache mutex' observed when running multiple clients doing the same select query") clearly describe what is happening here, with ORDS running this single SQL for every request across all connected ORDS sessions.
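For anyone trying to watch this build up, a quick way to see which wait events spike under load (assumes Diagnostics Pack licensing, since it reads ASH):

```sql
-- Top wait events sampled over the last 5 minutes, from ASH.
select event, wait_class, count(*) as samples
  from v$active_session_history
 where sample_time > sysdate - 5/1440
   and event is not null
 group by event, wait_class
 order by samples desc
 fetch first 10 rows only;
```

Under this problem you'd expect "row cache mutex" to climb toward the top as concurrency rises.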
Thanks, but for us, as mentioned, the queries complete DB-side in 20 to 25 ms, while the overall ORDS response time increased from 50 ms to 300 ms.