19.12 Linux - PGA MEMORY VERY LARGE DUE TO Q PROCESSES - BACKGROUND Streams EMON Regular Slave

------------------------------------------------------------------------------------------------------
--Oracle Enterprise Linux 7.9 x64
--Oracle Database Enterprise Edition 19.12 (19-JULY-2021)
--Oracle Grid Infrastructure 19.12 (19-JULY-2021) for Standalone Server ( NOT RAC - GI HAS/ASM ONLY)
------------------------------------------------------------------------------------------------------
After the 19.12 Patchset was applied, the database instance started to show repeated ORA-4036 errors in the database alert log.
------------------------------------------------------------------------------------------------------
"PGA_AGGREGATE_LIMIT has been exceeded but some processes using the most PGA
memory are not eligible to receive ORA-4036 interrupts. Further occurrences
of this condition will be written to the trace file of the DBRM process."
------------------------------------------------------------------------------------------------------
It got to the point that the instance consumed all Linux memory and Linux started to swap.
This caused further ORA-4036 errors, and it eventually became necessary to stop and start the database instance to free up the Linux memory.
The database then worked fine again, until the same issue occurred.
The workload did not change, and when the 19.11 (19-APRIL-2021) Patchset was installed, the ORA-4036 errors were not occurring.
I opened an SR with Oracle Support, and the temporary workaround was to set PGA_AGGREGATE_LIMIT=0.
------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
If MEMORY_TARGET is set, then PGA_AGGREGATE_LIMIT defaults to the MEMORY_MAX_TARGET value.
If MEMORY_TARGET is not set, then PGA_AGGREGATE_LIMIT defaults to 200% of PGA_AGGREGATE_TARGET.
If MEMORY_TARGET is not set, and PGA_AGGREGATE_TARGET is explicitly set to 0, then the value of PGA_AGGREGATE_LIMIT is set to 90% of the physical memory size minus the total SGA size.
In all cases, the default PGA_AGGREGATE_LIMIT is at least 2GB and at least 3MB times the PROCESSES parameter (and at least 5MB times the PROCESSES parameter for an Oracle RAC instance).
For a PDB, the default value is the same as the CDB's default value.
------------------------------------------------------------------------------------------------------
Do not attempt to set PGA_AGGREGATE_LIMIT below its default value, even in a parameter file (pfile), or instance startup will fail.
However, PGA_AGGREGATE_LIMIT can be set to 0 either in a parameter file or dynamically after startup.
If a value of 0 is specified, it means there is no limit to the aggregate PGA memory consumed by the instance.
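The temporary workaround can be sketched as follows (this assumes the instance was started with an spfile, so the dynamic change also persists across restarts):

```sql
-- Check the current limit first
SHOW PARAMETER pga_aggregate_limit;

-- Remove the PGA cap dynamically (no restart needed);
-- a value of 0 means no limit on aggregate PGA memory
ALTER SYSTEM SET pga_aggregate_limit = 0 SCOPE = BOTH;
```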
------------------------------------------------------------------------------------------------------
But after a few days, the PGA kept growing larger and larger, even though the workload had only 5 users connected.
I was able to determine that the following processes were taking large amounts of PGA space and not releasing it.
------------------------------------------------------------------------------------------------------
SESSION_ID  SESSION_SERIAL  PROCESS_NAME                 PID_THREAD  CURRENT_SIZE_MB  MAXIMUM_SIZE_MB
5645        64179           Q009::mcsdboralnx5.esri.com  22773       6,115.40         6,115.40
4707        62132           Q007::mcsdboralnx5.esri.com  22769       6,147.46         6,147.46
6116        6422            Q00A::mcsdboralnx5.esri.com  22775       6,179.53         6,179.53
3767        25363           Q005::mcsdboralnx5.esri.com  22765       6,211.46         6,211.46
5174        9587            Q008::mcsdboralnx5.esri.com  22771       6,243.46         6,243.46
------------------------------------------------------------------------------------------------------
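A listing like the one above can be produced with a query along these lines (a sketch only; the exact columns and the program filter are assumptions, since the original query is not shown in the thread):

```sql
-- Top PGA consumers, focusing on the Qnnn (AQ server class) processes
SELECT s.sid                                  AS session_id,
       s.serial#                              AS session_serial,
       p.program                              AS process_name,
       p.spid                                 AS pid_thread,
       ROUND(p.pga_used_mem / 1024 / 1024, 2) AS current_size_mb,
       ROUND(p.pga_max_mem  / 1024 / 1024, 2) AS maximum_size_mb
FROM   v$process p
       JOIN v$session s ON s.paddr = p.addr
WHERE  p.program LIKE '%(Q0%'                 -- Qnnn slave processes
ORDER  BY p.pga_used_mem DESC;
```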
All these Q processes are related to "BACKGROUND Streams EMON Regular Slave".
The 19.12 patchset has many fixes for PGA memory leaks, as well as other memory fixes, and one of those fixes may have introduced this issue.
--Database 19 Release Updates and Revisions Bugs Fixed Lists ( Doc ID 2523220.1 )
------------------------------------------------------------------------------------------------------
27933036 pga memory leak at koh-kghu call / kol vstring for dbms_aq.listen call
32066061 Memory Leak in "kxsclb in kxscf" Component Causing ORA-4030/ORA-4036
32033795 Unreasonable large memory allocations by query processing code
32338476 Memory Leak: auditpqreceive
------------------------------------------------------------------------------------------------------
I have opened an SR with Oracle Support about this problem.
I could roll back the 19.12 Patchset until Oracle provides a patch.
But I want to know if anyone has a workaround, such as setting a hidden parameter, that could help resolve the issue.
Best Answer
-
*** WORKAROUND THAT RESOLVES THE ISSUE ***
This is the workaround that resolves the issue "Q Processes Memory Leak".
PROBLEM DESCRIPTION: After applying the 19.12 Patchset (19-JUL-2021), the Q processes take more and more memory and do not release it, which causes the PGA to grow larger and larger. If PGA_AGGREGATE_LIMIT=<null>, you will encounter ORA-04036; if PGA_AGGREGATE_LIMIT=0, the PGA can grow without limit and the Q processes will consume all server memory.
ORA-4036 - "PGA_AGGREGATE_LIMIT has been exceeded but some processes using the most PGA memory are not eligible to receive ORA-4036 interrupts." !!! this can cause user sessions to disconnect !!!
I have encountered the issue on 19.12 Linux and 19.12 Windows. The workaround below resolves the issue on both platforms.
CAUSE: Oracle explains that the 19.12 Patchset (19-JUL-2021) disables dequeue for AQ_SRVNTFN_TABLE_Q_1. It must be re-enabled in every patched CDB and in "ALL" of its PDBs.
WORKAROUND:
- Reset the Q Processes to free pga memory !!!
- Re-enable Q Processes Dequeue on CDB and "ALL" PDBs !!!
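The re-enable step can be sketched like this, assuming the notification queue lives in the SYS schema (the queue name comes from the cause description above; verify the queue owner in your environment):

```sql
-- Run as SYSDBA in the CDB root, then repeat in EVERY PDB
-- (e.g. ALTER SESSION SET CONTAINER = <pdb_name>; before re-running)
BEGIN
  DBMS_AQADM.START_QUEUE(
    queue_name => 'SYS.AQ_SRVNTFN_TABLE_Q_1',
    enqueue    => TRUE,
    dequeue    => TRUE);   -- dequeue is what the patch left disabled
END;
/
```

After re-enabling dequeue, restarting the Qnnn processes (or bouncing the instance once) frees the PGA they were holding.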
Thanks,
Marcelo Marques | Principal Product Engineer | Esri
OCP - Oracle Certified Professional
Answers
-
- When did problem occur & detected?
Aug/08/2021 first time, and every day since.
- Did the database/Application Crashed with this issue?
Yes, sessions were disconnected due to ORA-4036.
Now, after setting PGA_AGGREGATE_LIMIT=0, it is stable, but the PGA keeps taking more and more memory space due to the Q processes "BACKGROUND Streams EMON Regular Slave".
- Is this a PROD, QA, DEV, TEST Environment
development
- Is this a RAC environment or single instance?
No RAC - Oracle Grid Infrastructure for Standalone Server - HAS/ASM only with a single DB instance.
- Is this a Standby DG Instance?
No
- Is this an engineered system (Exadata, Database Appliance)?
No
-
I am seeing the same issue in 19.12 on Windows and 19.12 on Linux. The Q processes "BACKGROUND Streams EMON Regular Slave" keep taking more and more PGA memory without releasing it. It got to the point of consuming 40GB of memory on my server, so I had no choice but to shut the database instance down and start it up again to clear out the large Q processes. But I know it will only be a matter of days before those Q processes grow large again. : (
-
Note: What are the Q processes?
Qnnn - AQ Server Class Process (one per AQ master class server process)
Each server class process acts on behalf of an AQ master class process. This relationship is maintained until the master requires the services of a particular server process. Once released, the server class processes are moved to a free server pool.
Used by database instances, for Advanced Queueing.
-
----------------------------------------------------------------------------------------------------------------------
Oracle Support replied and proposed a workaround. I have implemented it and am monitoring the database instances to see if the Q processes continue to consume large amounts of memory without releasing it. Let's see if this workaround works.
----------------------------------------------------------------------------------------------------------------------
My Oracle Support Reply:
There is a known issue with Emon
Bug 9735536 - Enhancement request to allow selectively removing slow clients from the EMON notification mechanism (Doc ID 9735536.8)
Event Monitor (EMON) Slave Process Constantly Consuming CPU ( Doc ID 1603844.1 )
To permanently resolve the issue, if the release is pre-11.2.0.3, apply Patch 9735536.
In 11.2.0.4 onwards, do the following:
connect / as sysdba
alter system set "_client_enable_auto_unregister"=true scope=spfile;
shutdown immediate;
startup;
----------------------------------------------------------------------------------------------------------------------
The notes above refer to "11.2.0.4 onwards", so this is quite an old workaround and it might not work for 19.12, but let's see what happens.
----------------------------------------------------------------------------------------------------------------------
-
The workaround above suggested by Oracle Support did not work. The Q processes continue to take more and more memory without releasing it.
My 2 cents.
I asked Oracle Support to log a new bug for this issue.
When the 19.11 Patch was installed, this issue was not occurring.
Only after the 19.12 Patch was applied did the issue start to happen.
The 19.12 Patch has PGA memory leak fixes and other memory fixes as well, and some of those fixes might have introduced this new issue in 19.12.
--Database 19 Release Updates and Revisions Bugs Fixed Lists ( Doc ID 2523220.1 )
27933036 pga memory leak at koh-kghu call / kol vstring for dbms_aq.listen call
32066061 Memory Leak in "kxsclb in kxscf" Component Causing ORA-4030/ORA-4036
32033795 Unreasonable large memory allocations by query processing code
32338476 Memory Leak: auditpqreceive
I am experiencing the issue with 19.12 on Windows and 19.12 on Linux.
I asked Oracle Support to have Oracle Development investigate on both platforms.
-
Hello Marcelo,
Thanks for sharing this information
Best regards
Mohamed Houri
-
If anyone is experiencing the same issue, please log an SR with Oracle Support. I'm still working with Oracle Support, but we could not find a workaround and I don't think there will be one. I am still trying to get Oracle Support to log a new bug for this issue so that Oracle Development can look into it and provide a one-off patch.
-
Hello,
When you have set pga_aggregate_limit > 0, you have to know that processes launched by the SYS user are not capped by this limit and can go beyond it without being stopped by MMON (only insufficient OS memory will stop them). This is why Oracle has shown the following message:
"PGA_AGGREGATE_LIMIT has been exceeded but some processes using the most PGA
memory are not eligible to receive ORA-4036 interrupts."
If you have not changed the _pga_limit_dump_summary parameter, then any ORA-04036 error will have a corresponding trace file dumped into the trace directory. Check the alert log to find the corresponding trace file name. If you can find this trace file, then you can clearly identify which process consumed most of the PGA, in which heap, and for what allocation reason. Supply this information to Oracle Support.
After you have set pga_aggregate_limit to 0, processes launched by non-SYS users are no longer subject to ORA-04036 and, of course, this will increase the total PGA consumption, since neither SYS nor non-SYS processes are then constrained by a PGA limit value.
You have a pga_aggregate_target of 24GB and a small over-allocation count (0.04). What is the SGA size? How much RAM does the machine have? Are you using huge pages? If so, what percentage of them is in use?
sar -H
Are the SQL Tuning Advisor and Space Advisor automatic jobs enabled?
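Two of those checks can be sketched in SQL (assuming sufficient privileges; both are standard data dictionary views):

```sql
-- Where ORA-04036 trace files are written
SELECT value AS trace_dir
FROM   v$diag_info
WHERE  name = 'Diag Trace';

-- Status of the automatic maintenance jobs mentioned above
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name IN ('sql tuning advisor', 'auto space advisor');
```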
Best Regards
Mohamed Houri
-
Mohamed, I have identified the cause: it is the Q processes that keep taking more and more server memory without releasing it. After I set PGA_AGGREGATE_LIMIT=0, the PGA can grow without limit and ORA-4036 no longer occurs, but the Q processes continue to take more and more server memory until all of it is depleted. The only alternative is to shut down and start up the database instance to clear the Q processes.
I just got off the phone with Oracle Support, and Oracle Development is investigating the issue.
Thanks,
Marcelo Marques | Principal Product Engineer, Esri | OCP - Oracle Certified Professional
-
Hello,
If anyone applied the 19.12 Patchset (19-JUL-2021) and is now experiencing the same issue with the PGA memory getting very large even though the workload has not changed, that is because the Q processes have a memory leak: they keep taking more and more memory, which makes the PGA grow larger and larger.
If PGA_AGGREGATE_LIMIT is "unset", the Oracle instance determines the PGA limit value at instance startup. If the PGA grows and reaches that limit, ORA-4036 errors will occur and user sessions might get disconnected; in that case it will be necessary to shut down and start up the instance to clear the problem, but the Q process memory leak will continue. As a temporary workaround, set PGA_AGGREGATE_LIMIT=0. This allows the PGA to grow without limit, which means the Q process memory leak will take more and more memory until it consumes all server memory, at which point ORA-4036 can occur again because there is no free server memory left for the PGA to grow into. So you will need to monitor the PGA size and the Q process sizes.
I have a 19.12 CDB with 30 PDBs on Linux and Windows, I am "not" using Advanced Queueing or Streams, and the Q process memory leak issue still happens.
I am experiencing the Q process memory leak problem with 19.12 on Linux and 19.12 on Windows. I am still working with Oracle Support on this issue, and there is no workaround. The issue does not happen on the 19.11 Patchset (19-APR-2021), so an alternative is to roll back the 19.12 Patchset until a one-off patch for 19.12 is available; eventually the fix should make it into the 19.13 Patchset (19-OCT-2021).
If you encounter the same problem, create an SR with Oracle Support.
You will need to provide the logs described in this Oracle Support article:
SRDC - Unidentified Memory Issues Checklist of Evidence to Supply (Doc ID 1909842.1)
And also attach to the SR (see below) 3 days of Q process heap dump logs.
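Collecting a heap dump for one Q process can be sketched from SQL*Plus with ORADEBUG (the OS PID 22773 is taken from the process listing earlier in the thread; confirm the exact heap dump level to use with Oracle Support):

```sql
-- As SYSDBA, attach to one of the Qnnn OS processes and dump its heaps
ORADEBUG SETOSPID 22773
ORADEBUG UNLIMIT
ORADEBUG DUMP HEAPDUMP 1       -- level 1 = top-level PGA heap summary
ORADEBUG TRACEFILE_NAME        -- prints the trace file name to attach to the SR
```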
Thanks,
Marcelo Marques | Principal Product Engineer, Esri | OCP - Oracle Certified Professional