
Discussions

DB resource manager 12.2


Answers

  • User_RXNCF
    User_RXNCF Member Posts: 51 Red Ribbon
    edited Aug 25, 2017 1:30PM

    rp, thanks for the info... somehow I am still confused.

    CDB resource plan: the one with which we can assign shares to PDBs. I am not worried about PDB resource plans, with which we can create consumer groups.

    At first, for the CDB resource plan, the documentation says:

    With the Resource Manager, you can:

    • Specify that different PDBs should receive different shares of the system resources so that more resources are allocated to the more important PDBs
    • Limit the CPU usage of a particular PDB
    • Limit the number of parallel execution servers that a particular PDB can use
    • Limit the memory usage of a particular PDB
    • Specify the amount of memory guaranteed for a particular PDB
    • Specify the maximum amount of memory a particular PDB can use
    • Use PDB performance profiles for different sets of PDBs. A performance profile for a set of PDBs can specify shares of system resources, CPU usage, and number of parallel execution servers. PDB performance profiles enable you to manage resources for large numbers of PDBs by specifying Resource Manager directives for profiles instead of individual PDBs.
    • Limit the resource usage of different sessions connected to a single PDB
    • Limit the I/O generated by specific PDBs
    • Monitor the resource usage of PDBs

    Below it says the following, so I want to know where we are controlling memory in the CDB resource plan:

    The directives control allocation of the following resources to the PDBs:

    • CPU
    • Parallel execution servers

    Check Table 46-2, Utilization Limits for PDBs.
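    For illustration, here is a minimal sketch of a CDB resource plan that assigns shares and a CPU utilization limit to two PDBs. The plan name (my_cdb_plan), the PDB names (pdb1, pdb2), and all numbers are placeholders I made up for the example, not anything from the documentation quoted above.

    -- Run as a common user in CDB$ROOT (sketch only; names and values are examples)
    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
        plan    => 'my_cdb_plan',
        comment => 'Example CDB plan with shares and utilization limits');
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
        plan                  => 'my_cdb_plan',
        pluggable_database    => 'pdb1',
        shares                => 3,      -- higher priority under CPU contention
        utilization_limit     => 100,    -- no hard CPU cap for pdb1
        parallel_server_limit => 100);
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
        plan                  => 'my_cdb_plan',
        pluggable_database    => 'pdb2',
        shares                => 1,
        utilization_limit     => 50,     -- pdb2 capped at 50% of the CPU
        parallel_server_limit => 50);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
    END;
    /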

    Memory

    Several initialization parameters can control the memory usage of a PDB. For example, the SGA_TARGET initialization parameter limits the PDB's SGA usage, and the PGA_AGGREGATE_LIMIT initialization parameter limits the PDB's PGA usage.

    See "Initialization Parameters That Control Memory for PDBs" for more information about this resource.

    CPU

    The sessions connected to a PDB cannot exceed the CPU utilization limit for the PDB.

    This utilization limit for CPU is set by the utilization_limit parameter in subprograms of the DBMS_RESOURCE_MANAGER package. The utilization_limit parameter specifies the percentage of the system resources that a PDB can use. The value ranges from 0 to 100.

    You can also limit CPU for a PDB by setting the initialization parameter CPU_COUNT. For example, if you set CPU_COUNT to 8, then the PDB cannot use more than 8 CPUs at any time. If both utilization_limit and CPU_COUNT are specified, then the more restrictive (lower) value is enforced.
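    For the CPU_COUNT side, a minimal sketch of capping a single PDB; the PDB name and value are made up, and as the passage above says, if utilization_limit is also set the lower of the two limits wins.

    -- Switch to the target PDB and cap its CPU threads
    ALTER SESSION SET CONTAINER = pdb1;
    ALTER SYSTEM SET CPU_COUNT = 2;   -- hard cap: at most 2 CPU threads for this PDB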

  • FRivasF-Oracle
    FRivasF-Oracle Member Posts: 11 Employee
    edited Sep 1, 2017 5:36AM

    Hi,

    Regarding what you said: "if we are using PDB parameters for memory and IO, we can better use the cpu_count PDB parameter for CPU caging, I believe; let me know your inputs."

    You are missing the point. The goal is to make the most of the system's CPU, so you don't need to proactively limit the CPU an instance uses. If your machine has 10 CPUs, you have 2 PDBs, and the first PDB is idle, then the second PDB should be able to use 10 CPUs. With your parameter approach, it would use 5 (if distributed evenly). The CDB resource manager lets you get the most out of your hardware with some guarantees.
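    To make that concrete, the shares-based approach is just a CDB plan like the earlier sketch (plan name 'my_cdb_plan' carried over from that example) with no utilization_limit caps; shares only arbitrate CPU when the system is saturated, so an idle PDB's CPU stays available to the busy ones.

    -- In CDB$ROOT: activate the CDB plan; shares only matter under CPU contention
    ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'my_cdb_plan';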

    Hope it helps

  • User_RXNCF
    User_RXNCF Member Posts: 51 Red Ribbon
    edited Aug 31, 2017 11:15AM

    frivas,

    that's a good point.

    cpu_count is for when we want to put a hard limit on resources.

    I am testing all the scenarios, but I am not getting an answer on whether the DB resource manager (not PDB init parameters) can control memory and IO with the shares parameter on non-engineered systems.

    Thanks

  • User_FD0WJ
    User_FD0WJ Member Posts: 1 Employee
    edited Aug 31, 2017 1:52PM

    Hi,

    There are significant improvements in 12.2 in terms of resource management at the PDB level. Before I briefly explain them individually, I should mention that managing memory, CPU, and I/O are all possible from a PDB's point of view.

    CPU Management: In 12.1, per-PDB CPU management was possible via the CDB resource plan, in which you have to allocate CPU in terms of percentages. In 12.2, it is now possible to set a per-PDB CPU limit using a PDB-level parameter called "cpu_count". Setting the CPU limit in terms of a number of threads rather than percentages has certain advantages. For example, a CPU limit of 50% on an X4-2 makes a different number of threads available to the PDB than a CPU limit of 50% on an X6-2.

    Memory Management: In 12.2, per-PDB memory management capabilities are now available. The following parameters can now be set at the PDB level:

    SGA_TARGET

    SGA_MIN_SIZE

    SHARED_POOL_SIZE

    PGA_AGGREGATE_LIMIT

    PGA_AGGREGATE_TARGET

    "sga_min_size" is a new parameter that can be used to allocate guaranteed memory among PDBs. The recommendation is to use this parameter with low density or mission critical applications since setting this parameter in other use cases might limit the sharing of memory between PDBs.

    I/O Management: In 12.1, I/O management was only possible with Oracle Exadata and Oracle SuperCluster engineered systems. In 12.2, this restriction is no longer there. In other words, it is now possible to impose rate limits for PDBs on non-Exadata storage using two new PDB-level parameters. These two parameters, MAX_IOPS and MAX_MBPS, can be dynamically set and altered in a PDB. (If they are set in CDB$ROOT or on Exadata storage, the operation won't be permitted and an error message will be returned.)
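    A minimal sketch of those two I/O limits set from inside a PDB; the PDB name and the numbers are made-up examples, and both parameters are dynamic.

    -- Inside the target PDB, on non-Exadata storage
    ALTER SESSION SET CONTAINER = pdb1;
    ALTER SYSTEM SET MAX_IOPS = 1000;   -- cap I/O operations per second for this PDB
    ALTER SYSTEM SET MAX_MBPS = 200;    -- cap I/O megabytes per second for this PDB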

    Thanks,

    Can

  • FRivasF-Oracle
    FRivasF-Oracle Member Posts: 11 Employee
    edited Sep 1, 2017 6:07AM

    Hi 3435889,

    CPU caging is not a substitute for CPU resource management but a complement, and I also think there is a very limited set of scenarios where caging should be defined along with CPU resource management. In that case you can define an inter-PDB CPU management plan and also cpu_count hard limits, if you want.

    Notice that you can also configure inter-PDB CPU resource management with HARD limits. You have three types of inter-PDB CPU management plans:

    1.- Default: All PDBs can use 100% of CDB CPU. When the system is at 100% CPU, all PDBs have the same shares/priority.

    2.- MINIMUMS: All PDBs can use 100% of CDB CPU. When the system is at 100% CPU, some PDBs will get more minimum/guaranteed CPU than others.

    3.- MINIMUM + MAXIMUM: Some PDBs will never be able to use 100% of CDB CPU and will be hard/proactively limited to an x% of total CDB CPU capacity, independently of the system load. Also, when the system is at 100% CPU, some PDBs will get more minimum/guaranteed CPU than others.

    So there you go; no PDB CPU caging is needed to put hard limits on CPU usage. You can also fine-control the CPU consumption of the PDBs' parallel execution servers. Those can be adjusted with parallel_server_limit as a total % of CDB capabilities, which can be overallocated, and prioritized with the shares assigned to each PDB. You even have parallel_degree_limit as a hard limit for the PDB DOP. And finally, if you do not want to inherit the values from cpu_count in some dependent parameters, you can manually override those as convenient.
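    To illustrate the plan types and the parallel-server control, here is a sketch with one directive per style; the plan name, the PDB names, and all numbers are invented for the example.

    -- Sketch: one directive per plan style (run in CDB$ROOT inside a pending area)
    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
        plan => 'mixed_cdb_plan', comment => 'minimums and maximums example');
      -- MINIMUMS style: shares guarantee a slice under contention, no hard cap
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
        plan => 'mixed_cdb_plan', pluggable_database => 'pdb_gold',
        shares => 4, utilization_limit => 100, parallel_server_limit => 100);
      -- MINIMUM + MAXIMUM style: shares plus a hard cap, independent of load
      DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
        plan => 'mixed_cdb_plan', pluggable_database => 'pdb_bronze',
        shares => 1, utilization_limit => 30, parallel_server_limit => 30);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
    END;
    /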

    So in my opinion, there are indeed some scenarios not covered by this configuration, but not many.

    Regards,

  • User_RXNCF
    User_RXNCF Member Posts: 51 Red Ribbon
    edited Sep 1, 2017 10:35AM

    Francis,

    I understand CPU caging will put hard limits. I am looking at implementing only one of them: DBRM or setting init parameters at the PDB level. Implementing and maintaining both types, I feel, is a maintenance burden for the DBA.

    As DBRM cannot control memory and IO with the shares parameter on non-engineered systems, and it can control CPU and IO (correct me if this is wrong) on Exadata, I feel I'd better go with init parameters, as they can control all of CPU, memory, and IO.

    I wish DBRM had parameters to control memory and IO the way it does for CPU.

    Thanks

  • FRivasF-Oracle
    FRivasF-Oracle Member Posts: 11 Employee
    edited Sep 4, 2017 3:24AM Answer ✓

    Hi 3435889,

    "implementing and having two types again I feel maintenance burden for DBA."->  Yes, I can understand that.

    There are some good reasons for the memory and IO resources to be out of the CDB resource manager. Regarding memory, as you know, it is not always possible to shrink an SGA memory buffer at any time. Depending on the load, an instance will allow you to shrink the buffer cache to give it to the shared pool (for instance), or not. So you can't define a strictly load-quantitative DBRM rule for memory; you might want to transfer x MB from this PDB to the other, but that memory might be in use and DBRM would be unable to honor your rule. You can indeed do it proactively, before it is used. You can define minimums as a safety net. But beyond that, you depend on the quantitative and qualitative kind of load.

    IO is also not an easy resource to manage, especially if you don't own it. In the Oracle Engineered Systems we know the whole IO capacity of the hardware, and we have 100% control over it. So it is easy to program the cell software to talk with the database and distribute that IO capacity based on a priority. But we can't control its behavior if you use third-party storage. We also can't make the PDBs talk to each other to distribute the load in any particular way, because we don't know the absolute storage capacities and we might be crippling them. We already have an idea of them (system statistics), but third-party storage performance is really dynamic and changes depending on many factors. On the other hand, we can limit the PDB IO to a fixed amount, based on your own DBA knowledge of the environment.
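    On the monitoring side (the "Monitor the resource usage of PDBs" bullet quoted at the top of the thread), a quick way to see per-PDB CPU and I/O consumption is the Resource Manager metric view; I am quoting the view and column names from memory of the 12.2 reference, so treat them as an assumption and verify them in your version.

    -- Per-PDB Resource Manager metrics for the last interval (verify column names in your release)
    SELECT con_id,
           cpu_consumed_time,      -- CPU time consumed by the PDB in the interval
           avg_running_sessions,   -- average sessions actually running on CPU
           iops,                   -- I/O operations per second
           iombps                  -- I/O megabytes per second
      FROM v$rsrcpdbmetric
     ORDER BY con_id;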

    Regards,

This discussion has been closed.