
Java Development Tools


Implementing lazy loading of task flows

Vikul Aggarwal-OracleJul 2 2013 — edited Jul 11 2013

I have a dashboard which has multiple regions. These regions are implemented as task flows inside showDetailFrame components and can be individually expanded or collapsed. Whenever a region is expanded or collapsed, the framework stores this in the personalisation content, and the state of that region is retained on the next login. I want to implement lazy loading of the task flows on my dashboard, so that if a particular region is found collapsed upon login, its task flow is not fired at all. I can't find an approach to start with. What I have in mind is: if I could determine the state of the components before the page is rendered, I could control the activation of the task flow. But for that, I'd need to access the MDS-related data. Is there an API with which I can access that?

Comments

Tubby

968361 wrote:

Hi All,

I have an application where I have 15 DBMS Scheduler jobs running to process 500K transactions per hour. Each job processes a specific set of transactions, so locking is not an issue. I have a sequence with a cache size of 2000 and an increment of 1.

Now I have a limit that only 14 million transactions can be processed in a day (5 million Type1, 2 million Type2, 3 million Type3, etc.).

I have 15 sessions running to process these transactions. I am bulk fetching transactions with a limit of 1000, of random types (say 200 transactions of Type1, 500 of Type2, 300 of Type3). Different types of transactions use different sequences, e.g. Type1 uses the TYPE1 sequence, Type2 uses the TYPE2 sequence, and so on.

But what I see is that each time I query the sequence it jumps by 2000. So I have gaps in my sequence, and I am consuming my 14 million limit without even processing 800K transactions.

Is it because:

1) Multithreading, as 15 different sessions are trying to process transactions (different sequence types) and each session keeps a cache of 2000?

2) High cache size?

How can I avoid this issue?

Regards

Vinod

Have a look at Justin's reply in the following thread:

Oracle 10g sequence cache aging

One option would be to lower the cache to something like 100 and benchmark what that does to your process. I've used high caches before for some heavy loading, but typically 100 vs 2000, etc. isn't going to be the sticking point for any performance issues.
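For reference, lowering the cache is a one-line change (the sequence name below is just a placeholder):

-- Reduce the cache from 2000 to 100, then re-run the benchmark.
alter sequence my_sequence cache 100;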

Cheers,

Paulzip

The library cache is probably ageing out the sequence due to limits on memory allocation in the SGA. In that case it will have to re-read the sequence and cache another 2000 sequence values each time.

You could pin the sequence in the SGA using dbms_shared_pool.keep, which should lessen that, at the cost of using up some of the SGA.
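For example, a minimal sketch of pinning (the owner and sequence name are placeholders; the 'Q' flag tells dbms_shared_pool.keep the object is a sequence):

-- Requires execute privilege on sys.dbms_shared_pool.
exec dbms_shared_pool.keep('OWNER.SEQUENCE_NAME', 'Q');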

L. Fernigrini
But what I see is that each time I query the sequence it jumps by 2000. So I have gaps in my sequence, and I am consuming my 14 million limit without even processing 800K transactions.

How are you querying this?

Sequence values are stored in the database. I do not know the exact internal details, but imagine there is a table that holds the sequence values, something like:

CREATE TABLE sys.SequenceValues (SEQUENCE_ID NUMBER, SEQUENCE_VALUE NUMBER);

When you do not use a cache, each time a session requires a sequence value, Oracle searches for the value in this "table", reads SEQUENCE_VALUE, adds 1, updates the value (committing the update so other sessions see the last value used) and returns the value to the session.

When you use a cache, the process is pretty much the same. It works like this:

1) A session asks for a sequence number.

2) Oracle does not have that sequence in memory, so it searches for the sequence in the table, reads SEQUENCE_VALUE, adds N (N being the cache size), saves the new value to the sequence values table (internally "commits" that change), stores those N values in the cache, and returns the first one to the session.

3) Another session asks for the sequence; since it is in the cache, Oracle returns the next available number without having to read the value from the table or update the table with the last used value.

4) Repeat until N requests have been made.

5) On request N+1, Oracle repeats step 2.

So, if you query USER_SEQUENCES or ALL_SEQUENCES, you will see that LAST_NUMBER jumps in intervals of N, and that is OK since the numbers between those intervals are in the cache.

From the docs (https://docs.oracle.com/cd/B28359_01/server.111/b28320/statviews_2053.htm#REFRN20244 )

LAST_NUMBER (NUMBER, NOT NULL): Last sequence number written to disk. If a sequence uses caching, the number written to disk is the last number placed in the sequence cache. This number is likely to be greater than the last sequence number that was used.
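For example (the sequence name below is a placeholder), you can watch LAST_NUMBER jump by the cache size:

-- LAST_NUMBER moves by CACHE_SIZE each time a new block of values is cached.
select sequence_name, cache_size, increment_by, last_number
from   user_sequences
where  sequence_name = 'MY_SEQUENCE';
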
L. Fernigrini

The OP mentions inserting 500K rows per hour (139 per second). Assuming there are 10 different sequences, that would mean each one is used roughly 14 times per second; I doubt the library cache would age out something that is used so frequently...

968361

Thanks for your feedback.

Even if I just select sequencename.currval from dual at regular intervals (say after each fetch), I can see the sequence has jumped by 2K. But I have not processed that many transactions. What confuses me even more is that if the sequence is used so frequently, how can it age out so easily that the system has to re-read it each time? I doubt the library cache is the problem.

L. Fernigrini

Just in case, can you post the syntax of one sequence that has the problem? If you are querying CURRVAL then it should not jump by 2K...

Jonathan Lewis

You can't select CURRVAL if you haven't previously selected NEXTVAL, so you need to be more precise in describing what you're doing in the test.

How sure are you of the sequence definition? Do the following from SQL*Plus and show us the results.

set long 20000

select dbms_metadata.get_ddl('SEQUENCE','{name of sequence}','{name of sequence owner}') from dual;

The "names" should be in upper case (unless you've delinerately forced the database into using lower case too; and don't include the curly brackets.

Regards

Jonathan Lewis

Paulzip
Answer

L. Fernigrini wrote:

The OP mentions inserting 500K rows per hour (139 per second). Assuming there are 10 different sequences, that would mean each one is used roughly 14 times per second; I doubt the library cache would age out something that is used so frequently...

You are assuming these scheduler processes are the only ones using the SGA. The instance could be managing loads of other sessions in a busy environment, all accessing different objects, all fighting over the library cache in the SGA.

You're also assuming the scheduler processes are uniformly processing an even distribution of records across the hour. They might not be. The database could be on 12.2 or above Standard Edition, which now has concurrent thread limits, so there might be lots of waits, and ageing out because of that.

You're also assuming the SGA is sized adequately, it might not be.

The skipping of a proportion of a sequence's cache size is classic library cache ageing. I'm just going with the usual reason.

The OP might need to run some instance monitoring to determine whether or not that is the reason.

Marked as Answer by 968361 · Sep 27 2020
L. Fernigrini

You are assuming these scheduler processes are the only ones using the SGA. The instance could be managing loads of other sessions in a busy environment, all accessing different objects, all fighting over the library cache in the SGA.

No, not at all. But I assume that if the SGA were under really strong pressure, the OP would be worried about the process not finishing or taking too long, not about sequence values jumping...

You're also assuming the scheduler processes are uniformly processing an even distribution of records across the hour. They might not be. The database could be on 12.2 or above Standard Edition, which now has concurrent thread limits, so there might be lots of waits, and ageing out because of that.

I'm not assuming; I'm taking into account what the OP mentioned:

               I am bulk fetching transactions with a limit of 1000, of random types (say 200 transactions of Type1, 500 of Type2, 300 of Type3).

So I understand that each bulk of 1000 random operations is distributed in roughly the same way as the whole set of data... Not a perfect distribution, but the OP mentions 50%, 30% and 20% as examples; he did not mention 50%, 49% and 1%.

You're also assuming the SGA is sized adequately, it might not be.

That's true.

The skipping of a proportion of a sequence's cache size is classic library cache ageing. I'm just going with the usual reason.

Yes, I've seen this, but not when there are 15 concurrent processes frequently using those cached sequences... That would suggest severe pressure on the library cache. The OP did not mention that, but it could be possible.

The OP might need to run some instance monitoring to determine whether or not that is the reason.

Yes, that could be a way to identify the reason for the problem.

968361

As requested,

Below is the syntax of the sequence

create sequence TRSQ_XYZDD2L

minvalue 1

maxvalue 9999999999999999999999999999

start with 17156001

increment by 1

cache 2000;

968361

start with 17156001  --> it was 1 at the beginning

Jonathan Lewis

This may be the syntax of the sequence that was used to create the sequence, but you don't seem to have done as I asked and checked what is currently IN the database, or the output would have been more like:

   CREATE SEQUENCE "TEST_USER"."TRSQ_XYZDD2L" MINVALUE 1 MAXVALUE 9999999999999999999999999999
   INCREMENT BY 1 START WITH 17156001 CACHE 2000 NOORDER NOCYCLE NOKEEP NOSCALE GLOBAL

However it looks as if you may be doing some stress testing of a development version of your application rather than trying to address a production problem, so I'll assume it's accurate.

What version of Oracle are you running? This may make a difference.

PaulZip's comment about the sequence being pushed out of the library cache due to pressure from other objects is the most likely (though slightly unexpected, given the rate of use) reason. If you are doing stress tests it's possible that you've got some other code going on that's constantly generating and optimising previously unseen SQL and demanding space in the library cache to do so. In that case PaulZip's suggestion to use dbms_shared_pool.keep() to keep the sequence as soon as it is loaded is probably a workaround.

Other reasons for pressure on the library cache / shared pool -

  •   If your application (or the Oracle scheduler) is leaking cursors (i.e. opening them but not managing to close them properly) then you could be losing lots of memory in the library cache.
  •   If you've hit an Oracle bug with the scheduler and the re-use of child cursors, you may find that the scheduler jobs are constantly creating new child cursors for the same parent, demanding huge amounts of space in the library cache for each of your important statements.

If you're licensed for the Performance and Diagnostics packs then an AWR report should give you some clues about memory usage and any strange behaviour of SQL in your system (e.g. "SQL ordered by version count", "SQL ordered by parse calls").
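If AWR is not available, here is a rough sketch of a direct look at the same thing in the shared pool (V$SQLAREA is a standard view; the row limit is arbitrary):

-- Statements with many child cursors or parse calls put pressure on the library cache.
select sql_id, version_count, parse_calls, executions
from   v$sqlarea
order  by version_count desc
fetch  first 20 rows only;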

Regards

Jonathan Lewis

968361

Thanks, the DB details are:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SGA: 21 GB

Yes, you are correct, it is a testing environment to see if we can handle a load of 500K transactions per hour. Currently we are processing 150K transactions per hour, but we are checking the feasibility of how much we can process.

Meanwhile, I will check your other recommendations.

Regards

Vinod

968361

Hi,

shared_pool_reserved_size is 5% of the SGA, whereas in other environments it is 7%. Does this parameter have anything to do with caching?

Paulzip

Yes, shared_pool_size and shared_pool_reserved_size come into play regarding the library cache reload ratio and library cache hit ratio.

Reload % :

In an ideal world, for optimal performance, you'll want to keep the library cache reload ratio as close to zero as possible. This is the ratio of how often something needs to be reloaded back into memory after being aged out. If this is above roughly 1%, then SHARED_POOL_SIZE can be increased to mitigate it.

Hit Ratio % :

Likewise, you want the library cache hit ratio (how often something has been cached and can be reused from the library quickly) somewhere above 95 percent. If the library cache hit ratio comes in below 95 percent, then the SHARED_POOL_SIZE parameter may need to be increased.

Here's a query to give you these:

select sum(pins) hits,
       sum(reloads) misses,
       round(sum(reloads) / nullif(sum(pins), 0) * 100, 3) reload_pct,
       round(sum(pins) / (sum(pins) + sum(reloads)) * 100, 3) hit_ratio_pct
from   v$librarycache;

My advice would be to look at the individual sequence's information to see whether it is being aged out (the owner and name literals must match the stored, normally upper-case, names):

select *
from   v$db_object_cache
where  owner = 'SEQUENCE_OWNER'
and    name  = 'SEQUENCE_NAME';
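As a follow-up sketch, the LOADS and KEPT columns are the interesting ones: if LOADS keeps climbing between checks, the object is being aged out and re-read, and KEPT = 'YES' confirms a dbms_shared_pool.keep pin took effect (owner and name are placeholders again):

-- Watch LOADS over time; a steadily increasing value means the sequence is being reloaded.
select name, type, loads, pins, kept
from   v$db_object_cache
where  owner = 'SEQUENCE_OWNER'
and    name  = 'SEQUENCE_NAME'
and    type  = 'SEQUENCE';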

Mike Kutz

968361 wrote:


Yes, you are correct, it is a testing environment to see if we can handle a load of 500K transactions per hour. Currently we are processing 150K transactions per hour, but we are checking the feasibility of how much we can process.

It will depend heavily on your process.

If your process is simple, you can do 500k in under 1 sec... IF YOU PROCESS IN BULK.

Worst. Method. Ever. (Comic Book Guy from The Simpsons.)

loop

    insert ...

    commit;

end loop;

(there are actually worse ones...)

Due to how COMMIT operates (it won't return until the data is safely stored), your fastest transaction rate will be heavily determined by c (the speed of light).

  • On HDDs, this will be about 200/sec (the current observed rate)
  • On SSDs, it will be slightly higher.

At that point, your options are

  • Change the speed of light.
  • Change your method.

Again, the key is to operate in bulk.

If you can process the data with INSERT..SELECT (or MERGE), do that.
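As a hedged sketch of what set-based processing could look like (the table and column names here are made up for illustration, and the sequence name is a placeholder):

-- One statement processes a whole batch instead of committing one row at a time.
insert into processed_transactions (txn_id, txn_type, processed_at)
select type1_seq.nextval, s.txn_type, sysdate
from   staging_transactions s
where  s.txn_type = 'TYPE1';

commit;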

My $0.02

MK
