
Quiz for "Learning Python for PL/SQL Developers: Part 4"

Arup Nanda_2 · Oct 14 2016 — edited Feb 17 2017

by Arup Nanda

Questions

1. We have a simple class with just one attribute: empId. Here is how we define the class and create an instance called emp1 of that class. But it produces an error. Why?

# q1.txt
>>> class employee:
...    def __init__ (self, empId):
...       empId = self.empId
...
>>> emp1 = employee(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __init__
AttributeError: 'employee' object has no attribute 'empId'

We have explicitly defined an attribute named empId; so why is it complaining that there is no attribute by that name?

2. What will be the output of the following:

#q2.txt
>>> class employee:
...    def __init__ (self, empId):
...       self.empId = empId
...
>>> emp1 = employee(1)
>>> emp1.__dict__['deptNo'] = 10
>>> print (emp1.empId + len(emp1.__dict__))

3. You want to define a function that doesn't accept any parameters. So, you write this:

def printMe:

But it produced an error. Why?

4. Consider the following code:

#q4.txt
def printMe():
   print("In the function line 1")
print("In the function line 2")
print("About to call the function")
printMe()
print("After calling the function")

When you execute the program, will the output be as follows?

About to call the function
In the function line 1
In the function line 2
After calling the function

5. Here is a function to return the sum of two integer values:

#q5.txt
def mySum(p1,p2):
   p3 = 0
   print("Inside the function mySum")
   return p3
   p3 = p1 + p2

What will be the output of the following?

print("About to call the function")

print('1 + 2 = ', mySum(1,2))

print("After calling the function")

6. What will be the result of the following code?

#q6.txt
myVar = 1

def myFunc():
   myVar = myVar + 1
   print('Inside function myVar=', myVar)

myVar = myVar + 1
myFunc()
print('Outside function myVar=', myVar)

7. Here is a program:

#q7.txt
def myMax(p1, p2, p3, p4):
    print ("The maximum value is ", max(p1,p2,p3,p4))

myList = [5,2,7,1]
myMax(*myList)

What will be the result? If it produces an error, explain where and why.

Answers

1. The error is due to the following:

empId = self.empId

This line reads self.empId (the supposed attribute, which does not exist yet) and assigns it to empId, which is treated here as a plain local variable. That is the reverse of what was intended. The correct usage is the following:

self.empId = empId
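For reference, here is a minimal corrected sketch (the final print is mine, added only to confirm the attribute now exists):

# q1a.txt (hypothetical file name)
class employee:
   def __init__ (self, empId):
      self.empId = empId

emp1 = employee(1)
print(emp1.empId)   # prints 1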

2. The output will be 3. Here is the logic:

  • emp1.__dict__ is a dictionary of all attributes of the instance emp1 of the class employee. Initially we had only one attribute: empId.
  • Then we added another attribute: deptNo. So len(emp1.__dict__) is now 2.
  • emp1.empId is 1, so the expression evaluates to 1 + 2 = 3 (see the sketch below).
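Here is a short sketch that makes the arithmetic visible; the inline comments are mine, not part of the original quiz:

# q2a.txt (hypothetical file name)
class employee:
   def __init__ (self, empId):
      self.empId = empId

emp1 = employee(1)
print(emp1.__dict__)                      # {'empId': 1}
emp1.__dict__['deptNo'] = 10
print(emp1.__dict__)                      # {'empId': 1, 'deptNo': 10}
print(emp1.empId + len(emp1.__dict__))    # 1 + 2 = 3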

3. The correct syntax is this:

def printMe():

You need to use the parentheses even if no parameters are expected.
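A minimal sketch of a working definition and call (the function body is a hypothetical placeholder):

# q3a.txt (hypothetical file name)
def printMe():
   print("Nothing was passed in")

printMe()   # the call needs the parentheses as well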

4. The output will be different. Note the third line of the program:

print("In the function line 2")

This line is not indented, so Python treats it as top-level code outside the function; it runs as soon as the script executes, before the function is ever called. Therefore the output will be this:

In the function line 2
About to call the function
In the function line 1
After calling the function
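If the intent was for both lines to belong to the function, indenting the second print produces the output shown in the question; a corrected sketch:

# q4a.txt (hypothetical file name)
def printMe():
   print("In the function line 1")
   print("In the function line 2")   # now indented, so inside the function

print("About to call the function")
printMe()
print("After calling the function")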

5. It will not print the correct result. Note that the return statement comes before the line p3 = p1 + p2. Statements after a return are simply never executed, but this does not raise an error. Therefore, the line p3 = p1 + p2 never runs, and the function returns the value p3 had at the time of the return, which is 0. The output will be this:

About to call the function
Inside the function mySum
1 + 2 =  0
After calling the function
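To get the intended sum, compute p3 before returning it; a corrected sketch:

# q5a.txt (hypothetical file name)
def mySum(p1,p2):
   print("Inside the function mySum")
   p3 = p1 + p2     # compute first...
   return p3        # ...then return

print('1 + 2 = ', mySum(1,2))   # now prints: 1 + 2 =  3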

6. The code will produce an error (an UnboundLocalError). Look at the line myVar = myVar + 1 inside the function. Because the assignment happens inside the function, Python treats myVar as a local variable whose scope is the function only, and that local variable is read before it has been initialized. The intent was probably to update the myVar variable that is defined globally, but Python doesn't know that. If you want to update the global variable myVar, simply add a line inside the function:

global myVar

Here is the complete updated code:

#q6a.txt
myVar = 1

def myFunc():
   global myVar
   myVar = myVar + 1
   print('Inside function myVar=', myVar)

myVar = myVar + 1
myFunc()
print('Outside function myVar=', myVar)

Now the output is this:

Inside function myVar= 3
Outside function myVar= 3

Note how the variable has the same value inside and outside the function.

7. The code will not result in an error. It will print "The maximum value is  7". The * in the call unpacks myList into four separate positional arguments; the list holds four numbers, and the function expects exactly that many. If you remove the * in the call, the code fails with the following error:

TypeError: myMax() missing 3 required positional arguments: 'p2', 'p3', and 'p4'

That's interesting but logical. Without the *, myList is passed as a single parameter, which happens to be of type list. The function expects four parameters, not one; hence the error.
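The unpacking behavior is easy to see with built-in functions alone; a small sketch:

# q7a.txt (hypothetical file name)
myList = [5,2,7,1]
print(*myList)        # unpacked: behaves like print(5, 2, 7, 1)
print(myList)         # not unpacked: one argument, prints [5, 2, 7, 1]
print(max(*myList))   # 7 -- four separate arguments
print(max(myList))    # 7 -- max also accepts a single iterable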

Back to the Part 4 article.

About the Author

Arup Nanda (arup@proligence.com) has been an Oracle DBA since 1993, handling all aspects of database administration, from performance tuning to security and disaster recovery. He was Oracle Magazine's DBA of the Year in 2003 and received an Oracle Excellence Award for Technologist of the Year in 2012.

Comments

Tubby

968361 wrote:

Hi All,

I have an application with 15 DBMS Scheduler jobs running to process 500K transactions per hour. Each will be processing a specific set of transactions, so locking is not the issue. I have a sequence with a cache size of 2000, incremented by 1.

Now I have a limit that only 14 million transactions can be processed in a day (5 million Type1, 2 million Type2, 3 million Type3, etc.).

I have 15 sessions running to process these transactions. I am bulk fetching transactions with a limit of 1000, of random types (say 200 transactions of Type1, 500 transactions of Type2, 300 transactions of Type3). Different types of transactions use different sequences, e.g., Type 1 uses the TYPE1 sequence, Type 2 uses the TYPE2 sequence, and so on.

But what I see is that each time I execute the query to get the cache, it jumps by 2000. So I have gaps in my sequence, and I am consuming my 14 million limit without even processing 800K transactions.

Is it because :

1) Multithreading as 15 different sessions are trying to process transactions (different sequence type) and each session will keep a cache of 2000.

2) High Cache size

How can I avoid this issue?

Regards

Vinod

Have a look at Justin's reply in the following thread

Oracle 10g sequence cache aging

One option would be to lower the cache to something like 100 and benchmark what that does to your process. I've used high caches before for some heavy loading, but typically 100 vs. 2000, etc., isn't going to be the sticking point for any performance issues.

Cheers,

Paulzip

The library cache is probably aging out the sequence due to limits on memory allocation in the SGA. In that case it has to reload the sequence and allocate another 2000 cached values each time.

You could pin the sequence in the SGA using dbms_shared_pool.keep, which should lessen that, at the cost of using up some of the SGA.

L. Fernigrini

968361 wrote:

But what I see is that each time I execute the query to get the cache, it jumps by 2000. So I have gaps in my sequence, and I am consuming my 14 million limit without even processing 800K transactions.

How are you querying this?

Sequences values are stored in the database. I do not know the exact internal details, but imagine there is a table that holds the sequence values, something like:

CREATE TABLE sys.SequenceValues (SEQUENCE_ID NUMBER, SEQUENCE_VALUE NUMBER);

When you do not use a cache, each time a session requires a sequence value, Oracle searches for the value in this "table", reads SEQUENCE_VALUE, adds 1, updates the value (committing the update so other sessions see the last value used) and returns the value to the session.

When you use cache, the process is pretty much the same. The example would be

1) A session asks for a sequence number.

2) Oracle does not have that sequence in memory, so it searches for the sequence in the table, reads SEQUENCE_VALUE, adds N (N being the cache size), saves the new value to the sequence values table (internally "commits" that change), stores those N values in the cache, and returns the first one to the session.

3) Another session asks for the sequence; since it is in the cache, Oracle returns the next available number without having to find the value in the table or update the table with the last used value.

4) Repeat until N requests are made.

5) On request N+1, a session asks Oracle for a sequence number, and Oracle repeats step 2.

So, if you query USER_SEQUENCES or ALL_SEQUENCES, you will see that LAST_NUMBER jumps in intervals of N, and that is OK since the numbers within those intervals are in the cache.

From the docs (https://docs.oracle.com/cd/B28359_01/server.111/b28320/statviews_2053.htm#REFRN20244 )

LAST_NUMBER (NUMBER, NOT NULL): Last sequence number written to disk. If a sequence uses caching, the number written to disk is the last number placed in the sequence cache. This number is likely to be greater than the last sequence number that was used.

L. Fernigrini

OP mentions inserting 500K rows per hour (139 per second); assuming there are 10 different sequences, that would mean each one is used about 14 times per second. I doubt the library cache would age out something that is used so frequently....

968361

Thanks for your feedback.

Even if I do select sequencename.currval from dual at regular intervals, say after each fetch, I can see the sequence has jumped by 2K. But I have not processed that many transactions. The thing that confuses me even more is: if the sequence is used so frequently, how can it age out so easily that the system has to fetch it again each time? I doubt the library cache is the problem.

L. Fernigrini

Just in case, can you post the syntax of one sequence that has the problem? If you are querying CURRVAL then it should not jump every 2K ....

Jonathan Lewis

You can't select CURRVAL if you haven't previously selected NEXTVAL, so you need to be more precise in describing what you're doing in the test.

How sure are you of the sequence definition? Do the following from SQL*Plus and show us the results.

set long 20000
select dbms_metadata.get_ddl('SEQUENCE','{name of sequence}','{name of sequence owner}') from dual;

The "names" should be in upper case (unless you've delinerately forced the database into using lower case too; and don't include the curly brackets.

Regards

Jonathan Lewis

Paulzip
Answer

L. Fernigrini wrote:

OP mentions inserting 500K rows per hour (139 per second), assuming there are 10 different sequences that would mean each one is used 14+/- times per second, I doubt the library cache would age out something that is used so frequently....

You are assuming these scheduler processes are the only ones using the SGA. The instance could be managing loads of other sessions, in a busy environment, all accessing different objects, all fighting over library cache in the SGA.

You're also assuming the scheduler processes are uniformly processing an even distribution of records across the hour. They might not be. It could be 12.2 or above Standard Edition, which now has concurrent thread limits, so there might be lots of waits, and ageing out because of that.

You're also assuming the SGA is sized adequately, it might not be.

The skipping of a proportion of a sequence's cache size is classically library cache ageing. I'm just going with the usual reason.

OP might need to run some instance monitoring to determine whether that is the reason or not.

Marked as Answer by 968361 · Sep 27 2020

L. Fernigrini

You are assuming these scheduler processes are the only ones using the SGA. The instance could be managing loads of other sessions, in a busy environment, all accessing different objects, all fighting over library cache in the SGA.

No, not at all. But I assume that if the SGA is under really strong pressure, then OP should be worried about the process not finishing or taking too long, not about sequence values spiking...

You're also assuming the scheduler processes are uniformly processing an even distribution of records across the hour. They might not be.  It could be on 12.2 or above Standard Edition, which now has concurrent thread limits. So there might be lots of waits, and ageing out because of that.

I'm not assuming; I'm taking into account what OP mentioned:

               I am bulk fetching transactions with a limit of 1000 of random types (say 200 transactions of Type1, 500 transactions of Type2, 300 transactions of Type3).

So I understand that each bulk of 1000 random operations is somehow distributed in a similar way to the whole set of data... Not a perfect distribution, but OP mentions 50%, 30% and 20% as examples; he did not mention 50%, 49% and 1%.

You're also assuming the SGA is sized adequately, it might not be.

That's true.

The skipping of a proportion of a sequence's cache size, is classically library cache ageing. I'm just going with the usual reason.

Yes, I've seen this, but not when there are 15 concurrent processes frequently using those cached sequences... That would suggest severe pressure on the library cache.... OP did not mention that, but it could be possible.

OP might need to run some instance monitoring, to determine if that is the reason or not.

Yes, that could be a way to identify the reason for the problem.

968361

As requested,

Below is the syntax of the sequence

create sequence TRSQ_XYZDD2L
minvalue 1
maxvalue 9999999999999999999999999999
start with 17156001
increment by 1
cache 2000;

968361

start with 17156001  --> this was 1 at the beginning

Jonathan Lewis

This may be the syntax that was used to create the sequence, but you don't seem to have done as I asked and checked what is currently IN the database, or the output would have been more like:

CREATE SEQUENCE "TEST_USER"."TRSQ_XYZDD2L" MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 17156001 CACHE 2000 NOORDER NOCYCLE NOKEEP NOSCALE GLOBAL

However it looks as if you may be doing some stress testing of a development version of your application rather than trying to address a production problem, so I'll assume it's accurate.

What version of Oracle are you running? This may make a difference.

PaulZip's comment about the sequence being pushed out of the library cache due to pressure from other objects is the most likely (though slightly unexpected, given the rate of use) reason. If you are doing stress tests, it's possible that you've got some other code running that's constantly generating and optimising previously unseen SQL and demanding space in the library cache to do so. In that case PaulZip's suggestion to use dbms_shared_pool.keep() to keep the sequence as soon as it is loaded is probably a workaround.

Other reasons for pressure on the library cache / shared pool -

  • If your application (or the Oracle scheduler) is leaking cursors (i.e., opening them but not managing to close them properly), then you could be losing lots of memory in the library cache.
  • If you've found an Oracle bug with the scheduler re-using child cursors, you may find that the scheduler jobs are constantly creating new child cursors for the same parent, and demanding huge amounts of space in the library cache for each of your important statements.

If you're licensed for the performance and diagnostic packs then an AWR report should give you some clues about memory usage and any strange behaviour of SQL in your system (e.g., "SQL ordered by version count", "SQL ordered by parse calls").

Regards

Jonathan Lewis

968361

Thanks. The DB details are:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SGA 21GB

Yes, you are correct; it is a test environment to see if we can handle a load of 500K transactions per hour. Currently we are processing 150K transactions per hour, but we are checking the feasibility of how much we can process.

Meanwhile, I will check your other recommendations.

Regards

Vinod

968361

Hi,

shared_pool_reserved_size is 5% of the SGA, whereas in other environments it is 7%. Does this parameter have anything to do with caching?

Paulzip

Yes, shared_pool_size and shared_pool_reserved_size come into play regarding the library cache reload ratio and the library cache hit ratio.

Reload % :

In an ideal world, for optimal performance, you'll want to keep the library cache reload ratio as close to zero as possible. This is the ratio of how often something needs to be reloaded back into memory after being aged out. If this is above 1% or so, then SHARED_POOL_SIZE can be increased to mitigate it.

Hit Ratio %

Likewise, you want the library cache hit ratio (how often something has been cached and can be reused from the library quickly) somewhere above 95 percent. If the library cache hit ratio comes in below 95 percent, then the SHARED_POOL_SIZE parameter may need to be increased.

Here's a query to give you these :

select sum(pins) hits,
       sum(reloads) misses,
       round(sum(reloads) / nullif(sum(pins), 0) * 100, 3) reload_pct,
       round(sum(pins) / (sum(pins) + sum(reloads)) * 100, 3) hit_ratio_pct
from v$librarycache;

My advice would be to look at the individual sequence's information to see if they are being aged out...

select *
from v$db_object_cache
where owner = 'sequence_owner' and name = 'sequence_name'

Mike Kutz

968361 wrote:


Yes, you are correct it is testing environment to see if we can handle 500K transactions per hour load. Currently, we are processing 150K transactions per hour But we are checking the feasibility of how much we can process.

It will depend heavily on your process.

If your process is simple, you can do 500k in under 1 sec... IF YOU PROCESS IN BULK.

worst. method. ever. (Comic Book Guy from The Simpsons.)

loop
    insert ...
    commit;
end loop;

(there are actually worse...)

Due to how COMMIT operates (it won't return until the data is safely stored), your fastest transaction rate will be heavily determined by c (the speed of light).

  • On HDDs, this will be about 200/sec (the current observed rate)
  • On SSDs, it will be slightly higher.

At that point, your options are

  • Change the speed of light.
  • Change your method.

Again, the key is to operate in bulk.

If you can process the data with INSERT..SELECT (or MERGE), do that.

My $0.02

MK
