
How do I know the size in memory of some user-defined type variable

BEDE
BEDE Oracle Developer Member Posts: 2,276 Gold Trophy

I know that for Oracle's built-in scalar data types (like number, varchar2, date, timestamp) I can use select vsize(variable_name) from dual.

But how do I do this for a variable of some user-defined type?

I can do something like the following:

declare
  type tab_ch is table of varchar2(4000);
  tb_ch  tab_ch;
  v_size number;
begin
  tb_ch := tab_ch('cuckoo', 'bau-wau', 'wysiswyg 8894983984983498jjsn');
  v_size := 0;
  for i in tb_ch.first .. tb_ch.last loop
    -- vsize is a SQL function, so it is called via select from dual
    select v_size + vsize(tb_ch(i)) into v_size
    from dual;
  end loop;
  dbms_output.put_line(v_size);
end;
/

But the above only gives the total size in memory of the components of a variable, while I think the variable itself takes more space in memory.

How can I find the exact size in memory?


Answers

  • BluShadow
    BluShadow Member, Moderator Posts: 41,317 Red Diamond

    That's a tricky one. For the type you've declared there, it would only have minimal overhead in order to provide some index to the collection with pointers to where the data is internally. But what about even more complex types, like object types that could have methods and other internal attributes and types associated with them?

    The real question is... do you really need to know? ;)

  • BEDE
    BEDE Oracle Developer Member Posts: 2,276 Gold Trophy

    Yes, I really do want to find a way to determine how much space a complex object type takes in memory. I just gave a brief example of how I can use vsize to get the size of the components of a table of varchar2 or, if it be the case, of the fields of a complex user-defined type.

    Is there some way to peek into the SGA to see how many bytes some object takes?

  • User_H3J7U
    User_H3J7U Member Posts: 415 Bronze Trophy
    Accepted Answer
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.3.0.0.0
    
    SQL> declare
      2    type tab_ch is table of varchar2(200 char);
      3    tb_ch tab_ch;
      4    sz1 number;
      5    sz2 number;
      6  begin
      7    select value into sz1 from v$mystat join v$statname using(statistic#) where name = 'session pga memory';
      8    tb_ch := tab_ch();
      9    for i in 1..100 loop
     10      tb_ch.extend();
     11      tb_ch(i) := dbms_random.string('P', 100);
     12    end loop;
     13    select value into sz2 from v$mystat join v$statname using(statistic#) where name = 'session pga memory';
     14    dbms_output.put_line((sz2-sz1)/100);
     15  end;
     16  /
    3932.16
    
    SQL> /
    0
    
    SQL> exec dbms_session.free_unused_user_memory
    
    SQL> /
    655.36
    
    SQL> /
    0
    


  • padders
    padders Member Posts: 1,055 Silver Trophy

    Why are you dividing 'session pga memory' (which presumably is in bytes) by 100?

    How is this showing the exact size in memory of a user-defined type?

  • BEDE
    BEDE Oracle Developer Member Posts: 2,276 Gold Trophy

    Anyway, this helps somewhat. If I take the session PGA memory before I populate a massive PL/SQL table of some object type, and again after, then I get a pretty good estimate of how much memory that PL/SQL table takes. As for dividing by 100, that beats me... I can't tell why it should be divided by 100 rather than by 1024 to get kB.
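
    The before/after approach described here can be sketched as follows. The object type t_point and the element count are invented for illustration, and the result is only an estimate, since PGA is allocated in chunks:

    ```sql
    -- Hypothetical object type; any user-defined type would do.
    create type t_point as object (x number, y number);
    /
    declare
      type t_points is table of t_point;
      l_points t_points := t_points();
      l_before number;
      l_after  number;
    begin
      -- session PGA before populating the collection
      select value into l_before
      from   v$mystat join v$statname using (statistic#)
      where  name = 'session pga memory';

      for i in 1 .. 10000 loop
        l_points.extend;
        l_points(i) := t_point(i, i * 2);
      end loop;

      -- session PGA after; the difference approximates the collection's footprint
      select value into l_after
      from   v$mystat join v$statname using (statistic#)
      where  name = 'session pga memory';

      dbms_output.put_line('approx bytes per element: ' || (l_after - l_before) / 10000);
    end;
    /
    ```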

  • BluShadow
    BluShadow Member, Moderator Posts: 41,317 Red Diamond

    Perhaps he was dividing by 100 because he created 100 objects and wanted to get the average size instead of the overall size?

  • padders
    padders Member Posts: 1,055 Silver Trophy
    Accepted Answer

    > Perhaps he was dividing by 100 because he created 100 objects and wanted to get the average size instead of the overall size?

    Hmm, yeah, very possibly; I didn't consider that, although quite how that answers the question I am not sure.

    I am not sure how useful the 'session pga memory' statistic is, since it appears to be the allocated size rather than the used size. In H3J7U's example it went up in multiples of 64K, so presumably memory is allocated in chunks of some OS-related size. For very large data types I guess the significance of this chunk overhead is reduced compared with the overall memory requirement.

    There is also a v$process_memory view that appears to break this statistic down into categories (including SQL and PL/SQL) and also distinguishes allocated vs. used. However, I have not had much luck tying allocation and usage down to specific operations in PL/SQL in the past, so I doubt whether this will be more use.
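
    For what it's worth, a query along these lines shows the per-category figures for the current session (the join path through v$session and v$process is the usual way to find one's own server process):

    ```sql
    -- Per-category memory for the current session's server process.
    -- Categories typically include 'SQL', 'PL/SQL', 'Freeable' and 'Other'.
    select pm.category,
           pm.allocated,  -- bytes allocated to this category
           pm.used        -- bytes actually in use (null for some categories)
    from   v$process_memory pm
    join   v$process p on p.pid   = pm.pid
    join   v$session s on s.paddr = p.addr
    where  s.sid = sys_context('userenv', 'sid');
    ```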

    I notice that while VSIZE does not work on collections directly, if you create a type with the collection as an attribute it does work, and appears to reflect some overall and per-element overhead. Clearly this requires SQL types, though, and whether this genuinely represents the real size, and whether the size requirement is the same in SQL and PL/SQL, I have no idea.
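
    A sketch of that workaround; the type names t_str_tab and t_wrap are invented here, and whether the figure VSIZE returns reflects the true in-memory size is, as said, unclear:

    ```sql
    -- SQL types are required: VSIZE rejects a bare collection,
    -- but accepts an object type that wraps one as an attribute.
    create type t_str_tab as table of varchar2(4000);
    /
    create type t_wrap as object (vals t_str_tab);
    /
    select vsize(t_wrap(t_str_tab('cuckoo', 'bau-wau'))) as wrapped_size
    from dual;
    ```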

    Generally speaking though (and no doubt you are aware) you should be looking to avoid large collections residing in PL/SQL memory altogether by making good use of BULK COLLECT LIMIT, temporary tables and/or pipelined table functions.
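
    For reference, the usual BULK COLLECT ... LIMIT pattern caps how many rows reside in PL/SQL memory at any one time (the cursor and the batch size of 100 are illustrative):

    ```sql
    declare
      cursor c is select object_name from all_objects;
      type t_names is table of all_objects.object_name%type;
      l_names t_names;
    begin
      open c;
      loop
        -- at most 100 rows occupy PGA per iteration
        fetch c bulk collect into l_names limit 100;
        exit when l_names.count = 0;
        for i in 1 .. l_names.count loop
          null;  -- process each row here
        end loop;
      end loop;
      close c;
    end;
    /
    ```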
