
Issue in understanding the negative Wasted space value for a table while fragmentation

008e4dd3-5c7b-451f-a6e5-713cfaedf3a4 Newbie

Hi All,

       After running the command for checking fragmentation on tables, I am getting a negative wasted_space value for my table. However, for the rest of the tables I am getting positive values. Can anybody help me understand this?

 

 

OWNER                          TABLE_NAME                      size (kb) actual_data (kb) wasted_space (kb)

------------------------------ ------------------------------ ---------- ---------------- -----------------

PROVPROD                       BATCH_REQUEST_RESPONSE            4736696       4840637,48        -103941,48

 

Regards,

Arun Singh

  • 1. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    Dizwell Newbie

    You won't get any help on this (I would expect) until you show us the command you're using to determine those numbers.

     

    Please provide the SQL you're running.

  • 2. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    008e4dd3-5c7b-451f-a6e5-713cfaedf3a4 Newbie

    Thanks for the quick response. Here is the SQL that was run.

     

    select table_name,
           round((blocks*8),2) "size (kb)",
           round((num_rows*avg_row_len/1024),2) "actual_data (kb)",
           (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) "wasted_space (kb)"
    from   dba_tables
    where  (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
    order  by 4 desc

     

    Regards,

    Arun

  • 3. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    Dizwell Newbie

    Right. I think the maths is wrong.

     

    You compute wasted space by taking the size of the table (expressed as a number of 8K blocks) and subtracting the probable length of actual data (average row length * number of rows). But that seems to assume you can pack an 8K block completely full, and you can't. You have to allow for the block header (so subtract something like 192 bytes to take care of that). And then I'm guessing that you set PCTFREE (or, if you don't, you're using the default 10% free?). So lop off another 800 bytes or so for that. So in fact, your computation should really be along the lines of

     

    round((blocks*7.3),2) - round((num_rows*avg_row_len/1024),2)

     

    And that means your wasted space value will be lower, and therefore the row may possibly slip through your "trap the bad ones" WHERE clause.

     

    (PS, my mind has gone completely blank on the exact values for the block header and so forth. I'll see if I can refresh it ASAP)

  • 4. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    Dizwell Newbie

    Still working with a dodgy memory, but I think the precise figures for the block header allowance are "start at 84 bytes and add 2 bytes for every row". So from 8192 bytes, you actually have (maximum) 8108 bytes free. From which you deduct 10% (or whatever PCTFREE), thus leaving 7297 bytes of usable space per block (at best).
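
     

    Putting those rough numbers back into your query, the adjusted calculation might look something along these lines (just a sketch - the 7.3 KB per block figure assumes an 8K block size and the default PCTFREE of 10):

    select table_name,
           round(blocks*8, 2)                                         "size (kb)",
           round(num_rows*avg_row_len/1024, 2)                        "actual_data (kb)",
           -- allow roughly 7.3 KB of usable space per 8 KB block
           -- (block header overhead plus the default 10% PCTFREE)
           round(blocks*7.3, 2) - round(num_rows*avg_row_len/1024, 2) "wasted_space (kb)"
    from   dba_tables
    where  round(blocks*7.3, 2) > round(num_rows*avg_row_len/1024, 2)
    order  by 4 desc;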

  • 5. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    008e4dd3-5c7b-451f-a6e5-713cfaedf3a4 Newbie

    Thanks for the reply.

    But using the same SQL we got positive values as well for some other tables, like:


    TABLE_NAME                      size (kb) actual_data (kb) wasted_space (kb)
    ------------------------------ ---------- ---------------- -----------------
    EPTLOG                           69245008       4521721,45        64723286,6
    PLSCOPE_IDENTIFIER$                   160            94,05             65,95

  • 6. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    Dizwell Newbie

    But that's a different problem. If I insert a million rows, I move the high water mark up to block 1000. If I delete 999,999 rows, the high water mark is still at block 1000, and the DBA_TABLES view will still record that 1000*8 KB of space is allocated to the table. So it's very easy to get things which say "not much data is used in a huge table". You originally asked, though, about how you can get "more data is stored than is allocated to the table" - which is rarer to see, but that's because the difference between 8 and 7.3 is not huge. On some tables, though (ones which are very full), it will tip your calculations over into negative territory.
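
     

    You can see the high water mark effect with a quick test like this (just a sketch - it assumes you can create a scratch table in your own schema and that all_objects has enough rows):

    -- push the high water mark up, then delete almost everything
    create table hwm_demo as
    select * from all_objects where rownum <= 100000;

    delete from hwm_demo where rownum <= 99999;
    commit;

    execute dbms_stats.gather_table_stats(user, 'HWM_DEMO')

    -- BLOCKS still reflects the old high water mark; only NUM_ROWS has dropped
    select blocks, num_rows, avg_row_len
    from   user_tables
    where  table_name = 'HWM_DEMO';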

     

    Personally, I would adjust the query simply to display 0% wastage when the results of the final calculation are less than zero. It's just the block overhead and pctfree that's not being accounted for.
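
     

    Something along these lines would do it (just a sketch of the idea, flooring the result at zero with GREATEST):

    select table_name,
           round(blocks*8, 2)                  "size (kb)",
           round(num_rows*avg_row_len/1024, 2) "actual_data (kb)",
           -- never report a negative figure: anything below zero becomes 0
           greatest(round(blocks*8, 2) - round(num_rows*avg_row_len/1024, 2), 0) "wasted_space (kb)"
    from   dba_tables
    order  by 4 desc;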

  • 7. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    008e4dd3-5c7b-451f-a6e5-713cfaedf3a4 Newbie

    Thanks for the reply.

     

    I have looked into these records and I found that it works on the formula: wasted space = size - actual data.

     

    We are facing a performance issue on the same table (BATCH_REQUEST_RESPONSE) whose wasted space comes out as negative. Does it mean that the size assigned to this table is less than the data we are trying to put into it, and that this leads to a performance bottleneck?

     

    TABLE_NAME                      size (kb) actual_data (kb) wasted_space (kb)

    ------------------------------ ---------- ---------------- -----------------

    EPTLOG                           69245008       4521721,45        64723286,6

    PLSCOPE_IDENTIFIER$                   160            94,05             65,95

    PROVPROD                       BATCH_REQUEST_RESPONSE            4736696       4840637,48        -103941,48

     

    Regards,

    Arun Singh

  • 8. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    Jonathan Lewis Oracle ACE Director

    008e4dd3-5c7b-451f-a6e5-713cfaedf3a4 wrote:

     

    select table_name,
           round((blocks*8),2) "size (kb)",
           round((num_rows*avg_row_len/1024),2) "actual_data (kb)",
           (round((blocks*8),2) - round((num_rows*avg_row_len/1024),2)) "wasted_space (kb)"
    from   dba_tables
    where  (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
    order  by 4 desc

     

     

    I must be missing something here - I see the code with a predicate which selects only those rows where "size" is greater than "actual_data", but you've managed to report a row showing size - actual_data as negative.

     

    Either my brain (or eyesight, perhaps) has failed badly, or Oracle has an amazing arithmetic bug, or this isn't the code you're running.

     

    Regards

    Jonathan Lewis

  • 9. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    008e4dd3-5c7b-451f-a6e5-713cfaedf3a4 Newbie

    No, it may be possible that the DBA used different SQL.

     

    If we go by your point, then the SQL that produced the data for BATCH_REQUEST_RESPONSE would have been based on the condition size < actual data.

     

    So does this lead to a performance bottleneck when trying to fetch data from the table, since we have less size but more actual data in the table?

     

    Regards,

    Arun Singh         

  • 10. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    sybrand_b Guru

    So far there is no indication you have a 'performance bottleneck' because of 'fragmentation'. You also didn't supply your definition of 'fragmentation' nor any actual statement which suffers from a performance bottleneck.

    We don't know the four digit version of Oracle which would suffer from this, as you didn't specify it.

    Your tool to identify 'fragmentation' is inaccurate.

    Only using the Oracle supplied package dbms_space would provide accurate results.

    Apart from that:

    If the HWM of the table doesn't match the actual data, you may see some impact. Dbms_space will report where the HWM is.

    You can also suffer from row chaining. This for sure will have performance impact.

     

    So you really need to investigate this using dbms_space (demos on http://asktom.oracle.com and/or Morgan's Library).
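
     

    For example, a minimal sketch of a dbms_space.unused_space call (run it as a user with access to the segment; for a segment in an ASSM tablespace you would normally look at dbms_space.space_usage as well):

    set serveroutput on

    declare
      l_total_blocks  number;
      l_total_bytes   number;
      l_unused_blocks number;
      l_unused_bytes  number;
      l_file_id       number;
      l_block_id      number;
      l_last_block    number;
    begin
      -- reports how much of the allocated space sits above the high water mark
      dbms_space.unused_space(
        segment_owner             => 'PROVPROD',
        segment_name              => 'BATCH_REQUEST_RESPONSE',
        segment_type              => 'TABLE',
        total_blocks              => l_total_blocks,
        total_bytes               => l_total_bytes,
        unused_blocks             => l_unused_blocks,
        unused_bytes              => l_unused_bytes,
        last_used_extent_file_id  => l_file_id,
        last_used_extent_block_id => l_block_id,
        last_used_block           => l_last_block);

      dbms_output.put_line('Total blocks allocated: '||l_total_blocks);
      dbms_output.put_line('Blocks above the HWM  : '||l_unused_blocks);
    end;
    /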

     

    ----------------

    Sybrand Bakker

    Senior Oracle DBA

  • 11. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    Jonathan Lewis Oracle ACE Director

    008e4dd3-5c7b-451f-a6e5-713cfaedf3a4 wrote:

     

    No, it may be possible that the DBA used different SQL.

     

    So the first problem is that we don't know how the DBA managed to get a result which says that the volume of data in the table is greater than the size of the table. There's no point in speculating about the effect that nonsense results have on performance.

     

    Let us assume that a piece of SQL vaguely similar to the one you supplied does actually seem to suggest this anomalous result. You might get lucky and find that someone can guess the reason - but you might ask yourself the question: could there be something about the table that means this SQL is either invalid or could produce misleading results? Perhaps there's something "uncommon" about the table. Even if you can't address that possibility yourself, it might give you a clue to show us the table definition (describe) or even the DDL (using dbms_metadata.get_ddl). Here, for example, is one demonstration of how the data in the table can appear to be greater than the size of the table:

     

     
    
    SQL> create table t1 as
      2  select * from all_objects
      3  where rownum <= 10000;
    
    Table created.
    
    SQL> create table t2 compress as
      2  select * from all_objects
      3  where rownum <= 10000;
    
    Table created.
    
    SQL> execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')
    
    PL/SQL procedure successfully completed.
    
    SQL> execute dbms_stats.gather_table_stats(user,'t2',method_opt=>'for all columns size 1')
    
    PL/SQL procedure successfully completed.
    
    SQL> select table_name, avg_row_len, num_rows, blocks from user_tables where table_name in ('T1','T2');
    
    TABLE_NAME           AVG_ROW_LEN   NUM_ROWS     BLOCKS
    -------------------- ----------- ---------- ----------
    T1                            88      10000        130
    T2                            88      10000         39
    
    
    

     

    Compressed tables report the row-length of the uncompressed row - which means the apparent data size can be much larger than the actual table size.

    In general we would choose to do compression in cases where we thought it would improve performance - sometimes it may increase the CPU usage and have a negative impact on performance.

     

     

    Regards

    Jonathan Lewis

  • 12. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    jgarry Guru

    If you have a performance problem, you should follow a methodology to solve a performance problem.  Here's a good start:  HOW TO: Post a SQL statement tuning request - template posting

     

    While it is possible there is something odd about the data, that is only one possibility among many, and it is entirely possible there is more than one thing wrong.  Did the problem start suddenly?  Did it gradually build until someone finally complained?

  • 13. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    008e4dd3-5c7b-451f-a6e5-713cfaedf3a4 Newbie

    Thanks for such a detailed explanation. Here is the DDL for the table.

     

    CREATE TABLE "PROVPROD"."BATCH_REQUEST_RESPONSE"
    (
      "REQUESTRESPONSEID" NUMBER,
      "WORKINGID"         NUMBER NOT NULL ENABLE,
      "BATCHUID"          NUMBER NOT NULL ENABLE,
      "SOURCEAGENTID"     VARCHAR2(32 BYTE) NOT NULL ENABLE,
      "TRANSACTIONID"     VARCHAR2(256 BYTE),
      "BATCHENVNAME"      VARCHAR2(64 BYTE) NOT NULL ENABLE,
      "FILETYPE"          VARCHAR2(8 BYTE) NOT NULL ENABLE,
      "ISHEADER"          VARCHAR2(1 BYTE) NOT NULL ENABLE,
      "ROWNUMBER"         NUMBER NOT NULL ENABLE,
      "ROWVALUE"          BLOB NOT NULL ENABLE,
      "SOSTATE"           VARCHAR2(16 BYTE) NOT NULL ENABLE,
      "SORESULT"          NUMBER,
      "SOQUEUEDTS"        TIMESTAMP (6),
      "SORESPONSETS"      TIMESTAMP (6),
      "SOREQUESTXML"      BLOB NOT NULL ENABLE,
      "SORESPONSEXML"     BLOB DEFAULT EMPTY_BLOB(),
      "SOPRIORITY"        NUMBER NOT NULL ENABLE,
      "SODELAY"           NUMBER NOT NULL ENABLE,
      "SOTYPE"            VARCHAR2(32 BYTE) NOT NULL ENABLE,
      "SOID"              NUMBER,
      "ACKREC"            VARCHAR2(1 CHAR) DEFAULT 'N',
      "FILENAME"          VARCHAR2(64 BYTE) NOT NULL ENABLE,
      PRIMARY KEY ("REQUESTRESPONSEID") USING INDEX
        PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
        STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
                PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
                FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
        TABLESPACE "PROVPROD_DATA" ENABLE
    )
    SEGMENT CREATION IMMEDIATE
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
            PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
            FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "PROVPROD_DATA"
    LOB ("ROWVALUE") STORE AS BASICFILE
      (TABLESPACE "PROVPROD_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION
       NOCACHE LOGGING
       STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
               PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
               FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    LOB ("SOREQUESTXML") STORE AS BASICFILE
      (TABLESPACE "PROVPROD_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION
       NOCACHE LOGGING
       STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
               PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
               FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    LOB ("SORESPONSEXML") STORE AS BASICFILE
      (TABLESPACE "PROVPROD_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION
       NOCACHE LOGGING
       STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
               PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
               FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ;

    CREATE UNIQUE INDEX "PROVPROD"."SYS_C0012044"
      ON "PROVPROD"."BATCH_REQUEST_RESPONSE" ("REQUESTRESPONSEID")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
              PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
              FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PROVPROD_DATA" ;

    CREATE UNIQUE INDEX "PROVPROD"."SYS_IL0000064787C00016$$"
      ON "PROVPROD"."BATCH_REQUEST_RESPONSE" (
      PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
              PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
              FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PROVPROD_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;

    CREATE UNIQUE INDEX "PROVPROD"."SYS_IL0000064787C00015$$"
      ON "PROVPROD"."BATCH_REQUEST_RESPONSE" (
      PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
              PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
              FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PROVPROD_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;

    CREATE UNIQUE INDEX "PROVPROD"."SYS_IL0000064787C00010$$"
      ON "PROVPROD"."BATCH_REQUEST_RESPONSE" (
      PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
              PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
              FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PROVPROD_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;

    CREATE UNIQUE INDEX "PROVPROD"."TRANSACTIONIDINDX"
      ON "PROVPROD"."BATCH_REQUEST_RESPONSE" ("TRANSACTIONID")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
              PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
              FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PROVPROD_DATA" ;

    CREATE INDEX "PROVPROD"."METRICSRESULTINDX"
      ON "PROVPROD"."BATCH_REQUEST_RESPONSE" ("BATCHUID", "SOURCEAGENTID", "BATCHENVNAME")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
              PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
              FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PROVPROD_DATA" ;


  • 14. Re: Issue in understanding the negative Wasted space value for a table while fragmentation
    Jonathan Lewis Oracle ACE Director

    Your table definition has 3 LOBs in it, two are mandatory and one of them has a default value empty_blob().

     

    A LOB requires a LOB locator to be stored in the row, and dbms_stats.gather_table_stats() always assumes the length of the LOB locator is something like 84 bytes, even though it might be much shorter (in the order of 36 bytes). This means that in your case you could easily see Oracle reporting your average row length as something like 150 bytes per row larger than it really is.

     

    As an example, I've just created a table with the definition (id number, b blob) and filled it with "select rownum, empty_blob() from all_objects". The average row length reports as 106, while a dump of the block shows that a typical row takes about 45 bytes.
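
     

    A sketch of that kind of test, in case you want to reproduce it (assuming you can create a scratch table in your own schema):

    create table lob_len_test (id number, b blob);

    insert into lob_len_test
    select rownum, empty_blob()
    from   all_objects;

    commit;

    execute dbms_stats.gather_table_stats(user, 'LOB_LEN_TEST')

    -- avg_row_len includes the full assumed lob-locator size, so it can be
    -- noticeably larger than the space the rows actually occupy in the blocks
    select avg_row_len, num_rows, blocks
    from   user_tables
    where  table_name = 'LOB_LEN_TEST';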

     

    Regards

    Jonathan Lewis

     

     

