7 Replies Latest reply: Apr 8, 2013 1:16 PM by user5716448

    Compression speed

    user5716448
      Hi,

      We are experimenting with compression and finding the ETL slower, which is what I would expect compared to an uncompressed table.

      Using Oracle 11.2.0.3 and trying both BASIC and OLTP compression on a large fact table that has lots of bitmap indexes.

      Users are reporting that queries accessing the compressed table are up to 3x slower compared to the equivalent uncompressed table.

      Looking at the manuals and forums suggests ETL can be slower - fine, we have plenty of time to load - but reports should not be negatively affected.

      Anybody else seen this?

      Thanks
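
      For anyone reproducing this, the two variants under test can be set up along these lines - a minimal sketch, with table names (fact_sales etc.) invented for illustration:

      ```sql
      -- Illustrative names: fact_sales stands in for the real fact table.
      -- BASIC compression: only direct-path loads (e.g. INSERT /*+ APPEND */,
      -- CTAS) are compressed.
      CREATE TABLE fact_sales_basic
          COMPRESS BASIC
      AS SELECT * FROM fact_sales;

      -- OLTP compression: conventional DML is also compressed as blocks fill
      -- (requires the Advanced Compression option in 11.2).
      CREATE TABLE fact_sales_oltp
          COMPRESS FOR OLTP
      AS SELECT * FROM fact_sales;
      ```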
        • 1. Re: Compression speed
          TSharma-Oracle
          Did you compress index as well?
          Did you run the explain plan before and after compression? Did you find any difference in the explain plans?
          • 2. Re: Compression speed
            user5716448
            Hi,

            The bitmap indexes are compressed by default.

            The explain plan looks the same.

            Thanks
            • 3. Re: Compression speed
              Jonathan Lewis
              user5716448 wrote:
              Hi,

              We are experimenting with compression and finding the ETL slower, which is what I would expect compared to an uncompressed table.

              Using Oracle 11.2.0.3 and trying both BASIC and OLTP compression on a large fact table that has lots of bitmap indexes.

              Users are reporting that queries accessing the compressed table are up to 3x slower compared to the equivalent uncompressed table.

              Looking at the manuals and forums suggests ETL can be slower - fine, we have plenty of time to load - but reports should not be negatively affected.
              First step - almost always - check execution plans for changes.

              If you compress a table the compressed version will (usually) be smaller, and sometimes this means a plan that was using an indexed access path will switch to one using a tablescan access path. (Other variations on the theme are possible - a compressed table might make the clustering factor of a b-tree index smaller, which could make a plan switch in the opposite direction - i.e. from tablescan to indexed access - or could simply lead to a change in the choice of index.)
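
              One quick way to do that check (query, column and table names invented for illustration):

              ```sql
              -- Run the same report query against each version of the table
              -- and compare the optimizer's plans.
              EXPLAIN PLAN FOR
              SELECT SUM(s.amount)
              FROM   fact_sales_compressed s
              WHERE  s.sale_date >= DATE '2013-01-01';

              SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
              ```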


              Regards
              Jonathan Lewis
              • 4. Re: Compression speed
                rp0428
                >
                bitmap indexes compressed by default.

                Explain plan looks the same
                >
                If you want help you have to post the code that shows what you are doing; not just tell us you did something.

                We have no way to know what you mean by 'compressed by default'.

                And we don't know what you mean by 'plan looks the same'.

                Post the DDL for the tables and indexes and the commands you executed to compress the table and the data.
                >
                Using Oracle 11.2.0.3 and trying basic and for oltp on a large fact table has lots of bitmap indexes.
                >
                You do know, don't you, that just altering the table for basic compression does NOT compress ANY of the existing data in that 'large fact table'?

                You have to MOVE the data in order for it to be compressed.
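
                A minimal sketch of the MOVE being described here (table, partition and index names invented for illustration):

                ```sql
                -- Rebuild the segment so the existing rows are compressed:
                ALTER TABLE fact_sales MOVE COMPRESS BASIC;

                -- On a partitioned table, move each partition:
                ALTER TABLE fact_sales MOVE PARTITION p2013q1 COMPRESS BASIC;

                -- A MOVE leaves the indexes on the moved segment unusable,
                -- so they have to be rebuilt afterwards:
                ALTER INDEX fact_sales_bix REBUILD;
                ```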
                • 5. Re: Compression speed
                  user5716448
                  Thanks for the replies.

                  Yes - am aware that compress doesn't change existing data - we created a brand new table with compression.

                  Another team is doing the performance checks - they concluded the compression was impacting the CPU too heavily compared to identical queries run against the non-compressed version of the table, and have decided not to pursue compression. The main advantages we hoped to achieve were disk space (this one is 1/3 of the size of the original) and report speed (which was approx 3 times slower in the team's perf checks).

                  data warehouse - star query.

                  One question I have which was raised by another member of team was compression of entire database as opposed to compressing just one table.

                  I'm assuming we would experience similar CPU issues?

                  Any thoughts?

                  In our previous tests only the large fact table was compressed.

                  bitmap indexes - it's a partitioned table and the bitmap indexes are local - when we tried the COMPRESS command on them it failed - looked at the manual and it mentions Oracle creates bitmap indexes in compressed format by default.
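
                  For reference, the setup being described is roughly this (names invented for illustration) - the index definition takes no COMPRESS clause, because Oracle always stores bitmap indexes in a compressed format:

                  ```sql
                  -- Local bitmap index on the partitioned fact table;
                  -- no COMPRESS clause, as bitmap indexes are stored
                  -- compressed by default.
                  CREATE BITMAP INDEX fact_sales_status_bix
                      ON fact_sales (status_code)
                      LOCAL;
                  ```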
                  • 6. Re: Compression speed
                    Jonathan Lewis
                    user5716448 wrote:

                    Another team is doing the performance checks - they concluded the compression was impacting the CPU too heavily compared to identical queries run against the non-compressed version of the table, and have decided not to pursue compression. The main advantages we hoped to achieve were disk space (this one is 1/3 of the size of the original) and report speed (which was approx 3 times slower in the team's perf checks).
                    A table one-third of its original size is believable; 3 times slower while using the same execution plan is hard to believe - although increased CPU is something we might expect. Of course, if you're talking about a query that used to take 1/10th second now taking 3/10ths of a second I wouldn't be surprised - the absolute difference is not significant and could be explained by an increase in CPU; but if you're talking about 1 minute expanding to 3 minutes I would be surprised.

                    I'd also be interested in how the other team did their testing - is it possible that, having compressed the tables, their ETL processing introduced lots of updates after doing direct path inserts - thus introducing a huge volume of chained rows? That could easily have the type of performance impact you're talking about.

                    The information you've given us is too vague, though - how about giving us an example of a query which runs three times as slowly, with the before and after execution plans and stats from a trace file (level 8) that has been passed through tkprof. A report of the session statistics (change) due to the query would also be an important source of helpful information - it might, for example, show us a very large count of "table fetch by continued row".
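                    A sketch of how to collect that evidence (query and names invented for illustration; the resulting trace file then goes through tkprof):

                    ```sql
                    -- Level 8 = SQL trace with wait events:
                    ALTER SESSION SET tracefile_identifier = 'compress_test';
                    ALTER SESSION SET events '10046 trace name context forever, level 8';

                    SELECT SUM(s.amount)
                    FROM   fact_sales s
                    WHERE  s.sale_date >= DATE '2013-01-01';

                    ALTER SESSION SET events '10046 trace name context off';

                    -- Check the session statistic that would reveal chained or
                    -- migrated rows (named "table fetch continued row" in v$statname):
                    SELECT sn.name, st.value
                    FROM   v$mystat st
                    JOIN   v$statname sn ON sn.statistic# = st.statistic#
                    WHERE  sn.name = 'table fetch continued row';
                    ```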
                    One question I have which was raised by another member of team was compression of entire database as opposed to compressing just one table.
                    That's the sort of question that suggests the team doesn't know how to look at their test results. If they knew why the query time was slower they wouldn't need to ask the question - as it is they're just hoping for a magic bullet.


                    Regards
                    Jonathan Lewis
                    • 7. Re: Compression speed
                      user5716448
                      Hi,

                      Thanks for the update.

                      The performance team repeated their tests, but after an increase from 4 to 10 CPUs.

                      This increased the amount of available processing power.

                      The conclusion was that compression was quicker at reporting, and the CPU had been set too low in the initial tests.

                      Thanks again for all posts