5 Replies Latest reply on Mar 30, 2020 2:40 PM by Oqelu

    ZFS high time utilisation for compression

    YTC#1 - Bruce D Porter

      Before I go off researching and stat checking, I wonder if someone has a quick answer.

       

      The Oracle DB redo log dataset is compressed (don't ask, the DBAs changed usage without saying).

       

      Log files show as 250 MB in the Oracle DB but only 100 MB on the filesystem... the DBAs don't like this (banging my head against the wall trying to make them understand it does not matter).

       

      Anyway, they did some copy tests.

      With compression on, a log file takes 2 min 45 sec (about 1 min 45 sec in kernel) to copy.

       

      With compression off, it takes 15 sec.

       

      That is an excessive overhead IMO.

       

      The server is an M4000 running Solaris 10, zpool version 37.

       

      I'll try to get a comparable test on the new S7s tomorrow... I just don't want to waste time on what I see as a non-issue (i.e. just turning compression off solves any problem).
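
      For reference, a repeatable form of the copy test would be to confirm the dataset's compression settings first and then time the copy with ptime, which splits out user and system time. This is only a sketch; "pool/oraredo" and the file paths below are placeholders for the real names.

        # Confirm compression is on and how well the data is compressing
        # ("pool/oraredo" is a placeholder for the actual redo log dataset).
        zfs get compression,compressratio pool/oraredo

        # Time the copy; ptime reports real/user/sys, so the in-kernel share is visible.
        ptime cp /oraredo/redo01.log /oraredo/redo01.copy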

        • 1. Re: ZFS high time utilisation for compression
          Andris Perkons-Oracle

          What type of compression are they using? gzip? It would be better to use lzjb or lz4 (if available; I don't have an S10 system to check), which are much lighter on CPU usage.
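
          Checking and changing the algorithm is a one-line operation. A minimal sketch, assuming the redo log dataset is called pool/oraredo (a placeholder), and noting that lz4 may not be accepted at all on a Solaris 10 / zpool 37 system:

            # Show what the redo log dataset currently uses ("pool/oraredo" is a placeholder).
            zfs get compression pool/oraredo

            # Switch to the lightweight default algorithm; only blocks written afterwards are affected.
            zfs set compression=lzjb pool/oraredo

            # lz4 is lighter still, but may not be available on Solaris 10 / zpool v37.
            zfs set compression=lz4 pool/oraredo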

           

          Andris

          • 2. Re: ZFS high time utilisation for compression
            YTC#1 - Bruce D Porter

            I am off site until Tuesday, but from memory it should be lzjb as the default?

            • 3. Re: ZFS high time utilisation for compression
              Andris Perkons-Oracle

              Yes, if you just set "compress=on" then it defaults to lzjb. If even that has a significant impact on CPU usage, then I guess you are right in your assessment to turn off compression altogether. The M4000 is pretty old and does not have that much CPU horsepower compared to a current system.
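
              One way to confirm that the copy is CPU-bound rather than I/O-bound is to watch the CPUs while the copy runs; if compression is the culprit, the time shows up as system time. A rough sketch using standard Solaris tools (file paths are placeholders):

                # In one terminal, watch per-CPU utilisation during the copy.
                mpstat 5

                # In another, time the copy itself; a large sys component relative to real
                # time points at in-kernel work such as compression rather than disk waits.
                ptime cp /oraredo/redo01.log /oraredo/redo01.copy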

              • 4. Re: ZFS high time utilisation for compression
                YTC#1 - Bruce D Porter

                It did raise a second question, though, when I showed them how files remained compressed until copied.

                 

                They pointed out that the compressed file was still being written to by the DB. Does it remain compressed as it is now, or will it eventually grow?

                As I understand it, Oracle redo logs constantly write to bits of the file. If the block being written to was compressed, then I presume it stays compressed?
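
                One way to answer that empirically is to compare the file's logical size with the space it actually occupies while the database keeps writing to it; if the ratio holds steady, the rewritten blocks are still being compressed. A sketch with placeholder paths and dataset name:

                  # Logical size as the DB sees it.
                  ls -l /oraredo/redo01.log

                  # Blocks actually allocated on disk; with compression working on rewritten
                  # blocks this stays well below the logical size.
                  du -k /oraredo/redo01.log

                  # Dataset-wide view of the same ratio.
                  zfs get compressratio pool/oraredo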

                • 5. Re: ZFS high time utilisation for compression
                  Oqelu

                  The time it takes to decompress 250 MB is abnormal. Andris Perkons-Oracle avoids addressing this.

                  If a block is full, it remains as it is, compressed or not.
                  If it is not full, then the operating system writes new bytes to it as it appends them to the file, so it compresses that block again.
                  If you copy from a compressed file to a compressed file, the system should simply copy the compressed data as is, without decompression. It should take the same 15 seconds as the uncompressed copy.
                  That is how any file system works, not just ZFS.

                  ... Or is supposed to work. A stupid system may redo the entire file instead of just the one new block... or do double the work: decompress and compress again. Or both.

                  In ZFS a block can be large. The specification arbitrarily restricts it to 128 kB; it simply states so, without giving a reason. Technically, in theory, a block could be much larger, though the operating system may not be designed to deal with that. It would be stupid if the system recompressed the entire block rather than just one sector on disk, because then it could take a lot of time. But that should only happen if you append rather than copy; if it always happens, that would be even more stupid.
                  I have not tested whether this is the case in Solaris.
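
                  For reference, the block size in question is the dataset's recordsize property, and ZFS applies compression per record when a record is written or rewritten. A small sketch with a placeholder dataset name; whether records larger than 128K are allowed depends on the pool and OS version:

                    # Show the current record (block) size; 128K is the default.
                    zfs get recordsize pool/oraredo

                    # A smaller recordsize means less data is recompressed when the DB rewrites
                    # a small piece of a file; the change only affects files written afterwards.
                    zfs set recordsize=32k pool/oraredo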