1 Reply Latest reply: Jan 10, 2012 5:35 PM by alan.pae

    Solaris 11/11 - filesystem throughput issue

    909724
      Hi,

We recently performed an upgrade from Solaris 11 Express 11/10 to release 11/11 (following the upgrade procedure).

However, we are now seeing throughput issues on the filesystem:
      - Some regular tasks, such as compiling, take about twice as long.
      - We also run an Oracle DB 11g instance, and there it is worse than that: some benchmarks take nearly six times as long.

The behaviour has been cross-checked a few times against the previous Express kernel.

There was no hardware change on the target host.
      We just use ZFS as a "regular" filesystem: no RAID or any other advanced features.

We had a look at the cache configuration on the ZFS side, and there is nothing special and no tuning applied (except a 2 GB cap on arc_max).
      Caches are active.
      AHCI is active.
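
      For reference, the ARC cap mentioned above would typically be set via /etc/system. This is an assumed fragment, not copied from the actual host; the value matches the zfs_arc_max shown in the ::zfs_params output further down:

      <<
      * /etc/system fragment (assumed) - cap the ZFS ARC at 2 GB
      * 0x80000000 bytes = 2147483648 = 2 GB
      set zfs:zfs_arc_max = 0x80000000
      >>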

      Hardware details:
      <<
      System Configuration: Oracle Corporation i86pc
      Memory size: 8074 Megabytes
      System Peripherals (Software Nodes):

      i86pc
      scsi_vhci, instance #0
      pci, instance #0
      pci1028,47e, instance #0
      display, instance #0
      pci1028,47e (driver not attached)
      pci1028,47e (driver not attached)
      pci1028,47e, instance #0 (driver not attached)
      pci1028,47e, instance #0
      hub, instance #0
      pci1028,47e, instance #0
      pci8086,1c10 (driver not attached)
      pci8086,1c14, instance #1
      pci1028,47e, instance #1
      hub, instance #1
      keyboard, instance #0
      mouse, instance #1
      pci8086,244e, instance #0
      pci10ec,8139, instance #0
      isa, instance #0
      i8042, instance #0
      mouse, instance #0
      asy, instance #0 (driver not attached)
      motherboard (driver not attached)
      pit_beep, instance #0
      pci1028,47e, instance #0
      disk, instance #0
      disk, instance #1
      cdrom, instance #2
      pci1028,47e (driver not attached)
      fw, instance #0
      cpu, instance #0
      cpu, instance #1
      cpu, instance #2
      cpu, instance #3
      cpu, instance #4
      cpu, instance #5
      cpu, instance #6
      cpu, instance #7
      sb, instance #1
      used-resources (driver not attached)
      iscsi, instance #0
      fcoe, instance #0
      options, instance #0
      pseudo, instance #0
      agpgart, instance #0
      xsvc, instance #0
      vga_arbiter, instance #0 (driver not attached)
      intel-iommu, instance #0
      intel-iommu, instance #1
      >>

      <<
      sd0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
      Vendor: ATA Product: ST31000524AS Revision: JC45 Serial No: 5VP7R4TD
      Size: 1000.20GB <1000204886016 bytes>
      Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
      Illegal Request: 8 Predictive Failure Analysis: 0
      sd1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
      Vendor: ATA Product: ST31000524AS Revision: JC45 Serial No: 6VPB11DQ
      Size: 1000.20GB <1000204886016 bytes>
      Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
      Illegal Request: 2 Predictive Failure Analysis: 0
      >>


      ZFS_PARAMS:
      <<
      ::zfs_params
      arc_reduce_dnlc_percent = 0x3
      zfs_arc_max = 0x80000000
      zfs_arc_min = 0x0
      arc_shrink_shift = 0x5
      zfs_mdcomp_disable = 0x0
      zfs_prefetch_disable = 0x0
      zfetch_max_streams = 0x8
      zfetch_min_sec_reap = 0x2
      zfetch_block_cap = 0x100
      zfetch_array_rd_sz = 0x100000
      zfs_default_bs = 0x9
      zfs_default_ibs = 0xe
      metaslab_aliquot = 0x80000
      mdb: variable reference_tracking_enable not found: unknown symbol name
      mdb: variable reference_history not found: unknown symbol name
      spa_max_replication_override = 0x3
      spa_mode_global = 0x3
      zfs_flags = 0x0
      zfs_txg_synctime_ms = 0x1388
      zfs_txg_timeout = 0x1e
      zfs_write_limit_min = 0x2000000
      zfs_write_limit_max = 0x3f021a00
      zfs_write_limit_shift = 0x3
      zfs_write_limit_override = 0x0
      zfs_no_write_throttle = 0x0
      zfs_vdev_cache_max = 0x4000
      zfs_vdev_cache_size = 0x0
      zfs_vdev_cache_bshift = 0x10
      vdev_mirror_shift = 0x15
      zfs_vdev_max_pending = 0xa
      zfs_vdev_min_pending = 0x4
      zfs_vdev_future_pending = 0xa
      zfs_scrub_limit = 0xa
      zfs_no_scrub_io = 0x0
      zfs_no_scrub_prefetch = 0x0
      zfs_vdev_time_shift = 0x6
      zfs_vdev_ramp_rate = 0x2
      zfs_vdev_aggregation_limit = 0x20000
      fzap_default_block_shift = 0xe
      zfs_immediate_write_sz = 0x8000
      zfs_read_chunk_size = 0x100000
      zfs_nocacheflush = 0x0
      zil_replay_disable = 0x0
      metaslab_gang_threshold = 0x100001
      metaslab_df_alloc_threshold = 0x100000
      metaslab_df_free_pct = 0x4
      zio_injection_enabled = 0x0
      zvol_immediate_write_sz = 0x8000
      >>
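
      For anyone reading the hex values above, a quick sketch of how to decode the most relevant ones into human-readable units (the hex constants are copied from the mdb output; the arithmetic is plain shell):

      ```shell
      # Decode a few ::zfs_params hex values (copied from the output above).
      arc_max=$((0x80000000))        # zfs_arc_max, bytes
      txg_timeout=$((0x1e))          # zfs_txg_timeout, seconds
      txg_synctime_ms=$((0x1388))    # zfs_txg_synctime_ms, milliseconds

      echo "zfs_arc_max         = $((arc_max / 1024 / 1024)) MB"
      echo "zfs_txg_timeout     = ${txg_timeout} s"
      echo "zfs_txg_synctime_ms = ${txg_synctime_ms} ms"
      ```

      So the ARC cap really is 2048 MB as intended, and the transaction-group settings are at their defaults (30 s timeout, 5000 ms sync time).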

Any idea how to fix this throughput issue?

      Regards,
      Julien