
Restore data from IBM LTO-4 to T5120 solaris 10 8/11 slow

warut21 Newbie
Hello all,

I back up ZFS data to an IBM LTO-4 tape drive using zfs send piped into dd, and get a transfer rate of about 30 MB/s. But when I try to restore using dd from the tape drive, the transfer rate is only about 7 MB/s. Is this a normal transfer rate for an LTO-4 tape drive on Solaris 10?

My command to restore from tape:

dd if=/dev/rmt/0cbn of=/dev/null bs=131072

What am I doing wrong? The LTO-4 datasheet says the transfer rate should be about 30-120 MB/s.
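
For reference, this is roughly the pipeline I have in mind; the restore into zfs receive is only a sketch of what would follow the rate test, and the dataset names here are just placeholders:

# backup: stream the ZFS snapshot and block it onto tape
zfs send pool/dataset@snapshot | dd of=/dev/rmt/0cbn bs=128k

# read-back rate test (the slow command above)
dd if=/dev/rmt/0cbn of=/dev/null bs=131072

# an actual restore would pipe the tape back into zfs receive
dd if=/dev/rmt/0cbn bs=131072 | zfs receive pool/dataset_restore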

  • 1. Re: Restore data from IBM LTO-4 to T5120 solaris 10 8/11 slow
    Nik Expert
    Hi.
    That is not a normal transfer rate for LTO-4.
    Please show the result of
    iostat -xnz 2 3
    while dd is running.

    Please also show the output of mt -f /dev/rmt/0cbn status while a cartridge is loaded.

    What type of cartridge are you using?
    How is the tape drive connected to your server? (FC / SAS?)

    Regards
  • 2. Re: Restore data from IBM LTO-4 to T5120 solaris 10 8/11 slow
    warut21 Newbie
    Hi,

    Thanks for the reply. I am using HP LTO-4 Ultrium RW data cartridges (1.6 TB), and the LTO-4 tape drive is connected to a SCSI card in slot PCIE0 of the T5120.


    When running the dd command "dd if=/dev/rmt/0cbn of=/dev/null bs=131072",

    the iostat -xnz 2 3 result is:
    ================================================================
    bash-3.2# iostat -xnz 2 3
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    1.9 7.6 26.1 196.7 0.0 0.1 0.0 14.0 0 3 c6t5000CCA01247B510d0
    1.9 7.7 25.7 196.7 0.0 0.1 0.0 14.2 0 3 c6t5000CCA012481A18d0
    0.2 0.0 0.4 0.0 0.0 0.0 42.6 13.0 0 0 c0t0d0
    7.2 10.4 51.2 351.5 0.0 0.1 0.0 2.9 0 3 c6t600A0B800032D0CE00000EF24A2C6662d0
    0.0 0.4 0.1 53.1 0.0 0.0 0.0 15.1 0 0 c2t220000D0239AEFF2d4
    0.0 0.1 1.2 9.8 0.0 0.0 0.0 7.9 0 0 c5t210000D0238AEFF2d6
    78.8 12.9 4579.1 225.6 0.0 0.5 0.0 5.4 0 11 c2t220000D0239AEFF2d5
    4.5 233.8 76.5 4319.2 0.0 0.1 0.0 0.2 0 6 rmt/0
    0.1 0.0 1.8 0.0 0.0 0.0 0.0 3.0 0 0 192.168.42.183:/var/tmp
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    0.0 579.0 0.0 4632.1 0.0 0.4 0.0 0.7 0 40 c6t600A0B800032D0CE00000EF24A2C6662d0
    463.6 0.0 8716.4 0.0 0.0 1.0 0.0 2.1 1 97 rmt/0
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    5.5 16.5 44.0 568.2 0.0 0.2 0.0 7.3 0 13 c6t5000CCA01247B510d0
    1.5 16.5 12.0 568.2 0.0 0.1 0.0 5.7 0 10 c6t5000CCA012481A18d0
    4.0 664.7 530.6 5370.5 0.0 0.3 0.0 0.5 0 32 c6t600A0B800032D0CE00000EF24A2C6662d0
    462.6 0.0 8330.9 0.0 0.0 1.0 0.0 2.1 1 97 rmt/0

    ==================================================================

    mt stat result

    ==================================================================
    bash-3.2# mt -f /dev/rmt/0cbn stat
    IBM Ultrium Gen 4 LTO tape drive:
    sense key(0x12)= EOF residual= 0 retries= 0
    file no= 1 block no= 0
    ==================================================================
  • 3. Re: Restore data from IBM LTO-4 to T5120 solaris 10 8/11 slow
    Nik Expert
    Hi.


    iostat shows that the bottleneck is the tape (97% busy).

    What is strange is the small average block size:

    462.6 0.0 8330.9 0.0 0.0 1.0 0.0 2.1 1 97 rmt/0

    BS = 8330.9 kB/s / 462.6 r/s ~ 18 kB per read
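
    If you want to watch that number continuously, a small awk filter over iostat works (this assumes the default column order of iostat -xn on Solaris 10: r/s in column 1, kr/s in column 3, the device name in column 11):

    # average read size for rmt/0, printed every 2 seconds while dd runs
    iostat -xnz 2 | awk '$11 == "rmt/0" && $1 > 0 { printf "%.1f kB per read\n", $3 / $1 }'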



    Try some tests with a new tape:

    dd if=/dev/zero of=/dev/rmt/0n bs=128k count=8192
    dd if=/dev/rmt/0n of=/dev/null bs=128k

    and watch the iostat output in another terminal.


    Regards.
  • 4. Re: Restore data from IBM LTO-4 to T5120 solaris 10 8/11 slow
    warut21 Newbie
    Hi Nik,

    Thanks for your advice. After running the test commands

    dd if=/dev/zero of=/dev/rmt/0n bs=128k count=8192

    dd if=/dev/rmt/0n of=/dev/null bs=128k

    iostat shows the following (the first run is during the write test, the second during the read test):

    ============================================

    bash-3.2# iostat -xnz 2 3
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    1.9 7.6 25.9 196.4 0.0 0.1 0.0 14.0 0 3 c6t5000CCA01247B510d0
    1.9 7.7 25.5 196.4 0.0 0.1 0.0 14.2 0 3 c6t5000CCA012481A18d0
    0.2 0.0 0.4 0.0 0.0 0.0 42.6 13.0 0 0 c0t0d0
    7.2 10.4 51.0 349.7 0.0 0.1 0.0 2.9 0 3 c6t600A0B800032D0CE00000EF24A2C6662d0
    0.0 0.4 0.1 52.7 0.0 0.0 0.0 15.1 0 0 c2t220000D0239AEFF2d4
    0.0 0.1 1.2 9.8 0.0 0.0 0.0 7.9 0 0 c5t210000D0238AEFF2d6
    78.3 12.8 4547.9 224.2 0.0 0.5 0.0 5.4 0 11 c2t220000D0239AEFF2d5
    4.6 232.2 78.4 4289.9 0.0 0.1 0.0 0.2 0 6 rmt/0
    0.1 0.0 1.8 0.0 0.0 0.0 0.0 3.0 0 0 192.168.42.183:/var/tmp
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    0.5 92.5 52.0 1523.6 0.0 1.8 0.0 18.8 0 26 c6t5000CCA01247B510d0
    0.0 92.0 0.0 1603.6 0.0 1.6 0.0 17.1 0 24 c6t5000CCA012481A18d0
    0.0 4.0 0.0 32.0 0.0 0.0 0.0 0.5 0 0 c6t600A0B800032D0CE00000EF24A2C6662d0
    0.0 1016.2 0.0 130075.8 0.0 0.9 0.0 0.9 2 90 rmt/0
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    0.0 6.0 0.0 152.0 0.0 0.1 0.0 16.2 0 3 c6t5000CCA01247B510d0
    0.0 5.0 0.0 72.0 0.0 0.1 0.0 18.3 0 3 c6t5000CCA012481A18d0
    0.0 4.5 0.0 32.5 0.0 0.0 0.0 0.7 0 0 c6t600A0B800032D0CE00000EF24A2C6662d0
    0.0 833.8 0.0 106722.1 0.0 0.7 0.0 0.9 1 74 rmt/0

    ===============================================================

    bash-3.2# iostat -xnz 2 3
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    1.9 7.6 25.9 196.4 0.0 0.1 0.0 14.0 0 3 c6t5000CCA01247B510d0
    1.9 7.7 25.5 196.4 0.0 0.1 0.0 14.2 0 3 c6t5000CCA012481A18d0
    0.2 0.0 0.4 0.0 0.0 0.0 42.6 13.0 0 0 c0t0d0
    7.2 10.4 51.0 349.6 0.0 0.1 0.0 2.9 0 3 c6t600A0B800032D0CE00000EF24A2C6662d0
    0.0 0.4 0.1 52.7 0.0 0.0 0.0 15.1 0 0 c2t220000D0239AEFF2d4
    0.0 0.1 1.2 9.8 0.0 0.0 0.0 7.9 0 0 c5t210000D0238AEFF2d6
    78.3 12.8 4547.7 224.2 0.0 0.5 0.0 5.4 0 11 c2t220000D0239AEFF2d5
    4.6 232.2 78.4 4289.8 0.0 0.1 0.0 0.2 0 6 rmt/0
    0.1 0.0 1.8 0.0 0.0 0.0 0.0 3.0 0 0 192.168.42.183:/var/tmp
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    0.0 7.0 0.0 52.5 0.0 0.0 0.0 0.5 0 0 c6t600A0B800032D0CE00000EF24A2C6662d0
    1065.6 0.0 136391.4 0.0 0.0 0.9 0.0 0.9 2 94 rmt/0
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    0.0 102.5 0.0 1387.9 0.0 1.8 0.0 17.3 0 28 c6t5000CCA01247B510d0
    0.5 103.5 0.3 1387.9 0.0 2.2 0.0 21.0 0 35 c6t5000CCA012481A18d0
    0.0 4.0 0.0 32.0 0.0 0.0 0.0 0.5 0 0 c6t600A0B800032D0CE00000EF24A2C6662d0
    1063.8 0.0 136167.7 0.0 0.0 0.9 0.0 0.9 2 94 rmt/0

    =============================================================


    So I think something is wrong with my backup script, which uses zfs send to write to a tape drive on a remote host.

    This is my command: "zfs send bpool/sapmnt@2013-04-16 | rsh 192.168.42.214 dd of=/dev/rmt/0cbn bs=128k"

    After running the backup script, iostat shows:

    =============================================================
    iostat on the backup host, reading the ZFS data from disk c0d3, shows a block size of about 58 kB:

    root@tpcorpprd # iostat -xnz 2 3
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    0.9 0.2 12.6 12.5 0.0 0.0 0.0 15.1 0 1 c0d0
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c0d1
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c0d2
    80.2 9.2 4658.4 231.0 0.0 0.6 0.0 7.2 0 13 c0d3
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c0d4
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c0d5
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device

    ======================================================

    iostat on the remote host, which writes the data to tape rmt/0, shows a block size of about 18 kB:

    bash-3.2# iostat -xnz 2 3
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    1.9 7.6 25.9 196.3 0.0 0.1 0.0 14.0 0 3 c6t5000CCA01247B510d0
    1.9 7.7 25.5 196.3 0.0 0.1 0.0 14.2 0 3 c6t5000CCA012481A18d0
    0.2 0.0 0.4 0.0 0.0 0.0 42.6 13.0 0 0 c0t0d0
    7.2 10.4 51.0 349.1 0.0 0.1 0.0 2.9 0 3 c6t600A0B800032D0CE00000EF24A2C6662d0
    0.0 0.4 0.1 52.6 0.0 0.0 0.0 15.1 0 0 c2t220000D0239AEFF2d4
    0.0 0.1 1.2 9.8 0.0 0.0 0.0 7.9 0 0 c5t210000D0238AEFF2d6
    78.1 12.8 4539.8 225.1 0.0 0.5 0.0 5.4 0 11 c2t220000D0239AEFF2d5
    4.7 231.9 79.7 4283.1 0.0 0.1 0.0 0.2 0 6 rmt/0
    0.1 0.0 1.8 0.0 0.0 0.0 0.0 3.0 0 0 192.168.42.183:/var/tmp
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    20.0 10.0 114.8 374.8 0.0 0.4 0.0 13.0 0 14 c6t5000CCA01247B510d0
    16.0 10.0 118.3 374.8 0.0 0.4 0.0 14.3 0 15 c6t5000CCA012481A18d0
    1.0 4.0 0.8 32.1 0.0 0.0 0.0 0.4 0 0 c6t600A0B800032D0CE00000EF24A2C6662d0
    0.0 3350.6 0.0 62838.7 0.0 0.6 0.0 0.2 4 64 rmt/0
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    0.0 4.0 0.0 32.0 0.0 0.0 0.0 1.1 0 0 c6t600A0B800032D0CE00000EF24A2C6662d0
    0.0 1162.4 0.0 22243.0 0.0 0.9 0.0 0.8 1 88 rmt/0

    ===============================================================




    What am I doing wrong when backing up ZFS data to tape?
  • 5. Re: Restore data from IBM LTO-4 to T5120 solaris 10 8/11 slow
    Nik Expert
    Hi.

    Use:
    zfs send bpool/sapmnt@2013-04-16 | rsh 192.168.42.214 dd of=/dev/rmt/0cbn obs=128k

    You can also monitor performance and try tuning the ibs setting as well.
    For example:

    zfs send bpool/sapmnt@2013-04-16 | rsh 192.168.42.214 dd of=/dev/rmt/0cbn ibs=128k obs=128k
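
    The reason obs= matters: with only bs=, dd does not reblock, so every read it gets from the rsh pipe, however short, is written to tape as its own record; that is what produces the ~18 kB average seen above. With obs=128k, dd gathers those short reads into full 128 KB output blocks before writing. A quick way to see the effect without using a tape (a sketch only, reusing the snapshot name from your command) is to compare the record summaries dd prints:

    # bs= only: "records out" shows many partial records (the N+M form, with a large M)
    zfs send bpool/sapmnt@2013-04-16 | dd of=/dev/null bs=128k
    # ibs/obs: the output side is reblocked, so records out are full 128 KB blocks
    zfs send bpool/sapmnt@2013-04-16 | dd of=/dev/null ibs=128k obs=128k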


    Regards.
  • 6. Re: Restore data from IBM LTO-4 to T5120 solaris 10 8/11 slow
    warut21 Newbie
    Thanks Nik

    After changing bs=128k to ibs=128k obs=128k for the backup to tape, the restore now reaches a transfer rate of about 43 MB/s.

    I don't know why bs=128k is not equivalent to ibs=128k obs=128k, but that is fast enough for my restore window now.

    Thanks again.
