NFS performance

CentOS 7.8
Hello Team,
I noticed the NFS read rate is very slow when the application reads files from an external server. Kindly advise what needs to be checked. Could the network card be a bottleneck?
Read rate from the application server:
-bash-4.2$ time dd if=/datacdr/CDR/inocs/sms/cbs_cdr_sms_20220112_601_101_614287.add of=/dev/null ibs=30000 obs=4096 count=3333
2+1 records in
21+1 records out
89820 bytes (90 kB) copied, 0.00761152 s, 11.8 MB/s
real 0m0.013s
user 0m0.002s
sys 0m0.000s
Mount options used:
10.215.228.72:/export/cdr2/data /datacdr nfs rw,nfsvers=3,soft,nosuid,rsize=32768,wsize=65536,noatime 0 0
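A minimal raw-throughput sketch, assuming iperf3 is installed on both hosts (this is not stated in the thread), to separate pure network capacity from NFS behavior:
# on the NFS server (10.215.228.72 from the mount line): start a listener
iperf3 -s
# on the application server: run a 30-second TCP throughput test against it
iperf3 -c 10.215.228.72 -t 30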
Answers
-
Hi.
A speed of 11.8 MB/s looks like a network card running at 100 Mb Ethernet.
What type of network card is used on the server and on the client, and what is its speed?
Are they connected to the same switch, or are there several switches in between? What is the speed of every link from the server to the client?
Regards,
Nik
-
Hello Nik,
Thanks for the update.
I have checked with the network team.
It's a 40GB link from client to server.
Should I change the network card on the application server? Do you think packets are being dropped?
Regards,
Roshan
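A hedged way to look for interface errors or drops on the Linux side (the interface name ens192 below is a placeholder, not from the thread):
# kernel per-interface RX/TX error and drop counters
ip -s link show
# NIC driver statistics, if ethtool is installed
ethtool -S ens192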
-
Hi.
40 GB would mean 40 gigabytes per second; I am not aware of a NIC with that speed.
40 Gb means 40 gigabits per second; a NIC like that should provide roughly 4 GB/s.
To analyze the performance bottleneck you need to run more tests. The current result only shows that there is some problem.
What OS is installed on the client and on the server?
Can you show the output of iostat -x 5 on both the server and the client?
You are using a very small file for the test (90 kB); you cannot see real performance on small files, because the time spent on path lookup, opening the file, etc. is much larger than the actual copy time. Try copying a file of at least 1 GB, as in the sketch below.
Regards,
Nik
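A minimal large-file read sketch along those lines, assuming a file of at least 1 GB exists under the mount and the client is Linux (the filename is a placeholder):
# drop the client page cache first so the read really goes over NFS (run as root)
sync; echo 3 > /proc/sys/vm/drop_caches
# sequential read of 1 GiB from the NFS mount
time dd if=/datacdr/large_testfile of=/dev/null bs=1M count=1024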
-
Hi Nik,
Please find below the output from the client side:
$ iostat -x 5 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.8 5.2 42.4 53.3 0.0 0.1 9.6 0 3 nfs1 346.4 22.7 4288.3 586.4 0.2 1.4 4.4 1 61 nfs2 0.0 0.0 0.0 0.2 0.0 0.0 2.2 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 5.2 0.0 25.5 0.0 0.0 3.1 0 2 nfs1 678.7 148.1 20382.4 4710.0 0.0 1.1 1.3 1 47 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 1.8 0.0 8.4 0.0 0.0 0.6 0 0 nfs1 669.4 149.0 20067.3 4719.5 0.0 1.1 1.3 1 47 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 3.4 0.0 18.2 0.0 0.0 5.0 0 0 nfs1 734.5 157.2 22140.0 5007.0 0.0 1.3 1.5 3 53 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 7.6 0.0 37.2 0.0 0.0 0.9 0 1 nfs1 791.2 145.0 19903.7 4527.5 0.0 1.7 1.9 1 88 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 4.4 0.0 23.2 0.0 0.0 0.7 0 0 nfs1 752.9 159.0 18254.0 4564.9 0.0 1.8 1.9 2 88 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 3.0 0.0 15.1 0.0 0.0 0.7 0 0 nfs1 706.8 151.8 16972.8 4371.9 0.0 1.9 2.2 1 90 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 4.8 0.0 23.5 0.0 0.0 0.5 0 0 nfs1 732.7 143.6 17737.4 4170.5 0.0 1.7 1.9 1 90 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 1.6 0.0 6.9 0.0 0.0 0.6 0 0 nfs1 619.8 150.4 17562.4 4703.7 0.0 1.3 1.7 1 61 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 2.2 0.0 10.1 0.0 0.0 0.6 0 0 nfs1 656.0 157.4 19624.2 4987.1 0.0 0.6 0.8 1 38 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 3.0 0.0 16.8 0.0 0.0 0.5 0 0 nfs1 636.2 153.6 18972.4 4880.4 0.0 0.8 1.1 1 43 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 6.0 0.0 31.4 0.0 0.0 0.7 0 0 nfs1 651.2 154.8 19452.3 4926.1 0.0 0.6 0.8 1 41 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 5.4 0.0 27.4 0.0 0.0 0.6 0 0 nfs1 617.1 149.0 18404.5 4746.2 0.0 0.8 1.0 1 40 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 5.4 0.0 31.8 0.0 0.0 4.1 0 2 nfs1 609.4 140.8 18179.4 4493.2 0.0 1.1 1.4 1 50 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 1.8 0.0 8.4 0.0 0.0 0.8 0 0 nfs1 600.0 139.0 17906.0 4427.1 0.0 1.0 1.4 1 45 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 3.8 0.0 17.0 0.0 0.0 0.5 0 0 nfs1 665.2 148.6 20194.7 4723.6 0.0 0.7 0.9 2 38 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 8.0 0.0 37.2 0.0 0.0 0.6 0 1 nfs1 618.2 138.8 18218.8 4225.6 0.0 0.9 1.2 1 48 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 3.8 0.0 23.4 0.0 0.0 0.9 0 0 nfs1 620.5 148.0 17287.5 4218.2 0.0 1.1 1.5 1 59 nfs2 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 0 0 extended device statistics device r/s w/s kr/s kw/s wait actv svc_t %w %b vdc1 0.0 2.8 0.0 15.0 0.0 0.0 0.8 0 0 nfs1 628.0 170.2 18437.8 4996.5 0.0 0.9 1.2 1 50 nfs2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
iostat -x 5 on the application server side:
[[email protected] datacdr2]# iostat -x 5 Linux 3.10.0-1160.42.2.el7.x86_64 (RB-BIGD-STRIIM1) 01/12/2022 _x86_64_ (8 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 24.83 0.00 10.61 1.04 0.00 63.52 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 1.07 0.05 21.61 13.61 2273.98 211.30 0.23 10.70 19.67 10.68 1.63 3.54 dm-0 0.00 0.00 0.01 0.30 0.47 2.49 19.02 0.00 4.38 4.83 4.36 2.33 0.07 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 19.29 0.00 4.01 0.12 5.43 1.34 0.00 dm-2 0.00 0.00 0.03 22.37 13.12 2271.48 203.92 0.24 10.52 25.00 10.50 1.55 3.48 avg-cpu: %user %nice %system %iowait %steal %idle 27.62 0.00 8.04 18.97 0.00 45.37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.20 32.60 1.60 9898.70 603.68 1.17 35.60 11.00 35.75 1.43 4.70 dm-0 0.00 0.00 0.20 0.80 1.60 3.20 9.60 0.00 3.00 11.00 1.00 2.40 0.24 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 31.80 0.00 9895.50 622.36 1.16 36.63 0.00 36.63 1.40 4.46 avg-cpu: %user %nice %system %iowait %steal %idle 19.59 0.00 10.31 23.72 0.00 46.38 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 16.20 0.00 1322.90 163.32 0.03 1.64 0.00 1.64 1.17 1.90 dm-0 0.00 0.00 0.00 0.20 0.00 4.00 40.00 0.00 1.00 0.00 1.00 1.00 0.02 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 16.00 0.00 1318.90 164.86 0.03 1.65 0.00 1.65 1.18 1.88 avg-cpu: %user %nice %system %iowait %steal %idle 29.93 0.00 9.93 23.58 0.00 36.56 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 17.40 0.00 1842.20 211.75 0.06 3.30 0.00 3.30 1.57 2.74 dm-0 0.00 0.00 0.00 1.20 0.00 9.60 16.00 0.00 0.17 0.00 0.17 0.17 0.02 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 16.20 0.00 1832.60 226.25 0.06 3.53 0.00 3.53 1.68 2.72 avg-cpu: %user %nice %system %iowait %steal %idle 20.40 0.00 7.28 30.39 0.00 41.93 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 10.40 0.00 29.80 0.00 1621.50 108.83 0.08 2.77 0.00 2.77 1.14 3.40 dm-0 0.00 0.00 0.00 0.80 0.00 9.80 24.50 0.02 23.50 0.00 23.50 21.25 1.70 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 39.40 0.00 1611.70 81.81 0.09 2.38 0.00 2.38 0.43 1.70 avg-cpu: %user %nice %system %iowait %steal %idle 17.83 0.00 11.21 29.55 0.00 41.40 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 13.00 0.00 1292.90 198.91 0.02 1.58 0.00 1.58 1.05 1.36 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 13.00 0.00 1292.90 198.91 0.02 1.60 0.00 1.60 1.06 1.38 avg-cpu: %user %nice %system %iowait %steal %idle 21.83 0.00 6.33 22.77 0.00 49.08 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 18.00 0.00 1525.10 169.46 0.06 3.26 0.00 3.26 1.63 2.94 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 18.00 0.00 1525.10 169.46 0.06 3.26 0.00 3.26 1.63 2.94 avg-cpu: %user %nice %system %iowait %steal %idle 27.67 0.00 7.67 18.53 0.00 46.13 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s 
avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 37.13 0.00 13178.94 709.96 4.56 122.72 0.00 122.72 1.94 7.21 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 37.13 0.00 13178.94 709.96 4.56 122.72 0.00 122.72 1.94 7.19 avg-cpu: %user %nice %system %iowait %steal %idle 20.84 0.00 8.67 27.12 0.00 43.37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 16.40 0.00 1330.20 162.22 0.05 3.02 0.00 3.02 2.44 4.00 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 16.40 0.00 1330.20 162.22 0.05 3.02 0.00 3.02 2.44 4.00 avg-cpu: %user %nice %system %iowait %steal %idle 24.75 0.00 7.12 28.47 0.00 39.66 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 22.20 0.00 1394.30 125.61 0.07 2.97 0.00 2.97 1.68 3.72 dm-0 0.00 0.00 0.00 0.60 0.00 7.20 24.00 0.00 1.00 0.00 1.00 0.33 0.02 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 21.60 0.00 1387.10 128.44 0.07 3.03 0.00 3.03 1.71 3.70 avg-cpu: %user %nice %system %iowait %steal %idle 45.84 0.00 7.70 12.95 0.00 33.50 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 9.60 0.00 42.00 0.00 1563.90 74.47 0.14 3.37 0.00 3.37 1.91 8.04 dm-0 0.00 0.00 0.00 1.80 0.00 11.90 13.22 0.01 3.22 0.00 3.22 2.33 0.42 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 49.80 0.00 1552.00 62.33 0.17 3.34 0.00 3.34 1.53 7.62 avg-cpu: %user %nice %system %iowait %steal %idle 46.58 0.00 9.12 1.44 0.00 42.85 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 25.80 0.00 1509.60 117.02 0.08 3.16 0.00 3.16 2.50 6.44 dm-0 0.00 0.00 0.00 0.40 0.00 2.40 12.00 0.00 1.50 0.00 1.50 1.00 0.04 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 25.40 0.00 1507.20 118.68 0.08 3.17 0.00 3.17 2.51 6.38 avg-cpu: %user %nice %system %iowait %steal %idle 37.07 0.00 8.27 2.15 0.00 52.52 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 17.00 0.00 1350.80 158.92 0.07 3.86 0.00 3.86 2.01 3.42 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 17.00 0.00 1350.80 158.92 0.07 3.84 0.00 3.84 1.99 3.38 avg-cpu: %user %nice %system %iowait %steal %idle 33.04 0.00 12.08 2.56 0.00 52.32 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 27.40 0.00 3154.70 230.27 0.09 3.34 0.00 3.34 1.02 2.80 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 27.40 0.00 3154.70 230.27 0.09 3.34 0.00 3.34 1.01 2.78 avg-cpu: %user %nice %system %iowait %steal %idle 20.25 0.00 8.19 8.04 0.00 63.52 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 21.80 0.00 1816.40 166.64 0.20 9.01 0.00 9.01 4.46 9.72 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
dm-2 0.00 0.00 0.00 21.80 0.00 1816.40 166.64 0.20 8.99 0.00 8.99 4.45 9.70 avg-cpu: %user %nice %system %iowait %steal %idle 23.11 0.00 8.90 10.69 0.00 57.30 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 25.60 0.00 1385.80 108.27 0.07 2.63 0.00 2.63 2.20 5.64 dm-0 0.00 0.00 0.00 0.40 0.00 3.20 16.00 0.00 1.50 0.00 1.50 1.00 0.04 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 25.20 0.00 1382.60 109.73 0.07 2.66 0.00 2.66 2.23 5.62 avg-cpu: %user %nice %system %iowait %steal %idle 20.81 0.00 11.30 4.45 0.00 63.44 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 8.00 0.00 22.60 0.00 1457.70 129.00 0.05 2.00 0.00 2.00 0.65 1.48 dm-0 0.00 0.00 0.00 1.20 0.00 5.60 9.33 0.00 3.83 0.00 3.83 3.83 0.46 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 29.40 0.00 1452.10 98.78 0.06 2.02 0.00 2.02 0.35 1.02 avg-cpu: %user %nice %system %iowait %steal %idle 18.61 0.00 8.60 3.85 0.00 68.94 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 24.00 0.00 1348.40 112.37 0.05 2.15 0.00 2.15 1.76 4.22 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 24.00 0.00 1348.40 112.37 0.05 2.15 0.00 2.15 1.76 4.22 avg-cpu: %user %nice %system %iowait %steal %idle 16.83 0.00 9.94 1.70 0.00 71.53 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 11.00 0.00 1646.00 299.27 0.02 1.93 0.00 1.93 0.69 0.76 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 11.00 0.00 1646.00 299.27 0.02 1.93 0.00 1.93 0.69 0.76 avg-cpu: %user %nice %system %iowait %steal %idle 18.98 0.00 8.57 7.10 0.00 65.35 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 27.40 0.00 3193.60 233.11 0.11 4.17 0.00 4.17 1.88 5.14 dm-0 0.00 0.00 0.00 0.80 0.00 4.60 11.50 0.00 3.25 0.00 3.25 2.75 0.22 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 26.60 0.00 3189.00 239.77 0.11 4.17 0.00 4.17 1.83 4.86 avg-cpu: %user %nice %system %iowait %steal %idle 28.20 0.00 8.88 4.67 0.00 58.26 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 13.60 0.00 1312.90 193.07 0.04 2.66 0.00 2.66 1.07 1.46 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 13.60 0.00 1312.90 193.07 0.04 2.66 0.00 2.66 1.07 1.46 avg-cpu: %user %nice %system %iowait %steal %idle 26.95 0.00 12.92 5.22 0.00 54.91 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 26.00 0.00 1895.20 145.78 0.08 2.99 0.00 2.99 2.29 5.96 dm-0 0.00 0.00 0.00 0.40 0.00 3.20 16.00 0.00 1.00 0.00 1.00 0.50 0.02 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 25.60 0.00 1892.00 147.81 0.08 3.01 0.00 3.01 2.31 5.92 avg-cpu: %user %nice %system %iowait %steal %idle 28.83 0.00 9.11 12.06 0.00 50.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 6.80 0.00 28.60 
0.00 1693.70 118.44 0.09 3.02 0.00 3.02 0.59 1.70 dm-0 0.00 0.00 0.00 0.80 0.00 7.00 17.50 0.00 2.50 0.00 2.50 2.50 0.20 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 34.60 0.00 1686.70 97.50 0.10 2.85 0.00 2.85 0.43 1.50 avg-cpu: %user %nice %system %iowait %steal %idle 44.05 0.00 10.25 11.72 0.00 33.98 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 30.40 0.00 1399.80 92.09 0.16 5.18 0.00 5.18 4.99 15.18 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 30.40 0.00 1399.80 92.09 0.16 5.20 0.00 5.20 5.02 15.26 avg-cpu: %user %nice %system %iowait %steal %idle 35.29 0.00 7.86 11.46 0.00 45.40 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 11.00 0.00 1312.30 238.60 0.02 1.96 0.00 1.96 1.27 1.40 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 11.00 0.00 1312.30 238.60 0.02 1.96 0.00 1.96 1.27 1.40 avg-cpu: %user %nice %system %iowait %steal %idle 26.93 0.00 9.33 13.15 0.00 50.58 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.00 0.00 0.00 26.20 0.00 3225.90 246.25 0.10 3.67 0.00 3.67 1.56 4.10 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 dm-2 0.00 0.00 0.00 26.20 0.00 3225.90 246.25 0.10 3.66 0.00 3.66 1.55 4.06
I tried with a 20 GB file and the output is below.
[[email protected] ~]# time dd if=/datacdr/root.dmp of=/dev/null ibs=30000 obs=4096 count=3333
3333+0 records in
24411+1 records out
99990000 bytes (100 MB) copied, 16.38 s, 6.1 MB/s
real 0m16.389s
user 0m0.020s
sys 0m0.092s

/etc/security/limits.conf:
# End of file
striim soft nofile 500000
striim hard nofile 500000

[[email protected] limits.d]# cat 20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 4096
root soft nproc unlimited
[[email protected] limits.d]#

Max open files is 500,000 for user striim:
[[email protected] 2416]# cat limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             4096                 1031027              processes
Max open files            500000               500000               files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       1031027              1031027              signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
-
Hi.
You did not say which OS is used on the client side.
I can see that you have two NFS mounts on the client (nfs1 and nfs2).
nfs1 is more than 40% busy, with combined read + write throughput of roughly 24 MB/s.
But I do not see that load on the server side.
iostat on both sides should be run at the same time as the test.
It looks like Solaris installed in an LDOM, so your network card may be shared with other LDOMs.
Please describe the whole configuration.
Regards,
Nik
-
Hi,
The OS on the client side is Solaris. Kindly advise how I can gather the full configuration details. How did you conclude that Solaris is running in an LDOM?
Thanks,
Roshan
-
Hi.
Are you the system administrator of this server? In that case you must know what you manage.
If system administration is not your job, describe the NFS performance problem to your system administrator.
You cannot get the detailed physical configuration from inside a virtual machine.
The disk name vdc1 looks like a disk name inside an LDOM.
Try these commands to check whether this system is an LDOM:
prtdiag -v
ldm list
Regards,
Nik
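A related check, not suggested in the thread: on Solaris 11 the link speed actually visible inside the domain can usually be listed with dladm (exact subcommands vary by release):
# physical datalinks with their negotiated speed and state
dladm show-phys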
-
Hi Nik,
We moved the VM to another host with more resources and the latency is lower now.
I noticed that only during public holidays and on Sundays is the transfer speed high and stable.
Most probably it is network saturation. I checked with the network team and they told me we are using 40 GbE bandwidth.
Do you think we need to increase to 100 GbE?
Regards,
Roshan
-
Best Answer
Hi.
You and the network team should analyze the real usage of the network.
The problem may be on the server side or on the client side.
Other VMs on the same physical host may have high utilization of some resources.
So you really need to understand how the 40 Gb link is utilized.
You can also add more NICs to the server and use bonding, or distribute the VMs across NICs (or even across NIC bonds).
Regards,
Nik
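A hedged sketch of one way to watch actual interface utilization during the busy window, assuming the sysstat package is available on the Linux side (interface names and availability are not confirmed in the thread):
# per-interface throughput (rxkB/s, txkB/s) reported every 5 seconds
sar -n DEV 5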