Now the load avg is down and the %wa (disk wait) is also down:
top - 09:29:29 up 6 days, 19:58, 3 users, load average: 0.62, 6.66, 11.54
Tasks: 857 total, 1 running, 856 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.2%us, 0.2%sy, 0.0%ni, 98.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 16439284k total, 16339420k used, 99864k free, 252636k buffers
Swap: 20482832k total, 344664k used, 20138168k free, 9841680k cached
Which one is influencing the other?
In a nutshell, it is the average number of processes that have been waiting for CPU, disk, or network, including those currently executing, over the past 1-, 5-, and 15-minute periods.
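For illustration, here is a small Python sketch that pulls the three load averages out of a top header line like the one shown above (the function name and regex are mine, not part of top or any standard library):

```python
import re

def parse_load_averages(top_header):
    """Extract the 1-, 5-, and 15-minute load averages from a top header line."""
    match = re.search(r"load average:\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)", top_header)
    if match is None:
        raise ValueError("no load average found in header")
    return tuple(float(x) for x in match.groups())

header = "top - 09:29:29 up 6 days, 19:58, 3 users, load average: 0.62, 6.66, 11.54"
one, five, fifteen = parse_load_averages(header)
print(one, five, fifteen)  # 0.62 6.66 11.54
```

On Linux the same three numbers are also available directly in /proc/loadavg, which is where top reads them from.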
Thanks for your reply! What I want to know is: from the output of top given above, can we be sure that there is an I/O issue on the server?
All you can say is that during a certain one-minute interval in the past, 15 CPUs would theoretically have been needed to leave no process in the waiting queue. However, it does not tell you anything about the cause. For instance, what if the problem was due to network congestion or a routing issue outside your computer? The problem with any such statistic is that it cannot take your expectation of efficiency for a given task or time frame into consideration. The results may even be very good for what the system was doing at the time.
So unless you have statistics to compare against your machine's typical workload, I'd say there is no way to tell whether your system is configured properly or could be doing any better. If the load average is continuously more than 1 per CPU and you experience bad performance, then you have a reason to check what the system is doing right or wrong.
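Since a load average is only meaningful relative to the number of CPUs, a hedged sketch of that normalization (load_per_cpu is a hypothetical helper, not a standard tool; a value persistently above 1.0 suggests the run queue is longer than the machine can service):

```python
import os

def load_per_cpu(load_avg, ncpus=None):
    """Normalize a load average by CPU count.

    Values persistently above 1.0 mean that, on average, more processes
    wanted to run than there were CPUs to run them on.
    """
    if ncpus is None:
        # Fall back to this machine's CPU count when none is given.
        ncpus = os.cpu_count() or 1
    return load_avg / ncpus

# With the 15-minute average of 11.54 from the output above, a 12-CPU
# machine would have been almost exactly saturated (ratio close to 1.0).
print(load_per_cpu(11.54, ncpus=12))
```

Note this still says nothing about *why* processes were waiting (CPU, disk, or network), only how long the queue was relative to capacity.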
Got it. Thanks!