Methodology question (always aim for same throughput)

IHodgetts Member Posts: 264
edited Jan 15, 2009 5:47AM in QA/Testing
This is more of a 'sanity check' than anything (probably a real 'newbie' question). I just wondered how people approach these sorts of issues...

I have a very simple test for a single method in a webservice. The test criteria are to hit 2.5 transactions per second with 100 users (and gauge the response time).

Prior to a change in the environment, I had the scenario set with a delay of 29s (as each iteration typically took 8s). We identified an issue and the fix decreased the response times to around 4s.

Am I correct in thinking that I should now be INCREASING the delay in my scenario (to, say, 33s) to account for the improvement and achieve the same throughput? Obviously, running exactly the same scenario would generate a higher throughput because the response times have decreased.
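For reference, my arithmetic looks like this (a minimal sketch, assuming the simple closed-workload model throughput ≈ users / (response time + delay); the tool's actual pacing and ramp-up behaviour will shift the numbers slightly):

    # Hold the iteration cycle time (response + delay) constant to hold
    # throughput constant. Numbers are the ones from the scenario above;
    # the pacing model itself is an assumption, not the tool's exact behaviour.

    def adjusted_delay(old_response, old_delay, new_response):
        """Delay that preserves the previous cycle time after a response-time change."""
        return (old_response + old_delay) - new_response

    def estimated_throughput(users, response, delay):
        """Rough steady-state transactions per second for a closed workload."""
        return users / (response + delay)

    new_delay = adjusted_delay(8, 29, 4)              # 33s, the figure proposed above
    print(new_delay)                                  # 33
    print(estimated_throughput(100, 4, new_delay))    # ~2.70 tps, unchanged from 100 / (8 + 29)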

Answers

  • It depends on the objective of your tests. If the objective is to maintain a constant throughput and to see the impact on (or measure) other performance metrics, then increasing the delay is the right approach.
    Ashish Dave-Oracle
  • IHodgetts Member Posts: 264
    Absolutely; the main aim is to obtain a throughput of 2.5 tps with 100 VUs (though the concurrency figure is a bit vague).

    The issue was that, compared to our other webservices (running a similar test), this webservice showed drastically different response times: the others typically responded in 2-4s, but this one typically took 7-8s. We eventually found a couple of issues in both the application and the environment, and the resulting fixes decreased the typical response times significantly.

    I just wanted confirmation that I was taking the correct approach in altering my scenario once the performance had improved. I actually find that I need a 34s delay between iterations (rather than the initial 29s) to achieve around the same result in a 15-minute test (rough arithmetic below). Effectively, a 'performance regression test' will need tweaking as changes to the application/environment occur.
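    As a rough cross-check of the tuned figure (same simplified users / (response + delay) model as above, so ramp-up and the tool's measurement window will shift it slightly):

        # Empirically tuned 34s delay with ~4s responses over a 15-minute run.
        users, response, delay, duration = 100, 4, 34, 15 * 60
        cycle = response + delay                  # 38s per iteration
        iterations_per_user = duration / cycle    # ~23.7 iterations each
        total_tx = users * iterations_per_user
        print(total_tx / duration)                # ~2.63 tps, in the right ballpark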
This discussion has been closed.