Added: Over a year ago by Capacitas
Capacitas believes that analysis of performance testing results should be carried out in conjunction with analytical performance modelling. This has helped Capacitas identify performance problems in the past that wouldn't have been observed by looking at response time results in isolation.
The overhead of performing analytical modelling is relatively small compared to the overhead of carrying out the performance tests. Analytical modelling also drives the performance test strategy and ensures the best use of testing time. The author has developed a set of tips that will help you achieve the maximum benefit from your performance testing process.
The performance tester should carry out the same test at different utilisations or loads of the system under test, e.g. at 20%, 40%, 60% and 80%. The aim is to obtain a graph of system response time vs. load. Ensure that load is the only variable that changes between the performance tests.
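To see why several load points matter, consider the predicted curve from a simple open M/M/1 queue (an assumption for illustration, not a claim about the real system): response time grows nonlinearly as utilisation rises, so a single load point tells you very little. The service time used here is a made-up figure.

```python
SERVICE_TIME_S = 0.2  # hypothetical service time in seconds (illustrative)

def mm1_response_time(utilisation, service_time=SERVICE_TIME_S):
    """Predicted response time for an M/M/1 queue at a given utilisation."""
    if not 0.0 <= utilisation < 1.0:
        raise ValueError("utilisation must be in [0, 1)")
    return service_time / (1.0 - utilisation)

# Response time rises gently at first, then steeply near saturation.
for u in (0.2, 0.4, 0.6, 0.8):
    print(f"utilisation {u:.0%}: predicted response time {mm1_response_time(u):.3f}s")
```

Plotting the measured response times at the same load points alongside a curve like this makes the "knee" of the system easy to spot.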
The author has experienced situations where the people carrying out the tests decided to change the configuration, or attempted performance tuning, between the different load tests. The performance test should be viewed as an experiment, and only a limited number of changes should be made between tests in order to understand the system behaviour.
Ideally, as part of the performance assurance process, you build an analytical performance model of the system. This model can be used to compare the theoretical model response time predictions with the actual response times measured in the performance tests.
Any differences between the two should be investigated and accounted for, as they may indicate hidden performance problems such as soft bottlenecks, e.g. contention between threads in a multi-threaded environment.
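A minimal sketch of that comparison might look like the following. The predicted and measured figures, and the 25% tolerance, are all illustrative; the point is simply to flag load points where the measurement diverges from the model.

```python
def flag_divergence(predictions, measurements, tolerance=0.25):
    """Return the loads at which the measured response time exceeds the
    model prediction by more than the given fractional tolerance."""
    suspects = []
    for load, predicted in predictions.items():
        measured = measurements[load]
        if measured > predicted * (1.0 + tolerance):
            suspects.append(load)
    return suspects

predicted = {0.2: 0.25, 0.4: 0.33, 0.6: 0.50, 0.8: 1.00}  # model (seconds)
measured  = {0.2: 0.26, 0.4: 0.35, 0.6: 0.80, 0.8: 2.10}  # test  (seconds)

# The 60% and 80% points diverge sharply from the model — candidates
# for a soft bottleneck such as thread contention.
print(flag_divergence(predicted, measured))  # → [0.6, 0.8]
```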
The response time as measured at your load-generating tool should equal the sum of the time spent in each of the components of the system under test. Any difference between these two measures is called unaccounted time. So, if the system components include an application server and a database server, then the time spent at the application server plus the time spent at the database server should approximately equal the time measured at the load-generating tool.
This seems obvious, but normally the performance tester can't measure response time at a component level and has to resort to techniques such as the Utilisation Law. Examples of unaccounted time include a call to another system, time-outs, etc.
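The Utilisation Law (U = X × S, so S = U / X) lets you estimate the per-transaction service demand at each component from its measured utilisation and the system throughput, and from that derive the unaccounted time. A sketch, with illustrative numbers:

```python
throughput = 50.0  # transactions per second, measured at the load generator

component_utilisation = {  # busy fraction measured on each server
    "app_server": 0.60,
    "db_server": 0.30,
}

# Utilisation Law: service demand per transaction at each component, D = U / X
service_demand = {c: u / throughput for c, u in component_utilisation.items()}

measured_response = 0.025  # seconds, end to end at the load-generating tool
accounted = sum(service_demand.values())
unaccounted = measured_response - accounted

print(service_demand)  # app server: 0.012s, database server: 0.006s
print(f"unaccounted time: {unaccounted * 1000:.1f} ms")
```

A large unaccounted figure is the prompt to go looking for calls to other systems, time-outs, or queuing the component metrics don't capture.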
The way in which transactions arrive at the system has a significant impact on response times. A real-world example of this is when passengers wait for buses at a bus stop. The bus might be scheduled to arrive every 5 minutes, but if 3 buses arrive at once then the next bus wouldn’t arrive for 15 minutes.
The pattern of bus arrivals is described by an arrival rate distribution. The distribution describes the probability of how bursty the bus arrival pattern is i.e. the probability of 1 bus arriving at once, the probability of 2 buses arriving at once, the probability of 3 buses arriving at once etc.
So how do you determine whether the transaction arrival rate distribution is realistic? This can be done by comparing the arrival rate distribution for the system under test against the Poisson distribution (Excel provides a Poisson function). The Poisson distribution can be used to approximate a large number of independent random events, e.g. the number of hits on a web server. If the arrival rate distribution in the test assigns lower probabilities to large numbers of transactions arriving at once than the Poisson distribution does, then the performance test will underestimate the response time of the system.
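The same comparison can be done outside Excel. The sketch below (with an illustrative series of per-second arrival counts) computes the Poisson probability for each count and sets it against the observed frequency; a test whose observed frequencies for large counts fall well below the Poisson values is less bursty than real traffic.

```python
import math
from collections import Counter

def poisson_pmf(k, lam):
    """P(exactly k arrivals in an interval) for a Poisson process with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Illustrative: transactions arriving per second during a test run
arrivals_per_second = [2, 3, 1, 2, 4, 2, 3, 2, 1, 0, 2, 3, 2, 2, 1]
lam = sum(arrivals_per_second) / len(arrivals_per_second)  # observed mean rate

counts = Counter(arrivals_per_second)
n = len(arrivals_per_second)
for k in range(max(counts) + 1):
    observed = counts.get(k, 0) / n
    expected = poisson_pmf(k, lam)
    print(f"{k} arrivals: observed {observed:.2f}, Poisson {expected:.2f}")
```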
The service time can be measured by carrying out a single-threaded test: a single virtual user at the load-generating tool sends transactions synchronously into the system under test, with no wait time between receiving a response and sending the next transaction.
The response time measured in this type of performance test involves minimal queuing and therefore accurately represents the service time. The service time distribution can then be plotted. This information can be fed into your analytical performance model, which will help support tip 2.
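Summarising those single-threaded measurements is straightforward. The sample times below are illustrative; because the single synchronous user keeps queuing minimal, each measurement approximates the service time, and the mean and percentiles characterise its distribution for the model.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100.0 * len(ordered)))
    return ordered[rank - 1]

# Illustrative response times (seconds) from a single-threaded test
single_thread_times = [0.18, 0.21, 0.19, 0.20, 0.35, 0.19, 0.22, 0.20]

mean_service = sum(single_thread_times) / len(single_thread_times)
print(f"mean service time: {mean_service:.3f}s")
print(f"95th percentile:   {percentile(single_thread_times, 95):.3f}s")
```

The mean feeds the queuing formulas in the analytical model; the gap between the mean and the 95th percentile hints at how variable the service time is.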