
What is Performance Testing? | David Tzemach

Performance testing is a general name for several test techniques used to validate how a system performs and behaves. It is mainly used to measure and examine system aspects such as stability, reliability, responsiveness and resource usage under various workloads.

When conducting performance testing, it is important to use different types of performance tests (load, stress, reliability, etc.), as each of them provides a different kind of data that the team needs in order to understand whether the system is ready for real-world deployment.


When planning a performance testing project, start with the basics and determine the system’s technical boundaries and limitation specifications (e.g. operation x should take between 1 and 5 seconds; operation y should not cause a delay greater than one millisecond) so you know whether the system behaves satisfactorily.
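
For illustration, here is a minimal sketch (in Python) of how such boundary specifications could be encoded as automated checks. The operation names and thresholds below are hypothetical examples, not taken from any real system:

```python
import time

# Hypothetical boundary specifications, expressed in seconds.
# "operation_x" and "operation_y" are illustrative names only.
THRESHOLDS_SECONDS = {
    "operation_x": (1.0, 5.0),    # should take between 1 and 5 seconds
    "operation_y": (0.0, 0.001),  # should not add more than one millisecond
}

def check_boundary(name, operation):
    """Time a single call and compare it against the agreed boundary."""
    low, high = THRESHOLDS_SECONDS[name]
    start = time.perf_counter()
    operation()                        # the operation under test
    elapsed = time.perf_counter() - start
    ok = low <= elapsed <= high
    print(f"{name}: {elapsed:.4f}s -> {'PASS' if ok else 'FAIL'}")
    return ok
```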


The goals of performance tests

  • Performance tests allow the team to measure aspects of the system as well as the correlations between them.

  • Performance tests help identify bottlenecks in the system and determine the baseline for future testing activities.

  • Analyzing test results helps the team make informed decisions about the quality of the application.

  • Performance testing ensures compliance with preliminary performance goals and requirements.



Types of performance tests

Performance testing includes different test types that can be used to validate the application’s reliability, resilience, and stability:

  • Spike testing – Testing the application with sudden overloads. The goal is to validate that the software can handle a sudden jump in user activity.

  • Endurance/soak testing – Testing the system at different loads for a long period. The goal is to see that the system can handle sustained use.

  • Load testing – Testing the software under increasing load (up to, or near, the software’s limit), to verify that it can handle such loads without side effects such as unexpected crashes, memory leaks or bottlenecks (a minimal load-testing sketch follows this list).

  • Stress testing – Testing the software under load that exceeds the system limits. The goal here is to cause system failures, analyze the crashes, and find a way for the software to recover from these failures.

  • Fail-over/fault-tolerance testing – Testing that the software correctly responds to failures when there are two or more physical nodes. In this type of test, we validate that when a node fails the users can continue their work and the application data and services move to an available node.

  • Redundancy testing – Testing the application’s redundancy mechanism under load, to determine the effectiveness of the load-balancing system.

  • Volume testing – Testing the application with different amounts of data, to understand its limits and demonstrate how those amounts of data affect compliance with the predefined requirements.
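
As a rough illustration of the load-testing idea above, the sketch below ramps up the number of concurrent virtual users against a single HTTP endpoint and records response times. The target URL, load levels and request counts are placeholders; a real project would typically use a dedicated tool (JMeter, Gatling, k6, Locust, etc.) and realistic workload models:

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint

def timed_request(url):
    """Issue one request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def run_load_step(concurrent_users, requests_per_user):
    """Simulate one load level with a fixed number of concurrent users."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        return list(pool.map(
            timed_request,
            [TARGET_URL] * (concurrent_users * requests_per_user),
        ))

# Increase the load step by step (load testing); pushing the last, oversized
# step past the expected limit would move this closer to a stress test.
for users in (10, 50, 100, 500):
    latencies = run_load_step(users, requests_per_user=20)
    print(f"{users:>4} users: "
          f"avg={statistics.mean(latencies):.3f}s "
          f"p95={statistics.quantiles(latencies, n=20)[18]:.3f}s "
          f"max={max(latencies):.3f}s")
```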



How do we define a successful performance-testing project?

One of the most important things in any project is to define its success criteria. Otherwise, how can we determine whether the process delivered the expected results? How can we learn from failure? Was the process efficient enough? To answer those questions, it is important that the criteria include two crucial items:

  1. Current performance has improved – Performance testing projects are expensive and demanding. Therefore, it is crucial that they return the investment by actually improving the system’s performance.

  2. Numbers instead of assumptions – One of the biggest challenges when initiating a performance-testing project is the lack of specifications that define how the system should react in different situations. Unlike functional testing, where the expected output can usually be inferred, development teams must make assumptions about how the system should behave. A successful project measures how many of those assumptions have been turned into real numbers that the team can easily analyze and examine, and then translate into applicable action items.
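
As a small illustration of turning raw measurements into numbers the team can analyze, the sketch below summarizes a set of latency samples as percentiles. The sample values and the target mentioned in the comment are invented purely for the example:

```python
import statistics

def summarize(latencies_ms):
    """Reduce raw latency samples (milliseconds) to figures the team can act on."""
    cuts = statistics.quantiles(latencies_ms, n=100)
    return {
        "avg": round(statistics.mean(latencies_ms), 1),
        "p50": round(statistics.median(latencies_ms), 1),
        "p95": round(cuts[94], 1),
        "p99": round(cuts[98], 1),
        "max": max(latencies_ms),
    }

# Invented measurements from a hypothetical test run.
samples = [120, 135, 128, 142, 900, 131, 127, 138, 133, 125]
print(summarize(samples))
# An assumption such as "the system feels fast enough" becomes a concrete,
# trackable target, e.g. "p95 must stay below 200 ms".
```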

Advantages and disadvantages of performance testing

There are many advantages to performance testing. Here are just a few examples:

  • Allows the business to understand the technical limitations of the system.

  • Helps the business improve the end user experience.

  • Allows the business to compare two systems to evaluate their performance or to assess the performance aspects of different product releases.

  • Allows the organization to find functional and design bugs that the team cannot reveal during the functional testing effort.

  • Allows the business to reduce customer issues caused by inefficient performance of the system.

Performance testing has no real disadvantages as such, but it does raise challenges and demand effort:

  • It is hard to design, evaluate and define constructive action items related to these tests.

  • It is hard to conduct these tests as part of the ordinary development cycles.

  • Most often, the team must base their tests on assumptions, as the product owner (PO) is unable to provide all of the technical limitations of the system.

  • Performance tests demand resources (e.g. physical environments, tools and time) that are not always available.

  • Compared to functional testing, it can be difficult to understand the root cause of a problem.



