
How can we define a successful performance testing project?

Updated: Jan 19, 2022

One of the essential steps in every project is defining the success/failure criteria; otherwise, how can we determine whether the process delivered the expected results? How can we learn from failure? Was the process efficient enough?

So what are the main criteria we can examine to evaluate a performance testing process? There is no single answer; every project has its own targets and expectations. Still, based on my experience, there are a few mandatory criteria that we always need to consider when evaluating the performance testing effort.


You succeeded in improving the current performance

So you conducted your tests and spent thousands of dollars, but the application behaves the same way it did before the tests started. If that’s the result, then we failed. Still, if we succeeded in improving the performance of the application, then we achieved one of the main goals for which we started testing in the first place (think about it: do you want to start a costly process without improving the actual product?).


You succeeded in finding application bugs early

Well, we all know that bugs are cheaper to fix when they are found in the early stages of the testing process. This is as true for performance tests as for any other testing process. Based on this, we can say that finding bugs early is a major consideration when determining the quality of our testing efforts.


You can deliver documentation that reflects the process

Like any other testing project, we always need to ask ourselves a simple question: “Did we spend the time we had efficiently?” To answer this question, we need to examine how the time was spent during the project; the best way to accomplish this is to review the documentation we created and used.


Examples:

  • The test strategy that we used.

  • The set of tools.

  • The environment topology.

  • Scripts for simulating testing data (a minimal sketch follows this list).
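
To illustrate the last item, here is a minimal sketch of a script that generates synthetic user records for a load test; the field names, record count, and CSV layout are my own assumptions for illustration, not details from any specific project.

```python
import csv
import random
import string

# Minimal sketch: generate synthetic user records for a load test.
# Field names, record count, and CSV layout are illustrative assumptions.

def random_string(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

def generate_users(count):
    for i in range(count):
        name = random_string()
        yield {
            "user_id": i + 1,
            "username": name,
            "email": f"{name}@example.com",
            "password": random_string(12),
        }

if __name__ == "__main__":
    with open("test_users.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["user_id", "username", "email", "password"]
        )
        writer.writeheader()
        writer.writerows(generate_users(1000))
```

Keeping a script like this under version control means the next project can regenerate the same test data instead of rebuilding it from scratch.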

You know the numbers instead of making assumptions

One of the biggest problems we have on the “non-functional” side of testing (and in the performance world in particular) is that we need to make a lot of assumptions about how the system will react in certain situations. There are many reasons for this, but the truth is that sometimes you can’t get the well-defined expected outputs that you have on the “functional” side of testing.


Therefore, based on my experience, I can say that true success must include “numbers”: every assumption that you have before the tests should be translated into numbers that you can analyze and examine in the different phases of the project.
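
For example, a vague assumption like “the system should stay responsive under load” can be translated into a measurable number such as a 95th-percentile response time. Here is a minimal sketch of that translation; the 500 ms threshold and the sample values are hypothetical:

```python
import statistics

# Minimal sketch: turn a vague assumption into a measurable pass/fail number.
# The 500 ms threshold and the sample response times below are hypothetical.

P95_THRESHOLD_MS = 500

def p95(samples_ms):
    # statistics.quantiles with n=20 yields the 95th percentile
    # as the last cut point.
    return statistics.quantiles(samples_ms, n=20)[-1]

response_times_ms = [120, 180, 200, 250, 310, 330, 400, 420, 480, 900]

observed_p95 = p95(response_times_ms)
print(f"p95 = {observed_p95:.0f} ms (threshold: {P95_THRESHOLD_MS} ms)")
print("PASS" if observed_p95 <= P95_THRESHOLD_MS else "FAIL")
```

Once the assumption is a number, every phase of the project can check it objectively instead of arguing about what “responsive” means.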


You create a baseline for future projects

As I already mentioned, performance tests consume a considerable amount of resources, time, and money. A significant success factor is achieved if you can reuse the current testing assets on future projects. It’s like “recycling”; think about it for a second: each performance test leads to further costs for hardware, software, and testing tools. If you succeed in maintaining these assets, you can use them on future projects and reduce the costs.


Furthermore, another major asset is the baseline that you achieved in the current testing cycle. This baseline will be a great start for any other performance project, because it gives you something to compare execution results against, helps you understand the differences between versions, and saves a tremendous amount of time when you need to estimate the duration of each test.
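
To make that comparison concrete, here is a minimal sketch that compares a new run’s metrics against a saved baseline and flags regressions; the metric names and the 10% tolerance are illustrative assumptions, and for these latency metrics a higher value is worse:

```python
# Minimal sketch: compare a new run's metrics against a saved baseline and
# flag regressions. The metric names and the 10% tolerance are illustrative
# assumptions; for these latency metrics, a higher value is worse.

TOLERANCE = 0.10  # allow up to 10% degradation before flagging

def compare(baseline, current, tolerance=TOLERANCE):
    for metric, old in baseline.items():
        new = current.get(metric)
        if new is None:
            print(f"{metric}: missing in current run")
            continue
        change = (new - old) / old
        status = "REGRESSION" if change > tolerance else "OK"
        print(f"{metric}: {old} -> {new} ({change:+.1%}) {status}")

if __name__ == "__main__":
    # In a real project, the baseline would be loaded from a previous
    # cycle's stored results (e.g., a JSON file) rather than hard-coded.
    baseline = {"avg_response_ms": 240, "p95_response_ms": 480}
    current = {"avg_response_ms": 275, "p95_response_ms": 470}
    compare(baseline, current)
```

A small report like this turns the baseline into an actionable gate for every future version, instead of a number that sits in a document.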



