
End QA Elitism Right Now!

I recently came across a tweet in which someone asked, "As a tester, when do you give the approval for release?" Few questions provoke a negative reaction from me on first reading, but this one did. The subject annoyed me because it brought back some unpleasant memories from previous jobs.


The most frustrating projects I've worked on as a software engineer were those with "QA elitism," in which QA alone determined whether to authorize or delay a release. Whenever I've been on teams where the testing team was solely responsible for signing off on a project's deployment to production, disagreements inevitably arose. Sometimes the conflict surfaced as the development cycle came to an end; other times it was a quiet, ticking disaster that exploded weeks or months later.


On various websites, I've seen many testers speak openly about their teams being in charge of project releases, and it always irritates me. I'm not sure why organizations still require QA or testers to authorize a release. I'm not dismissing the significance of QA in the release process, nor am I looking to blame testers for causing conflict among team members. When put in these situations, everyone does their best with good intentions.

Even so, making testers the group responsible for a release puts unrealistic stress on them. It also, whether intentionally or unintentionally, creates silos between product, development, and testing. Finally, there is constant finger-pointing, trust erosion, and someone becoming the scapegoat when things go wrong. These problems benefit no one in the organization.


QA elitism anti-patterns

In my experience, two circumstances in particular tend to emerge on these kinds of projects, causing long-term issues for the teams and the organization as a whole.


Releases are delayed by QA because of strict defect classification

When it comes to categorizing defects throughout the development cycle, some testing teams are more lenient than others. What one team considers a minor issue may be a blocker for another, which suggests that different teams have different definitions of quality. Since there is no "one size fits all" method for determining the severity of a defect, it is a difficult issue to balance.


The problem arises when a QA team uses rigid defect classification to forcefully delay a release. On one such project, before any code could be deployed, the development team had to address each and every defect that QA had logged with a particular label. I understand the rationale. After all, we shouldn't release code with an obvious flaw that damages the final product. However, when defects are misclassified, this system breaks down.

I can immediately think of one disheartening example. The QA team once prevented one of my pull requests from being merged and released because they identified a high-priority defect in the functionality. The defect? A single image rendered incorrectly, and only in Microsoft Edge. The project's release was delayed while the problem was being sorted out.


The QA team should, of course, document the glitch as something they discovered during testing. But I still don't understand why they designated this problem as a blocker. Most of the team hardly even noticed the issue when it was brought up because it was so subtle, and no functionality was broken.


This minor problem resulted in a great deal of back and forth between me, the tester, and other stakeholders. The defect was ultimately classified as "won't fix." Because the team was dispersed across the globe and working in various time zones, the back and forth delayed the release. Admittedly, rebuilding trust between all parties involved took a long time.


This example demonstrates how a system in which QA can delay releases based on how they classify defects can lead to circumstances that are detrimental to the project as a whole. Defects should be classified appropriately, but there should be room for revision, because different people have different perspectives.


Defects still make it into production, and QA is criticized for letting them through

Another situation I've observed in companies that demand approval from QA prior to deployment is the inevitable defect that slips through the cracks. In a typical release cycle, development spends some time fixing defects and finishing up the backlog from QA. If QA finds no blockers, they give the project the green light, and it is delivered to the clients.


However, nobody is flawless. No matter how much testing is carried out prior to a release or how much time developers spend in defect-fixing mode, defects will inevitably slip by. Guess who gets the blame when the product team receives a defect report in organizations with QA gatekeeping? Every time, the testers are singled out as the cause of the failure.


When there is a severe time constraint for product development and release, these problems tend to arise more frequently. The team is frequently overworked, scope creep spirals out of control, and nobody has a good handle on producing high-quality work. When a product needs to be developed quickly, quality is typically the first thing to go.

I have worked on teams that believe in producing results quickly rather than worrying about writing tests because "QA will solve that." I've never subscribed to that viewpoint. As a developer, I've been criticized by team leads for holding off on pushing a feature until I had finished writing automated tests for it. They asked me to hand over my untested code so that QA could test it instead.


There is a cycle on those teams where a lot of work is sent to QA for testing. The testers do their best to meet any deadlines despite not having enough time or resources to fully dedicate themselves to the project. The possibility of bugs being introduced into production rises. Since the testing team reviewed the project last, they will always be held responsible while the development team escapes with little damage. The morale of the team is inevitably destroyed.


What can you do right away to end QA elitism?

If your group is dealing with one (or both) of these structural issues, you should help your organization address them as quickly as you can to protect your project from further degradation. The project teams I've observed with the greatest overall success in producing high-quality products on schedule shared a few characteristics.


Early and frequent testing is carried out

These productive teams tested as much as they could, as early as possible, in addition to having everyone responsible for the project do their part with testing. They didn't start testing at a specific point in the project's timeline; it was part of their normal procedure. Making testing an element of everyone's job is easier said than done, but these teams took steps to make it dead simple to bake quality in from the beginning by making testing part of the workflow. For instance:

  • Various tests ran automatically at different stages. When developers committed new code to a branch, a pipeline ran a few quick tests. Merging code into the main branch triggered more thorough tests. A full battery of tests ran nightly, producing statistics on the project's current status.

  • QA team members were given full autonomy to conduct exploratory testing in addition to their other obligations. This type of testing was not a discrete activity added to the schedule; the entire team understood that testing was always happening alongside everyone else's work.

  • DevOps and developers set up systems that automatically created testing environments whenever new code was committed or a pull request was opened. This process enabled them to spin up staging environments for new features and help non-technical users perform acceptance testing quickly.


Testing was never delayed, because systems were in place to run automated tests and create environments for new features prior to deployment. None of these team members had to overthink quality during their day. They had everything they needed to dive in quickly whenever testing was necessary.
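
To make the staged approach above concrete, here is a minimal sketch of how tests can be tagged so that a pipeline runs only the quick checks on every commit and saves the full battery for merges or the nightly run. It assumes a Python project using pytest; the function, file name, and marker names are illustrative, not taken from any project described here.

```python
# test_pricing.py -- illustrative sketch; the "smoke" and "full" markers
# would be registered in pytest.ini to avoid unknown-marker warnings.
import pytest


def apply_discount(total: float, percent: float) -> float:
    """Hypothetical function under test."""
    return round(total * (1 - percent / 100), 2)


# Quick check: runs on every commit to a branch (e.g. `pytest -m smoke`).
@pytest.mark.smoke
def test_discount_basic():
    assert apply_discount(100.0, 10) == 90.0


# Slower, more thorough checks: reserved for merges to the main branch
# or the nightly run (e.g. `pytest -m full`).
@pytest.mark.full
@pytest.mark.parametrize("total,percent,expected", [
    (0.0, 50, 0.0),        # nothing to discount
    (19.99, 0, 19.99),     # zero-percent discount is a no-op
    (250.0, 12.5, 218.75), # fractional percentage
])
def test_discount_edge_cases(total, percent, expected):
    assert apply_discount(total, percent) == expected
```

The CI system then only needs to vary the marker expression per stage, so developers get fast feedback on every push while the thorough checks still run every night, without anyone waiting on a manual gate.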

Everyone is held accountable for testing

I've worked on teams where everyone was given responsibility for testing, and those projects have had the fewest bugs or defects. Because of their expertise, testers were still given primary responsibility for testing. However, the remaining project stakeholders also contributed their fair share in their own ways:

  • Project owners, project managers, and architects accessed staging environments regularly to perform acceptance testing for new features and ensure that the product looked and functioned as intended.

  • Programmers spent time creating automated tests for their software. At the very least, they covered their code with unit tests, but I've also seen a few development teams create end-to-end tests for advanced functionality (see the sketch after this list).

  • The DevOps team made certain that proper monitoring and alerting were in place, and set up continuous deployment pipelines that performed various forms of testing when new code was committed.
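
As one illustration of the kind of end-to-end test a development team might write, here is a minimal sketch using Playwright's Python API. The staging URL, selectors, and page content are hypothetical placeholders, not details from any project mentioned above.

```python
# test_signup_e2e.py -- minimal end-to-end sketch using Playwright's sync API.
# The staging URL and selectors below are hypothetical placeholders.
from playwright.sync_api import sync_playwright

STAGING_URL = "https://staging.example.com"  # assumed per-feature staging environment


def test_signup_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Walk through the signup flow the way a user would.
        page.goto(f"{STAGING_URL}/signup")
        page.fill("#email", "qa-check@example.com")
        page.fill("#password", "correct-horse-battery-staple")
        page.click("button[type=submit]")

        # The flow should land on a welcome page once the account is created.
        page.wait_for_url(f"{STAGING_URL}/welcome")
        assert "Welcome" in page.content()

        browser.close()
```

Checks like this live alongside the code and run in the same pipelines described earlier, so the development team keeps sharing responsibility for quality instead of handing it off to QA at the end.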


These groups did not rely merely on QA to decide whether or not their project was ready to be deployed. If any of the teams felt there was something that needed to be addressed before going into production, it was discussed among all of them. For example, if QA discovered a problem, they would consult with product and development to determine whether the concern should be addressed immediately or deferred until a later point in time.


This process worked pretty well because it encouraged cross-disciplinary dialogue. Rather than working in silos, they would gather to discuss major risks before they caused further slowdowns. It added context to the concerns at hand and clarified any uncertainty about the severity of a defect. Just keep in mind that testing is an essential component of high-quality products, and everyone must collaborate to achieve it.


Closing

Some organizations still practice "QA elitism," in which the quality assurance team is in charge of project releases. This practice is detrimental in numerous ways, putting extra stress on testers and encouraging dissent among the different teams working on the project.

When organizations use this process, I've observed two anti-patterns. One is that projects are delayed because testers block releases over minor, occasionally insignificant issues. The other is that since the quality assurance team is the group that gives the green light, they end up being blamed when bugs inevitably make it through to production.

The most productive teams I've witnessed share a few characteristics that help them eliminate these drawbacks. Instead of placing the burden solely on testers, these teams get everyone working on the project to do their part with testing. They also test frequently and as early as possible, putting systems in place to make it simple for everyone to contribute to a high-quality product.
