How Context Influences Test Strategy Design

Every time a product is released to customers, there’s an expectation that it will work flawlessly. However, in practice, we know that a complete absence of defects is nearly impossible. What we can do is apply strategies and techniques to minimize failures and deliver more reliable software.

Over the years, I’ve seen different approaches to ensuring software quality. Two in particular stand out: one based on consolidated, standardized testing techniques, and another that’s more pragmatic and adaptive. The first is heavily influenced by the ISTQB model, which relies on formal standards and practices, while the second follows the principles of Rapid Software Testing (RST), which emphasizes heuristics and context-driven thinking. Though rarely followed to the letter, both provide a foundation that shapes how teams build their testing strategies.

When applying these approaches in environments with short release cycles, I learned that failing fast in agile contexts is essential, since time to market is key to staying competitive. Using the premise that defects tend to cluster, we can identify where to focus our efforts and where it makes sense to prioritize quicker, lightweight tests (backend tests, for example). It's also critical to understand our base of root causes and to incorporate fast validations, like the configuration tests described in Google's Site Reliability Engineering book. These help ensure deployment safety without delaying delivery. Instead of being the QA who obsessively hunts for edge cases, we need to be effective and pragmatic.
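To make the idea of a fast pre-deployment validation concrete, here is a minimal Python sketch in the spirit of those configuration tests. Everything in it (the `REQUIRED_KEYS` set, the `validate_config` helper, and the specific rules) is illustrative, not taken from the SRE book: the point is only that such checks are cheap, run in milliseconds, and can block an unsafe rollout before any slower test suite starts.

```python
# Hypothetical pre-deployment configuration check: the key names and rules
# below are invented for illustration, not drawn from any real service.
REQUIRED_KEYS = {"service_name", "timeout_ms", "max_retries"}


def validate_config(config: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means safe to deploy."""
    problems = [f"missing key: {key}" for key in sorted(REQUIRED_KEYS - config.keys())]

    timeout = config.get("timeout_ms")
    if timeout is not None and (not isinstance(timeout, int) or timeout <= 0):
        problems.append("timeout_ms must be a positive integer")

    retries = config.get("max_retries")
    if retries is not None and (not isinstance(retries, int) or retries < 0):
        problems.append("max_retries must be a non-negative integer")

    return problems


if __name__ == "__main__":
    good = {"service_name": "payments", "timeout_ms": 500, "max_retries": 3}
    bad = {"service_name": "payments", "timeout_ms": -1}
    print(validate_config(good))  # []
    print(validate_config(bad))
```

A check like this would typically run in the deployment pipeline itself, so a bad config fails the release in seconds rather than surfacing as a production incident.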

When deciding which approach to adopt in a QA team, I realized that technical testing knowledge alone wasn't enough. Once, outside working hours, I used a crowdtesting mindset to experiment with a new Pix feature. That session revealed two bugs that would have been hard to predict with a traditional test case, but that became obvious once I looked at the flow with a developer's mindset. Interestingly, those same issues still existed in apps from major banks; in our case, we only resolved them in 2023, after two weeks of production issues. This kind of experience reinforced how crucial it is to understand not only the product, but the entire development dynamic. That's when I found direction in Lehman's Laws of Software Evolution, especially the first two: Continuous Change and Increasing Complexity.

As I dug deeper into those laws, it became clear that the traditional ISTQB model struggles to deal with these evolving realities. Its structured and standardized approach can create a rigid view of what quality means, while in practice, quality is fluid: it varies according to functionality, product goals, and most importantly, the development strategy being implemented. To build effective testing strategies, we can’t rely on fixed techniques alone. We need to understand what’s being built — and how. Only then can we shape testing approaches that actually evolve alongside the software and the technologies involved.

Throughout this text, I’ve tried to show that software quality can’t be reduced to a fixed set of practices, a checklist based on certifications, or an isolated theory. It’s really a blend of many forms of knowledge, all of which need to be adapted to the context we’re in. If we want to deliver real quality, we have to learn faster than the problems can emerge.
