This month, I want to shed a little more light on requirements and test cases. The consensus seems to be that requirements, at least, should be 100% complete early in the project, but in practice they rarely are. Both test cases and requirements are continually refined as we learn more about the product. And we learn more about the product by testing. Michael Bolton has some interesting views in his blog posts below.
Very Short Blog Posts: “Insufficient Requirements”
Some people say they “don’t have enough requirements to start testing,” or that the requirements are unclear or incomplete or contradictory or out of date.
First, those people usually mean requirements documents; don’t confuse that with requirements, which may be explicit or tacit. There are plenty of sources of requirements-related information, and it’s a tester’s job to discover them and make inferences about them.
Second, insufficient clarity about requirements is both a test result and a project risk, and awareness of it may be crucially important project information. (If the testers aren’t clear on the requirements, are the programmers? And if they’re not, how is it that they’re building the product?)
Finally, if there is uncertainty about requirements, one great way around that problem is to start testing and reporting on what you find. “Insufficient requirements” may be a problem for testing—but it’s also precisely a problem that testing can help to solve.
Understanding the requirements is, in most projects, considered an essential step before you can begin testing. This may be true for checking or formal testing, but understanding the requirements is not really necessary before real testing, which is learning about a product through experimentation. You may need to test in order to develop an understanding of the requirements, which in turn triggers more and better testing, yielding even better understanding of the requirements—and so on.
Drop the Crutches
I had a fun chat with a client/colleague yesterday. He proposed—and I agreed—that test cases are like crutches. I added that the crutches are regularly foisted on people who weren’t limping to start with. It’s as though before the soccer game begins, we hand all the players a crutch. The crutches then hobble them.
We also agreed that test cases often lead to goal displacement. Instead of a thorough investigation of the product, the goal morphs into “finish the test cases!” Managers are inclined to ask “How’s the testing going?” But they usually don’t mean that. Instead, they almost certainly mean “How’s the product doing?” But, it seems to me, testers often interpret “How’s the testing going?” as “Are you done with those test cases?”, which ramps up the goal displacement.
Of course, “How’s the testing going?” is an important part of the three-part testing story, especially if problems in the product or project are preventing us from learning more deeply about the product. But most of the time, that’s probably not the part of the story we want to lead with. In my experience, both as a program manager and as a tester, managers want to know one thing above all:
Are there problems that threaten the on-time, successful completion of the project?
The most successful and respected testers—in my experience—are the ones who answer that question by actively investigating the product and telling the story of what they’ve found. The testers who focus too much on test cases distract themselves AND their teams and managers from that investigation, and from the problems that investigation would reveal.
For a tester, there’s nothing wrong with checking quickly to see that the product can do something—but there’s not much right—or interesting—about it either. Checking seems to me to be a reasonably good thing to work into your programming practice; checks can be excellent alerts to unwanted low-level changes. But demonstration—showing that the product can work—is different from testing—investigating and experimenting to find out how it does (or doesn’t) work in a variety of circumstances and conditions.
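To make the distinction concrete, here is a minimal sketch of what such a low-level check might look like, written as a pytest-style assertion in Python. The function, rate, and expected value are hypothetical, invented purely for illustration; they are not from any project or post mentioned here.

```python
# A minimal sketch of a "check": a quick, automated assertion that runs
# with every build and alerts us to an unwanted low-level change.
# The function, rate, and expected value are hypothetical.

def mileage_reimbursement(miles: float, rate_per_mile: float = 0.67) -> float:
    """Hypothetical helper: reimbursement owed for miles driven."""
    return round(miles * rate_per_mile, 2)

def test_mileage_reimbursement_basic():
    # Passing confirms that this one expectation still holds; it says
    # nothing about the many conditions (negative miles, huge trips,
    # changed rates) that only deeper investigation would explore.
    assert mileage_reimbursement(100) == 67.0
```

A check like this is a useful alert, but it only demonstrates that the product can work in one anticipated case; the investigating and experimenting that Bolton calls testing begins where checks like this leave off.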
Sometimes people object, saying that they have to confirm that the product works and that they don’t have time to investigate. To me, that’s getting things backwards. If you actively, vigorously look for problems and don’t find them, you’ll get that confirmation you crave, as a happy side effect.
No matter what, you must prepare yourself to realize this:
Nobody can be relied upon to anticipate all of the problems that can beset a non-trivial product.
We call it “development” for a reason. The product and everything around it, including the requirements and the test strategy, do not arrive fully formed. We continuously refine what we know about the product, and how to test it, and what the requirements really are, and all of those things feed back into each other. Things are revealed to us as we go, not as a cascade of boxes on a process diagram, but more like a fractal. The idea that we could know the requirements entirely before we’ve discussed them and decided we’re done seems like total hubris to me. We humans have a poor track record in understanding and expressing exactly what we want. We’re no better at predicting the future. Deciding today what will make us happy ten months—or even days—from now combines both of those weaknesses and multiplies them.
For that reason, it seems to me that any hard or overly specific “Definition of Done” is antithetical to real agility. Let’s embrace unpredictability, learning, and change, and treat “Definition of Done” as a very unreliable heuristic. Better yet, consider a Definition of Not Done Yet: “we’re probably not done until at least These Things are done”. The “at least” part of DoNDY affords the possibility that we may recognize or discover important requirements along the way. And who knows?—we may at any time decide that we’re okay with dropping something from our DoNDY too. Maybe the only thing we can really depend upon is The Unsettling Rule.
Test cases—almost always prepared in advance of an actual test—are highly vulnerable to a constantly shifting landscape. They get old. And they pile up. There usually isn’t a lot of time to revisit them. But there’s typically little need to revisit many of them either. Many test cases lose relevance as the product changes or as it stabilizes.
Many people seem prone to say “We have to run a bunch of old test cases because we don’t know how changes to the code are affecting our product!” If you have lost your capacity to comprehend the product, why believe that you still comprehend those test cases? Why believe that they’re still relevant?
Therefore: just as you (appropriately) remain skeptical about the product, remain skeptical of your test ideas—especially test cases. Since requirements, products, and test ideas are subject to both gradual and explosive change, don’t over-formalize or otherwise constrain your testing to stuff that you’ve already anticipated. You WILL learn as you go.
Instead of focusing too much on test cases and worrying about completing them, focus on risk. Ask “How might some person suffer loss, harm, annoyance, or diminished value?” Then learn about the product, the technologies, and the people around it. Map those things out. Don’t feel obliged to be overly or prematurely specific; recognize that your map won’t perfectly match the territory, and that that’s okay—and it might even be a Good Thing. Seek coverage of risks and interesting conditions. Design your test ideas and prepare to test in ways that allow you to embrace change and adapt to it. Explain what you’ve learned.
Do all that, and you’ll find yourself throwing away the crutches that you never needed anyway. You’ll provide a more valuable service to your client and to your team. You and your testing will remain relevant.
During the last round of testing for the Travel & Expense project, we realized another good use for test cases, especially in a University environment: working through them allowed the heavy users of University Travel in each department to get very familiar with the new web application. Not only did they become early champions of the software, but they also raised important questions unique to their departments. All of this really helps foster acceptance of the new tool. (Its use is not mandatory.)
I like the concepts that Michael Bolton describes in his blog posts. They just seem to make sense for the types of projects we have been involved in lately. Please feel free to add your own comments below. You can learn more about Michael Bolton here.
Thanks for reading!