These terms are being thrown around a lot lately in the testing community, and they can cause some confusion the first time you hear them being used in this context. They definitely confused me the first time I heard them, so I will attempt to provide some clarification in this post.
What are heuristics and oracles, and why should you learn more about them?
I have found that heuristics and oracles provide a great starting point for testing, especially when little is known about the product, and they frequently result in more thorough tests. They also help to label methods and processes that would otherwise be difficult to explain. Labeling these processes makes it easier to improve them. But what are they…?
Heuristics are simply experience-based techniques for solving a problem or learning something new. They’re like mental shortcuts that you have learned through experience. For example, you use a trial-and-error heuristic when you are matching nuts and bolts or fitting together the pieces of a puzzle. They can also be simple rules used to make a decision, either learned or hard-wired.
“If it smells bad… don’t eat it.”
“If someone is running towards you at high speed, yelling angrily… run away.”
In testing, heuristics are used to guide your testing, and the great thing is, you do not need to discover them yourself. You can often apply other people’s heuristics to your own testing. For example, if we are using a consistency heuristic, we are checking whether the product is consistent with something: with itself, with its history, with its claims, or with comparable products. (See Using Oracle Heuristics below.)
One of the values of learning more about heuristics is in discovering how other people think, and becoming capable of describing your own thinking.
In case you’re thinking you are the only one having trouble with this word, here are some statements from other testers who are attempting to explain heuristics…
“As an adjective, ‘heuristic’ means ‘serving to discover’ or ‘helping to learn’. When Archimedes realized that things that sink displace their volume of water, and things that float displace their mass, he ran naked through the streets of Syracuse yelling, ‘Eureka!’ or ‘I’ve discovered it!’ ‘Eureka’ and ‘heuristic’ come from the same root word in Greek.”
“Here’s one way of understanding heuristics: compare ‘heuristic’ with ‘algorithm’. An algorithm is a method for solving a problem that’s guaranteed to have a right answer. So an algorithm is like a rule that you follow; a heuristic is like a rule of thumb that you apply. Rules of thumb usually work, but not always.”
“When you’re in uncertain conditions, or dealing with imperfect or incomplete information, you apply heuristics – methods that might work, or that might fail.”
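To make the rule/rule-of-thumb distinction concrete, here is a small sketch (my own illustration, not from the quoted testers): a binary search that is guaranteed to find a value in a sorted list, next to a positional guess that usually works but can fail.

```python
# Algorithm: binary search on a sorted list is guaranteed to find the
# target if it is present -- a rule you follow.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Heuristic: guess where the target "ought" to be by linear interpolation
# and check only nearby slots -- a rule of thumb that is quick when values
# are evenly spread, but can miss a target that is actually present.
def interpolation_guess(items, target, window=1):
    if not items or not (items[0] <= target <= items[-1]):
        return -1
    span = items[-1] - items[0]
    guess = 0 if span == 0 else (len(items) - 1) * (target - items[0]) // span
    for i in range(max(0, guess - window), min(len(items), guess + window + 1)):
        if items[i] == target:
            return i
    return -1  # the rule of thumb failed; the algorithm would not have

data = [1, 2, 3, 4, 5, 6, 1000]      # unevenly spread values
print(binary_search(data, 6))        # 5: the algorithm always finds it
print(interpolation_guess(data, 6))  # -1: the heuristic guessed badly here
```

The point is not that the heuristic is bad; it is cheaper than the algorithm and usually good enough, which is exactly the trade-off testers make when they reach for a rule of thumb.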
Heuristics can also give you the words you need to describe your testing. For example, I used to describe how I found a problem by saying, “I just played with it and this error occurred.” Heuristics changed that. Now I can clearly articulate what I was doing and which heuristic I was using. That lends some credibility to the testing effort.
An oracle in testing is an evaluation tool that will tell you whether the program has passed or failed a test.
In high-volume automated testing, the oracle is typically another program that generates expected results, or that checks the results of the software under test.
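A minimal sketch of that setup, with hypothetical function names of my own: a slower but trusted reference implementation serves as the oracle for a fast, riskier routine, across thousands of generated inputs.

```python
import math
import random

# "System under test": a fast but naive rounding routine (hypothetical).
def fast_round(x):
    return int(x + 0.5)  # int() truncates toward zero, so this misbehaves for negatives

# Oracle: a slower reference implementation that we trust more than the
# software under test ("round half up", taken here as the agreed-upon spec).
def reference_round(x):
    return math.floor(x + 0.5)

# High-volume check: generate many inputs and let the oracle flag disagreements.
def run_high_volume_check(trials=10_000, seed=42):
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        x = rng.uniform(-100, 100)
        expected, actual = reference_round(x), fast_round(x)
        if expected != actual:
            # A flagged result is a concern worth a human's attention,
            # not automatically a bug.
            failures.append((x, expected, actual))
    return failures

failures = run_high_volume_check()
print(f"{len(failures)} of 10000 generated inputs flagged by the oracle")
```

Here the oracle flags roughly half the inputs (every negative one), which a human would then investigate and trace back to the truncation bug.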
In manual testing, an oracle could be as simple as a list of heuristics to use as a guide for testing. The oracle is generally more trusted than the software under test, so a concern flagged by the oracle is worth spending time and effort to check.
Using Oracle Heuristics
There is a well-known list of consistency heuristics presented in the book Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach, and Bret Pettichord. Testers around the world have used these heuristics for over ten years, and they have been discussed, updated, and expanded since then. They have grown into the following list, known by the acronym A FEW HICCUPPS. The descriptions are provided by James Bach and Michael Bolton (not the singer).
Acceptability: The idea, not covered by any other item on the list, is that something can be good, and not wrong, but still not good enough; there might be a better way for it to work that can reasonably be achieved.
Familiarity: We expect the system to be inconsistent with patterns of familiar problems. When we watch testers, we notice that they often start testing a product by seeking problems that they’ve seen before. This gives them some immediate traction; as they start to look for familiar kinds of bugs, they explore and interact with the product, and in doing so, they learn about it.
Explainability: We expect a system to be understandable to the degree that we can articulately explain its behavior to ourselves and others. If, as testers, we don’t understand a system well enough to describe it, or if it exhibits behavior that we can’t explain, then we have reason to suspect that there might be a problem of one kind or another. On the one hand, there might be a problem in the product that threatens its value. On the other hand, we might not know the product well enough to test it capably. This is, arguably, a bigger problem than the first. Our misunderstanding might waste time by prompting us to report non-problems. Worse, our misunderstandings might prevent us from recognizing a genuine problem when it’s in front of us.
World: We expect the product to be consistent with things that we know about or can observe in the world. Often this kind of inconsistency leads us to recognize that the product is inconsistent with its purpose or with an expectation that we might have had, based on our models and schemas. When we’re testing, we’re not able to realize and articulate all of our expectations in advance of an observation. Sometimes we notice an inconsistency with our knowledge of the world before we apply some other principle. This heuristic can fail when our knowledge of the world is wrong; when we’re misinformed or misremembering. It can also fail when the product reveals something that we hadn’t previously known about the world.
History: We expect the present version of the system to be consistent with past versions of it.
Image: We expect the system to be consistent with an image that the organization wants to project, with its brand, or with its reputation.
Comparable products: We expect the system to be consistent with systems that are in some way comparable. This includes other products in the same product line; competitive products, services, or systems; or products that are not in the same category but which process the same data; or alternative processes or algorithms.
Claims: We expect the system to be consistent with things important people say about it, whether in writing (references, specifications, design documents, manuals, whiteboard sketches…) or in conversation (meetings, public announcements, lunchroom conversations…).
Users’ desires: We believe that the system should be consistent with ideas about what reasonable users might want.
Product: We expect each element of the system (or product) to be consistent with comparable elements in the same system.
Purpose: We expect the system to be consistent with the explicit and implicit uses to which people might put it.
Statutes and Standards: We expect a system to be consistent with relevant statutes, acts, laws, regulations, or standards. Statutes, laws and regulations are mandated mostly by outside authority (though there is a meaning of “statute” that refers to acts of corporations or their founders). Standards might be mandated or voluntary, explicit or implicit, external to the development group or internal to it.
Since an oracle is a way of recognizing a problem, this list can be used as an oracle. It’s a wonderful thing to be able to keep a list like this in your head, so that you’re primed to recognize problems.
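As one illustration of turning an item on the list into an automated oracle (a sketch with hypothetical names, not from the sources above), the History heuristic maps naturally onto a “golden master” check: record the output of a past version, then flag any current run that is inconsistent with it.

```python
import json
from pathlib import Path

def check_against_history(name, current_output, store=Path("golden")):
    """Flag any inconsistency between the current output and a recorded past version."""
    store.mkdir(exist_ok=True)
    golden_file = store / f"{name}.json"
    if not golden_file.exists():
        golden_file.write_text(json.dumps(current_output, indent=2))
        return []  # first run: nothing to be inconsistent with yet
    past_output = json.loads(golden_file.read_text())
    if past_output != current_output:
        # The oracle raises a concern; a human decides whether it is a bug
        # or a legitimate, intended change.
        return [f"{name}: current output is inconsistent with the recorded version"]
    return []

print(check_against_history("monthly_report", {"total": 1400}))  # [] -- golden copy recorded
print(check_against_history("monthly_report", {"total": 1400}))  # [] -- consistent with history
print(check_against_history("monthly_report", {"total": 9999}))  # flagged as inconsistent
```

Like any heuristic, this one is fallible: a flagged difference might be a deliberate improvement, which is why the oracle only raises a concern rather than declaring a bug.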
It is easy for us to recognize a problem when the product doesn’t meet a written specification. But sometimes a problem stems from expectations that were never captured in the specification. We are a lot more credible when we can describe where those expectations come from.