These terms are being thrown around a lot lately in the testing community, and they can cause some confusion the first time you hear them being used in this context. They definitely confused me the first time I heard them, so I will attempt to provide some clarification in this post.
What are heuristics and oracles, and why should you learn more about them?
I have found that heuristics and oracles provide a great starting point for testing, especially when little is known about the product, and they frequently result in more thorough tests. They also help to label methods and processes that would otherwise be difficult to explain. Labeling these processes makes it easier to improve them. But what are they…?
Heuristics are simply experience-based techniques for solving a problem or learning something new. They’re mental shortcuts that you have learned through experience. For example, you use a trial-and-error heuristic when you are matching nuts to bolts or fitting together the pieces of a puzzle. Heuristics can also be simple rules used to make a decision, either learned or hard-wired.
“If it smells bad… don’t eat it.”
“If someone is running towards you at high speed, yelling angrily… run away.”
In testing, heuristics are used to guide your testing, and the great thing is, you do not need to discover every heuristic yourself. You can often apply other people’s heuristics to your own testing. For example, when we use a consistency heuristic, we test whether the product is consistent with something else, such as its own history or a comparable product. (See Using Oracle Heuristics below.)
One of the values of learning more about heuristics is in discovering how other people think, and becoming capable of describing your own thinking.
In case you’re thinking you are the only one having trouble with this word, here are some statements from other testers who are attempting to explain heuristics…
“As an adjective, ‘heuristic’ means ‘serving to discover’ or ‘helping to learn’. When Archimedes realized that things that sink displace their volume of water, and things that float displace their mass, he ran naked through the streets of Syracuse yelling, ‘Eureka!’ or ‘I’ve discovered it!’ ‘Eureka’ and ‘heuristic’ come from the same root word in Greek.
“Here’s one way of understanding heuristics: compare ‘heuristic’ with ‘algorithm’. An algorithm is a method for solving a problem that’s guaranteed to have a right answer. So an algorithm is like a rule that you follow; a heuristic is like a rule of thumb that you apply. Rules of thumb usually work, but not always.”
“When you’re in uncertain conditions, or dealing with imperfect or incomplete information, you apply heuristics – methods that might work, or that might fail.”
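The contrast between an algorithm and a rule of thumb can be made concrete in code. The sketch below (an illustrative example I am adding, not from the quoted testers) solves the classic coin-change problem two ways: a dynamic-programming algorithm guaranteed to find the minimum number of coins, and a greedy heuristic that usually works, but not always.

```python
def fewest_coins_algorithm(amount, coins):
    """Dynamic programming: guaranteed to find the minimum number of coins."""
    best = [0] + [None] * amount
    for total in range(1, amount + 1):
        options = [best[total - c] for c in coins
                   if c <= total and best[total - c] is not None]
        best[total] = min(options) + 1 if options else None
    return best[amount]


def fewest_coins_heuristic(amount, coins):
    """Greedy rule of thumb: always take the largest coin that fits."""
    used = 0
    for coin in sorted(coins, reverse=True):
        used += amount // coin
        amount %= coin
    return used if amount == 0 else None


# With coins {1, 3, 4} and amount 6, the greedy heuristic takes 4+1+1
# (3 coins), while the algorithm finds 3+3 (2 coins).
print(fewest_coins_algorithm(6, [1, 3, 4]))  # 2
print(fewest_coins_heuristic(6, [1, 3, 4]))  # 3
```

For most amounts the two functions agree; the heuristic is simply faster to apply. The point is exactly the one in the quote above: a heuristic is fallible by design, and that is acceptable when you know it.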
Heuristics can also give you the words you need to describe your testing. For example, I used to describe how I found a problem by saying, “I just played with it and this error occurred.” Heuristics changed that. Now I can clearly articulate what I was doing and which heuristic I was using. That lends credibility to the testing effort.
An oracle in testing is an evaluation tool that will tell you whether the program has passed or failed a test.
In high-volume automated testing, the oracle is probably another program that generates results or checks the results of the software under test.
In manual testing, an oracle could be as simple as a list of heuristics to use as a guide for testing. The oracle is generally more trusted than the software under test, so a concern flagged by the oracle is worth spending time and effort to check.
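In the high-volume automated case, an oracle can be as simple as a trusted reference implementation that the product’s output is checked against. The sketch below is a minimal illustration under my own hypothetical names (the functions and the deliberate bug are assumptions for the example, not anything from this post):

```python
def software_under_test(numbers):
    # Hypothetical function being tested: it should return a sorted copy,
    # but it has a deliberate bug: it silently drops duplicates.
    return sorted(set(numbers))


def reference_oracle(numbers):
    # A simpler, more trusted implementation we compare against.
    return sorted(numbers)


def run_high_volume_check(inputs):
    """Flag every input where the product disagrees with the oracle."""
    failures = []
    for numbers in inputs:
        if software_under_test(numbers) != reference_oracle(numbers):
            failures.append(numbers)
    return failures


print(run_high_volume_check([[3, 1, 2], [5, 5, 4]]))  # [[5, 5, 4]]
```

Note the asymmetry described above: the oracle is trusted more than the product, so a flagged input is a reason to investigate, not automatic proof of a bug.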
Using Oracle Heuristics
A well-known list of consistency heuristics appears in the book Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner, James Bach, and Bret Pettichord. Testers around the world have used, discussed, updated, and expanded these heuristics for more than a decade, and they have grown into the list below, known by the acronym FEW HICCUPPS. The descriptions are provided by James Bach and Michael Bolton (not the singer).
Familiarity: We expect the system to be inconsistent with patterns of familiar problems. When we watch testers, we notice that they often start testing a product by seeking problems that they’ve seen before. This gives them some immediate traction; as they start to look for familiar kinds of bugs, they explore and interact with the product, and in doing so, they learn about it.
Explainability: We expect a system to be understandable to the degree that we can articulately explain its behavior to ourselves and others. If, as testers, we don’t understand a system well enough to describe it, or if it exhibits behavior that we can’t explain, then we have reason to suspect that there might be a problem of one kind or another. On the one hand, there might be a problem in the product that threatens its value. On the other hand, we might not know enough about the product to test it capably. This is, arguably, a bigger problem than the first. Our misunderstanding might waste time by prompting us to report non-problems. Worse, our misunderstandings might prevent us from recognizing a genuine problem when it’s in front of us.
World: We expect the product to be consistent with things that we know about or can observe in the world. Often this kind of inconsistency leads us to recognize that the product is inconsistent with its purpose or with an expectation that we might have had, based on our models and schemas. When we’re testing, we’re not able to realize and articulate all of our expectations in advance of an observation. Sometimes we notice an inconsistency with our knowledge of the world before we apply some other principle.
History: The feature’s or function’s current behavior should be consistent with its past behavior, assuming that there is no good reason for it to change. This heuristic is especially useful when testing a new version of an existing program.
Image: The product’s look and behavior should be consistent with an image that the development organization wants to project to its customers or to its internal users. A product that looks shoddy often is shoddy.
Comparable products: We may be able to use other products as a rough, de facto standard against which our own can be compared.
Claims: The product should behave the way some document, artifact, or person says it should. The claim might be made in a specification, a Help file, an advertisement, an email message, or a hallway conversation, and the person or agency making the claim has to carry some degree of authority to make the claim stick.
Users’ desires: A feature or function should behave in a way that is consistent with our understanding of what users want.
The Product itself: The behavior of a given function should be consistent with the behavior of comparable functions or functional patterns within the same product unless there is a specific reason for it not to be consistent.
Purpose: The behavior of a feature, function, or product should be consistent with its apparent purpose.
Statutes: The product should behave in compliance with legal or regulatory requirements and standards.
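Several of these consistency oracles can be automated. The History heuristic, for instance, can be approximated by comparing a feature’s current output against output recorded from a past version, sometimes called a golden master (the name and the recorded data below are my own illustrative assumptions, not something the book prescribes):

```python
def previous_version_output():
    # Stands in for output recorded from the last released build.
    return {"total": 42, "currency": "USD"}


def current_version_output():
    # Stands in for the same feature exercised in the build under test.
    return {"total": 42, "currency": "usd"}


def history_oracle(old, new):
    """Report every field whose behavior changed since the last version.
    A difference is a flag worth investigating, not automatically a bug:
    there may be a good reason for the behavior to change."""
    return {key: (old[key], new[key])
            for key in old
            if key in new and old[key] != new[key]}


print(history_oracle(previous_version_output(), current_version_output()))
# {'currency': ('USD', 'usd')}
```

The same comparison shape works for the Comparable Products oracle; only the source of the expected values changes.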
Since an oracle is a way of recognizing a problem, this list can be used as an oracle. It’s a wonderful thing to be able to keep a list like this in your head, so that you’re primed to recognize problems.
It is easy for us to recognize a problem when the product doesn’t meet a written specification. But sometimes a problem stems from someone’s expectations that are not captured in the specification. We are far more credible when we can describe where those expectations come from.