Testing Survival Means Knowing What to Test

Does every feature and function in the software need to be 100% tested and approved? I don’t think so.

Firstly, let’s split testing into two major domains:

  1. Verification: Does the software do what the specifications and the developers intended it to do?
  2. Validation: Does the software do what the business wants and needs it to do?

While these concepts are intimately related, they are also distinct. In the case of #1, the software may reliably do exactly what the developers intended, yet still fail because the developers misunderstood what the business wants.

Conversely, for #2, the software may do exactly what the business asked for, but that may not be enough to form a complete solution. This is a little harder to visualize, so here’s an example. The business wants to be able to search for customers who meet defined criteria and see a list. Fair enough.

What happens if the search delivers 1,000,000 results? Will the user’s computer be unavailable for an extended period of time while the request is processed? What will happen if the user cancels the request or her computer freezes? You get the idea.

The business is responsible for defining the results they expect. The developers are responsible for explaining the many possible outcomes and working with the business to determine how to handle them.
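
To make that kind of failure scenario concrete, here is a minimal sketch, in Python, of one way a team might cap a runaway result set with paging and write a test for the million-row case. Everything here is hypothetical: the function name, the page size, and the data shape are illustrative, not part of the original example.

```python
# Hypothetical sketch: cap the customer search with paging so that a huge
# match count does not tie up the user's machine while the request runs.

PAGE_SIZE = 100  # illustrative cap on rows returned per request

def search_customers(customers, predicate, page=0, page_size=PAGE_SIZE):
    """Return one page of matching customers plus the total match count."""
    matches = [c for c in customers if predicate(c)]
    start = page * page_size
    return matches[start:start + page_size], len(matches)

def test_large_result_set_is_paged():
    # The failure scenario above: the search matches 1,000,000 customers.
    customers = [{"id": i, "region": "EU"} for i in range(1_000_000)]
    page, total = search_customers(customers, lambda c: c["region"] == "EU")
    assert total == 1_000_000      # the user still learns how many matched
    assert len(page) == PAGE_SIZE  # but only one page of rows comes back
```

Whether paging, a hard cap, or a background job is the right answer is exactly the conversation the developers and the business need to have.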

Okay, back to the original question — test 100% of everything?

Ideally, the development team would have the people, equipment, and time to test everything thoroughly. I don’t know about your situation, but no project I’ve ever worked on has had that luxury. There never seems to be enough time, even when we’re lucky enough to have the people and equipment available.

This problem is often worse on waterfall projects where the bulk of the testing takes place after the code is written and bench tested. QA tends to get squeezed between late software delivery and a fixed deployment date.

Agile projects such as those using Scrum, XP or Kanban are somewhat less susceptible to this problem because testing takes place during every sprint (iteration). However, even agile projects often squeeze the testing time as the code is rushed to completion and the sprint deadline looms.

The solution lies in prioritizing features and functions, ideally via stories. Next, prioritize failure scenarios. Test the high-priority (i.e., high-visibility) stories well. Test the most likely failure scenarios well. Then, as time allows, work through the remaining stories and failure scenarios, testing broadly but not always deeply.
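
One lightweight way to encode that ordering, assuming a Python codebase tested with pytest, is to tag tests with custom priority markers so the high-priority suite can be run first when the schedule is squeezed. The marker names and test names below are illustrative only.

```python
import pytest

# Illustrative custom markers (register them in pytest.ini to silence warnings).
# When time is short, run the high-priority suite first:
#     pytest -m high_priority
# then broaden the coverage as time allows:
#     pytest -m "not high_priority"

@pytest.mark.high_priority
def test_customer_search_returns_matching_customers():
    # High-visibility story: the search the business explicitly asked for.
    ...

@pytest.mark.high_priority
def test_customer_search_pages_a_huge_result_set():
    # Likely failure scenario: the million-row search discussed earlier.
    ...

@pytest.mark.low_priority
def test_search_results_can_be_exported():
    # Lower-visibility story: tested broadly, not deeply, as time permits.
    ...
```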

Is this a ‘best practice’?

Of course not. It is a survival practice. Do the best you can with the time, people, and tools available, and be proud of it. Make sure you openly communicate to senior management how the testing was conducted and what the risks are in shipping the product as is.

Testing software, just like writing software, is not a perfect science. You do your best, evaluate the risks, and make a business decision as to what to do next. Do you have anything to add?
