Fail, Learn, Assess and Try Again

I don’t know about you, but I hate to fail. I’m not kidding. I REALLY hate failure. And that is one of the scariest aspects of agile software development. To succeed, you have to be willing to tolerate failure. In fact, if you’re not experiencing any failures, you’re not pushing yourself or your team hard enough.

Understand that I’m not referring to monumental failures. No one wants to see their million-dollar product launch fail. (I don’t even want to see my $1,000 project fail.) That’s why agile principles encourage us to fail fast. Those quick failures tend to be small and relatively inexpensive. They are learning opportunities that help us avoid the big failures — the catastrophes.

Take a chance. Try an experiment. If you’re unsure about an approach or a technique, give it a try. Run a test. If it fails, you haven’t invested much time or money, so there’s little waste and no harm. It’s a lot better than spending days writing, reviewing and revising documents, hoping that you haven’t missed anything.

Use Real Data

One area where you have to be extra vigilant is the data set used for your tests. You may get a favorable trial result using limited or artificial data inputs. When the software is deployed to a production environment with 1,000 times more data and many more boundary conditions, what worked in development may crash and burn.

I’ve seen this happen many times over the years. Software that responds instantly in development slows to a crawl in production. Or the software handles the artificial data set cleanly, but when presented with real business data, numerous failure points are exposed.

Develop and test with real business data if you can. If time and money don’t allow for replicating the business data in development, do a trial production deployment. Be absolutely certain that you can back out whatever changes you make and restore the prior state quickly. It’s also a good idea to instrument the software. In other words, add extra logging and tracking functions so that you have good information for analysis in the event the trial fails.
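As a minimal sketch of what that instrumentation might look like (in Python, with a hypothetical import_orders operation standing in for real business logic):

    import functools
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("trial")

    def instrumented(func):
        # Record timing and failures so a failed trial leaves data to analyze.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = func(*args, **kwargs)
                log.info("%s succeeded in %.3fs", func.__name__,
                         time.perf_counter() - start)
                return result
            except Exception:
                log.exception("%s failed after %.3fs", func.__name__,
                              time.perf_counter() - start)
                raise
        return wrapper

    @instrumented
    def import_orders(batch):
        ...  # hypothetical business operation under trial

Wrapping the risky operations this way costs a few lines up front and pays off the moment something misbehaves in the trial deployment.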

If things go smoothly, you can re-deploy without the logging and tracking functions or, better still, simply turn off the logging via a configuration parameter.
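Continuing the sketch above, here is one way to wire such a configuration parameter (the TRIAL_LOGGING name is an assumption, not a standard):

    import logging
    import os

    # Hypothetical switch: set TRIAL_LOGGING=off once the trial succeeds and
    # the extra instrumentation goes quiet, with no re-deploy required.
    trial_logging_on = os.getenv("TRIAL_LOGGING", "on").lower() == "on"

    log = logging.getLogger("trial")
    log.setLevel(logging.INFO if trial_logging_on else logging.CRITICAL)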

Failure doesn’t have to threaten your career or bankrupt the company. Take prudent risks. When things don’t turn out as you hoped, learn, assess and try again. That’s agile!



2 Comments

  1. I really like that you stress the importance of testing with real data. Far too often developers forget what the performance impact will be when they move from a dev environment with tens or maybe hundreds of records in a DB table to a production environment with millions of lines.

    While it is important to test on real data, I don’t think that it should exclude testing on artificial data. Carefully crafted test data is often much better at testing boundary conditions, especially if new functionality allows new patterns of data. Real data rarely push the maximum length limits of all fields or use strange unicode characters (I’m a Swede, and I often see ÅÄÖ get destroyed by encoding issues).

    I’d also like to add another reason to the list of cases where testing on real data is not an option: Privacy issues. I work with health care systems handling sensitive patient data. To us, it’s not an option to use real data in the daily development work. We have to rely on artificial data and then do final pre-production tests on a copy of the real data.
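To illustrate the commenter’s point about carefully crafted test data, a minimal sketch; the field limit and the validate_name function are hypothetical stand-ins:

    # Hypothetical field limit and validator, used only for illustration.
    MAX_NAME_LENGTH = 100

    def validate_name(name: str) -> bool:
        return 0 < len(name) <= MAX_NAME_LENGTH

    boundary_cases = {
        "empty": "",
        "at limit": "A" * MAX_NAME_LENGTH,
        "over limit": "A" * (MAX_NAME_LENGTH + 1),
        "swedish": "Åsa Öström",           # ÅÄÖ characters expose encoding bugs
        "multibyte at limit": "名前" * 50,  # 100 non-Latin characters
    }

    for label, value in boundary_cases.items():
        print(f"{label}: valid={validate_name(value)}")

Real business data would rarely cover all of these cases at once, which is exactly why crafted data still has a place alongside it.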
