4 Risk Categories Are All You Need to Worry About
Have you ever been involved in the software project from hell? Many of us have. The project begins with plenty of enthusiasm and optimism. Goals are defined. Staffing is assigned. Delivery dates and budgets are handed out. Everyone’s excited.
But somewhere along the road to done, unexpected speed bumps impact the goals, dates and budgets. Some of the bumps are minor while others feel like a rough roller coaster ride. The project team regroups and moves on, yet the goals seem harder to achieve, the dates appear more and more unrealistic and the budget is all but exhausted.
How could this happen? What went wrong? The goals were reasonable. The schedule was adequate. The staffing was solid. What did the team miss?
The answer is risk.
Every project faces many risks during its lifetime. Risks are the opposite of opportunities. They are the bad things that can happen along the road to completion. Risks can be avoided using good planning. They can be mitigated with forethought. Or, they can be accepted because the benefits outweigh the drawbacks.
While risks can be sliced and diced into dozens of categories, let’s examine a simple risk management approach using just four categories:
Glitch: Minor Impact / Low Likelihood
“Glitches” are unlikely and easy to handle. They crop up from time to time on just about every project. You simply deal with them as they happen and move on. As long as some extra time is built into the schedule to manage glitches, they are not likely to cause trouble.
One example of a glitch from my own experience is a software application under development that was performing poorly. Because the production hardware would be much faster than the development hardware, we simply purchased the production system ahead of schedule and used it for development.
Obstacle: Minor Impact / High Likelihood
“Obstacles” are often unavoidable but easy to handle. You know that some number of them will happen so planning is essential. You need to identify the obstacles and either a) define a plan for dealing with them when the time comes, or b) plot a course around them.
Examples of obstacles are the defects/deficiencies present in an off-the-shelf software package purchased for a project. Assuming reasonable care went into the selection of the software, problems encountered with its use are unlikely to derail the project.
Surprise: Major Impact / Low Likelihood
“Surprises” are unwelcome and potentially painful. They aren’t likely so you don’t spend much time thinking about them but if they strike, your project is in trouble. As always, forewarned is forearmed. Lay out these potential surprises at the start of the project and decide what you can do up front to mitigate them. Then prepare an alternate plan to deal with the surprise should it actually occur.
For example, most of us have seen projects get into serious trouble when a key contributor leaves the team. However unlikely it may be that your “chief whatever” will leave before the project ends, it can happen and contingency plans are needed.
Disaster: Major Impact / High Likelihood
“Disasters” are often man-made and devastating. Any project facing risks in this category is in serious trouble at the outset. The management team had better have lots of time and money available.
An all-too-frequent example is the geographically distributed project team that has little in common technically, culturally or linguistically, and a management team that expects them to march in unison. If you find yourself on such a project, drive away and don’t look back.
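The four categories above amount to a simple impact/likelihood matrix, which can be made concrete in a few lines. This is just a sketch: the category names come from this post, while the two-level impact and likelihood scales are my own simplification.

```python
def risk_category(impact, likelihood):
    """Map a risk's impact ('minor' or 'major') and likelihood
    ('low' or 'high') to one of the four categories."""
    matrix = {
        ("minor", "low"): "Glitch",
        ("minor", "high"): "Obstacle",
        ("major", "low"): "Surprise",
        ("major", "high"): "Disaster",
    }
    return matrix[(impact, likelihood)]

# A key contributor leaving: painful but unlikely.
print(risk_category("major", "low"))  # Surprise
```

In practice you would score impact and likelihood on finer scales, but even this coarse bucketing forces the mitigation conversation at the start of the project.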
Here’s an all-too-common scenario in agile software development projects. You’ve defined fixed-length sprints. They are 4, 6 or 8 weeks long. The team is well into a sprint when the product owner shows up and says something like “We have to add another story to this sprint. The business will reject the software unless we add this story.” What do you do?
Some will argue that stories cannot be added or swapped during a sprint. The business will have to wait. Others will argue that the sprint should be scrubbed and a new one defined. Again, the business will have to wait. Are either of these positions reasonable?
As is often the case, the answer is, “It depends.” If the sprint end date is not mission-critical, it may be reasonable to argue for adding the new story to the next sprint or scrubbing the current sprint and starting anew. In either case, it will be several additional weeks before the business sees the software they want.
However, if the sprint end date is mission-critical, the team has a problem — potentially a big problem. Delivering software that the business doesn’t want is foolish. It will be rejected and the team will look bad. Yes, even if the business folks and the product owner made a mistake in selecting or defining stories, the development team will still end up looking bad. Why? Because they were given notice of the error and they failed to take corrective action. That’s not agile.
In this context, the team has 5 options. None of them are great choices. Some of them may not be viable in particular situations.
1. Add the new story to the sprint backlog and extend the sprint by a few days to accommodate it.
Assuming the business is willing to tolerate a delay of a few days, this is the simplest option. It will disrupt the team’s rhythm but that’s manageable. There may be a desire to extend the sprint by exactly one week rather than a partial week. Don’t do that! Adding more stories to the sprint to fill out the week will only make matters more complex.
2. Swap an existing story in the sprint backlog for the new one and keep the sprint timeline intact.
This may not be feasible. There are inter-dependencies among stories that complicate this option. It may also be difficult to find another story that can be readily swapped with the new one. That said, it’s worth taking a look at this. Consider swapping more than one existing story for the new one. Also consider splitting one or more larger stories to free up story points for the new story. You may even be able to keep the sprint on schedule using these techniques. Be creative.
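Whether a swap can work at all is a quick arithmetic check: do the not-yet-started stories free up enough points to cover the new one? Here’s a minimal sketch (the story names and point values are made up for illustration):

```python
from itertools import combinations

def find_swap(unstarted, new_story_points):
    """Return the smallest set of not-yet-started stories whose
    combined points cover the new story's estimate, or None."""
    names = list(unstarted)
    for size in range(1, len(names) + 1):
        for combo in combinations(names, size):
            if sum(unstarted[s] for s in combo) >= new_story_points:
                return list(combo)
    return None  # no combination frees enough points

backlog = {"report export": 3, "audit log": 5, "help screen": 2}
print(find_swap(backlog, 5))  # ['audit log']
```

The real decision is rarely this mechanical, since inter-dependencies matter, but the check tells you quickly whether a swap is even arithmetically possible.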
3. Add a team member to work on the new story.
If someone is available and has worked with the team before, this is a viable option. However, bringing in a total stranger will be more hassle than it’s worth. Too much time will be lost getting the newbie up to speed.
4. Take on some technical debt.
Admittedly, this is a poor choice but in the interest of getting all the options out in the open, it’s a possibility. The team may be able to cut a few corners without impacting the usefulness or stability of the software and still deliver on time. Let everyone know what you’re doing and also let them know that the debt will be paid off during the next sprint. Just remember that business people don’t understand technical debt and don’t want to pay it off.
5. Have the team work some extra hours.
I don’t like this option. It’s the least desirable of the five to me. Once you demonstrate a willingness to work longer days or extra days, the precedent is established and you’ll have a tough time going back to a ‘normal’ schedule. Don’t offer this option unless the business situation is dire and be clear that it’s a one-time offer meant to address a crisis situation.
Can you think of any other options? Let me know.
#NoEstimates. Is it the latest agile software development trend or meaningless hype? Your guess is as good as mine. It’s an interesting question that deserves closer examination.
The general premise is that software estimates are unreliable. They are usually overly optimistic and are often grossly inaccurate. So is it better to avoid all the work that goes into estimating?
To answer any complex question, we need to establish context. Let’s start with a small team of 5-7 developers and testers. They are working on a relatively small, targeted software application. The marketing department has defined the minimum viable product (MVP). The primary goal is to deliver the MVP as soon as possible. In other words, get minimal functionality out there fast and add to it in future releases.
Does this effort need estimates? Not really. Marketing needs a rough idea of when alpha-level code will be available. Allowing a few weeks for alpha testing and a few more for beta testing provides a timeline. That may well be good enough. Why spend a lot of time estimating individual stories, adding up estimates, and tracking all of it? What value would that add?
Well, there are two answers to that last question. There is little or no added value in the estimates for the current project. However, there may be value in helping the team make future estimates (assuming they ever need to).
Now we’ll take a look at a different context.
We have a small team of 7-9 developers and testers. They are working on an enterprise-scale software application. Again, marketing has defined the MVP but this time the MVP is big and complex. In other words, the first release has to be functionally complete in order to be competitive. Everyone understands that this effort will take several months to get to beta-level code.
Does this effort require estimates? If time and money are no object, why estimate? Just go for it. I don’t know about you, but I’ve never worked with an enterprise company that would start such a project without at least having epic-level estimates. Management needs some idea of what they’re getting into, how long it will take, and how much it will cost.
Also, in a lengthy project, early estimates help to fine-tune later estimates. The team should get better and better at estimating as time passes. It’s not unusual to get 50% of the way through a project only to realize that 80% remains. This happens when the early estimates were overly optimistic.
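That early-warning signal can be computed directly: compare the actual effort spent on completed work against its original estimate, then rescale the remaining forecast by the same ratio. A minimal sketch, using the 50%/80% situation from above as assumed numbers:

```python
def rescale_remaining(estimated_done, actual_done, estimated_remaining):
    """Rescale the remaining estimate by the overrun ratio
    observed on the work completed so far."""
    overrun = actual_done / estimated_done
    return estimated_remaining * overrun

# The first half was estimated at 50 points but took 80 points of
# actual effort, so the remaining 50 points should be re-forecast.
print(rescale_remaining(50, 80, 50))  # 80.0
```

The rescaled number is only as good as the actuals behind it, but it beats discovering the overrun at the end.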
Should You Estimate?
That depends. Does your management team trust the software development team? If not, rolling with no estimates will never work. As soon as problems surface (and they inevitably will), the team’s lack of estimates will be blamed. A tough situation will cascade into turmoil.
If trust exists, you’ll need to break up the work into small chunks — the smaller the better. (I’m a fan of one-week sprints and four-week releases. Admittedly, those time periods are tough for many to accept but that’s a discussion for another time.) If the team can demonstrate the ability to deliver as promised early on, management is much more likely to accept that they will keep delivering as promised. Estimates will add little, if any, value.
Estimates are a lot of work. You have to derive them, track them and report them. #NoEstimates? It’s a goal worth shooting for but will be hard to achieve. Work on building trust first. Your odds of winning acceptance of new ideas will improve dramatically.
So you’ve adopted an iterative approach to software development. Every ‘N’ weeks your team delivers working software. ‘N’ is usually 2-4 but could be 6-8. The point is that the project is divided into a series of iterations or sprints rather than building everything and delivering all of it in one big bang.
Congratulations! Your software development team is agile — or maybe not. We need to take a closer look at how the work is being broken down into chunks and spread out among the iterations. Let’s begin with a few basic assumptions. If you’re not doing these things, you’re not agile.
- The team includes a test function (i.e. there isn’t a separate QA group).
- The team delivers working software at the end of each iteration.
- There is a backlog of work items to be done.
Sounds good, right? But wait, let’s examine that backlog. Agile software development teams are strongly encouraged to define their work chunks (e.g. user stories) as vertical slices through the software system. This approach helps in growing the system little by little. It invokes the principle of progressive refinement — the software becomes more comprehensive sprint by sprint.
Unfortunately, many teams don’t approach the work effort this way. They begin by defining a set of software components that will be built semi-independently and ultimately joined to construct the system. They then build and test each component during separate iterations and deliver them as completed units.
Here’s the problem. The software won’t function in a useful way until nearly all the components are delivered. Thus the deliverables at the end of each iteration are largely useless to the business. The components don’t accomplish anything individually. They need to be integrated into a whole system to be useful.
This component approach is iterative but it’s NOT agile.
Go back and take a closer look at that backlog. Think about user activities (or business process flows). Those are your work chunks (i.e. stories). Map the activities to your software components. Then, isolate specific functionality within each component that’s needed to support the user activities.
The goal is to build out components little by little and as needed. This way the team will deliver functional software at the end of each iteration and the components will be progressively refined. That’s agile.
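The difference between the two backlog styles is easy to see in data form. In a vertically sliced backlog, each story names the thin piece of every component it needs, so each story is shippable on its own. (The story and component names below are invented for illustration.)

```python
# Each story is a vertical slice: a little UI, a little API,
# a little database -- just enough to make that story work.
backlog = [
    {
        "story": "user places an order",
        "slices": {
            "ui": "order form only",
            "api": "create-order endpoint",
            "db": "orders table",
        },
    },
    {
        "story": "user views order history",
        "slices": {
            "ui": "history page",
            "api": "list-orders endpoint",
            "db": "index on customer id",
        },
    },
]

for item in backlog:
    parts = ", ".join(f"{c}: {s}" for c, s in item["slices"].items())
    print(f"{item['story']} -> {parts}")
```

Contrast this with a component backlog (“build the UI”, “build the API”), where nothing is usable until every item is done.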
Imagine you had a team of ace software developers. This team is fabulous. They know how to get stuff done. They work well together and they work well with other teams. They always find a way to deliver good software.
Now imagine that they are given a terrible software development process to follow. The process is big, complex, redundant and bureaucratic. Every decision has to be validated, reviewed and approved. They are also given inadequate tools, poorly defined requirements, and a poor project manager. How could any team possibly succeed in such circumstances?
Yet, great teams overcome enormous obstacles. They find ways to succeed no matter what it takes. That’s what great teams do. They rally. They swarm. They fight. They win. The underlying process is simply a formality. The poor support structures are mere impediments.
The Opposite Won’t Work
Now imagine the opposite. Your team is given the world’s best software development process. This process is lean, efficient and simple. Many teams have followed it and its track record is nearly perfect. The team is also given excellent tools, clear requirements and a terrific project manager.
Unfortunately, the team members are rejects from other software development teams. They struggle with every decision. They argue amongst themselves constantly and they argue with everyone around them. They can’t agree on anything. Will this team find success by following the perfect process and having a great development environment?
No, they won’t. No development process, management support or technical training will save this team. They’re doomed!
Your Team Needs More Great People
Of course, real world software development teams are a mix of great people and not-so-great people. That’s why we need good processes, good tools, and good management support structures. These added elements help with team discipline and offer a path to consistency.
Want to improve your software development track record? Hire better quality people and pull out all the stops to retain them. As for the poor performers, reassign them or fire them. Just get them off the team. They will only drag everyone else down.
It takes great people to build great software.
Some people have a hard time accepting iterative approaches to getting work done. It may be because many of us have an innate desire to finish something — we need to be done. Yet, when we iterate over a work item, it feels like it takes longer to get to done. Here’s a simple example taken from everyday life.
Let’s say you want to clean your entire house or apartment. Cleaning one room at a time seems like a good approach. You’d get a sense of accomplishment after finishing each room. But it’s not efficient. You’d be switching among cleaning products and cleaning tools — constantly wrapping up one task and preparing for the next one. You’d also run the risk of stopping before you ever got to little-used areas of your home (which is why you should start with the least-used area).
A more efficient and ultimately faster approach is to break up the housecleaning task into iterations where each iteration is focused on a particular cleaning task. For example, your iterations might look like this:
- Pick up all the clutter throughout the house and put items where they belong.
- Prepare for window cleaning and wash all the windows.
- Prepare for furniture dusting and dust all the furniture items as needed.
- Vacuum every room including drapes, upholstered furniture and floors.
- Prepare for floor washing and wash any remaining solid floor surfaces.
The downside is that you won’t get that same sense of satisfaction of being completely ‘done’ with a room until later. The upside is that you’ll finish the entire job sooner. Another upside is that this approach is more readily adaptable to multiple people doing the cleaning. Once step one is completed in a room, someone else can begin step two in that room and so on. The cleaning crew won’t be competing for cleaning items as they would in the one room at a time approach.
Anyway, it’s a simple example intended to make a point. There are many jobs that can be done more efficiently and faster by iterating over the solution space.
Writers Do It
Another great example is the job of writing any lengthy document. It can be a novel, short story, article, specification or presentation. Writers have been doing it for centuries. They don’t plod through documents one section at a time. The efficient process goes something like this:
- Create an outline. Lay out the major sections.
- Make a complete pass through the document filling in the quick and simple parts.
- Make another pass adding details and supporting information.
- Make a final pass filling in missing pieces.
- Lastly, check, double check and wordsmith the entire document.
Agile Developers Do It Too
For agile software development, the same rules apply. The iterative process is similar and might look like this:
- Create a structure or foundation for a code module.
- Make a complete pass to get basic functionality working.
- Make another pass to add complex functional elements.
- Make a final pass for error handling and logging.
- Lastly, check everything and integrate the module with the rest of the system.
The precise details will vary depending on your skills, style and confidence. The point is that iterating over the solution space is ultimately a superior way of getting things done. Plodding through any job trying to completely finish one area before moving to the next one is inefficient and costly. You’ll inevitably be forced to go back and adjust something you thought was done.
Embrace the feeling of being ‘done’ when completing each task — not just when the entire job is complete.
The challenge isn’t learning new things. It’s unlearning old ones.
Change is all about doing something differently, which requires learning something new. No problem, right? Life is learning.
But there’s a flip side to change. You have to unlearn something old. You need to stop doing something you’ve learned to do. If you’re not particularly happy with the old way of doing things, unlearning should be a pleasant task. However, if you’re comfortable and skilled in the old way, unlearning it will be a challenge.
Consider this common example.
Let’s say we have a software development team that’s accustomed to receiving lengthy business requirements documents. They study them carefully, prioritize the contents, and lay out game plans for building the systems. Let’s assume that they know many things will change along the way but they like seeing the big picture at the outset. That is, all the major and minor requirements are defined in a single document. When things change (as they always do), the team will re-group and adjust the plan.
When this team begins receiving business requirements in the form of user epics and stories, there will be a bit of culture shock. The big picture will still be there but it will be in the form of epics that progressively drill down into detailed requirements via stories. The epics and stories form a hierarchy that presents the same business requirements in a very different format.
To make things more unnerving, not all of the epics will be decomposed into stories before the team begins to write code — some details will be missing. Having been through this transition several times, I can tell you that some developers and testers will be extremely uncomfortable.
You can train people in a new way of doing something, but you also have to untrain them from the old way.
It’s not easy. Be patient. It may not be practical to completely abandon the way things are done today and jump into the new way of doing the same things tomorrow. Going from the current situation to the new one may require intermediate steps. Some people will have trouble letting go, needing time to gradually transition to the new way.
There may be a clash between those ready to leave the past behind and those clinging to it. Keep expectations under control. Move fast, but not so fast that some people get left behind, unable to contribute.
I’ll bet you’ve seen this happen. It may even have happened to you. The business folks are using software they built to track and report something important. The software is a bit crude, maybe even primitive, but it works. It’s simple. (Often, Microsoft Excel and/or Microsoft Access are the preferred tools.)
Everyone loves the insights they get from the software. Now they want more — a lot more. The business wants to be able to mix, match, sort, filter, add, remove and combine just about every data element there is. And, oh by the way, they want the new software to be just as simple and intuitive as the current software.
The project is complex enough that they ask the IT department for help. IT decides that a new enterprise application is required and it will take several months to develop. The business people gasp — “SEVERAL MONTHS!”.
I’ve seen this scenario unfold many times over the years. It happens for three reasons. First, there’s a lack of trust. Neither side trusts the other to be open, honest and direct. Second, successful businesses become more complex over time. That complexity brings new challenges making it harder to get things done. Third, people get carried away. They wish for everything they can think of rather than focusing on what they really need.
Have you been there? What was the result? If you find yourself in this situation, consider the following advice.
1. Be transparent.
2. Be honest.
3. Commit to a long-term relationship.
Understand the current situation
4. Assess the data that’s being gathered and reported.
5. Be sure you fully understand what’s being analyzed today.
6. Evaluate how the data is processed.
7. Isolate the limitations of the current approach.
Define the desired situation
8. Gather stories that describe what people want to do — everything they want to do.
9. Find example user interface techniques from existing software that people like.
10. Determine if the current software can be changed/enhanced to meet the needs.
Build the new/enhanced system
11. Weed out the stories to get to what really matters.
12. Prioritize the remaining stories.
13. Build the new system in short iterations (2-4 weeks each).
14. Be responsive after each deployment.
15. Fix bugs — even those you don’t agree with.
Keep referring back to items 1, 2 and 3. It will take a lot of effort to build trust. It will take a lot of time to win people over. Like it or not, you have to do it. Without trust, none of the other steps will matter.
Here’s a question that I believe generates some of the controversy around agile software development techniques versus waterfall techniques. If your team delivers fewer software features and functions but the software is higher in quality, are your user base and your company better off?
Agile teams often claim to deliver better quality software. I agree with that sentiment as long as the team follows core agile principles, in particular, “Customer collaboration over contract negotiation”. My experience with both agile and waterfall teams supports the quality claim. All software development teams have a strong tendency to over-commit. Agile teams figure this out fairly early in the development cycle and make adjustments. Waterfall teams figure it out much later and tend to sacrifice quality to stay on track.
Despite these tendencies, don’t rush to judgment in answering the question. Every major software company has rushed products to market knowing that the products weren’t ready. Here are a few examples:
- Apple: MobileMe and iOS Maps
- Google (and its hardware partners): Releasing “Beta” products and leaving them in beta status indefinitely
- Microsoft: Windows Vista
- RIM: BlackBerry Storm and PlayBook
Despite the problems faced by these and many other rushed products, there are good reasons for getting to market fast.
- Obtain first mover advantage
- Generate cash to continue product development efforts
- Respond to customer demand for new capabilities
- Respond to competitive threats
- Establish a position in a new market
The original question generates follow-up questions such as “How do you define quality?” and “What do I have to give up to get better quality?”. The questions are simple but the answers never are.
Agile teams are not immune.
Agile development teams also get sucked into rush-to-market tempests. The pressure to deliver something — anything — can reach stratospheric levels. When it does, quality often gets sacrificed. Unfortunately, customers, investors and the trade press share some of the responsibility for pressuring companies to release products too soon. You can have it fast but something else has to be compromised. That something is often quality.
For me, quality trumps features. I’d rather have a small and reliable feature set than a large and flaky feature set. The project team will eventually run out of time. The software will have to ship — ready or not. That’s why it’s so critically important to prioritize features and deliver quality results in each iteration.
Do you agree?
I’ve criticized metrics in this blog before and I’m doing it again. I’m really a fan of metrics when they’re applied properly and interpreted correctly. Unfortunately, metrics are often misapplied and misinterpreted resulting in poor decision-making.
That said, let’s take a closer look at a popular Scrum metric — velocity. Here is the definition found at the Scrum Alliance:
“In Scrum, velocity is how much product backlog effort a team can handle in one sprint. This can be estimated by viewing previous sprints, assuming the team composition and sprint duration are kept constant. It can also be established on a sprint-by-sprint basis, using commitment-based planning.
Once established, velocity can be used to plan projects and forecast release and product completion dates.
How can velocity computations be meaningful when backlog item estimates are intentionally rough? The law of large numbers tends to average out the roughness of the estimates.”
Makes sense, right? Pay particular attention to the phrases “…assuming the team composition and sprint duration are kept constant” and “The law of large numbers tends to average out the roughness of the estimates.”
Scrum teams usually measure their software development effort in story points so velocity is the number of story points the team can finish during a sprint. You could use hours or any other measure you like. The key point is to develop a reliable way of sizing units of work (stories).
In a perfect world, every sprint would deliver the same number of story points and the velocity chart would be a flat line: say, a steady 20 story points delivered per sprint. If a development team could achieve this, it would indicate that they are very good at sizing stories and very good at getting the stories to ‘done’ during each sprint. (It might also indicate that they are very good at gaming the system but let’s not go there.)
In the real world, velocity charts are far bumpier.
The number of story points delivered varies — at times, it varies widely. There may even be sprints where the velocity drops to zero because the team encountered a major unforeseen impediment.
Does this mean velocity is a useless metric?
Yes, if your expectation is that velocity can predict the future. It can’t. Velocity can only tell you what has already happened. It offers a suggestion of what can be expected in the future — over several sprints — but it can’t tell us how many story points will be delivered in the next sprint.
In other words, if the team is averaging 20 story points per sprint over several sprints, they are most likely to deliver 100 story points over the next 5 sprints. Will they deliver 20 story points during the next sprint? Your guess is as good as mine.
Warning: I just made a huge assumption in stating that 100 points can be expected in the next 5 sprints. I assumed that the team would remain intact AND that the software environment would remain fairly static. If there’s a major change in personnel, technology or tools, velocity is likely to decline sharply before rebounding.
Bottom line: I like the velocity metric and I recommend you track it. Just don’t expect it to be a crystal ball.
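To make the “rearview mirror, not crystal ball” point concrete, here is a minimal sketch of velocity tracking (the sprint-by-sprint numbers are assumed). A rolling average suggests a multi-sprint total, and reporting a range is more honest than a single number:

```python
def average_velocity(history, window=5):
    """Average story points per sprint over the last `window`
    sprints -- a summary of the past, not a prediction."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def range_forecast(history, sprints_ahead, window=5):
    """Project a (low, average, high) range by scaling the worst,
    mean and best recent sprints forward."""
    recent = history[-window:]
    avg = sum(recent) / len(recent)
    return (min(recent) * sprints_ahead,
            avg * sprints_ahead,
            max(recent) * sprints_ahead)

history = [18, 22, 15, 25, 20]  # assumed recent sprint velocities
print(average_velocity(history))   # 20.0
print(range_forecast(history, 5))  # (75, 100.0, 125)
```

Even the range only holds if team composition and the technical environment stay stable, as noted above.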