Do people really matter? If you have a strong enough process in place, does it make any real difference who executes it? Couldn’t you just invest a lot of time defining and documenting a structured, disciplined software engineering process, then hire cheap labor to follow it?
Many companies in various retail service industries do exactly that.
- Fast food chains like McDonald’s and Burger King
- Big retailers like Walmart and Target
- Hospitality providers like Hilton and Sheraton
There’s a variation that goes even further. Major product companies go to great lengths to train independent service providers in the installation, maintenance and repair of their products. This applies to many kinds of products including automobiles, televisions, computers, lawn mowers, furnaces, refrigerators, medical equipment and many other gadgets.
Why not apply this approach to software engineering?
The standard answer is that software engineering includes an element of art. It’s not just about building something. The software needs to look good and behave well.
That rationale may sound good, but the situation is more complex than artful engineering. Many professions include an artistic element. Plumbers and electricians often take great care to make their work look good.
Art aside, the software profession really is different — not only when compared to other diverse professions but also when compared to other engineering fields like electrical or mechanical engineering.
For one thing, software development is often a trial-and-error process. There is little risk or harm in trying something. If it works, leave it alone. If it doesn’t, try something else. If you’re not sure, send it to QA and let them deal with it.
This is quite different from many other professions where a mistake can be costly and second tries are hard to get.
Software is also a grass-roots movement. There is no single standards body or certifying agency. There are many. There is no preferred approach to managing software development, building software systems or writing code. There are many.
This makes it easy for almost anyone to give software engineering a try. It’s a low cost profession to enter and you can train yourself on your own time at home.
Software engineering is chaotic.
These characteristics make managing the software engineering process difficult. That’s why we have dozens of approaches to managing software projects and building software systems. No wonder the whole industry appears to be in chaos at times.
There is a powerful upside to the apparent chaos, however. It brings rapid change and fosters innovation. Yes, at times things move too fast and problems result but the industry is still learning.
Software engineering has its roots in the 1940s but didn’t really come into its own as a profession until the 1960s. That makes it very young — even electrical engineering can be traced back to the 1800s.
Greater discipline will arrive in time. Until then, find an approach that works for you and keep making it better by using short cycle times, frequent checkpoints, and regular retrospectives.
Have you ever made a bad decision? The correct answer is “No, I’ve made many bad decisions.” [Sorry, no other response is acceptable.]
Project teams make many bad decisions. Bigger teams and longer projects result in many more bad decisions along the way. Why does this happen?
- The information being reviewed is wrong and no one realizes it.
- The data being analyzed is incomplete and misleading.
- The people making the decision don’t have sufficient experience to draw upon.
- The team feels pressured to do something they don’t feel good about.
- Someone lied (vendor, customer, manager, Scrum Master, Product Owner, etc).
(That last one is egregious and rare but it happens.) Thankfully, most bad decisions are minor and easily corrected. In most cases, making a bad decision is better than making no decision and leaving the team paralyzed.
Here lies a major weakness of BDUF (Big Design Up Front). Intelligent people don’t like to admit that some of the many detailed decisions they have to make on a project will prove to be bad, or at least much less than optimal. By the time those bad decisions have surfaced, code will have been written, test routines implemented, etc. The cost of fixing those issues is likely to be much higher than the cost of preventing them.
What can you do?
Clearly, any steps you can take to help the team make better decisions can only help. A simple approach is to help the team to self-organize so that major decisions can be made collectively. This is not easily done but start by reviewing the 12 Principles behind the Agile Manifesto. Encourage the whole team to be open, visible and collaborative.
Making this work well also requires just-in-time decision-making. It may be uncomfortable at first, but you really need to make decisions as late as possible in the project. Later usually comes with better information and greater knowledge. That doesn’t mean you should avoid decisions. It means not making them if you don’t need to.
There are always exceptions.
Life isn’t perfect and the above principles won’t always work. Sometimes a client or prospect (e.g. an RFP response) demands a detailed description of what the team will do, when they will do it, and how much it will cost. If you want to play, you have to make many design decisions up front. You can give yourself some wiggle room, but if you deviate too far from what you proposed, you could be in violation of an implied or written contract.
There are many ways to handle these situations and they mostly boil down to gaining the client’s trust so you are given greater flexibility. Agile approaches can be used in rigid, even fixed-price environments, if the client is willing to grant flexibility in at least one area — scope or time. If not, a command-and-control approach (i.e. one of the many waterfall variations) with a structured change control system will be needed.
Progress is making new mistakes.
The key word in the last sentence is ‘new’. Repetitive mistakes are killers and that’s why we use retrospectives to identify and eradicate them. New mistakes are okay — just make them as late in the project as possible.
Many software projects commit suicide. How? Here are a few gruesome ways.
- Not enough calendar time is allocated. High-quality software takes time. Throwing lots of people at the problem may get the project done faster but there will be many rough edges in the final software. Will customers accept that?
- The team is understaffed. This forces team members to wear many hats and perform jobs they either don’t like or aren’t good at. Quality will suffer as a result. Is that a price you are willing to pay?
- The budget is too small. Lack of funding will force the team to cut corners. This will cause a variety of headaches. The scope must align with the budget. Small budget, small scope.
- The wrong people are assigned to the project. Often a project team is cobbled together based on who happens to be available. The resulting software will have a cobbled-together look and feel to it.
- The toolset is not up to the task. Skimping on tools is as bad as skimping on people. Enterprise-scale applications require enterprise-scale tools. That includes configuration management and automated test tools.
- Compromising on quality in order to meet time, scope or budget goals. Some managers deny that quality is a variable. It is and it should be tightly controlled.
- The process is overbearing. The process should fit the project. Subjecting a departmental project to the overhead and discipline of a major enterprise project will overburden the team and slow everyone down.
These problems aren’t easy to avoid. If you find yourself facing project suicide, you have to find negotiating room. For example, if calendar time is fixed, try to get scope flexibility. If the scope is fixed, try to get added funding. If the people mix is wrong, try to get training or coaching help.
Have you been on a suicide project? Tell me about it.
There are many passionate debates around the web on the subject of waterfall development versus one agile approach or another (e.g. Scrum, Kanban, Lean or XP). Most of those debaters are wrong. All wrong.
There is a fatal flaw in their logic. What is it? I’ll get to that.
Firstly, what is waterfall software development? I’m not going to introduce yet another slanted definition. Waterfall has many variants just like agile. Let me refer you to Wikipedia for what I hope is a simple, unbiased explanation of the waterfall model.
Can a waterfall variant deliver a successful software project outcome? Yes, of course, if it is properly applied in a situation where it fits. Where does it fit? Waterfall works best when the situation is relatively stable and the problem is well-defined. Ideally, the project should be short in duration, though this is not a critical success factor.
Waterfall is also practical any time lots of documentation and audit trails are needed. This could be a heavily regulated environment or one where the customer simply demands a paper trail.
What about agile software development?
Again, there are many variants. I’ll refer you to Wikipedia’s definition of agile software development. (By the way, please take a little time to click through Wikipedia and read the “See Also” articles. There’s a wealth of useful information.)
Same question: Can an agile approach work? Same answer: Yes, if it is properly applied in a situation where it fits — smaller teams, experienced developers, corporate acceptance.
The fatal logic flaw I mentioned above? Almost everyone seems to assume that their preferred development approach will always be applied properly in a situation that is ready to adopt it. Yeah, right!
For example, waterfall teams are often forced to introduce complex change management practices. Why? Because the situation is dynamic and the problem cannot be completely defined up front. If too many changes are introduced during the project, an agile approach would have been a better choice.
Another example: Agile teams often operate without full business buy-in and participation. They go merrily along releasing software but no one outside of the team uses it. They could have followed an incremental waterfall approach with similar or better results.
Wikipedia has another great article called List of software development philosophies. Take a look. It is “a list of approaches, styles, and philosophies in software development.” There are 72 items listed and it’s acknowledged as incomplete. Seventy-two!
The waterfall versus agile debates are meaningless without context. I strongly prefer agile approaches simply because I operate in small, fluid environments where change is a daily experience. If I had to deal with change requests and documentation updates every day, I’d go mad. Instead, I add changes to the backlog and simply cycle them into the next planning meeting.
I also work with user communities that are heavily engaged in using and critiquing the software every day.
Whatever approach you are using, be sure to keep metrics and ask your users what they think. If the approach is working, keep using it while seeking incremental improvements. If it’s not working, why are you still doing it?
Here’s the dilemma: Your Scrum team is nearing the end of a Sprint or a release cycle and they have experienced some unforeseen problems. Completing all the planned items appears unlikely. The final build will be deployed for end-user testing so there will be plenty of attention and feedback. You’re left with two choices:
- Rush to completion. Have the team work extra hours, cut some corners, relax your definition of done, and race to the finish line.
- Return a few stories to the backlog. Stabilize what you have and deliver good-quality results.
Which should you choose?
The answer is the often-used “it depends”. It depends on the situation and what is most important to the business. This is one situation where the opinion of the Product Owner matters greatly.
Not all stories are created equal. Regardless of the effort needed to implement them, stories have greater or lesser value to the business. That’s why they must be prioritized by the Product Owner. The team should be working on the high-priority ones first. If they’ve done that and still found themselves in the above situation, it’s easier to return low-priority stories to the backlog.
If they haven’t prioritized, the decision gets much tougher and some long days are in their future. Either way, there is a significant downside.
- If the release is rushed and quality suffers, the users will get a bad impression of the software. If this will be the first time they use the software, things could get ugly. You know what they say about first impressions! It could take multiple additional releases to erase that memory.
- If features are left out, some users will be unhappy. They may refuse to do any testing because the software is not ready. A credibility gap is created simply because user expectations have not been met. Credibility gaps are very hard to fill!
Often, the developers and testers have little sensitivity to this type of issue. They’ve worked hard. They’ve accomplished a lot. Why is this such a big deal? That’s why it’s important to get everyone to make personal commitments, communicate often, and ask for help at the first sign of trouble.
You cannot entirely avoid this dilemma but you can take steps to mitigate it.
- Do not over commit. When in doubt, leave it out!
- Monitor your sprint deliveries carefully using burndowns. If you fall behind, odds are you will fall further behind. Catching up is hard to do!
- When you sense trouble, communicate. Let someone know. The earlier everyone knows, the simpler it is to take corrective action. ScrumMasters and Product Owners always appreciate an early warning!
Sounds obvious, right? So why do so many software development teams get into trouble? What does your Scrum team do to avoid trouble and meet commitments?
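The burndown monitoring suggested above can be automated with a simple check. Here is a minimal sketch, with illustrative numbers and function names of my own choosing (not from any particular tool): compare the actual remaining story points against the ideal straight-line burndown and raise a flag when the team falls behind by more than a tolerance.

```python
# Minimal burndown check: compare actual remaining story points against
# the ideal straight-line burndown and flag slippage early.
# All numbers are illustrative, not from a real project.

def ideal_remaining(total_points: int, sprint_days: int, day: int) -> float:
    """Story points that should remain after `day` days on the ideal line."""
    return total_points * (1 - day / sprint_days)

def behind_schedule(total_points: int, sprint_days: int,
                    day: int, actual_remaining: int,
                    tolerance: float = 0.1) -> bool:
    """True if actual remaining work exceeds the ideal line by more than `tolerance`."""
    ideal = ideal_remaining(total_points, sprint_days, day)
    return actual_remaining > ideal * (1 + tolerance)

# A 10-day sprint with 40 committed points; on day 5, 28 points remain.
# The ideal line says 20 should remain, so the team is behind.
print(behind_schedule(40, 10, 5, 28))  # True -> time to communicate
```

The tolerance keeps the check from crying wolf over day-to-day noise; the point is to surface trouble early, while corrective action is still cheap.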
There’s an impression among some software developers that agile approaches like Scrum, Kanban, Lean and XP only apply to software products, not other types of software. That impression likely results from naming conventions such as “Product Owner”, “Product Backlog”, “Releases” and “User Stories”.
While many of us associate software with desktop applications, there are many types of software. The tools and techniques used to build them vary. Here’s a short list of common types:
- Application software – desktop applications
- Middleware – distributed systems software
- System software – operating systems, device drivers, etc.
- Databases – storage systems
- Firmware – low-level control software
- Programming tools – compilers, debuggers, etc.
Then we have software for smartphones, tablets, game boxes, and other interactive devices.
Product focus is a good thing.
Software doesn’t have to be sold to consumers in order to be described as a product. I think having a product focus can be a good thing for any software team. Products generate revenue. They keep companies in business. They have to be as good as or better than the competition.
Software development groups focused on internal users often lack the competitive energy that product groups have. I encourage everyone to think competitively regardless of the nature of the software being written.
The concept of releases also applies to any type of software development. Many projects seem to operate on the “one and done” principle — write lots of code, test it, release it — done. Toss it over the wall and it becomes someone else’s problem.
This attitude may succeed for small and narrow software solutions. Any software system tied to a major business process will require active user participation to validate functionality and identify defects. Furthermore, significant changes and upgrades will be needed as the business evolves. Plan on it.
User Stories don’t only apply to non-technical people. The user could be an external system issuing a request for status or data. The user could also be a system administrator making configuration changes.
Don’t get caught up in semantics.
Interpret the terms common to agile development within the context of your environment. Define ‘user’, ‘story’, ‘release’ and ‘product’ in terms that make sense to you and your team.
The concepts of short delivery cycles (sprints), brief, descriptive requirements (stories), and prioritized work lists (backlogs) can be applied to every project.
Google seems to have started a trend that is growing worse. Google has a history of releasing software labelled “beta”. The beta tag indicates that the software is not yet complete — missing features, increased likelihood of defects, and general usability issues.
The beta tag also provides an easy out. If users complain, simply remind them that it’s beta. If the software is poorly received, withdraw it from the market. Hey, it was beta!
Sometimes it’s not called beta.
Fair enough. Unfortunately, there seems to be a growing tendency toward releasing software that is incomplete, defect-ridden, poorly designed and just not ready for prime time. Yet, there is no “beta” tag to be found.
Who’s doing this? Here are just a few examples:
- Apple (MobileMe)
- Canonical (Ubuntu’s Unity desktop)
- Google (Android & Chrome [early releases], Chromebooks)
- Microsoft (Vista)
- Mozilla (Firefox [early releases])
- RIM (PlayBook)
- Salesforce.com (Visualforce [early releases])
You get the idea. It’s a common tactic exacerbated by intense competition — the need to get a product out the door as soon as possible, ready or not. Many marketers believe that responding to the competition with an inferior product is better than not responding at all.
Okay, fine, but don’t let ‘inferior’ turn into ‘junk’.
Competition is not going away. In fact, it’s only going to get more intense as the pace of innovation accelerates. Does this mean we are all doomed to living with crappy software?
No! As consumers, we need to speak with our wallets and simply not purchase junk no matter how much esteem we have for the company shipping it. If we get duped into buying such a product, we need to complain to anyone and everyone, including regulators. False advertising is a Federal crime in the U.S.A.
For product companies and their retailers, it means being transparent, sharing information, and collaborating with customers. In a word, it means being agile. I’m willing to take a chance on an incomplete product if I know the company is committed to making rapid improvements and will exchange the device if the hardware is at fault.
This means releasing monthly software updates (quarterly updates might work but in a fast-paced market, a company could effectively be out of business in three months). It means retailers keeping smaller inventories. It means frequent blog postings, Twitter and Facebook updates, automatic notifications, etc. to keep customers informed.
It’s even more important for companies to listen. Ask for feedback. Take punches. Respond to complaints. Keep making improvements.
Bottom line: It’s not the software that wins or loses in the marketplace. It’s the company and the people behind it.
Applying metrics to agile projects is a controversial topic. Much of the controversy stems from the nature of agile development. Agile approaches are intended to be flexible and responsive. Metrics seem to imply rigid goals and expectations.
Fortunately, that does not have to be the case. Metrics can be valuable tools as long as they are used appropriately. The wrong metrics can be disruptive as teams will find ways to “game the system” in order to achieve the desired result.
The key point of agile metrics is to focus on team outcomes not individual activities.
Agile metrics are important from both the “how did we do” and the “how can we do better” perspectives. They help us assess the outcomes we’ve produced (lagging indicators) and they help us visualize trends (leading indicators) so that we can improve.
Some of the common measurement metrics for an agile team include:
- Burndown charts – Actual versus expected story points
- Number of story points completed during a sprint versus those expected
- Team velocity (the number of story points completed plotted over time)
- The number of defects (e.g., opened versus closed in the sprint)
- Budget (planned value) versus dollars spent (delivered or earned value)**
- Work hours allocated versus work hours spent**
- Product backlog size
- Test coverage
- Passed tests versus failed tests
- Average story size
- Estimated story development times vs actuals
- Team capacity over time (in story points)
** These only work if the problem and solution spaces are well understood. The greater the uncertainty associated with the project, the more planned value and allocated work hours will fluctuate.
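Several of the metrics above fall out of the same per-sprint records. Here is a minimal sketch, with illustrative field names and numbers of my own invention, showing how velocity, completed-versus-expected points, and net open defects could be derived:

```python
# Sketch of deriving a few team-level metrics from per-sprint records.
# Field names and values are illustrative assumptions, not a real dataset.

sprints = [
    {"planned": 30, "completed": 26, "defects_opened": 5, "defects_closed": 4},
    {"planned": 28, "completed": 27, "defects_opened": 3, "defects_closed": 5},
    {"planned": 32, "completed": 25, "defects_opened": 6, "defects_closed": 4},
]

# Velocity: story points completed per sprint, plotted over time.
velocity = [s["completed"] for s in sprints]

# Completed versus expected points, as a ratio per sprint.
commit_ratio = [s["completed"] / s["planned"] for s in sprints]

# Net open defects accumulating across sprints (opened minus closed).
net_defects = sum(s["defects_opened"] - s["defects_closed"] for s in sprints)

print(velocity)      # [26, 27, 25]
print(net_defects)   # 1
```

Note that these are team outcomes, not individual activities, which keeps the incentive to game the numbers low.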
The following metrics are largely ineffective for agile development:
- Simple checklists showing what has been done
- Lines of code
- Tasks completed (tasks outstanding cannot be determined in advance outside of a sprint)
- Individual developer/tester metrics instead of team metrics
- Hours worked (versus hours expected)
- Percent complete
Factors That Affect Metrics
Not every metric applies to every agile project or even to every sprint. Various agile approaches, such as Scrum, Kanban and XP, focus on different areas and place greater or lesser value on each metric above.
The nature of the work being done will also change the metric equation. A project focused on a simple upgrade to an existing software application is very different from one attempting to build something new and unproven.
The way the team is organized makes a difference. A team composed of senior, experienced developers/testers will measure themselves differently than one composed largely of inexperienced staffers.
The characteristics of the organization also come into play. A highly structured, disciplined organization will want more metrics than a more informal one.
The team’s goals and objectives are in some ways the most important factor. If the team is attempting to improve in a given area, they will need metrics that help them monitor and adjust along the way.
A Few Final Points
Finally, keep the following points in mind as you ponder which metrics are appropriate to your projects. There is a cost associated with gathering and reporting metrics. The more complex the measurement, the more expensive it will be to track and report.
Some metrics can be employed for brief periods to help teams achieve a goal or improve a process. You don’t have to commit to measuring something long term. Use metrics situationally.
Most metrics are not comparable across teams or across projects. Most measurements are ephemeral. They are only useful for a brief period of time. (Unless you like to dig back into history and reminisce about the past.)
Keep the metrics simple and gather data at regular intervals. Simpler is almost always better (and cheaper). It’s also important to keep a regular cadence thus making the measurements more meaningful.
Finally, remember that “Working software is the primary measure of progress.”, according to the Agile Manifesto. What metrics have you found valuable…or not?
Here’s a scenario…your organization decides to implement an agile approach to software development. It could be Scrum, Kanban, Lean, XP, etc. — doesn’t matter. The basics are implemented — small teams, product owners, stories, backlogs, daily meetings, retrospectives, etc.
These are all good steps to take but they don’t magically make an organization agile. True agility demands a mindset — a way of conducting business — that is different from the past.
Consider that all agile approaches have at least one thing in common — frequent face-to-face communication. Daily meetings are not enough. Agile teams need frequent face time. Without it, they can quickly regress to the old way of developing software.
The reason behind small teams is to encourage more face time. Too often we become dependent on technology solutions and ignore what has worked for millennia. The problem with technology solutions is the layer of abstraction they add.
A simple change in behavior can help make any team more agile.
Encourage everyone to follow these guidelines whenever there is a need to communicate with someone else inside or outside the core team. The guidelines are in priority order from most desirable to least desirable.
- Walk down the hall and talk to the other person. Failing that…
- Pick up the phone and call the other person. If you can’t reach the person…
- Send an email. Indicate that you will follow up with a phone call. Still no luck…
- If the person is online, send an instant message. Get a conversation started, then walk down the hall or use the phone. Lastly…
- Send a text message and follow up with a phone call. If all else fails…
- Schedule a meeting.
Eliminate the abstraction layers and you’ll get more done in less time. That’s agile.
Scrum teams can fail for many reasons, just like any other team, agile or not. Describing how to succeed is tough because success is often situational. Describing how to avoid failure is simpler. Let’s explore some of the most preventable ways to fail.
1. Wrong people on the team especially SM or PO
Every project team needs the right people with the right attitude. If you’ve got the wrong people on the team or people with the right skills but a poor fit, the project will suffer. This is particularly true of anyone in a leadership role such as the ScrumMaster or the Product Owner. Find good people. [If you end up with a bad apple, eliminate that person from the team at the earliest opportunity. You, the team and person being eliminated will be better off.]
2. Team over-commits and loses credibility
Many of us have a tendency to over-commit. We’re generally optimists and it often hurts us. Teams need to find an equilibrium between commitment and delivery as quickly as possible. Credibility is hard to gain and easy to lose.
3. Poor definition of done or poor control over done
Much has been written about the subject of being “done”. I won’t belabor it here. Get the team to agree on what actions must occur before a team member can move from one objective to another. Then, enforce it.
4. Velocity is not tracked or not used properly
Measuring velocity is critical to understanding how much the team can get done in a unit of time. You have to measure it and you have to refer to it when planning future sprints. No excuses.
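One simple way to use measured velocity in planning is a trailing average: size the next sprint’s commitment from what the team actually completed recently. A minimal sketch, with illustrative numbers and a hypothetical function name:

```python
# Sketch: size the next sprint commitment from a trailing average of
# measured velocity instead of guessing. Numbers are illustrative.

def forecast_capacity(velocities: list[float], window: int = 3) -> float:
    """Average of the most recent `window` sprint velocities."""
    recent = velocities[-window:]
    return sum(recent) / len(recent)

past_velocities = [22, 26, 24, 25]   # completed story points per sprint
capacity = forecast_capacity(past_velocities)
print(capacity)  # (26 + 24 + 25) / 3 = 25.0
```

A short window reacts quickly to team changes; a longer one smooths out lucky or unlucky sprints. Either way, committing above the forecast should be a conscious, explained decision.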
5. Skipping retrospectives or not following through
Retrospectives are a critical learning tool. How can the team improve without regularly assessing its strengths and weaknesses? Those assessments must be translated into actionable improvement steps at every opportunity.
6. Technical debt piles up
Every Scrum team will incur technical debt. That’s not such a bad thing as long as the team recognizes the debt and takes action to reduce it in the current or subsequent sprints. If technical debt is allowed to pile up, the project will topple. Don’t let that happen.
7. Disorganized backlog(s) and not adding defects to them
Product, release and sprint backlogs have to be managed regularly. New defects that cannot be repaired during the current sprint must be added to the product or release backlog. Ignoring these actions will result in increasing chaos over time.
8. Version control and branching mismanaged
Good configuration management practices are important to any moderately sized project regardless of methodology. CM is even more important to Scrum and other agile projects because of the frequent builds and releases. Find a good open source or commercial CM tool and use it.
9. Sprints and product release cycles are too long
Sprints of more than four weeks are too long. Product releases composed of more than three to four sprints are too long. Sure, there are exceptions but these general guidelines are worth remembering.
10. Inadequate testing (too manual)
The Achilles heel of waterfall is testing — it often gets shortchanged because it’s jammed at the end of the project. The same thing can happen to Scrum teams at the end of sprints or release cycles. Don’t let it. Automate as much testing as you reasonably can and test, test, test.