4 Risk Categories Are All You Need to Worry About
Have you ever been involved in the software project from hell? Many of us have. The project begins with plenty of enthusiasm and optimism. Goals are defined. Staffing is assigned. Delivery dates and budgets are handed out. Everyone’s excited.
But somewhere along the road to done, unexpected speed bumps impact the goals, dates and budgets. Some of the bumps are minor, while others feel like a rough roller coaster ride. The project team regroups and moves on, yet the goals seem harder to achieve, the dates appear more and more unrealistic, and the budget is all but exhausted.
How could this happen? What went wrong? The goals were reasonable. The schedule was adequate. The staffing was solid. What did the team miss?
The answer is risk.
Every project faces many risks during its lifetime. Risks are the opposite of opportunities: they are the bad things that can happen along the road to completion. Risks can be avoided with good planning, mitigated with forethought, or accepted because the benefits outweigh the drawbacks.
While risks can be sliced and diced into dozens of categories, let’s examine a simple risk management approach using just four categories:
Glitch: Minor Impact / Low Likelihood
“Glitches” are unlikely and easy to handle. They crop up from time to time on just about every project. You simply deal with them as they happen and move on. As long as some extra time is built into the schedule to manage glitches, they are not likely to cause trouble.
One example of a glitch from my own experience is a software application under development that was performing poorly. Because the production hardware would be much faster than the development hardware, we simply purchased the production system ahead of schedule and used it for development.
Obstacle: Minor Impact / High Likelihood
“Obstacles” are often unavoidable but easy to handle. You know that some number of them will happen so planning is essential. You need to identify the obstacles and either a) define a plan for dealing with them when the time comes, or b) plot a course around them.
Examples of obstacles are the defects/deficiencies present in an off-the-shelf software package purchased for a project. Assuming reasonable care went into the selection of the software, problems encountered with its use are unlikely to derail the project.
Surprise: Major Impact / Low Likelihood
“Surprises” are unwelcome and potentially painful. They aren’t likely so you don’t spend much time thinking about them but if they strike, your project is in trouble. As always, forewarned is forearmed. Lay out these potential surprises at the start of the project and decide what you can do up front to mitigate them. Then prepare an alternate plan to deal with the surprise should it actually occur.
For example, most of us have seen projects get into serious trouble when a key contributor leaves the team. However unlikely it may be that your “chief whatever” will leave before the project ends, it can happen and contingency plans are needed.
Disaster: Major Impact / High Likelihood
“Disasters” are often man-made and devastating. Any project facing risks in this category is in serious trouble at the outset. The management team had better have lots of time and money available.
An all-too-frequent example is the geographically distributed project team that has little in common technically, culturally or linguistically, and a management team that expects them to march in unison. If you find yourself on such a project, drive away and don’t look back.
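The four categories above form a two-by-two matrix of impact and likelihood. Here is a minimal sketch of that matrix; the category names come from this post, while the sample risk list is purely illustrative:

```python
# The four-quadrant risk model: impact (minor/major) x likelihood (low/high).

def categorize(impact: str, likelihood: str) -> str:
    """Map an (impact, likelihood) pair to one of the four risk categories."""
    matrix = {
        ("minor", "low"): "Glitch",
        ("minor", "high"): "Obstacle",
        ("major", "low"): "Surprise",
        ("major", "high"): "Disaster",
    }
    return matrix[(impact, likelihood)]

# Illustrative risks, echoing the examples in the post.
risks = [
    ("slow development hardware", "minor", "low"),
    ("defects in off-the-shelf package", "minor", "high"),
    ("key contributor leaves", "major", "low"),
    ("distributed team with nothing in common", "major", "high"),
]

for name, impact, likelihood in risks:
    print(f"{name}: {categorize(impact, likelihood)}")
```

The value of the exercise isn't the code, of course; it's forcing the team to place each known risk in a quadrant at the start of the project.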
Here’s an all-too-common scenario in agile software development projects. You’ve defined fixed-length sprints. They are 4, 6 or 8 weeks long. The team is well into a sprint when the product owner shows up and says something like “We have to add another story to this sprint. The business will reject the software unless we add this story.” What do you do?
Some will argue that stories cannot be added or swapped during a sprint. The business will have to wait. Others will argue that the sprint should be scrubbed and a new one defined. Again, the business will have to wait. Are either of these positions reasonable?
As is often the case, the answer is, “It depends.” If the sprint end date is not mission-critical, it may be reasonable to argue for adding the new story to the next sprint or scrubbing the current sprint and starting anew. In either case, it will be several additional weeks before the business sees the software they want.
However, if the sprint end date is mission-critical, the team has a problem — potentially a big problem. Delivering software that the business doesn’t want is foolish. It will be rejected and the team will look bad. Yes, even if the business folks and the product owner made a mistake in selecting or defining stories, the development team will still end up looking bad. Why? Because they were given notice of the error and they failed to take corrective action. That’s not agile.
In this context, the team has 5 options. None of them are great choices. Some of them may not be viable in particular situations.
1. Add the new story to the sprint backlog and extend the sprint by a few days to accommodate it.
Assuming the business is willing to tolerate a delay of a few days, this is the simplest option. It will disrupt the team’s rhythm but that’s manageable. There may be a desire to extend the sprint by exactly one week rather than a partial week. Don’t do that! Adding more stories to the sprint to fill out the week will only make matters more complex.
2. Swap an existing story in the sprint backlog for the new one and keep the sprint timeline intact.
This may not be feasible. There are inter-dependencies among stories that complicate this option. It may also be difficult to find another story that can be readily swapped with the new one. That said, it’s worth taking a look at this. Consider swapping more than one existing story for the new one. Also consider splitting one or more larger stories to free up story points for the new story. You may be able to shorten the sprint using these techniques. Be creative.
3. Add a team member to work on the new story.
If someone is available and has worked with the team before, this is a viable option. However, bringing in a total stranger will be more hassle than it’s worth. Too much time will be lost getting the newbie up to speed.
4. Take on some technical debt.
Admittedly, this is a poor choice but in the interest of getting all the options out in the open, it’s a possibility. The team may be able to cut a few corners without impacting the usefulness or stability of the software and still deliver on time. Let everyone know what you’re doing and also let them know that the debt will be paid off during the next sprint. Just remember that business people don’t understand technical debt and don’t want to pay it off.
5. Have the team work some extra hours.
I don’t like this option. It’s the least desirable of the five to me. Once you demonstrate a willingness to work longer days or extra days, the precedent is established and you’ll have a tough time going back to a ‘normal’ schedule. Don’t offer this option unless the business situation is dire and be clear that it’s a one-time offer meant to address a crisis situation.
Can you think of any other options? Let me know.
#NoEstimates. Is it the latest agile software development trend or meaningless hype? Your guess is as good as mine. It’s an interesting question that deserves closer examination.
The general premise is that software estimates are unreliable. They are usually overly optimistic and are often grossly inaccurate. So is it better to avoid all the work that goes into estimating?
To answer any complex question, we need to establish context. Let's start with a small team of 5-7 developers and testers. They are working on a relatively small, targeted software application. The marketing department has defined the minimum viable product (MVP). The primary goal is to deliver the MVP as soon as possible. In other words, get minimal functionality out there fast and add to it in future releases.
Does this effort need estimates? Not really. Marketing needs a rough idea of when alpha-level code will be available. Allowing a few weeks for alpha testing and a few more for beta testing provides a timeline. That may well be good enough. Why spend a lot of time estimating individual stories, adding up estimates, and tracking all of it? What value would that add?
Well, there are two answers to that last question. There is little or no added value in the estimates for the current project. However, there may be value in helping the team make future estimates (assuming they ever will).
Now we’ll take a look at a different context.
We have a small team of 7-9 developers and testers. They are working on an enterprise-scale software application. Again, marketing has defined the MVP but this time the MVP is big and complex. In other words, the first release has to be functionally complete in order to be competitive. Everyone understands that this effort will take several months to get to beta-level code.
Does this effort require estimates? If time and money are no object, why estimate? Just go for it. I don’t know about you, but I’ve never worked with an enterprise company that would start such a project without at least having epic-level estimates. Management needs some idea of what they’re getting into, how long it will take, and how much it will cost.
Also, in a lengthy project, early estimates help to fine-tune later estimates. The team should get better and better at estimating as time passes. It's not unusual to get 50% of the way through a project only to realize that 80% of the work remains. This happens when the early estimates were overly optimistic.
Should You Estimate?
That depends. Does your management team trust the software development team? If not, rolling with no estimates will never work. As soon as problems surface (and they inevitably will), the team’s lack of estimates will be blamed. A tough situation will cascade into turmoil.
If trust exists, you'll need to break up the work into small chunks — the smaller the better. (I'm a fan of one-week sprints and four-week releases. Admittedly, those time periods are tough for many to accept but that's a discussion for another time.) If the team can demonstrate the ability to deliver as promised early on, management is much more likely to accept that they will keep delivering as promised. Estimates will add little, if any, value.
Estimates are a lot of work. You have to derive them, track them and report them. #NoEstimates? It’s a goal worth shooting for but will be hard to achieve. Work on building trust first. Your odds of winning acceptance of new ideas will improve dramatically.
I’ve criticized metrics in this blog before and I’m doing it again. I’m really a fan of metrics when they’re applied properly and interpreted correctly. Unfortunately, metrics are often misapplied and misinterpreted, resulting in poor decision-making.
That said, let’s take a closer look at a popular Scrum metric — velocity. Here is the definition found at the Scrum Alliance:
“In Scrum, velocity is how much product backlog effort a team can handle in one sprint. This can be estimated by viewing previous sprints, assuming the team composition and sprint duration are kept constant. It can also be established on a sprint-by-sprint basis, using commitment-based planning.
Once established, velocity can be used to plan projects and forecast release and product completion dates.
How can velocity computations be meaningful when backlog item estimates are intentionally rough? The law of large numbers tends to average out the roughness of the estimates.”
Makes sense, right? Pay particular attention to the phrases “…assuming the team composition and sprint duration are kept constant” and “The law of large numbers tends to average out the roughness of the estimates.”
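The law-of-large-numbers point can be shown with a quick simulation. Everything here is an assumption for illustration (the Fibonacci-ish story sizes and the +/-50% error band are mine, not the Scrum Alliance's): individual story estimates are very rough, yet the sprint total lands much closer to the mark in relative terms.

```python
import random

random.seed(7)  # fixed seed so the run is repeatable

# Assume each story's true effort lands anywhere within +/-50% of its
# rough estimate -- a deliberately sloppy estimator.
def actual_effort(estimate):
    return estimate * random.uniform(0.5, 1.5)

estimates = [random.choice([1, 2, 3, 5, 8]) for _ in range(40)]
actuals = [actual_effort(e) for e in estimates]

total_est = sum(estimates)
total_act = sum(actuals)
print(f"estimated {total_est} points, actual {total_act:.1f}, "
      f"relative error {abs(total_act - total_est) / total_est:.1%}")
```

Run it a few times with different seeds: single stories routinely miss by a wide margin, but the over- and under-estimates largely cancel across the backlog.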
Scrum teams usually measure their software development effort in story points so velocity is the number of story points the team can finish during a sprint. You could use hours or any other measure you like. The key point is to develop a reliable way of sizing units of work (stories).
In a perfect world, every sprint would deliver the same number of story points. A velocity chart would then create a flat line as shown here:
This chart shows a steady velocity of 20 story points delivered per sprint. If a development team could achieve this, it would indicate that they are very good at sizing stories and very good at getting the stories to ‘done’ during each sprint. (It might also indicate that they are very good at gaming the system but let’s not go there.)
In the real world, velocity charts tend to look more like this:
The number of story points delivered varies — at times, it varies widely. There may even be sprints where the velocity drops to zero because the team encountered a major unforeseen impediment.
Does this mean velocity is a useless metric?
Yes, if your expectation is that velocity can predict the future. It can’t. Velocity can only tell you what has already happened. It offers a suggestion of what can be expected in the future — over several sprints — but it can’t tell us how many story points will be delivered in the next sprint.
In other words, if the team is averaging 20 story points per sprint over several sprints, they are most likely to deliver 100 story points over the next 5 sprints. Will they deliver 20 story points during the next sprint? Your guess is as good as mine.
Warning: I just made a huge assumption in stating that 100 points can be expected in the next 5 sprints. I assumed that the team would remain intact AND that the software environment would remain fairly static. If there’s a major change in personnel, technology or tools, velocity is likely to decline sharply before rebounding.
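To make that concrete, here's a sketch of the arithmetic with made-up velocities (including one zero sprint, per the impediment scenario above): the average supports only a rough multi-sprint forecast, while the sprint-to-sprint spread shows why any single sprint is a coin flip.

```python
from statistics import mean, stdev

# Hypothetical story points delivered per sprint, including one disaster.
velocities = [22, 18, 25, 0, 21, 24, 19, 23]

avg = mean(velocities)
spread = stdev(velocities)  # sprint-to-sprint variability

sprints_ahead = 5
forecast = avg * sprints_ahead

print(f"average velocity: {avg:.1f} points/sprint")
print(f"rough {sprints_ahead}-sprint forecast: {forecast:.0f} points")
print(f"single-sprint spread: +/-{spread:.0f} points")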
Bottom line: I like the velocity metric and I recommend you track it. Just don’t expect it to be a crystal ball.
I’ve read quite a few blog posts lately regarding software estimates. Should we estimate epics, stories and tasks or not? Are estimates useful to software development teams or are they a waste of time? Do estimates add value to the development process or are they inaccurate and misleading?
As is often the case, the answers depend on your context. For example, an experienced team working in a familiar environment will find it easy to accurately estimate new epics, stories and tasks. A less experienced team working in an unfamiliar environment will have a difficult time estimating anything reliably.
Are estimates worth the effort?
If a development team’s estimates are unreliable, is it worth the effort? Yes and no. It may be worthwhile as a learning experience if nothing else. The team needs to have some idea of how complex new stories are. However, it’s probably not worth it from a business planning perspective. Businesses need accurate information to operate successfully.
Let’s look at this from a different perspective. I think we can agree that small activities are easier to estimate than larger ones. For example, if I ask you how long it will take to add a confirmation dialog box to a web form, your answer will likely be very precise. If I ask you how long it will take to create a complex database query and display the results, your answer will likely be less precise.
Keep user stories small and simple.
So, perhaps the real issue isn’t whether to estimate or not. Perhaps we need to think about what’s being estimated and how the estimate is being delivered. If you want your estimates to be reliable, your best bet is to derive stories that are small. In fact, if the stories are small enough, why bother to estimate them at all?
I recommend getting your sprint backlog stories down to one or two days of work. If you have 10 stories, you’re in the range of 10-20 days with 15 being the most likely. Want to be more accurate? Ten stories that are each 4-8 hours of work amount to 5-10 days in total with 7.5 being most likely. Worst case? You’re a couple of days late rather than a week or more.
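The arithmetic above can be sketched in a couple of lines (story counts and sizes are the ones assumed in the paragraph):

```python
# Uniform story-size range -> total range and midpoint for a sprint backlog.
def sprint_range(n_stories, low_days, high_days):
    lo = n_stories * low_days
    hi = n_stories * high_days
    return lo, hi, (lo + hi) / 2

print(sprint_range(10, 1, 2))    # 1-2 day stories  → (10, 20, 15.0)
print(sprint_range(10, 0.5, 1))  # 4-8 hour stories → (5.0, 10, 7.5)
```

Halving the story size halves both the total and, more importantly, the absolute width of the range: a worst case of a couple of days late instead of a week or more.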
When user stories are kept small and simple, estimates add little value. Some will argue that estimates are needed to derive a final delivery date for the project. Not true! The business establishes the delivery date. The software development team does the best it can to complete as many stories as possible within the time allowed. That's how agile software development is supposed to work.
Crappy software. It's everywhere. There is far more poorly-designed software than well-designed software. So much more, in fact, that we are drowning in crapware. It gives all of us in the software business a bad reputation. I hate it.
You’ll find lots of information on how to build better software (just click here). However, tools and techniques can only get you so far. If your company is serious about building the best software in its market segment, change your attitude. Here are a few suggestions.
- Lower the stress level. I know there are occasional needs to crank it up and push hard to get a project done. However, in some companies, every project is like that. It’s wrong. It generates waste, rework and paranoia. Lighten up. I’ll bet software quality improves and productivity increases too.
- Cut back on the meetings. Everyone seems to hate meetings and yet they are ubiquitous. Just stop! Cancel them! Move on! Having a meeting must not be your de facto approach to dealing with problems. Meetings should only be held when there is a clear and pressing need. Try walking down the hall and having a one-on-one conversation instead.
- Reduce the formality. Lighten up. Relax. Make every day casual Friday. Even Steve Jobs wore jeans and a turtleneck. People should be able to speak freely. Having to carefully select every word and make every detail politically correct wastes time and stifles innovation.
- Don’t be a pointy-haired boss or a seagull manager. Managers and tech leads do not always know what’s best. That’s worth repeating — managers and tech leads do not always know what’s best! Assembling a collaborative team is the best way to determine the optimal solution to a problem.
- Make it fast. When developers have to wait for software to load or simple file copy operations to complete, their minds wander. They're more likely to pick up their smartphones and check Twitter and Facebook. Invest in high-performance systems. They're worth every penny. Then, identify and eliminate bottlenecks. See how much more gets done.
- Mix it up. Many companies like dedicated and highly-focused employees. I hope they also like high turnover because that’s what they’ll get. Cross-train and assemble multidisciplinary teams. Everyone will be more satisfied and productive.
- Offer developers multiple development environments and tools to choose from. Having everyone use identical configurations makes sense. They are easier to maintain and troubleshoot. But at what cost? Forcing developers to use tools they don’t like only slows them down and encourages them to find work-arounds. It’s not worth it. Embrace diversity.
- Upgrade. Some companies stay with what they know too long. They use a software package until they just can’t use it any more and are forced into upgrading. That’s dumb. Stay current. You don’t need to jump on every new release but you should upgrade at least once a year. (If the vendor isn’t producing new releases at least once a year, find another vendor!)
- Implement reasonable security constraints. Don't lock down your environment so tightly that even simple tasks are hard to do. Implement my simple productivity rule: any group that mandates a new process or procedure that reduces productivity must provide an offsetting productivity improvement. Let them wrestle with that mandate.
- Ask more open-ended questions. We all need to tell less and listen more. If you’re still wondering why your software is crap, go back to the top of this post and read it again.
I like to move fast. I like having the “first mover advantage”. I don’t believe that my team needs to be smarter than yours or better at what we do (though those attributes certainly help). If we can simply move faster, we’ll have the advantage and we’ll be more likely to succeed.
Regrettably, I work with many software managers who don’t see things that way. They believe that success depends upon staying in control. They want all activities to be planned, documented and approved. I get it, but that attitude builds a major barrier.
All that planning, documenting and approving takes time — lots of time. I’ve seen software development projects requiring less than a month of building and testing — that’s 3-4 weeks of writing code and thoroughly testing it. Yet, 6-8 additional weeks are spent planning, prioritizing, documenting, reviewing and approving. So a 3-4 week project takes 9-12 weeks!
You can’t lock down the team and still be agile.
Yeah, I know. I’m exaggerating a bit. We need a few days of planning and documenting even within agile development projects. So my 3-4 weeks is probably more like 4-5 weeks — fine. It’s still about half of the total calendar time actually expended.
I’ll also point out that not all of that 6-8 weeks of administrivia is work time. A significant portion of it is spent waiting — waiting for meetings, waiting for reviews, waiting for approvals, and waiting some more.
Warning: While your big enterprise is waiting, your small competitors are gaining first mover advantage. You may release a better software system but it could be too little, too late. You end up doing the right thing at the wrong time.
You might think that the solution lies in agile software development but it doesn’t — not really. The solution lies in your mindset, your behavior. Software development teams need guidance not controls. They need to be led not directed.
If you put your software developers in lockdown, you’ll spend vast amounts of time managing the lockdown and little time delivering results. If you give the software developers some direction and running room, you’ll spend more time evaluating results and little time issuing directives.
Will mistakes happen? Of course, they will. But the mistakes are more likely to be small, easy to spot and recoverable. The team will come out way ahead of where it would have been with a heavy-handed approach. Big design up front often results in big mistakes at the end.
Here are a few tips for releasing control and offering guidance instead.
- Keep deliverable cycles short. (Team deliverables should be no more than 4 weeks though I prefer 1-2 weeks. Individual deliverables should be no more than 5 days and 1-2 days is better).
- Use whatever metrics you like to track activities over time but mix them up. (Metrics are notoriously easy to game. Once people learn the system, they also learn how to use it to their advantage. Mix up your metrics.)
- Trash the long, fancy document templates. Reward people who can get to the point and convey an idea quickly.
- Offer samples. If there’s a document, code snippet, diagram, etc. that you really like, display it as a guideline and let the team try to improve upon it.
- Set high quality standards and don’t compromise. Set firm deadlines and hold them. Reduce scope to hit target dates and/or budget constraints.
Let the deliverables and the results drive the team’s progress. They’ll get more done in less time with better quality.
Some habits are good. Others are productivity killers.
Here’s one example. Someone schedules a meeting. That’s bad enough, right? And the person decides to make it a recurring meeting — every week for as far into the future as anyone can imagine. Then it gets worse. Week after week, zombies (sorry, I meant people) arrive at the meeting wondering why they have to attend and hoping it gets cancelled. It becomes a habit and it just goes on and on.
Here’s another example. Each week a report is generated and distributed to thousands (okay, maybe just a few dozen people). Some dutifully look at it each week while others simply ignore it. Few remember the original purpose for the report. They merely recall that they are supposed to look at it each week. Now it’s a habit.
Habits are hard to break.
Crazy, right? It is and it also happens all the time in companies around the world. People get into a habit — a recurring event — and they just do it. Those recurring meetings are particularly nasty. Often, attendees will multitask. They bring a laptop to the meeting so they can be doing something else. Or, they call in even though their offices are just down the hall.
Multitasking? Really? Why not just cancel the meeting? Call a meeting when needed and only if needed. Before calling a meeting, ask yourself if a one-on-one conversation is all that's required. You can inform everyone else via email. If you believe that every decision needs to be a group decision, don't bother; your project is doomed.
As for that weekly report distribution, kill it. Automate the process such that information is tracked and automatically presented in a dashboard for anyone who cares. Send out an email reminder if you must. (Pull people to the information. Don’t push the information to them! If the information has no pull, why bother?)
The curse strikes agile development teams too.
The same effect happens with daily stand-up meetings and retrospectives. They become so routine — boring — that no one pays attention. Teamwork deteriorates into individual contributors touting their efforts. Continuous improvement deteriorates into self-preservation. That’s the curse of recurrence and habit.
I’m a proponent of both daily stand-ups and retrospectives when they’re done right. You know the drill. There are plenty of references on this website and others on running proper stand-ups and retrospectives. Therein lies a hidden danger — the curse.
If those meetings become too prim and proper — too structured and routine — new and innovative thinking gets cast aside. Everyone learns the drill. They arrive. They participate as required. They leave — and get back to work. “Get back to work”, as if the meeting is not considered “work”.
- If the team shows up and everyone always stands or sits in the same place, you’re cursed.
- If everyone speaks in the same order every time, you’re cursed.
- If everyone uses the same words and phrases time after time, you’re cursed.
- If people are continually late for the meeting or distracted during it, you’re cursed.
- If the results of the meetings are always the same, you’re cursed.
Variety and variation will break the curse. Employ various techniques and styles to make the meetings interesting. Always start on time. Always end on time. Mix up the agenda and the format. Have different people run the meetings each time. Ask unexpected questions. Present a problem and brainstorm solutions. Watch the attendees. If any of them look bored, ask what can be done to improve the sessions. Take the advice. Experiment.
Stay agile. Don’t let the curse of recurrence and habit destroy your productivity.
I don’t know about you but I hate to fail. I’m not kidding. I REALLY hate failure. And that is one of the scariest aspects of agile software development. To succeed, you have to be willing to tolerate failure. In fact, if you’re not experiencing any failures, you’re not pushing yourself or your team hard enough.
Understand that I’m not referring to monumental failures. No one wants to see their million dollar product launch fail. (I don’t even want to see my $1,000 project fail.) That’s why agile principles encourage us to fail fast. Those quick failures tend to be small and relatively inexpensive. They are learning opportunities that help us avoid the big failures — the catastrophes.
Take a chance. Try an experiment. If you're unsure about an approach or a technique, give it a try. Run a test. If it fails, you haven't invested much time or money, so there's little waste and no harm. It's a lot better than spending days writing, reviewing and revising documents hoping that you haven't missed anything.
Use Real Data
One area where you have to be extra vigilant lies in the data set used for your tests. You may get a favorable trial result using limited or artificial data inputs. When the software is deployed to a production environment with 1,000 times more data and many more boundary conditions, what worked in development may crash and burn.
I’ve seen this happen many times over the years. Software that responds instantly in development slows to a crawl in production. Or the software handles the artificial data set cleanly but when presented with real business data, numerous failure points are exposed.
Develop and test with real business data if you can. If time and money don’t allow for replicating the business data in development, do a trial production deployment. Be absolutely certain that you can back out whatever changes you make and restore the prior state quickly. It’s also a good idea to instrument the software. In other words, add extra logging and tracking functions so that you have good information for analysis in the event the trial fails.
If things go smoothly, you can re-deploy without the logging and tracking functions or, better still, simply turn off the logging via a configuration parameter.
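Here is a minimal sketch of that instrumentation idea, assuming a simple environment-variable switch. The `TRACE_ENABLED` flag and the `trace()` helper are invented names for illustration, not a standard; the point is that the extra logging can be turned off via configuration without touching the code.

```python
import logging
import os

# Illustrative config switch: set TRACE_ENABLED=1 during the trial deployment.
TRACE_ENABLED = os.environ.get("TRACE_ENABLED", "0") == "1"

logging.basicConfig(level=logging.DEBUG if TRACE_ENABLED else logging.INFO)
log = logging.getLogger("trial")

def trace(msg, **fields):
    """Record detailed state only while trial instrumentation is on."""
    if TRACE_ENABLED:
        log.debug("%s %s", msg, fields)

def process_order(order_id, amount):
    """A stand-in business function wrapped with trace points."""
    trace("processing order", order_id=order_id, amount=amount)
    total = round(amount * 1.08, 2)  # hypothetical tax calculation
    trace("processed order", order_id=order_id, total=total)
    return total

print(process_order(42, 10.00))
```

If the trial fails, the trace log gives you the analysis data; if it succeeds, leaving `TRACE_ENABLED` unset disables the extra output with no redeployment.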
Failure doesn’t have to threaten your career or bankrupt the company. Take prudent risks. When things don’t turn out as you hoped, learn, assess and try again. That’s agile!
In this post, I'm sharing with you a technique for writing a software module. I'm assuming that we are implementing a user story and that the acceptance criteria are clear. The story has been divided into a set of tasks and the task durations are known.
It’s an approach that works well for me though your mileage may vary. I strongly urge you to refine and tune the approach to your needs and most importantly, to your work style.
If you’ve ever had to write a report, proposal or even a specification, you may have heard advice like the following:
1. Establish a time interval, say 10-15 minutes, and set a timer.
2. Work on the outline.
3. Get your major thoughts written down.
4. Don't think too much, don't analyze, just write.
5. If you get stuck on an item, skip it.
6. When the timer expires, go back and organize what you wrote into sections or chapters.
7. Reset your timer for another 10-15 minutes.
8. Write a section until the timer expires.
9. Forget about grammar and spelling, just write.
10. Go back to step 7 and repeat for each section.
11. Now go back and do some editing for grammar and spelling.
12. Make additional passes as needed until the document is complete.
What does this have to do with writing software?
Of course, writing software, in any computer language, is more complex than writing a report, proposal or specification. It’s not necessarily more difficult. It’s more disciplined. There are many more rules to follow and external dependencies to worry about.
Despite the added complexity, you can follow a structured approach like the one above. Let’s say you’re implementing a user story. You’ve likely broken it down into a set of tasks. You should have a rough idea of how long each task will take. Many developers will approach the story implementation serially. That is, they’ll work through the task list sequentially.
I do it differently. I iterate over the list of tasks. Here’s a basic approach that you should customize for your situation.
1. Establish a time interval, say 25% of the total estimated story implementation time, and set a timer.
2. Create a skeleton of the code (there may be 2 skeletons — one for test code and one for the story).
3. Get the major sections of the code defined.
4. Don't think too much, don't analyze, just create the structure.
5. If you get stuck on a section, skip it.
6. When the timer expires, go back and review what you wrote. Note any areas that need more thought.
7. Reset your timer for 25% of the story time divided by the number of code sections you have.
8. Write a section of code until the timer expires.
9. Forget about covering every detail (e.g. error conditions), just write.
10. Go back to step 7 and repeat for each section.
11. Now go back and do some editing, testing and debugging.
12. Make additional passes as needed until the code is complete.
The approach will vary depending on the situation. For example, if I’m unsure about how something might work or if it can even be done in the manner I’m thinking, I’ll make a first pass targeting the area I have questions about. Once I get that ironed out, I’ll build out the rest of the code as described above.
There you have it — a simple iterative technique for writing software modules. Give it a try.