I've seen a lot of Scrums over the years: good Scrums, bad Scrums, and everything in between. Sometimes Scrum is a productive, transformative machine; in other cases, Scrum simply means walking through the meeting mechanics, following the basic rules, and attempting to incorporate the basic Scrum roles.
I believe the secret is to embrace Scrum’s focus on quality. Too many development groups are doing Scrum without baking quality into their products from requirements to deployment. The results are faster development of the "wrong product" and an accumulation of unmaintainable code, which ultimately leads to reduced velocity as technical debt builds up. Conversely, a quality product addresses market needs, and the ease of maintainability means that velocity isn't weighed down by cumbersome code.
So how do you bake quality into your product? While that's too broad a topic to cover here in detail, I've found one commonly overlooked practice that makes an excellent first step:
To succeed with Scrum, you must clearly define what you mean by "done."
I'm referring to user stories or product backlog items in sprints. Before a team sets out to work on user stories in a sprint, the team and Product Owner must have a clear agreement on what it means for each work item to be accepted as "done." This agreement is pivotal to Scrum: without it, you will find only frustration and miscommunication.
A user story should set the stage and provide some context, but the definition of "done" in a user story is the meat of the contract between the development team and the Product Owner. I usually structure product backlog items as follows:
Definition of Done:
So why is this so hard to get right? You can go wrong by going overboard in either direction. Provide too little detail and you end up with sprint review meetings where expectations are not met or the wrong product is built. Give too much detail and you'll struggle to get started because your user stories are over-analyzed.
Let's start with the first case: too little definition. This one is pretty obvious: if there's any room for interpretation, you can bet that the team and Product Owner will have diverging conceptions of what "done" means. The team has a natural incentive to minimize the scope of product backlog items, while the Product Owner typically wants as much built as possible. These opposing incentives lead to mismatched expectations about what qualifies as "done."
Returning to our example above, here's an under-developed definition of "done:"
See why this is a recipe for disaster? It leaves out too many details around the verification of a user's account before resetting the password. Unless your team is particularly gracious and/or motivated, you'll likely see a sprint review demo without email address verification. Perhaps that's what was agreed upon, but likely there will be a lot of head-scratching at the review meeting. Maybe you're excited that your team built anything at all, but that's another story...
Now let's consider the opposite problem. Perhaps you're an analytical go-getter of a Product Owner. That's good. But it can be counter-productive, and detrimental to the Scrum framework, if you go over the top when you define "done." Here's an example:
You may be thinking that all this detail is actually rather good. The reality is that you will not get through sprint planning in half a day if you've pre-written all your product backlog items at this level of detail. You'll get differing opinions and discussion on every point, as your team will often provide insights for improvement that negate much of the detail you spec'd out in advance. Remember that in Scrum, detailed requirements analysis happens during the sprint, not before and not during sprint planning.
Another issue with the above example is that it spells out details like the database schema, which are meant to be left to the team, since product backlog items define "what" is to be built, not "how." That is to say, let your team handle detailed requirements during the sprint and let your technical folks handle implementation details.
So what is just right? You need enough detail to get exactly what you want, without going so far as to specify implementation details. In our example, here are the main things I want to see in the finished product at sprint review:
Notice that I'm not implying a UI. It's generic enough to allow for detailed UI specification later, during the sprint, while ensuring I get what I need.
Again notice that I don't specify implementation details, only my purpose.
Here again I've avoided lengthy and detailed specifications regarding the exact implementation, but I've made the security protocol clear nevertheless.
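One nice property of "done" criteria pitched at this level is that they can be checked mechanically once the team builds the feature. The following is a minimal, hypothetical sketch, not anything from the article: a toy in-memory password-reset service (the class name, methods, and criteria wording are all illustrative assumptions) with the agreed behavior expressed as plain assertions. The point is that each assertion maps to one "done" criterion, while the service internals remain the team's business.

```python
# Hypothetical sketch: "done" criteria for a password-reset story, expressed
# as acceptance checks against a toy service. All names here are illustrative
# assumptions, not taken from the article.
import secrets


class PasswordResetService:
    """Toy in-memory stand-in for whatever the team actually builds."""

    def __init__(self, registered_emails):
        self.registered = set(registered_emails)
        self.tokens = {}   # token -> email, pending resets
        self.outbox = []   # (email, token) pairs "sent" by email

    def request_reset(self, email):
        # Criterion: only registered (verified) accounts get a reset link.
        if email not in self.registered:
            return False
        token = secrets.token_urlsafe(16)  # unguessable, single-use token
        self.tokens[token] = email
        self.outbox.append((email, token))
        return True

    def reset_password(self, token, new_password):
        # Criterion: the emailed link works exactly once.
        email = self.tokens.pop(token, None)
        return email is not None


svc = PasswordResetService({"alice@example.com"})

# Unregistered addresses are rejected and nothing is emailed.
assert not svc.request_reset("mallory@example.com")
assert svc.outbox == []

# A registered address receives a reset link...
assert svc.request_reset("alice@example.com")
email, token = svc.outbox[0]

# ...which can be used once, and only once.
assert svc.reset_password(token, "n3w-p4ss")
assert not svc.reset_password(token, "n3w-p4ss")
```

Notice the checks say nothing about schemas, endpoints, or UI; they pin down only the observable behavior the Product Owner and team agreed on.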
It takes a bit of practice to find the sweet spot with "done" criteria. Once you have it down, you'll see an immediate impact as your Scrum teams gel around clear, actionable goals.
There's another important dimension to being "done." Explicitly agreeing on the software's functionality is one thing, but what about the non-functional requirements? How do you ensure that the team isn't cutting corners on quality or taking on technical debt to make it happen? I'll answer those questions in a follow-up post, so be sure to check back soon.
Technical Debt and Design Death
INVEST and SMART User Stories
How Do We Know We Are Done?