At last week's SPA conference, Paul Dyson and I ran a workshop on planning non-functional requirements in agile projects. Here is a personal account.
Agile projects measure progress in terms of the business value realized. This is a huge leap forward from the practice of implementing a technical infrastructure first and subsequently layering business logic on top. Since commercial stakeholders are the best judges of value, prioritizing and planning feature implementation is their prerogative. But has the pendulum swung too far? Amid the abhorrence of 'Big Design Upfront', architecture has been discredited. As a result, many agile projects struggle to plan technical work that apparently adds no value, yet must be performed for long-term system viability.

Many techniques have been tried to reconcile planning according to business value with the need to maintain and improve infrastructure, including reserving a portion of the development effort for 'technical work' and hence not giving commercial planners access to the complete team's time budget, or even planning entire iterations around technical improvements without any demonstrable business value. These are, at best, a fudge and detract from the original intuition behind agile planning. Hence, in this workshop we aimed to grasp the nettle and look for a more fundamental resolution of the tension between the desire to create value and the need to provide a solid technical infrastructure.
The question is not whether and when to let economics guide planning as opposed to technical considerations. In the end, economics always wins. The problem, in my view, is that we tend to see the value of a new feature, but not its cost. By cost, I do not mean the effort we need to invest in implementing the feature, but rather the cost of the nightmare scenarios that may unfold once the system offers some new functionality. I developed this approach in the context of my work in secure development (see my paper on agile security requirements): it is easy to see how a successful application creates an attractive target for malicious users. For example, an online casino is likely to attract players who cheat. One nightmare scenario, therefore, is that an attacker is able to consistently beat the bank. If this were to happen, it would clearly be costly. The impact of such a successful attack must, however, be tempered by its likelihood. The cost we should take into consideration is the loss incurred by a successful attack times the probability of its occurrence. This calculation is at the heart of the insurance industry and has kept it very profitable.
The cost of nightmare scenarios should be taken into account when optimizing the business value in an iteration plan.
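The risk calculation above can be sketched in a few lines of Python. The figures are purely illustrative assumptions, not numbers from the workshop:

```python
def expected_cost(loss, probability):
    """Risk-adjusted cost of a nightmare scenario:
    the loss incurred if it occurs times the probability that it does."""
    return loss * probability

# Hypothetical example: an attacker consistently beats the bank
# at an online casino. A large loss, but a fairly unlikely one.
beat_the_bank = expected_cost(loss=1_000_000, probability=0.02)
print(beat_the_bank)  # 20000.0
```

The point of the exercise is that a spectacular loss with a small probability can weigh the same in the plan as a modest loss that is almost certain, which is exactly how an insurer would price them.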
If this approach works for security requirements, it is but a small step to apply it to other non-functional requirements. If it works for the wicked, it is likely to work for the incompetent. If it tells us how much effort to expend on fending off Denial of Service attacks, it should be solid enough to decide how much attention performance issues deserve.
We confronted a planning technique that makes nightmare scenarios explicit with a more traditional approach that only makes use of user stories, in a simulation with four teams. Two teams wrote explicit nightmare scenarios and estimated them for cost and effort alongside the user stories. The other two teams took a more traditional approach and only produced user stories. However, they did not try to fudge the non-functional requirements, but rather factored them into existing user stories as acceptance criteria, or wrote new user stories to capture them. In the former case, the cost of things going wrong is replaced by an increased implementation estimate as additional acceptance criteria must be satisfied. In the latter, customers are asked what value they attach to a non-functional requirement being met. This can work well: a business person should be able to assess what value it brings to be able to serve 100 versus 10 concurrent users.
The 'everything is a user story' approach definitely has the advantage of simplicity. As one writes nightmare scenarios, it soon becomes apparent that pessimism is prolific and contagious. Nightmare scenarios quickly outnumber the user stories and all looks bleak. The optimization problem becomes intractable as the set of nightmare scenarios that an iteration plan should take into account depends on the mix of user stories to be implemented. Therefore, for those non-functional requirements amenable to the user story approach, this seems to be the way to go. For the others, ignore them, unless they prove to be particularly costly. In that case, tracking them explicitly is probably wise.
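To see why the story mix matters, consider a brute-force sketch of the planning problem. All stories, efforts, values, and nightmare costs below are invented for illustration; each nightmare scenario only applies when every story that triggers it is in the plan:

```python
from itertools import combinations

# Hypothetical stories: name -> (business value, effort estimate).
stories = {"pay-out": (50, 3), "chat": (20, 2), "leaderboard": (15, 2)}

# Hypothetical nightmares: (set of triggering stories, expected cost),
# where expected cost is loss times probability as discussed above.
nightmares = [({"pay-out"}, 10), ({"pay-out", "chat"}, 25)]

def plan_value(plan):
    """Net value of a plan: story value minus triggered nightmare costs."""
    value = sum(stories[s][0] for s in plan)
    risk = sum(cost for trigger, cost in nightmares if trigger <= plan)
    return value - risk

def best_plan(budget):
    """Exhaustively search story subsets that fit the effort budget."""
    best, best_value = frozenset(), 0
    for r in range(1, len(stories) + 1):
        for combo in combinations(stories, r):
            plan = frozenset(combo)
            if sum(stories[s][1] for s in plan) <= budget:
                value = plan_value(plan)
                if value > best_value:
                    best, best_value = plan, value
    return best, best_value

print(best_plan(budget=5))  # (frozenset({'pay-out', 'leaderboard'}), 55)
```

With a budget of 5, pairing 'pay-out' with 'chat' triggers an extra nightmare and scores worse than pairing it with 'leaderboard', even though 'chat' has the higher standalone value. This coupling between stories and nightmares is what makes the full optimization intractable at scale.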
In the second part of the workshop, there was an open discussion intended to mine the participants' experience.
The point was made that, like functional requirements, non-functional requirements need failing tests. Without failing tests, it is all too easy to get stuck in a mire of '-ilities' that lack precision and cannot be validated. Unfortunately, tests for non-functional requirements are substantially harder to write and perform than functional tests. One of the challenges is running the tests in an environment that sufficiently resembles the target to yield significant results. Scalability, for example, is difficult to test for unless the test environment makes the same provision for load balancing as the production environment. Hence, projects increasingly make use of a staging environment on the production servers.
Many non-functional requirements are orthogonal to user stories. This is an impediment to planning their implementation as part of the user stories.
Like functional requirements, non-functional requirements deserve to be revisited at each iteration. It may be comforting to think that, if you get it wrong, you get a second crack of the whip. On the other hand, you are never really done, since new non-functional requirements may emerge throughout the duration of the project and old requirements that were initially deemed of secondary importance may take on an increased significance.
What does it mean when a customer wants the system to be maintainable? Someone put it like this: could you work faster and not charge as much? This is an understandable sentiment which we try to accommodate in the agile community, but it is hardly a testable requirement. Or is it? One suggestion was to measure maintainability by the velocity of the project. In my opinion this is tricky, since the capabilities of the team also influence velocity; for example, as a team gains confidence, velocity goes up, and when people take holidays, velocity goes down.
This was my take on the workshop. If you were there and feel I left something important out, or misrepresented some of the discussion, please leave a comment. In any case, I would like to hear from you if you have any views on how to treat non-functional requirements, particularly in agile projects.
|Johan Peeters is an independent software architect who spends a lot of time plumbing and generally fixing leaks.|