
Balancing Completion and Perfection in Agile/Scrum Projects

The fundamentals of collaboration

I occasionally get questions from clients who are using Agile and Scrum frameworks for software development.

Techniques and tools aside, it is often the fundamentals of collaboration that seem to be getting in the team's way.

Hi Gordon:

One thing you enlightened me on was the peril of raising defects instead of recognizing that a story is simply not done and pulling it back to "in progress."

I'm having trouble convincing some teammates of the simplicity of that concept. Oddly enough, logic like "If it's not done, don't report it as done" does not seem to be winning the argument for me. (Logic is overrated, I find.)

Thanks and best regards,

Tom C.

Unlike most of the rest of the universe, the concept of a piece of software being "done" will always be the crossing of an imaginary line with invisible endpoints. Most project sponsors and product managers think of bugs simply as features that don't work - features that form part of the value of the software they are acquiring. But I think it is possible to take one of several logical positions regarding "doneness" versus "bugginess":

The Client: "I paid for a working system; now that I see bugs, I have to wonder what I haven't found yet."

The Project Manager: "It has met all the requirements that have been objectively measured; there will only be additional work if the budget and schedule allow it."

The Product Manager: "While there are variances from the original requirements, the cost and delay of converging will erode the overall value of the program beyond the value of compliance."

The perfectionist developer: "Done is for quitters!"

The third party tester: "Oh please send me your bugs! The more trivial and obvious, the more money I make and the better I look for finding them!"

I suspect that the pushback regarding doneness means that the developers are aligning themselves with the needs and goals of the project manager's perspective. That makes sense - that is the conduit through which the project's goals and drivers are most directly communicated. But by concerning themselves with overarching delivery dates and the broad scope of the program, they are losing sight of some cardinal Agile Manifesto principles, including the insistence on valuing quality and consistency over feature scope, and the principle that every developer (or XP coding pair) should take full responsibility for their output. Perhaps worse, and more difficult to turn around without high-level management input, would be a perception that they are valued for the amount of work they do rather than the quality of the result.

When a story is submitted for testing, or accepted by the client managers as plausibly working, I am not sure how a developer could be confident that it is truly "done." It may have been integrated for bench or unit testing to some degree, but components of enterprise systems have to stand up to stress testing, weird data cases, and other scenarios that the developer may not have been able to anticipate during development. It does no good to moan about "inadequate acceptance test criteria" and such, because that is just how the production system will be experienced by the clients, too. A module of a larger system, despite the rigors of testing available to developers, just can't be considered "done" until its full context is understood and locked down. Thinking in user story terms makes this situation more extreme; implementing a user story can involve tinkering with any number of code modules, configuration files, etc. Unless the developers have a masochistic desire to start documenting the implementation in a most un-agile way, it can't be good for overall quality to expect a downstream developer to track down all the changes that led to the defect.

In truth, the status labels should be seen simply as triggering mechanisms that clarify who has the next responsibility to do something, not as goals unto themselves. This is not a rejection of the utility of the burndown as a project management concept, but of the idea that having dozens of modules marked "done, but not tested" or "done, but with bugs" represents measurable progress on a project. If the developers see themselves as production-line workers punching a clock for standard quanta of productive output, "done" signifies that they have done all they can under the circumstances. If they are owners of the code they write, it can only mean that they are attesting to its quality on behalf of the whole project team.
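If it helps to make that reading of the labels concrete, here is a minimal sketch of a status-as-trigger lookup. The statuses and roles are assumptions invented for the example, not a prescribed workflow:

    # A minimal sketch of the idea that a status label only points at whoever
    # owes the next action. Statuses and roles are illustrative assumptions.
    NEXT_RESPONSIBLE = {
        "in progress": "the developer (or XP pair) who owns the story",
        "ready for testing": "the testing/integration team",
        "failed testing": "the original developer - pull the story back to in progress",
        "done": "no one - the team has attested to the story's quality",
    }

    def who_acts_next(status: str) -> str:
        """Treat the status purely as a trigger, never as a goal in itself."""
        return NEXT_RESPONSIBLE.get(status, "the whole team - clarify the status first")

    print(who_acts_next("failed testing"))

The point of the exercise is that every label answers the question "who acts next?"; a label that doesn't is just a scoreboard entry.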

Taking an orthodox Agile approach sounds easy in this case, but there are always complications and opportunities. What about the case where a dedicated SWAT* team tracks down and neutralizes the defects instead of sending the code back to the original developer immediately? Does that make the status of the code "in progress," "done, but working on it," or something else? The efficiency of that approach depends on the nature of the bugs found by the testing/integration team and on the ability of the SWAT team to traverse the full extent of the code and find the root causes of the defects.

As the ScrumMaster, I would recommend taking a close look at two important Agile processes in your environment: running sprint retrospectives and setting expected contributions for each team member within the sprint. Of course, these are the two Scrum components that more closely resemble mystical art than management science, so they will always be a work in progress.

The orthodox method of Scrum estimating involves reaching a consensus on the relative work effort by having the whole team weigh in on what they believe the story will entail. On many teams this is routinely deferred to the team member who is either the SME on the story in question or the person most likely to do the coding, which generally yields acceptable results on small, stable, experienced teams. The way this process is meant to add value is by linking it to a virtuous cycle that continually examines the quality of the estimating process itself during the retrospective. If you ask "What's the correlation between story points and the number of hours expended?" and discover a perfect one-to-one correlation, it is possible that you have simply substituted one label for another (tsk-tsk-tsk!). If there is a markedly weak correlation, there are other factors in play that are making developers feel "pushed" to report stories as done before they are fully implemented, such as stories that are inherently too large for the sprint or that have special testing conditions. At the end of the day, your team's ability to estimate individual stories should show gradual improvement over the life of the project; disputes over "doneness" point to a vulnerability in that area.
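If you want to make that retrospective question concrete, a rough sketch along these lines will do it. The story-point and hour figures below are hypothetical; in practice you would pull them from whatever tracking tool your team uses at the end of each sprint:

    # A minimal sketch of the retrospective correlation check described above.
    # The data is made up for illustration.
    from math import sqrt

    story_points = [1, 2, 3, 5, 8, 5, 3, 2]       # estimates agreed at planning
    actual_hours = [4, 9, 11, 22, 30, 18, 14, 7]  # hours logged against each story

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    r = pearson(story_points, actual_hours)
    if r > 0.98:
        print(f"r = {r:.2f}: points may just be relabelled hours")
    elif r < 0.5:
        print(f"r = {r:.2f}: something besides effort is driving 'done'")
    else:
        print(f"r = {r:.2f}: estimates track effort without mirroring it")

The exact thresholds are a judgment call for your team; the value is in asking the question every sprint, not in the number itself.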

Finally, there is the matter of handling the individual coding and collaboration styles that can make or break a Scrum project. I would ask whether you have some "code hoarders" who regard themselves as uniquely capable of implementing the stories they have been assigned; if their code is routinely marked "done" but continues to suffer defects at testing time, the stories themselves are probably too large, reducing the effective transparency of the development process. You could create a blanket rule like: "We will not have individual stories that consume more than half of the available points for a single coder in a sprint." Sure, it's a numbers game; the actual number of coding hours probably won't change in the short run, but the rule will allow overruns and conservative estimates to be detected at the sprint level, before the entire project has begun to suffer delivery challenges.
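A rule like that is easy to check mechanically at planning time. The sketch below shows one hypothetical way to do it; the per-coder capacity and the backlog entries are illustrative assumptions, not figures from any real project:

    # A hedged sketch of the blanket rule suggested above: flag any story whose
    # estimate exceeds half of one coder's point capacity for the sprint.
    POINTS_PER_CODER_PER_SPRINT = 10  # assumed individual capacity

    sprint_backlog = [
        {"story": "Import legacy accounts", "points": 8},
        {"story": "Add audit-log export", "points": 3},
        {"story": "Fix date rollover bug", "points": 2},
    ]

    limit = POINTS_PER_CODER_PER_SPRINT / 2
    for item in sprint_backlog:
        if item["points"] > limit:
            print(f"Split '{item['story']}' ({item['points']} pts): "
                  f"it exceeds the {limit:.0f}-point ceiling for one coder")

Run against planning data like this, the check surfaces oversized stories before the sprint starts, which is exactly when splitting them is still cheap.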

(* SWAT: Software Was Actually Tested)

More Stories By Gordon Hay

Gordon Hay is an IT strategy specialist at PA Consulting Group with a long-term focus on aligning IT organizations to the enterprise strategy. He has developed and implemented both technical and organizational change programs for global corporations, government organizations and technically focused project teams. Gordon has led projects to adapt frameworks and methodologies to specific needs in software development, IT operations and portfolio management. His expertise also includes IT process transformation and sourcing in government, energy and retail industries, and selecting and integrating ERP systems and custom solutions in complex, high-transaction environments.
