Software Development
July 2002

Duking It Out

A recent IEEE Computer article by Dr. Barry Boehm questions the limits of agile software development, spurring a spirited—and contentious—debate in an April electronic workshop.

By Scott W. Ambler

Chances are that your organization is seriously looking at one or more agile methodologies such as the Dynamic Systems Development Method (DSDM), Scrum, Extreme Programming (XP), Feature-Driven Development (FDD) and Agile Modeling (AM). Some have been around for a while—both DSDM and Scrum date to the mid-1990s—whereas others, such as XP and AM, are relatively new. Many misconceptions persist, and even industry gurus are still struggling with the fundamentals of agility, which is problematic for developers trying to learn these new techniques.

In the January 2002 issue of IEEE Computer (www.computer.org/computer/), Dr. Barry Boehm, known for his spiral software development lifecycle and the COCOMO II estimating technique, published the paper "Get Ready for Agile Methods, with Care." Despite Boehm's decades of experience in software development, the article included several misconceptions—and worse yet, the folks at IEEE Computer, not exactly slouches themselves, didn't notice. For example, he mistakenly implied that XP entails little more planning than hacking efforts; in reality, XP teams spend upward of 20 percent of their time on planning. Needless to say, the agile community had a few issues with Dr. Boehm's paper and with similar misconceptions about agility, so an electronic workshop was held on April 8, bringing together Dr. Boehm and 21 other software methodologists.

Team and Subteam
The first discussion point was "Agile development works better for smaller teams; for example, refactoring can be done only with small systems and great developers." The agile developers agreed that agile development can and does work well for teams of 20 to 30 people, but that with larger groups, the inherent communication problems must be dealt with effectively. Significant discussion centered on organizing a larger team into subteams, with a "core team" focused on developing and evolving a common architecture. This technique, which I promote at www.agilemodeling.com/essays/agileArchitecture.htm, is a divide-and-conquer strategy familiar in non-agile approaches as well.

Alistair Cockburn, author of Agile Software Development (Addison-Wesley, 2002), described OOPSLA panel leader Ron Crocker's "Grizzly" method, based on Crocker's experience with two large Motorola projects that used three international, cross-project subteams. Representatives of the subteams met regularly to keep in sync, and this focus on communication enabled the teams to work together successfully. Technically, the teams used simulators and stubs so that they could work to a common architecture. Crocker's book is forthcoming; in the meantime, a paper posted at www.xp2001.org/xp2001/conference/papers/Chapter15-Crocker.pdf describes his findings.
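
To make the stub-and-simulator technique concrete, here's a minimal sketch in Java. The names (CallRouter, StubCallRouter, the canned "TRUNK-0" answer) are hypothetical illustrations, not code from the Motorola projects: a stub honors the contract of a component another subteam is still building, so dependent teams can develop and test against the common architecture in parallel.

    // A shared interface owned by the core architecture team; subteams
    // code against it before the real implementation exists.
    // (In Java, each public type would live in its own source file.)
    public interface CallRouter {
        // Routes a call for the given subscriber and returns the chosen trunk.
        String route(String subscriberId);
    }

    // A stub standing in for another subteam's component: canned behavior
    // that honors the interface contract.
    public class StubCallRouter implements CallRouter {
        public String route(String subscriberId) {
            return "TRUNK-0"; // canned answer; the real router picks a live trunk
        }
    }

When the real router arrives, it replaces the stub without disturbing any code written against the interface.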

Boehm's assertion that only great developers can succeed at agile development wasn't well received. The general consensus was that a few skilled developers with mentoring ability can carry a team whose remaining members are "normal" developers. This is true of any development effort: at least a few people need to know what they're doing for it to have any hope of success.

Where's the Architecture?
Next, we talked about the statement, "Agility's emphasis is on designing for current needs, not for future ones. Therefore, agile methods work best when the future is unknown, and are less than optimal for projects in which future requirements are known." The group quickly agreed that agile software development techniques work well when the requirements evolve to reflect changes in your environment and your changing understanding of the system.

Xerox's Peter Hantos was concerned about refactoring at the architecture level, stating: "If any requirement pops up down the line that questions any aspects of the architecture, refactoring won't work, or will take forever. I conducted risk assessments on many major programs, and most of them failed because even though the teams knew about performance, exception handling, requirements and so on, they chose to deal with them at a very late stage of the game." Alistair Cockburn responded that poor risk management can happen to any project, indicating that he respects architecture but also expects it to evolve. He noted that agility doesn't mean addressing architectural issues belatedly—that's why project sponsors hire a competent and experienced lead designer in the first place. Ken Auer also disagreed with Peter, describing a project in which he and his team had to "rip out the guts" at a late stage due to an unforeseen change, and claimed that a non-agile team couldn't have responded as quickly. Bil Kleb stated that a set of automated tests is a good safety net when taking an emergent approach to design, claiming that big architecture changes are no longer scary and no longer require much effort.
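
Kleb's safety net is easy to illustrate. Below is a minimal sketch using JUnit; the Account class is hypothetical and nested in the test to keep the example self-contained. As long as tests like this pass, the design underneath them, even at the architecture level, can change aggressively.

    import junit.framework.TestCase;

    // A behavioral test that acts as a safety net for emergent design:
    // it pins down what the code must do, not how the code does it.
    public class AccountTest extends TestCase {

        // Hypothetical class under test, nested for self-containment.
        static class Account {
            private long cents;
            Account(long cents) { this.cents = cents; }
            void transferTo(Account other, long amount) {
                cents -= amount;
                other.cents += amount;
            }
            long balance() { return cents; }
        }

        public void testTransferMovesFunds() {
            Account from = new Account(10000);
            Account to = new Account(0);
            from.transferTo(to, 2500);
            assertEquals(7500, from.balance());
            assertEquals(2500, to.balance());
        }
    }

Rip out and replace the internals, or the subsystem behind them, and a green bar tells you within seconds whether the behavior survived.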

I argued that when you do big design up front (BDUF), you're likely to be more locked into the resulting architecture. Dr. Boehm claimed that if you do BDUF to a snapshot set of requirements, you'll usually get a hard-to-evolve point solution, but that if evolution requirements are well defined, stable and reasonably accurate, the resultant architecture will accommodate future evolution.

Getting the Right Fit
Boehm's comment, "Agile methods fit applications that can be built quickly and don't require extensive quality assurance. Agile methods work less well for critical, reliable and safe systems," provoked the next discussion point. This misperception arose partly from the fact that most agile techniques eschew traditional walkthroughs and inspections in favor of greater participation earlier in the lifecycle—and partly from quoting me out of context. An agile process such as XP doesn't require code inspections because of its strict adherence to coding standards and its practices of collective ownership and pair programming, which put many eyes on the source code as development progresses. On the surface, XP appears to lack quality assurance, yet in truth, it has a stronger quality assurance focus than the vast majority of prescriptive processes. In his article, Dr. Boehm quoted me as saying that I would be leery of applying AM on life-critical projects, but omitted my reason: I hadn't yet applied AM in such a situation and therefore couldn't fairly recommend it for one.

For some participants, the issue of using an agile methodology for critical systems depends on how the criticality, reliability and safety requirements are handled. Ken Auer indicated that these issues should be identified as such at the project's inception so that agile developers can ensure the appropriate level of testing and emphasis, and noted that non-agile methods will also collapse if they don't identify these issues as requirements.

Dr. Boehm agreed with the participants on this point, but expressed his concern that the Agile Manifesto emphasizes responding to change over following a plan, noting that many crashes of satellites, telephone systems and other crucial systems result from departing from plan to follow an impulse. Kent Beck, author of Extreme Programming Explained (Addison-Wesley, 1999), demurred, revealing that he adopted the concept of feature-driven, fixed-length iterations from a pacemaker project at Johns Hopkins. Alistair Cockburn offered an interesting take on the issue, recommending asking "How agile can we be, given that this is going to be critical, reliable and safe?" He noted that the answer yields strategies involving reviews, tests and interface simulators—strategies that also strengthen collaboration and morale. Cockburn added that many catastrophes arise from following a plan without awareness of changing conditions, and argued that you must respond to change with foresight, concocting a new plan based on the current situation.

Documentation Can Be Agile
The group also discussed documentation on agile projects. I pointed out that many organizations demand more documentation than they need, and that documentation is a poor form of communication. Though the group acknowledged that documentation is sometimes necessary to retain critical information over time, they maintained that it should be updated only "when it hurts." I believe that your goal should be to communicate effectively, and documentation should be one of the last options for fulfilling that goal. Dr. Boehm mentioned that a documented project makes it easier for an outside expert to diagnose problems, but Kent Beck quickly disagreed, saying that as an outside expert who spends considerable time diagnosing projects, he looks for "people stuff," like quiet asides—not technical details in the documentation.

You're In Trouble When …
We wrapped up the workshop by discussing the best ways to recognize trouble spots. Ken Schwaber said that on Scrum projects, you quickly find out during the daily meetings because each developer is explicitly asked to indicate what's going wrong for them. Other red flags include: when software isn't being produced and morale is down; when the team stops focusing on what's supposed to be delivered; when activities occur that don't provide positive value to the overall effort; when useless documentation is being produced; when the customer refuses to fund another iteration; and when you get behind on planned iterations.

From the diversity of this discussion, it's obvious that many people, including leading software gurus, are struggling to understand agile software development. What can you do to facilitate your own agility? Approach agile techniques with an open mind and think for yourself as you wade through the sea of opinions. Most importantly, try these techniques yourself. When you do, you'll quickly discover that they're agile, not fragile.