
The Top 10 Requirements Review Errors

The flip side of the principles we just discussed takes the form of a number of common errors that our students make when they're doing requirements review for their projects. Our "Top 10" list follows.

10. Don't review requirements at all. Instead, invite "feature-itis" by letting the coders build whatever they want.

One of the fundamental tenets of XP is that since requirements change every day, it doesn't make much sense to try to deal with them explicitly. People who follow this approach, or something similar, lose not only traceability of requirements but also the ability to build trust between customers and developers that can only result from intensive face-to-face negotiation. The likely outcome is that coders build a cool system that doesn't have a whole lot to do with what the customers think they're paying for.

The XP folks even have cool slogans to describe this phenomenon. Kent Beck used it to diagnose the failure of the C3 project (XP's big claim to fame) on their wiki website: "…the fundamental problem was [that] the Gold Owner and Goal Donor weren't the same. The customer feeding stories to the team didn't care about the same things as the managers evaluating the team's performance…. The new customers who came on wanted tweaks to the existing system more than they wanted to turn off the next mainframe payroll system. IT management wanted to turn off the next mainframe payroll system." Translating: In XP lingo, the Goal Donor is the customer representative who sits in the room with the coders, who explain that it's okay to change requirements in midstream, while the Gold Owner is the project sponsor (the one who owns the gold). In the case of C3 (which was a Y2K mainframe payroll replacement project), the Gold Owner "inexplicably" pulled the plug in February of 2000, when the program (no doubt complete with cool features) was paying only one third of the employees after something on the order of four years of labor. (We suggest that you visit http://c2.com/cgi/wiki?CthreeProjectTerminated and think very carefully about what it says there.)

Would requirements reviews have saved this project? We can't say for certain. But we can say that "feature-itis," which comes at the expense of schedule, is a common and predictable result of letting the programming team decide (and continuously change) the priority of the requirements (in other words, make it up as they go along) and not reviewing this prioritization with the "gold owner" project sponsors.

9. Don't make sure the use case text matches the desired system behavior.

The phrase "use case driven" refers to the principle of using use cases, which capture the "what" that the system needs to do, to drive analysis, design, testing, and implementation (the "how"). If your use case text doesn't correlate closely with what your users want the system to do, you're going to build the wrong system. Period.

8. Don't use any kind of GUI prototype or screen mockup to help validate system behavior.

Prototypes, whether they take the form of fully workable front ends, drawings on scraps of paper, or something in between, generally provide a "jump start" for the task of discovering and exploring use cases. Making sure that your use case text matches the navigation that a prototype shows is an excellent way to ensure that you're going to build the right system. If you don't have any visual frame of reference, you run the risk that user interface people will build stuff that doesn't match your users' requirements as expressed in the use cases.

7. Keep your use cases at such a high level of abstraction that your nontechnical clients have no clue what they're about.

Good use cases have enough details to enable their use in driving the development of a system from requirements discovery all the way through code. They also serve as a very effective tool for negotiating requirements with customers and managing customer expectations. This only works, though, if the use case text is specific: the actor does this, the system does this. A customer can't sign off on a use case that he or she doesn't understand.

6. Don't make sure that the domain model accurately reflects the real-world conceptual objects.

You're going to build code from class diagrams that have ample detail on them. These diagrams evolve from high-level class diagrams that show the initial domain model as you explore the dynamic behavior of the system you're designing. This evolution simply won't happen the way it should if you don't get the right set of domain objects in place to start with.

5. Don't make sure the use case text references the domain objects.

The main reason we discussed domain modeling (in Chapter 2) before we talked about use case modeling (in Chapter 3) is that a key goal of domain modeling is to build a glossary of terms for use case text to use. This technique will help you considerably in your effort to be specific in your use cases, and it'll also help you focus on traceability, because your use cases and your class diagrams will work together. Plus, it's quite a bit easier to do robustness analysis (the subject of Chapter 5) quickly if you've already named your objects in your use case text.
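This kind of cross-checking can even be partially mechanized. The sketch below (in Python, with invented glossary terms and use case text purely for illustration) flags capitalized terms in a use case that don't appear in the domain glossary, which is one crude way to catch either a missing domain object or nonspecific use case wording:

```python
# Hypothetical sketch (glossary terms and use case text are invented):
# cross-check use case text against the domain-model glossary, flagging
# capitalized terms that don't appear in it.
import re

# Common sentence-starting words we don't want to treat as domain terms.
STOPWORDS = {"The", "A", "An", "If", "When", "Then"}

def undefined_terms(use_case_text, glossary):
    """Return capitalized words in the text that are in neither the
    glossary nor the stopword list -- candidates for missing domain objects."""
    candidates = set(re.findall(r"\b[A-Z][a-zA-Z]+\b", use_case_text))
    return sorted(t for t in candidates if t not in glossary and t not in STOPWORDS)

glossary = {"Customer", "Catalog", "Order", "System"}
text = ("The Customer selects a Book from the Catalog; "
        "the System adds the Book to the Order.")

print(undefined_terms(text, glossary))  # ['Book']
```

A hit like "Book" means one of two things: the domain model is missing an object, or the use case is using a term the team never agreed on. Either way, it's worth a conversation at the review.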

4. Don't question any use case with no alternate courses of action.

It's been our experience that upwards of 90 percent of good use cases have at least one alternate course of action. The appearance of a word such as "check," "ensure," "validate," or "verify" in use case text is a clear signal that there's at least one alternate course, associated with an error condition. A use case is also likely to have at least one path that the actor takes more frequently than the one specified by the basic course. You need to be diligent about digging for these other paths.
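The "signal word" heuristic above is simple enough to sketch in a few lines. This Python fragment (the sample basic-course sentences are invented) scans each sentence for a word that implies an error condition, and so a likely missing alternate course:

```python
# Hypothetical sketch: scan basic-course sentences for "signal words"
# that imply an error condition -- and therefore a likely missing
# alternate course. The sample sentences are invented.

SIGNAL_WORDS = {"check", "checks", "ensure", "ensures",
                "validate", "validates", "verify", "verifies"}

def implies_alternate_course(sentence):
    """True if the sentence contains a word suggesting an error path."""
    words = sentence.lower().split()
    return any(w.strip(",.") in SIGNAL_WORDS for w in words)

basic_course = [
    "The user enters a username and password.",
    "The system validates the password.",
    "The system displays the home page.",
]

flagged = [s for s in basic_course if implies_alternate_course(s)]
print(flagged)  # only the "validates" step
```

Each flagged sentence becomes a review question: what does the system do when the validation fails?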

3. Don't question whether all alternate courses of action have been considered on every use case.

One technique that works well in finding alternate courses is to question every sentence in your basic course. What could possibly go wrong? Is there something else that the actor could do besides this action? Could the system respond in different ways? As we stated earlier, you should be relentless in your search for alternate courses; much of the interesting behavior of your system will be reflected in them, not in the basic courses.

2. Don't worry if your use cases are written in passive voice.

Your use cases should describe actions: ones that the actors perform, and ones that the system performs. Actions are best expressed with action verbs, and active voice is the appropriate voice for action verbs. Passive voice is appropriate only when the writer doesn't know who or what is performing a given action, and that should never be the case within use case text.
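Even passive voice can be hunted mechanically, at least crudely. The sketch below (Python; the regular expression and sample steps are our own invention, not a real grammar checker) flags steps that pair a form of "to be" with a likely past participle:

```python
# Hypothetical sketch: a crude passive-voice detector for use case steps.
# Real grammar checking needs proper NLP; this only looks for a form of
# "to be" followed by a word ending in "ed" (or a few irregular participles).
import re

PASSIVE_RE = re.compile(
    r"\b(is|are|was|were|be|been|being)\s+(\w+ed|shown|given|sent|made|done)\b",
    re.IGNORECASE,
)

def looks_passive(step):
    return bool(PASSIVE_RE.search(step))

steps = [
    "The system displays the login page.",         # active: fine
    "The login page is displayed by the system.",  # passive: rewrite it
]

for step in steps:
    print(f"{step} -> passive: {looks_passive(step)}")
```

A heuristic like this produces false positives (for example, "the account is locked" as a state description), so treat hits as prompts to reread the step, not as automatic rejections.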

1. Don't worry if your use cases are four pages long.

The basic course of a use case should be one or two paragraphs long. Each alternate course should be a sentence or two. Sometimes you'll have shorter use cases, especially when they serve as "connecting tissue" (for example, use cases centered around selecting from a menu). And there are times when you need longer use cases. But you should use techniques such as the invokes and precedes constructs we talk about in Use Case Driven Object Modeling with UML to factor out common behavior so that you can write concise use cases that you're more likely to be able to reuse, and you should definitely stay away from lengthy use case templates that generate considerably more heat than light.
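The length guidelines above lend themselves to a quick automated sanity check. This sketch (the dictionary layout and thresholds are invented for illustration) flags a basic course longer than two paragraphs or an alternate course longer than about two sentences:

```python
# Hypothetical sketch: flag use cases that blow past the rough length
# guidelines above. The dict layout and the thresholds are invented;
# sentence counting by periods is deliberately crude.

def length_warnings(use_case):
    warnings = []
    paragraphs = use_case["basic_course"].count("\n\n") + 1
    if paragraphs > 2:
        warnings.append("basic course is longer than two paragraphs")
    for name, text in use_case["alternates"].items():
        if text.count(".") > 2:  # rough sentence count
            warnings.append(f"alternate course '{name}' is longer than two sentences")
    return warnings

uc = {
    "basic_course": "First paragraph.\n\nSecond paragraph.",
    "alternates": {
        "bad password": ("The system shows an error. The user retries. "
                         "The system locks the account. The user calls support."),
    },
}
print(length_warnings(uc))  # flags the 'bad password' alternate
```

An alternate course that trips this check is often a sign that it should be factored out into its own use case via an invokes or precedes relationship.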
