
9.3 Architectural Judgment

All architectural benefits depend upon a critical assumption: that the architectural decisions are fundamentally sound and will not be subject to significant change. If architectural decisions are no better (or even worse) than chance, then it would be just as appropriate to conduct a software project without architectural planning. This is why the quality of the architect's judgment is vital. Architecture is all about making important technical decisions for a system or project. By definition, the scope of architecture comprises the important decisions, also known as "architecturally significant" decisions.

How do architects use judgment? Judgment guides their advice to project management and developers. Judgment is used in the evaluation and selection of technologies. Judgment is used in the definition of a "system vision," including the envisioning of the architectural frameworks that are elaborated to realize the design. Judgment is used in virtually every detail of creating a software architecture (e.g., designing subsystem interfaces, elaborating enterprise requirements, and allocating engineering objects). Architects rely on judgment in many cases because more rigorous engineering methods are either unavailable or inapplicable to these intuition-based architectural decisions.

A key role of the architect is to assess the impact of changes in requirements and technologies. This is a proper role for architectural judgment because the architect must assess whether these changes affect "the architecture"; that is, whether they affect important system decisions and assumptions. With a systemwide view, the architect is in the best position to make such judgments. The architect should also rely upon specialists to provide answers about specific technologies, as inputs to a decision.

Judgment is the application of the intuitive aspects of architecture. Here "intuitive" does not imply ad hoc guesswork. Usually the architects' judgment is backed up by intelligence gathering and experience, as well as by systematic decision-making processes. It is infeasible to justify every decision in writing, so architects attribute much of what they do to intuitive judgment. Even if every decision could be documented, the architects' experiences cannot all be re-created for readers so that they will draw exactly the same conclusions. For architects to be effective, management and developers must trust their judgment. Architects usually build this trust by enlisting one or more of the lead developers into the architectural decision-making process.

Problem Solving

Architectural judgment is one form of problem solving. If problem solving is considered as a paradigm, one can argue that it fits many human activities. The problem-solving paradigm can be mapped onto most project activities, including what is done in meetings and day-to-day on the job. To be good problem solvers, architects should use a problem-solving process for important decisions.

Some alternatives to problem solving include ad hoc decisions, "whoever yells the loudest," management by fiat, and flipping a coin. Sometimes these are expedient approaches; sometimes it is more important to move on to the next topic than to dwell on an inconsequential decision.

To establish a process, the problem-solving paradigm is first defined as a reference model. In the general problem-solving paradigm, the question to be addressed is decided upon; alternative solutions are identified and elaborated; one alternative is selected; and the chosen solution is implemented [VanGundy 1988]. At each step, there are decisions to be made about which process to utilize and which content alternative to select. A problem-solving process is based on each of the generic problem-solving steps:

  1. Identify the question. The first step is to define the problem. What questions should be answered to resolve the situation? The search for the proper question can be a miniature problem-solving exercise in itself. In the case of architecture, the questions may be as broad and complex as the solutions are. In a meeting situation, one of the best ways to identify the question is to write down some candidate question (on a flip chart or whiteboard) and let the group edit it through discussion.

  2. Identify alternative solutions. The second step is to discover several potential solutions. In a perfect world it would be nice to identify all possible solutions, but this is seldom feasible (or desirable) in practice. Finding a reasonable number of candidate solutions that are all worth investigating further is important. Sometimes if there are many potential solutions, it is useful to redefine the problem or to down-select the alternatives before detailed study.

  3. Elaborate the alternative solutions. Each alternative can be studied further (e.g., by detailing the steps involved in implementing that solution). Simply creating a written description of the proposed solutions is a major step toward reducing ambiguity. In this step, information about the proposed solutions must be shared to make a more informed decision. In many cases, it is necessary to "make up" information about a solution (e.g., by providing a strawman implementation plan).

  4. Select among the alternatives. Once the alternatives are sufficiently elaborated, the studies are done and it is time to make a decision. Decision making itself can be a drawn-out process, or it can be a simple choice among obvious tradeoffs. By understanding the more complete decision-making processes, one can simplify them deliberately, with known consequences. In particular, decision analysis is a process based upon a matrix (also called Olympic scoring) [Kepner 1981]. The alternatives are listed in columns, and decision criteria are listed in rows. The criteria fall into two categories: the essentials and the desirables. The desirables are sorted by priority. Note that a problem-solving process is needed to select the criteria. The alternatives are scored in rank order: 1, 2, 3, and so on. Then the scores are tabulated with respect to priority weightings, and the best score wins; a small sketch of this tabulation appears after this list. The full decision analysis process is considerably more rational and objective than ad hoc decision making. The winner is usually a good choice, and the decision makers have a rationale for explaining the decision in the form of the decision matrix.

  5. Implement the solution. After a particular solution has been selected, the design can be elaborated and an implementation plan carried out to realize the results. A sound decision that eliminates many unnecessary options from consideration makes the implementation step much more focused.
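The tabulation in step 4 can be illustrated with a small sketch (in Python). All the criteria, weights, rank scores, and alternative names below are hypothetical placeholders, not values from the text; the point is only to show the essentials acting as pass/fail filters and the priority-weighted ranks determining the winner (rank 1 is best per criterion, so the lowest weighted total wins).

    # Sketch of a decision analysis matrix; all names and numbers are hypothetical.
    # Essential criteria are pass/fail; an alternative that fails any of them
    # is eliminated before scoring.
    essentials = {
        "Alternative A": {"meets budget": True,  "runs on target platform": True},
        "Alternative B": {"meets budget": True,  "runs on target platform": True},
        "Alternative C": {"meets budget": False, "runs on target platform": True},
    }

    # Desirable criteria carry priority weights (higher weight = more important).
    weights = {"performance": 3, "maintainability": 2, "vendor support": 1}

    # Each surviving alternative is scored in rank order per criterion (1 = best).
    ranks = {
        "Alternative A": {"performance": 1, "maintainability": 1, "vendor support": 2},
        "Alternative B": {"performance": 2, "maintainability": 2, "vendor support": 1},
        "Alternative C": {"performance": 3, "maintainability": 3, "vendor support": 3},
    }

    def tabulate(essentials, weights, ranks):
        """Return surviving alternatives with weighted rank totals (lower is better)."""
        scores = {}
        for alt, checks in essentials.items():
            if not all(checks.values()):   # eliminate alternatives failing an essential
                continue
            scores[alt] = sum(weights[c] * ranks[alt][c] for c in weights)
        return dict(sorted(scores.items(), key=lambda item: item[1]))

    print(tabulate(essentials, weights, ranks))
    # {'Alternative A': 7, 'Alternative B': 11}

The filled-in matrix itself then serves as the recorded rationale for the decision, and rerunning it with different priority weights quickly shows how sensitive the outcome is to the assumed priorities.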

Sometimes the powers that be will disagree with a carefully rationalized decision. One way to explain this mismatch is that the priorities assigned to the decision criteria differ from the real-world priorities. It is an interesting spreadsheet exercise to revisit the decision analysis and discover what the actual priorities must have been.

In any decision-making process, the ability to prioritize is essential. Viewing each choice as an exclusive selection is not productive, because that arbitrarily excludes desirable choices. Instead, it is preferable to prioritize among options or among criteria in order to rank-order the alternatives or considerations. One of the most effective ways to prioritize is situation analysis: essentially, scoring each option's seriousness, urgency, and growth in importance as high/medium/low, and then ranking the results [Kepner 1981]. This prioritization process can be used with arbitrary lists of ad hoc concerns. It is not necessary to rank items that are effectively equal, and the ranking by no means needs to be perfect before it is useful. What is important is to determine priorities, and then focus energies on the highest-priority alternatives. All this advice can be summarized in the saying "First things first and second things never." Determining what's first (i.e., most important) and what's second is done through a process of prioritization.
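As a rough illustration of situation analysis, the following sketch (in Python) scores a few concerns on seriousness, urgency, and growth, and then rank-orders them. The concern names and high/medium/low ratings are invented for the example.

    # Sketch of situation analysis: rate each concern high/medium/low on
    # seriousness, urgency, and growth, then rank-order the concerns.
    LEVEL = {"high": 3, "medium": 2, "low": 1}

    concerns = {
        "database vendor lock-in":  {"seriousness": "high",   "urgency": "low",    "growth": "high"},
        "missing logging standard": {"seriousness": "medium", "urgency": "medium", "growth": "low"},
        "unclear security policy":  {"seriousness": "high",   "urgency": "high",   "growth": "medium"},
    }

    def prioritize(concerns):
        """Rank concerns by combined seriousness, urgency, and growth (highest first)."""
        def total(ratings):
            return sum(LEVEL[v] for v in ratings.values())
        return sorted(concerns, key=lambda name: total(concerns[name]), reverse=True)

    for rank, name in enumerate(prioritize(concerns), start=1):
        print(rank, name)
    # 1 unclear security policy
    # 2 database vendor lock-in
    # 3 missing logging standard

The output simply tells the team what to work on first; items with nearly equal totals need not be agonized over, in keeping with the advice above.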

Review and Inspection

In some organizational cultures, every meeting is a review. Review is an important process, but it tends to be overused and its value overestimated. Any time six or more people attend a meeting, it is by default a virtual review. With six or more people (and typical meeting processes), it is very difficult to design and proceed creatively. However, it is relatively easy to get sidetracked into tangential discussions.

What's wrong with the review process is that its results are uneven. At its best, a review helps to form consensus around good ideas. At its worst, it is a pernicious form of groupthink, in which everybody concedes to the boss's wishes. Most likely, the review process will focus on issues that are not the most important ones. And some people with long meeting experience can manipulate the review process by exploiting its weaknesses. One macabre review game is to search for the question that can't be answered (e.g., "What about security?"). It does not have to be the most important question, or even a significant one. Groups are easily led in such a direction, even though it may be irrelevant to the accomplishment of the group's purpose.

All too frequently, every idea in a review meeting is criticized. This often happens when multiple competing interests are present, such as competing software companies. One interesting process, used by Sun's JDBC team, was to bring one company in at a time, instead of holding the more typical multicompany meetings. Without the pressure of imminent competition, the companies were more willing to share their technical opinions and help with the creative process.

One hard-and-fast rule that the authors insist upon in review meetings is that no redesigning is to take place on the spot. Technical design decisions should be considered carefully, off-line, rather than becoming victims of groupthink. Untold numbers of bad design decisions are made in review meetings, for spurious reasons. Each review comment is considered a design force that must be balanced with other forces in order to make a reasonable choice. Often, many design forces can be resolved with a single change, or the solution can be explained in terms of the current design and how it can be used more effectively.

It is also important to define clearly which meetings really are review meetings and which are not. For example, a tutorial is not a review meeting. In some cases, meetings are held simply to disseminate completed specifications. Sometimes it is necessary to switch out of review mode in order to stabilize work, distinguishing which decisions are closed and which are still open for choice. Otherwise, every decision is up for reconsideration at virtually every meeting.

There is another, more structured, version of review called software inspection [Gilb 1993]. Although the process is not described completely here, suffice it to say that it is very effective. Some experts claim that software inspection "always works."

Instead of an unstructured review, software inspection is a highly structured process. Proper inspection requires a list of quality criteria as well as a basis document (e.g., the requirements) against which to compare the designs. Inspection differs from review primarily in that it involves a much closer examination. Forty-five minutes per page is not uncommon in an inspection process. The inspections are performed off-line, outside of meetings. At inspection meetings, the potential defects are collected as efficiently as possible from the inspection team members. No document can enter the inspection process without first meeting certain quality criteria. These entry criteria are assessed by the inspection leader, a key role in this process.

Inspection can be used at any phase of software development. It is most effective while reviewing written specifications and architectures, although it has been used for code review.
