Book: Agile Software Development


The purpose of this chapter is to discuss and boil down the topic of methodologies until the rules of the methodology design game, and how to play that game, are clear.

"Methodology Concepts" covers the basic vocabulary and concepts needed to design and compare methodologies. These include the obvious concepts such as roles, techniques, and standards and also less-obvious concepts such as weight, ceremony, precision, stability, and tolerance. In terms of "Levels of Audience" as described in the introduction, this is largely Level 1 material. It is needed for the more advanced discussions that follow.

"Methodology Design Principles" discusses seven principles that can be used to guide the design of a methodology. The principles highlight the cost of moving to a heavier methodology as well as when to accept that cost. They also show how to use work-product stability in deciding how much concurrent development to employ.

"XP under Glass" applies the principles to analyze an existing, agile methodology. It also discusses using the principles to adjust XP for slightly different situations.

"Why Methodology at All?" revisits that key question in the light of the preceding discussion and presents the different uses to which methodologies are put.

An Ecosystem That Ships Software

"Methodology is a social construction," Ralph Hodgson told me in 1993. Two years went by before I started to understand.

Your "methodology" is everything you regularly do to get your software out. It includes who you hire, what you hire them for, how they work together, what they produce, and how they share. It is the combined job descriptions, procedures, and conventions of everyone on your team. It is the product of your particular ecosystem and is therefore a unique construction of your organization.

All organizations have a methodology: It is simply how they do business. Even the proverbial trio in a garage has a way of working: a way of trading information, of separating work, of putting it back together, all founded on assumed values and cultural norms. The way of working includes what people choose to spend their time on, how they choose to communicate, and how they coordinate their work.

I use the word methodology as found in the Merriam-Webster dictionaries: "A series of related methods or techniques." A method is a "systematic procedure," similar to a technique.

(Readers of the Oxford English Dictionary may note that some OED editions only carry the definition of methodology as "study of methods," while others carry both. This helps explain the controversy over the word methodology.)

The distinction between methodology and method is useful. Reading the phrases "a method for finding classes from use cases" or "different methods are suited for different problems," we understand that the author is discussing techniques and procedures, not establishing team rules and conventions. That frees the use of the word methodology for the larger issues of coordinating people's activities on a team.

Only a few companies bother to try to write it all down (usually just the large consulting houses and the military). A few have gone so far as to create an expert system that prints out the full methodology needed for a project based on project staffing, complexity, deadlines, and the like. None I have seen captures cultural assumptions or provides for variations among values or cultures.

Boil and condense the subject of methodology long enough and you get this one-sentence summary: "A methodology is the conventions that your group agrees to."

"The conventions your group agrees to" is a social construction. It is also a construction that you can and should revisit from time to time.

Coordination is important. The same average people who produce average designs when working alone often produce good designs in collaboration. Conversely, even the smartest people together won't produce group success without coordination, cooperation, and communication, and most of us have witnessed or heard of such groups. Team success hinges on all three.

Methodology Concepts

Structural Terms

The first methodology structure I saw contained about seven elements. The one I now draw contains 13 (see Figure 4-1). The elements apply to any team endeavor, whether it is software development, rock climbing, or poetry writing. What you write for each box will vary, but the names of the elements won't.



Figure 4-1. Elements of a methodology.

Roles. Who you employ, what you employ them for, what skills they are supposed to have. Equally important, it turns out, are the personality traits expected of the person. A project manager should be good with people, a user interface designer should have natural visual talents and some empathy for user behavior, an object-oriented program designer should have good abstraction faculties, and a mentor should be good at explaining things.

It is bad for the project when the individuals in the jobs don't have the traits needed for the job (for example, a project manager who can't make decisions or a mentor who does not like to communicate).

Skills. The skills needed for the roles. The "personal prowess" of a person in a role is a product of his training and talent.

Programmers attend classes to learn object-oriented, Java programming and unit-testing skills.

User interface designers learn how to conduct usability examinations and do paper-based prototyping.

Managers learn interviewing, motivating, hiring, and critical-path task-management skills.

The best people draw heavily upon their natural talent, but in most cases adequate skills can be acquired through training and practice.

Teams. The roles that work together under various circumstances.

There may be only one team on a small project. On a large project, there are likely to be multiple, overlapping teams, some aimed at harnessing specific technologies and some aimed at steering the project or the system's architecture.

Techniques. The specific procedures people use to accomplish tasks. Some apply to a single person (writing a use case, managing by walking around, designing a class or test case), while others are aimed at groups of people (project retrospectives, group planning sessions). In general, I use the word technique if there is a prescriptive presentation of how to accomplish a task, using an understood body of knowledge.

Activities. How the people spend their days. Planning, programming, testing, and meeting are sample activities.

Some methodologies are work-product intensive, meaning that they focus on the work products that need to be produced. Others are activity-intensive, meaning that they focus on what the people should be doing during the day. Thus, where the Rational Unified Process is tool- and work-product intensive, Extreme Programming is activity intensive. It achieves its effectiveness, in part, by describing what the people should be doing with their day (pair programming, test-first development, refactoring, etc.).

Process. How activities fit together over time, often with pre- and post-conditions for the activities (for example, a design review is held two days after the material is sent out to participants and produces a list of recommendations for improvement). Process-intensive methodologies focus on the flow of work among the team members.

Process charts rarely convey the presence of loopback paths, where rework gets done. Thus, process charts are usually best viewed as workflow diagrams, describing who receives what from whom.

Work products. What someone constructs. A work product may be disposable, as with CRC design cards, or it may be relatively permanent, as with the usage manual or source code.

I find it useful to reserve deliverable to mean "a work product that gets passed across an organizational boundary." This allows us to apply the term deliverable at different scales: The deliverables that pass between two subteams are work products in terms of the larger project. The work products that pass between a project team and the team working on the next system are deliverables of the project and need to be handled more carefully.

Work products are described in generic terms such as "source code" and "domain object model." Rules about the notation to be used for each work product get described in the work product standards. Examples of source-code standards include Java, Visual Basic, and executable visual models. Examples of class diagram standards could be UML or OML.

Milestones. Events marking progress or completion. Some milestones are simply assertions that a task has been performed, and some involve the publication of documents or code.

A milestone has two key characteristics: It occurs in an instant of time, and it is either fully met or not met (it is not partially met). A document is either published or not, the code is delivered or not, the meeting was held or not.

Standards. The conventions the team adopts for particular tools, work products, and decision policies.

A coding standard might declare this: "Every function has the following header comment..."

A language standard might be this: "We'll be using fully portable Java."

A drawing standard for class diagrams might be this: "Only show public methods of persistent classes."

A tool standard might be this: "We'll use Microsoft Project, Together/J, JUnit, ..."

A project-management standard might be this: "Use milestones of two days to two weeks and incremental deliveries every two to three months."
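
To make the first of these standards concrete, here is a purely hypothetical example of the kind of header comment a coding standard might mandate (the class, the fields, and the fee rule shown are all invented for illustration):

    // Hypothetical illustration of a function header-comment standard.
    public class BillingRules {
        /**
         * Purpose:  Calculates the late fee owed on an overdue invoice.
         * Author:   (developer name)
         * Tracking: (change-request number, if any)
         */
        public static long lateFee(long amountOwed, int daysLate) {
            return (amountOwed * daysLate) / 100; // a flat 1% per day late
        }
    }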

Quality. Quality may refer to the activities or the work products.

In XP, the quality of the team's program is evaluated by examining the source code work product: "All checked-in code must pass unit tests at 100% at all times."
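
As a minimal sketch of what satisfying that rule looks like in practice (assuming JUnit, which the tool standard above already names; the Account class here is invented for illustration):

    import junit.framework.TestCase;

    // Hypothetical class under test.
    class Account {
        private int balance = 0;
        public void deposit(int amount) { balance += amount; }
        public int getBalance() { return balance; }
    }

    // Under the XP rule, tests like this must pass at 100% for all
    // checked-in code, at all times.
    public class AccountTest extends TestCase {
        public void testDepositIncreasesBalance() {
            Account account = new Account();
            account.deposit(50);
            assertEquals(50, account.getBalance());
        }
    }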

The XP team also evaluates the quality of their activities: Do they hold a stand-up meeting every day? How often do the programmers shift programming partners? How available are the customers for questions? In some cases, quality is given a numerical value; in other cases, a fuzzy value ("I wasn't happy with the team morale on the last iteration.").

Team Values. The rest of the methodology elements are governed by the team's value system. An aggressive team driven by quick-to-market values will work very differently from a group that values families and goes home at a regular time every night.

As Jim Highsmith likes to point out, a group whose mission is to explore and locate new oil fields will operate on different values and produce different rules than a group whose mission is to squeeze every barrel out of a known oil field at the least possible cost.

Types of Methodologies

Rechtin (1997) categorizes methodologies themselves as being either normative, rational, participative, or heuristic.

Normative methodologies are based on solutions or sequences of steps known to work for the discipline. Electrical and other building codes in house wiring are examples. In software development, one would include state diagram verification in this category.

Rational methodologies (no connection with the company) are based on method and technique. They would be used for system analysis and engineering disciplines.

Participative methodologies are stakeholder based and capture aspects of customer involvement.

Heuristic methodologies are based on lessons learned. Rechtin cites their use in the aerospace business (space and aircraft design).

As a body of knowledge grows, sections of the methodology move from heuristic to normative and become codified as standard solutions for standard problems. In computer programming, searching algorithms have reached that point. The decision about whether to put people in common or private offices has not.

Most of software development is still in the stage where heuristic methodologies are appropriate.

Milestones

Milestones are markers for where interesting things happen in the project. At each milestone, one or more people in some named roles must get together to affect the course of a work product.

Three kinds of milestones are used on projects, each with its particular characteristics. They are

· Reviews

· Publications

· Declarations

In a review, several people examine a work product. With respect to reviews, we care about the following questions: Who is doing the reviewing? What are they reviewing? Who created that item? What is the outcome of the review? Few reviews cause a project to halt; most end with a list of suggestions that are supposed to be incorporated.

A publication occurs whenever a work product is distributed or posted for open viewing. Sending out meeting minutes, checking source code into a configuration-management system, and deploying software to users' workstations are different forms of publication. With respect to publications, we care about the following: What is being published? Who publishes it? Who receives it? What causes it to be published?

The declaration milestone is a verbal notice from one person to another, or to multiple people, that a milestone has been reached. There is no objective measure for a declaration; it is simply an announcement or a promise. Declarations are interesting because they construct a web of promises inside the team's social structure. This form of milestone came as a surprise to me when I first detected it.

Discovering Declarations

The first declaration milestone I detected was made during a discussion with the manager of the technical writers on a 100-person project. I asked how she knew when to assign a person to start writing the on-line help text (its birth event).

She said it was when a team lead told her that a section of the application was "ready" for her.

I asked her what "ready" meant, whether it meant that the screen design was complete.

She said it only meant that the screen design was relatively stable. The team lead was, in essence, making the following promise:

"We estimate that the changes that we are still going to make are relatively small compared to the work the tech writer will be doing, and the rework the writer will do will be relatively small compared to the overall work. So this would be a good time to get the writing started."

That assertion is full of social promises. It is a promise, given by a trained person, that in his judgement the tradeoffs are balanced and that this is a good time to start.

A declaration ("It's ready!") is often the form of milestone that moves code from development to test, alpha delivery, beta delivery, and even deployment.

Declarations are interesting to me as a researcher, because I have not seen them described in process-centric methodologies, which focus on process entry and exit criteria. They are easier to discuss when we consider software development as a cooperative game. In a cooperative game, the project team's web of interrelationships, and the promises holding them together, are more apparent.

The role-deliverable-milestone chart is a quick way to view the methodology in brief and has an advantage over process diagrams in that it shows the parallelism involved in the project quite clearly. It also allows the team to see the key stages of completion the artifacts go through. This helps them manage their actions according to the intermediate states of the artifacts, as recommended in some modern methodologies (Highsmith 1999).


Figure 4-2. The three dimensions of scope.

Scope

The scope of a methodology consists of the range of roles and activities that it attempts to cover (Figure 4-2).

The earliest object-oriented methodologies presented the designer as having the key role and discussed the techniques, deliverables, and standards for the design activity of that role. These methodologies were considered inadequate in two ways:

· They were not as broad as needed. A real project involves more roles than just the OO designer, and each role involves more activities, more deliverables, and more techniques than these books presented.

· They were too constricting. Designers need more than one design technique in their toolbox.

Groups with a long history of continuous experience, such as the U.S. Department of Defense, Andersen Consulting, James Martin and Associates, IBM, and Ernst & Young already had methodologies covering the standard life-cycle of a project, even starting from the point of project sales and project setup. Their methodologies cover every person needed on the project, from staff assistant through sales staff, designer, project manager, and tester.

The point is that both are "methodologies." The scope of their concerns is different.

The scope of a methodology can be characterized along three axes: life-cycle coverage, role coverage, and activity coverage (Figure 4-2).

· Life-cycle coverage indicates when in the life cycle of the project the methodology comes into play, and when it ends.

· Role coverage refers to which roles fall into the domain of discussion.

· Activity coverage defines which activities of those roles fall into the domain of discussion. The methodology may take into account filling out time sheets (a natural inclusion as part of the project manager's project monitoring and scheduling assignment) and may omit vacation requests (because it is part of basic business operations).


Figure 4-3. Scope of Extreme Programming.

Clarifying a methodology's intended scope helps take some of the heat out of methodology arguments. Often, two seemingly incompatible methodologies target different parts of the life cycle or different roles. Discussions about their differences go nowhere until their respective scope intentions are clarified.

In this light, we see that the early OO methodologies had a relatively small scope. They typically addressed only one role, the domain designer or modeler. For that role, only the actual domain modeling activity is represented, and only during the analysis and design stages. Within that very narrow scope, they covered one or a few techniques and outlined one or a few deliverables with standards. No wonder experienced designers felt they were inadequate for overall development.

The scope diagram helps us see where methodology fragments combine well. An example is the natural fit of Constantine and Lockwood's user interface design recommendations (Constantine 1999) with methodologies that omit discussion of UI design activities (leaving that aspect to authors who know more about the subject).


Figure 4-4. Scope of Constantine & Lockwood's Design for Use methodology fragment.

Without having these scoping axes at hand, people would ask Larry Constantine, "How does your methodology relate to the other Agile Methodologies on the market?" In a talk at Software Development 2001, Constantine said he didn't know he was designing a methodology; he was just discussing good ways to design user interfaces.

Having the methodology scope diagram in view, we easily see how they fit. XP's scope of concerns is shown in Figure 4-3. Note that it lacks discussion of user interface design. The scope of concerns for Design for Use is shown in Figure 4-4. We see, from these figures, that the two fit together. The same applies to Design for Use and Crystal Clear.

Conceptual Terms

To discuss the design of a methodology, we need some additional terms: methodology size, ceremony, and weight; problem size; project size; system criticality; precision; accuracy; relevance; tolerance; visibility; scale; and stability.

Methodology Size. The number of control elements in the methodology. Each deliverable, standard, activity, quality measure, and technique description is an element of control. Some projects and authors will wish for smaller methodologies; some will wish for larger.

Ceremony. The amount of precision and the tightness of tolerance in the methodology. Greater ceremony corresponds to tighter controls (Booch 1995). One team may write use cases on napkins and review them over lunch. Another team may prefer to fill in a three-page template and hold half-day reviews. Both groups write and review use cases, the former using low ceremony, the latter using high ceremony.

The amount of ceremony in a methodology depends on how life-critical the system will be and on the fears and wishes of the methodology author, as we will see.

Methodology Weight. The product of size and ceremony: the number of control elements multiplied by the ceremony involved in each. This is a conceptual product (because numbers are not attached to size and ceremony), but it is still useful.

Problem Size. The number of elements in the problem and their inherent cross-complexity.

There is no absolute measure of problem size, because a person with different knowledge is likely to see a simplifying pattern that reduces the size of the problem. Some problems are clearly different enough from others that relative magnitudes can be discussed (launching a space shuttle is a bigger problem than printing a company's invoices).

The difficulty in deciding the problem size is that there will often be controversy over how many people are needed to deliver the product and what the corresponding methodology weight is.

Project Size. The number of people whose efforts need to be coordinated: the staff size. Depending on the situation, you may be coordinating only programmers or an entire department with many roles.

Many people use the phrase "project size" ambiguously, shifting the meaning from staff size to problem size even within a sentence. This causes much confusion, particularly because a small, sharp team often outperforms a large, average team.

The relationships among problem size, staff size, and methodology size are discussed in the next section.

System Criticality. The damage from undetected defects. I currently classify criticality simply as one of loss of comfort, loss of discretionary money, loss of irreplaceable money, or loss of life. Other classifications are possible.

Precision. How much you care to say about a particular topic. Pi to one decimal place of precision is 3.1; to four decimal places it is 3.1416. Source code contains more precision than a class diagram; assembler code contains more than its high-level source code. Some methodologies call for more precision earlier than others, according to the methodology author's wishes.

Accuracy. How correct you are when you speak about a topic. To say "Pi to one decimal place is 3.3" would be inaccurate. The final object model needs to be more accurate than the initial one. The final GUI description is more accurate than the low-fidelity prototypes. Methodologies cover the growth of accuracy as well as precision.

Relevance. Whether or not to speak about a topic. User interface prototypes do not discuss the domain model. Infrastructure design is not relevant to collecting user functional requirements. Methodologies discuss different areas of relevance.

Tolerance. How much variation is permitted.

The team standards may require revision dates to be put into the program code, or not. The tolerance statement may say that a date must be present, whether put in by hand or added by some automated tool. A methodology may specify line breaks and indentation, leave those to people's discretion, or state acceptable bounds. An example in a decision standard is stating that a working release must be available every three months, plus or minus one month.

Visibility. How easily an outsider can tell if the methodology is being followed. Process initiatives such as ISO 9001 focus on visibility issues. Because achieving visibility creates overhead (cost in time, money, or both), agile methodologies as a group lower the emphasis on such visibility. As with ceremony, different amounts of visibility are appropriate for different situations.

Scale. How many items are rolled together to be presented as a single item. Booch's former "class categories" provided for a scaled view of a set of classes. The UML "package" allows for scaled views of use cases, classes, or hardware boxes. Project plans, requirements, and designs can all be presented at different scales.

Scale interacts somewhat with precision. The printer or monitor's dot density limits the amount of detail that can be put onto one screen or page. However, even if it could all be put onto one page, some people would not want to see all that detail. They want to see a rolled-up or high-level version.

Stability. How likely it is to change. I use only three stability levels: wildly fluctuating, as when a team is just getting started; varying, as when some development activity is in mid-stride; and relatively stable, as just before a requirements/design/code review or product shipment.

One way to find the stability state is to ask: "If I were to ask the same questions today and in two weeks, how likely would I be to get the same answers?"

In the wildly fluctuating state, the answer is "Are you kidding? Who knows what this will be like in two weeks!"

In the varying state, the answer is "Somewhat similar, but of course the details are likely to change."

In the relatively stable state, the answer is "Pretty likely, although a few things will probably be different."

Other ways to determine the stability may include measuring the "churn" in the use case text, the diagrams, the code base, the test cases, and so on (I have not tried these).
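
For instance, a crude sketch of a churn measurement (invented here for illustration, not something the text above prescribes) could count the fraction of lines in a work product that changed between two snapshots:

    import java.nio.file.*;
    import java.util.*;

    // A crude churn measure: the fraction of today's lines that did
    // not appear in an earlier snapshot of the same work product.
    public class Churn {
        public static double churn(Path earlier, Path current) throws Exception {
            Set<String> old = new HashSet<>(Files.readAllLines(earlier));
            List<String> now = Files.readAllLines(current);
            long changed = now.stream().filter(line -> !old.contains(line)).count();
            return (double) changed / Math.max(1, now.size());
        }
    }

High churn would indicate the wildly fluctuating state; churn near zero would indicate relative stability.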


Figure 4-5. A project map: a low-precision version of a project plan.

Precision

Precision is a core concept manipulated within a methodology. Every category of work product has low-, medium-, and high-precision versions.

Here are the low-, medium-, and high-precision versions of some key work products.

The Project Plan

The low-precision view of a project plan is the project map (Figure 4-5). It shows the fundamental items to be produced, their dependencies, and which are to be deployed together. It may show the relative magnitudes of effort needed for each item. It does not show who will do the work or how long the work will take (which is why it is called a map and not a plan).

Those who are used to working with PERT charts will recognize the project map as a coarse-grained PERT chart showing project dependencies, augmented with marks showing where releases occur.

This low-precision project map is very useful in organizing the project before the staffing and timelines are established. In fact, I use it to derive timelines and staffing plans.

The medium-precision version of the project plan is a project map expanded to show the dependencies between the teams and the due dates.

The high-precision version of the project plan is the well-known, task-based Gantt chart, showing task times, assignments, and dependencies.

The more precision in the plan, the more fragile it is, which is why constructing Gantt charts is so feared: they are time-consuming to produce and go out of date with the slightest surprise event.

Behavioral Requirements / Use Cases

Behavioral requirements are often written with use cases.

The lowest-precision version of a set of use cases is the Actors-Goals list: the list of primary actors and the goals they have with respect to the system (Figure 4-6). This lowest-precision view is useful at the start of the project, when you are prioritizing the use cases and allocating work to teams. It is useful again whenever an overview of the system is needed.


Figure 4-6. An Actors-Goals list: the lowest-precision view of behavioral requirements.

The medium level of precision consists of a one-paragraph synopsis of the use case (the use-case brief), or the use case's title and main success scenario.

The medium-high level of precision contains extensions and error conditions, named but not expanded.

The final, highest level of precision includes both the extension conditions and their handling.

These levels of precision are further described in (Cockburn 2000).

The Program Design

The lowest level of precision in an object-oriented design is a Responsibility-Collaborations diagram, a very coarse-grained variation on the UML object collaboration diagram (Figure 4-7). The interesting thing about this simple design presentation is that people can already review it and comment on the allocation of responsibilities.

A medium level of precision is the list of major classes, their major purpose, and primary collaboration channels.

A medium-high level is the class diagram, showing classes, attributes, and relationships with cardinality.

A high level of precision is the list of classes, attributes, relations with cardinality constraints, and functions with function signatures. These often are listed on the class diagram.

The final, highest level of precision is the source code.


Figure 4-7. A Responsibility-Collaboration diagram: the low-precision view of an object-oriented design.

These levels for design show the natural progression from Responsibility-Driven Design (Beck 1987, Cunningham URL-CRC) through object modeling with UML, to final source code. The three are not in opposition, as some imagine, but rather lie along a very natural progression of precision.

As we get better at generating final code from diagrams, the designers will add precision and code-generation annotations to the diagrams. As a consequence, the diagrams plus annotations become the "source code." The C++ or Java stops being source code and becomes generated code.

The User Interface Design

The low-precision description of the user interface is the screen flow diagram, which states only the purpose and linkage of each screen.

The medium level of precision description consists of the screen definitions with field lengths and the various field or button activation rules.

The highest precision definition of the user interface design is the program's source code.


Figure 4-8. Using low levels of precision to trigger other activities.

Working with "Precision"

People do a lot with these low-precision views. During the early stages of the project, they plan and evaluate. At later stages, they use the low-precision views for training.

I currently think of a level of precision as being reached when there is enough information to allow another team to start work. Figure 4-8 shows the evolution of six types of work products on a project: the project plan, the use cases, the user interface design, the domain design, the external interfaces, and the infrastructure design.

In looking at Figure 4-8, we see that having the actor-goal list in place permits a preliminary project plan to be drawn up. This may consist of the project map along with time and staffing assignments and estimates. Having those, the teams can split up and capture the use-case briefs in parallel. As soon as the use-case briefs (or a significant subset of them) are in place, all the specialist teams can start working in parallel, evolving their own work products.

One thing to note about precision is that the work involved expands rapidly as precision increases. Figure 4-9 shows the work increasing as the use cases grow from actors, to actors and goals, to main success scenarios, to the various failure and other extension conditions, and finally to the recovery actions. A similar diagram could be drawn for each of the other types of work products.

Because higher-precision work products require more energy and also change more often than their low-precision counterparts, a general project strategy is to defer, or at least carefully manage, their construction and evolution.


Figure 4-9. Work expands with increasing precision level (shown for use cases).

Stability and Concurrent Development

Stability, the "likelihood of change," varies over the course of the project (Figure 4-10).

A team starts in a situation of instability. Over time, team members reduce the fluctuations and reach a varying state as the design progresses. They finally get their work relatively stable just prior to a design review or publication. At that point, the reviewers and users provide new information to the development team, which makes the work less stable again for a period.

On many projects, instability jumps unexpectedly on occasion, as when a supplier suddenly announces that he will not deliver on time, a product does not perform as predicted, or an algorithm does not scale as expected.

You might think that you should strive for maximum stability on a project.

However, the appropriate amount of stability to target varies by topic, by project priorities, and by stage in the project. Different experts have different recommendations about how to deal with the varying rates of changes across the work products and the project stages.


Figure 4-10. Reducing fluctuations over the course of a project.


Figure 4-11. Successful serial development takes longer (but fewer workdays) compared to successful concurrent development.

The simplest approach is to say, "Don't start designing until the requirements are Stable (with a capital 'S'); don't start programming until the design is Stable," and so on. This is serial development. Its two advantages make it attractive to many people. It is, however, fraught with problems.

The first advantage is its simplicity. The person doing the scheduling simply sequences the activities one after the other, scheduling a downstream activity to start when an upstream one gets finished.

The second advantage is that, if there are no major surprises that force a change to the requirements or the design, a manager can minimize the number of work-hours spent on the project, by carefully scheduling when people arrive to work on their particular tasks.

There are three problems, though.

The first problem is that the elapsed time needed for the project is the straight sum of the times needed for requirements, design, programming, test, and so on. This is the longest possible elapsed time for the project. With the most careful management, the project manager will get the longest elapsed time at the minimum labor cost. For projects on which reducing elapsed time is a top priority, this is a bad tradeoff.

The second problem is that surprises usually do crop up during the project. When one does, it causes unexpected revision of the requirements or design, raising the development cost. In the end, the project manager minimizes neither the labor cost nor the development time.

The third problem is absence of feedback from the downstream activities to the upstream activities.

In rare instances, the people doing the upstream activity can produce high-quality results without feedback from the downstream team. On most projects, though, the people creating the requirements need to see a running version of what they ordered, so they can correct and finalize their requests. Usually, after seeing the system in action, they change their requests, which forces changes in the design, coding, testing, and so on. Incorporating these changes lengthens the project's elapsed time and increases total project costs.

Selecting the serial-development strategy really only makes sense if you can be sure that the team will be able to produce good, final requirements and design on the first pass. Few teams can do this.


Figure 4-12. In serial development, each workgroup waits for the upstream workgroup to achieve complete stability before starting.


Figure 4-13. In concurrent development, each group starts as early as its communications and rework capabilities indicate. As it progresses, the upstream group passes update information to the downstream group in a continuous stream (the dashed arrows).

A different strategy, concurrent development, shortens the elapsed time and provides feedback opportunities at the cost of increased rework. Figure 4-11 and Figure 4-13 illustrate it, and Principle 7, "Efficiency is expendable away from bottleneck activities," on page ???, analyzes it further. [Insert cross-reference. Verify figure numbers.]

In concurrent development, each downstream activity starts at some point judged to be appropriate with respect to the completeness and stability of the upstream team's work (different downstream groups may start at different moments with respect to their upstream groups, of course). The downstream team starts operating with the available information, and as the upstream team continues work, it passes new information along to the downstream team.

To the extent that the downstream team guesses right about where the upstream team is going and the upstream team does not encounter major surprises, the downstream team will get its work approximately right. The team will do some rework along the way, as new information shows up.

The key issue in concurrent development is judging the completeness, stability, rework capability, and communication effectiveness of the teams.

The advantages of concurrent development are twofold, the exact opposites of the disadvantages of serial development:

· The upstream teams get feedback from the downstream teams. The designers can indicate how difficult the requirements are to implement. The programmers may produce code soon enough for the requirements group to get feedback on the desirability of the requirements.

· Although each downstream activity takes longer than it would if it were done serially (and the upstream team never changed its mind), the downstream activity starts much earlier. The net effect is that the downstream team finishes sooner than it otherwise would, possibly just a few days or weeks after the upstream work is finished.

Such concurrent development is described as the Gold Rush strategy in Surviving Object-Oriented Projects (Cockburn 1998). The Gold Rush strategy presupposes both good communication and rework capacity. The Gold Rush strategy is suited to situations in which the requirements gathering is predicted to go on for longer than can be tolerated in the project plan, so there would simply not be enough time for proper design if the next team had to wait for the requirements to settle.

Actually, many projects fit this profile.

Gold-Rush-type strategies are not risk free. There are three pitfalls to watch out for:

· The first pitfall is overdoing the strategy; for example, allowing the design team to get ahead of the requirements team (Figure 4-14). One such team announced one day that its design was already stable and ready for review. The team was just waiting for the requirements people to hurry up and generate the requirements!


Figure 4-14. Keeping upstream activities more stable than downstream activities. The wavy lines show the instability of work products in requirements and design. In the healthy situation (left), both fluctuate at the same time, but the requirements fluctuation is smaller than the design fluctuation. In the unhealthy situation (right), the design is already stable before the requirements have even started settling down!

· The second pitfall is when the communications path between the teams is not rich enough. If the teams are geographically separated, for example, it is harder for the upstream team to pass along its changing information. As the communications cost rises, it eventually becomes more effective to wait for greater stability in the upstream work before triggering the downstream team.

· The third pitfall is making a mistake in estimating a team's rework capacity. Where a team has little or no spare capacity, it must be given much more stable inputs to work from.

16 Smalltalkers, 2 Database Designers

One project had 16 Smalltalk programmers and only two database designers.

In this situation, we could let the Smalltalk programmers start working as soon as the requirements were starting to shape up. At the same time, we could not afford to trigger the database designers to start their work until the object model had been given passing marks in its design review.

Only after the object model had passed "stable enough for review" and actually been reviewed, with the DBAs in the review group, could the DBAs take the time to start their design work.

The complete discussion about when and where to apply concurrent development is presented in Principle 7 of methodology design, "Efficiency is expendable away from bottleneck activities," on page ???. [Insert cross-reference.]

The point to understand now is that stability plays a role in methodology design.

Both XP and Adaptive Software Development (Highsmith 2000) suggest maximizing concurrency. This is because both are intended for situations with strong time-to-market priorities and requirements that are likely to change as a consequence of producing the emerging system.

Fixed-price contracts often benefit from a mixed strategy: In those situations, it is useful to have the requirements quite stable before getting far into design. The mix will vary by project. Sometimes, the company making the bid may do some designing or even coding just to prepare its bid.


Figure 4-15. Role-deliverable-milestone view of a methodology.

Publishing a Methodology

Publishing a methodology has two components: the pictorial view and the text itself.

The Pictorial View

One way to present the design of a methodology is to show how the roles interact across work products (Figure 4-15). In such a "Role-Deliverable-Milestone" view, time runs from left to right across the page, roles are represented as broad bands across the page, and work products are shown as single lines within a band. The line of a work product shows critical events in its life: its birth event (what causes someone to create it), its review events (who examines it), and its death event (at what moment it ceases to have relevance, if ever).

Although the Role-Deliverable-Milestone view is a convenient way to capture the work-product dependencies within a methodology, it is evidently also good for putting people to sleep:

Methodology Chart as Sleeping Aid

I once created the proverbial wall chart of the methodology for a large project, meticulously showing the several hundred interlocking parts of the group's methodology, using the Role-Deliverable-Milestone view to condense the information.

Many people had been asking to see the entire methodology, so I printed the chart, several feet on each side, and put it on a large wall. It was interesting to watch people's eyes glaze over whenever I was pointing to the time line for another project role, such as the project managers or technical writers, and only come back into focus when I got to their own section. It turned out that most people really only wanted to see the section of the methodology that affected them and not what everyone in the organization was doing.

The pictorial view misses the practices, standards, and other forms of collaboration so important to the group. Those don't have a convenient graphical portrayal and must be listed textually.

The Methodology Text

In published form, a methodology is a text that describes the techniques, activities, meetings, quality measures, and standards of all the job roles involved. You can find examples in Object-Oriented Methods: Pragmatic Considerations (Martin 1996), and The OPEN Process Specification (Graham 1997). The Rational Unified Process has its own Web site with thousands of Web pages.

Methodology texts are large. At some level there is no escape from this size. Even a tiny methodology, with four roles, four work products per role, and three milestones per work product, has 68 interlocking parts to describe (4 roles + 16 work products + 48 milestones), leaving out any technique discussions. And even XP, which initially weighed in at only about 200 pages (Beck 1999), now approaches 1,000 pages when expanded to include additional guidance about each of its parts (Jeffries 2000, Beck 2000, Auer 2001, Newkirk 2001).

There are two reasons why most organizations don't issue a thousand-page text describing their methodology to each new employee:

· The first is what Jim Highsmith neatly captures with the distinction, "documentation versus understanding."

The real methodology resides in the minds of the staff and in their habits of action and conversation.

Documenting chunks of the methodology is not at all the same as providing understanding, and having understanding does not presuppose having documentation. Understanding is faster to gain, because it grows through the normal job experiences of new employees.

· The second is that the needs of the organization are always changing.

It is impractical, if not impossible, to keep the thousand-page text current with the needs of the project teams. As new technologies show up, the teams must invent new ways of working to handle them, and those cannot be written in advance. An organization needs ways to evolve new variants of the methodologies on the fly and to transfer the good habits of one team to the next team. You will learn how to do that as you proceed through this book.

Reducing Methodology Bulk

There are several ways to reduce the physical size of the methodology publication:

Provide examples of work products

Provide worked examples rather than templates. Take advantage of people's strengths in working with tangibles and examples, as discussed earlier.

Collect reasonably good examples of various work products: a project plan, a risk list, a use case, a class diagram, a test case, a function header, a code sample.

Place them online, with encouragement to copy and modify them. Instead of writing a standards document for the user interface, post a sample of a good screen for people to copy and work from. You may need to annotate the example showing which parts are important.

Doing these things will lower the work effort required to establish the standards and will lower the barrier to people using them.

One of the few books to show deliverables and their standards is Developing Object-Oriented Software (OOTC 1997), which was prepared for IBM by its Object-Oriented Technology Center in the late 1990s and was then made public.

Remove the technique guides

Rather than trying to teach the techniques by providing detailed descriptions of them within the methodology document, let the methodology simply name the recommended techniques, along with any known books and courses that teach them.

Techniques-in-use involve tacit knowledge. Let people learn from experts, using apprenticeship-based learning, or let them learn from a hands-on course in which they can practice the technique in a learning environment.

Where possible, get people up to speed on the techniques before they arrive on the project, instead of teaching the technique as part of a project methodology on project time. The techniques will then become skills owned by people, who simply do their jobs in their natural ways.

Organize the text by role

It is possible to write a low-precision but descriptive paragraph about each role, work product, and milestone, linking the descriptions with the Role-Deliverable-Milestone chart. The sample role descriptions might look something like these:

Executive Sponsor. A person who acts to support and monitor the progress of an approved project. Responsible for scoping, prioritizing, and funding at the project level.

Cross-team Lead. A person who is responsible for the progress of multiple teams, for uniting the efforts of these teams, for establishing priorities across teams, and for allocating resources (people) across teams.

Team Lead. A person who is responsible for the direction and progress of one team.

Developer. A technical person who develops the software product. This may include UI, business classes, infrastructure, or data.

Writer. A person who publishes technical communication about various subjects through media such as manuals, white papers, shared drives, intranet, or Internet.

Rollout. One or more persons who communicate with and coordinate field technicians and customer representatives and who roll out the products.

External Tester. One or more persons who perform QA-related test functions outside of the development groups.

Maintainer. A person who makes necessary changes to the product after it ships.

For the work products, you need to record who writes them, who reads them, and what they contain. A fuller version would contain a sample, noting the tolerances permitted and the milestones that apply. Here are a few simple descriptions:

Overall Project Plan

Writer: Cross-team Lead.

Readers: Executive Sponsor, Team Leads, newcomers.

Contains: Across all teams, what is planned to be in the next several releases, the cross-team dependencies between their contents, the planned timing of development.

Dependency Table

Writer: Team Lead.

Readers: Team Leads, Cross-team Leads.

Contains: What this team needs from every other team, and the date each item is needed. May include a fallback plan in case the item is not delivered on time.

Team Status Sheet

Writer: Team Lead.

Readers: Cross-team Lead, Developers.

Contains: The current state of the team: a rolled-up list of things being worked on, the next milestone, what is holding up progress, and the stability level of each.

For the review milestones, record what is being reviewed, who is to review it, and what the outcome is. For example:

Release Proposal Review

Reviewers: Application Team Lead, Cross-team Lead, and Executive Sponsor.

Purpose: Basically a scope review.

Reviewing: Use case summary, use cases, actors, external system description, development plan.

Outcome: Modifications to scope, priorities, dates, and possibly corrections to the actor list or external systems.

Application Design Review

Reviewers: Team Lead, related Cross-team Leads, Cross-team Mentors, Business experts.

Purpose: Check quality, correctness, and conformance of the application design.

Reviewing: Use cases, actors, domain class diagram, screen flows, screen designs, class tables (if any), and interaction diagrams (if any).

Outcome: Factual corrections to the domain model and the screen details. Suggestions or requirements for improved UI or application design, based on either quality or conformance considerations.

With these short paragraphs in place, the methodology can be summarized by role (as the following example shows). The written form of the methodology, summarized by role, is a checklist for each person that can fit onto one sheet of paper and be pinned up in the person's workspace. That sheet of paper contains no surprises (after the first reading) but serves to remind team members of what they already know.

Here is a slightly abridged example for the programmers:

Designer-Programmer

Writes:

Weekly status sheet

Source code

Unit tests

Release notes ...

Reads:

Actor descriptions

UI style guide ...

Reviews:

Application design review

(etc.)

Publishes:

Application configuration

Test cases

(etc.)

Declares: UI Stable

You can see that this is not a methodology used to stifle creativity. To a newcomer, it is a list outlining how he is to participate on the team. To the ongoing developer, it is a reminder.

Using the Process Miniature

Publishing a methodology does not convey the visceral understanding that forms tacit knowledge. It does not convey the life of the methodology, which resides in the many small actions that accompany teamwork. People need to see or personally enact the methodology.

My current favorite way of conveying the methodology is through a technique I call the Process Miniature.

In a Process Miniature, the participants play-act one or two releases of the process in a very short period of time.

On one team I interviewed, new people were asked to spend their first week developing a (small) piece of software all the way from requirements to delivery. The purpose of the week-long exercise was to introduce the new person to the people, the roles, the standards, and the physical placement of things in the company.

More recently, Peter Merel invented a one-hour process miniature for Extreme Programming, giving it the nickname Extreme Hour. The purpose of the Extreme Hour is to give people a visceral encounter with XP so that they can discuss its concepts from a base of almost-real experience.

In the Extreme Hour, some people are designated "customers." Within the first 10 minutes of the hour, they present their requests to the developers and work through the XP planning session.

In the next 20 minutes, the developers sketch and test their design on overhead transparencies. The total length of time for the first iteration is 30 minutes.

In the next 30 minutes, the entire cycle is repeated so that two cycles of XP are experienced in just 60 minutes.


Usually, the hosts of the Extreme Hour choose a fun assignment, such as designing a fish-catching device that keeps the fish alive until delivering them to the cooking area at the end of the day and also keeps the beer cold during the day. (Yes, they do have to cut scope during the iterations!)

We used a 90-minute process miniature to help the staff of a 50-person company experience a new development process we were proposing. (You might notice the similarity of this process miniature to the informance described on page ???.) [Insert cross-reference.]

In this case, we were primarily interested in conveying the programming and testing rules we wanted people to use. We therefore could not use a drawing-based problem such as the fish trap but had to select a real programming problem that would produce running, tested code for a Web application.

A Process Miniature Experience

We wanted to demonstrate two full iterations of the process in 90 minutes. We wanted to show people negotiating over requirements and then creating and testing production code, using the official five-layer architecture, execution database, configuration-management system, official Web style sheets, and fully automated regression test suites. We therefore had to choose a tiny application. We elected to construct a simple up-down counter that would stick at 0 and 20 and could be reset to 0. The counter would use a Web browser interface and store its value in the official company database.
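
As a minimal sketch of the counter logic alone (the Web interface and database wiring are omitted, and the class name is invented for illustration):

    // An up-down counter that sticks at 0 and 20 and can be reset to 0.
    public class BoundedCounter {
        private static final int MIN = 0;
        private static final int MAX = 20;
        private int value = MIN;

        public void increment() { if (value < MAX) value++; } // sticks at 20
        public void decrement() { if (value > MIN) value--; } // sticks at 0
        public void reset()     { value = MIN; }
        public int getValue()   { return value; }
    }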

To meet the constraint of 45 minutes per iteration, we choreographed the show to a small extent. The marketing analysts were told to ask for more than the team could possibly deliver in 30 minutes of programming ("Could we please have a graphical, radial dial for the counter, in three colors?"). We did this in order to let the audience experience scope negotiation as they would encounter it in real life.

We also rehearsed how much the programmers would bid to complete the first iteration and how they might cut scope during the middle of the iteration so that the audience could see this in action.

The point of scripting these pieces was to give the entire company a view of what we wanted to establish as the social conventions for normal scope negotiation during project runs. We left the actual programming as live action. Even though the team knew the assignment, they still had to type it all in, in real time, as part of the experience. The audience, sitting through all of the typing, came to appreciate the amount of work that went into even such a trivial system.

Whatever form of Process Miniature you use, plan on replaying it from time to time in order to reinforce the team's social conventions. Many of these conventions, such as the scope negotiation rules just described, won't find a place in the documentation but can be partially captured in the play.

Methodology Design Principles

Designing a methodology is not at all like designing software, hardware, bridges, or factories. Four things, in particular, get in the way:

· Variations in people. People are not the reliable components that designers count on when designing the other systems.

· Variations across projects. The appropriate methodology varies by project, nationality, and local culture.

· Long debug cycles. The test and debug cycle for a methodology is on the order of months and years.

· Changing technologies. By the time the methodology designer debugs one methodology design, the technologies, techniques, and cultures have changed and the design needs updating.

Common Design Errors

People who come freshly to their assignment of designing a methodology make a standard set of errors:

One size for all projects

Here is a conversation that I have heard all too often over the years: "Hi, Alistair. We have projects in many technologies all over the globe. We desperately need a common methodology for all of them. Could you please design one for us?"

"I'm afraid that would not be practical: The different technologies, cultures, and project priorities call for different ways of working."

"Right, got that. Now, please do tell us what our common methodology will be." "...!!?" This request is so widespread that I spend most of the next chapter on methodology tailoring.

The need for localized methodologies may be clear to you by now, but it will not be clear to your new colleague who gets handed the assignment to design the corporation's common methodology.

Intolerant

Novice methodology designers have this notion that they have the answer for software development and that everyone really ought to work that way.

Software development is a fluid activity. It requires that people notice small discrepancies wherever they lie and that they communicate and resolve the discrepancies in whatever way is most practical. Different people thrive on different ways of working.

A methodology is, in fact, a straitjacket. It is exactly the set of conventions and policies the people agree to use: It is the size and shape of straitjacket they choose for themselves.

Given the varying characteristics of different people, though, that straitjacket should not be made any tighter than it absolutely needs to be.

Techniques are one particular section of the methodology that usually can be made tolerant. Many techniques work quite well, and different ones suit different people at different times.

The subject of how much tolerance belongs in the methodology should be a conscious topic of discussion in the design of your methodology.

Heavy

We have developed, over the years, an assumption that a heavier methodology, with closer tracking and more artifacts, will somehow be "safer" for the project than a lighter methodology with fewer artifacts.

The opposite is actually the case, as the principles in this section should make clear. However, that initial assumption persists, and it manifests itself in most methodology designs.

The heavier-is-safer assumption probably comes from the fear that project managers experience when they can't look at the code and detect the state of the project with their own eyes. Fear grows with the distance from the code. So they quite naturally request more reports summarizing various states of affairs and more coordination points. The alternative is to ... trust people. This can be a truly horrifying thought during a project under intense pressure. Being a Smalltalk programmer, I felt this fear firsthand when I had to coordinate a COBOL programming project.

Fear or no fear, adding weight to the methodology is not likely to improve the team's chance of delivering. If anything, it makes the team less likely to deliver, because people will spend more time filling in reports than making progress. Slower development often translates to loss of a market window, decreased morale, and greater likelihood of losing the project altogether.

Part of the art of project management is learning when and how to trust people and when not to trust them. Part of the art of methodology design is to learn what constraints add more burden than safety. Some of these constraints are explored in this chapter.

Embellished

Without exception, every methodology I have ever seen has been unnecessarily embellished with rules, practices, and ideas that are not strictly necessary. They may not even belong in the methodology. This even applies to the methodologies I have designed. It is so insidious that I have posted on the wall in front of me, in large print: "Embellishment is the pitfall of the methodologist."

Embellishing a Methodology

I detected this tendency in myself while designing my first methodology. I asked a programmer colleague, a very practical person freshly returned from a live project, to double-check, edit, and trim my design. He indeed found the embellishments I was worried about. However, he then added one chapter to the methodology, calling for the production of contract-based design and deliverables he had just read about. I phoned him. "Surely you don't mean to say you used these on your last project?" I asked.

He replied, "Well, no, not on that project. But it's a really good idea and I think we ought to do it."

From this experience, I learned that the words "ought to" and "should" indicate embellishment. If someone says that people "should" do something, it probably means that they have never done it yet, they have successfully delivered software without it, and there probably is no chance of getting people to use it in the future.

Here is a sample story about that.

Discovering "Should"

Tester: "And then the developers have a meeting with the testers in which they describe the basic design."

Me: "Really, do they do that?"

Tester: "What do you mean? Of course they do."

Me: "Oh, yeah. They really do that, do they?"

Tester: "They've got to, or else the testers can't do their job!"

Me: "Right. Um ... In that case, there was such a meeting, and I can interview those people to find out what happened in the meeting. Can you tell me the date of such a meeting, and who was in the room?"

Tester: "Well, we were going to have it. I mean, you really should have that meeting, it's really valuable ..."

We didn't have to go much farther than that. Of course, no such meeting had taken place. Further, it was doubtful that we could enforce such a meeting in that company at that time, however useful it might have been.

There is another side to this embellishment business. Typically, the process owner has a distorted view of how developers really work. In my interviews, I rarely ever find a team of people who works the way the process owner says they work. This is so pervasive that I have had to mark as unreliable any interview in which I only got to speak with the manager or process designer.

The following is a sample, and typical, conversation from one of my interviews. At the time, I was looking for successful implementations of Object Modeling Technique (OMT). The person who was both process designer and team lead told me that he had a successful OMT project for me to review, and I flew to California to interview the team.

Uncovering Process Shortcuts

Me: "These are samples of the work products?... This is a state diagram?"

Leader: "Well, it's not really one. It's more of a flow diagram. I have to teach them how to make state diagrams properly."

Me: "But these are actual samples of the work products produced. Did you use an iterative and incremental process?"

Developer nods.

Leader: "We used a modification of Boehm's spiral model."

Me: "OK. And did the requirements or the design change in the second iteration?"

Developer: "Of course."

Me: "OK. ... How did you manage to update all these diagrams in the second iteration?" Developer: "Oh, we didn't. We just changed the code..."

Extreme Programming stands in contrast to the usual, deliverable-based methodologies. XP is based around activities. The rigor of the methodology resides in people carrying out their activities properly.

Not being aware of the difference between deliverable-based and activity-based methodologies, I was unsure how to investigate my first XP project. After all, the team has no drawings to keep up to date, so obviously there would be no out-of-date work products to discover!

An activity-based methodology relies on activities in action. XP relies on programming in pairs, writing unit tests, refactoring, and the like.

When I visit a project that claims to be an XP project, I usually find pair programming working well (or else they wouldn't declare it an XP project). Then, while they are pair programming, the people are more likely to write unit tests, and so I usually see some amount of test-writing going on.

The most common deviation from XP is that the people do not refactor their code often, which results in the code base becoming cluttered in ways that properly developed XP code shouldn't.

In general, though, XP has so few rules to follow that most of the areas of embellishment have been removed. XP is a special case of a methodology, and I'll analyze it separately at the end of the chapter.

Personally, I tend to embellish around design reviews and testing. I can't seem to resist sneaking an extra review or an extra testing activity through the "should" door ("Of course they should do that testing!" I hear you cry. Shouldn't they?!).

The way to catch embellishment is to have the directly affected people review the proposal. Watch their faces closely to discover what they know they won't do but are afraid to say they won't do.

Untried

Most methodologies are untried. Many are simply proposals created from nothing. This is the full-blown "should" in action: "Well, this really looks like it should work."

After looking at dozens of methodology proposals in the last decade, I have concluded that nothing is obvious in methodology design. Many things that look like they should work don't (testing and keeping documentation up to date, for example), and many things that look like they shouldn't work actually do work (pair programming and test-first development, for example).

The late Wayne Stevens, designer of the IBM Consulting Group's Information Engineering methodology in the early 1990s, was well aware of this trap.

Whenever someone proposed a new object-centered / object-based / object-hybrid methodology for us to include in the methodology library, he would say, "Try it on a project, and tell us afterwards how it worked." They would typically object, "But that will take years! It is obvious that this is great!" To my recollection, not one of these obvious new methodologies was ever used on a project.

Since that time, I have used Wayne Stevens' approach and have seen the same thing happen.

How are new methodologies made? Here's how I work when I am personally involved in a project:

· I adjust, tune, and invent whatever is needed to take the project to success.

· After the project, I extract those things I would repeat again under similar circumstances and add them to my repertoire of tactics and strategies.

· I listen to other project teams when they describe their experiences and the lessons they learned.

But when someone sends me a methodology proposal, I ask him to try it on a project first and report back afterwards.

Used once

The successor to "untried" is "used once." The methodology author, having discovered one project on which the methodology works, now announces it as a general solution. The reality is that different projects need different methodologies, and so any one methodology has limited ability to transfer to another project.

I went through this phase with my Crystal Orange methodology (Cockburn 1998), and so did the authors of XP. Fortunately, each of us had the good sense to create a "Truth in Advertising" label describing our own methodology's area of applicability.

We will revisit this theme throughout the rest of the book: How do we identify the area of applicability of a methodology, and how do we tailor a methodology to a project in time to benefit the project?

Methodologically Successful Projects

You may be wondering about these project interviews I keep referring to. My work is based on looking for "methodologically successful" projects. These have three characteristics:

· The project was delivered. I don't ask if it was completed on time and on budget, just that the software went out the door and was used.

· The leadership remained intact. They didn't get fired for what they were doing.

· The people on the project would work the same way again.

The first criterion is obvious. I set the bar low for this criterion, because there are so many strange forces that affect how people refer to the "successfulness" of a project. If the software is released and gets used, then the methodology was at least that good.

The second criterion was added after I was called in to interview the people involved with a project that was advertised as being "successful." I found, after I got there, that the project manager had been fired a year into the project because no code had been developed up to that time, despite the mountains of paperwork the team had produced. This was not a large military or life-critical project, where such an approach might have been appropriate, but it was a rather ordinary, 18-developer technical software project.

The third criterion is the difficult one. For the purpose of discovering a successful methodology, it is essential that the team be willing to work in the prescribed way. It is very easy for the developers to block a methodology. Typically all they have to say is, "If I do that, it will move the delivery date out two weeks." Usually they are right, too.

If they don't block it directly, they can subvert it. I usually discover during the interview that the team subverted the process, or else they tolerated it once but wouldn't choose to work that way again.

Sometimes, the people follow a methodology because the methodology designer is present on the project. I have to apply this criterion to myself and disallow some of my own projects. If the people on the project were using my suggestions just to humor me, I couldn't know if they would use them when I wasn't present.

The pertinent question is, "Would the developers continue to work that way if the methodology author were no longer present?"

So far, I have discovered three methodologies that people are willing to use twice in a row. They are

· Responsibility-Driven Design (Wirfs-Brock 1991)

· Extreme Programming (Beck 1999)

· Crystal Clear (Cockburn 2002)

(I exclude Crystal Orange from this list, because I was the process designer and lead consultant. Also, as written, it deals with a specific configuration of technologies and so needs to be reevaluated in a different, newly adapted setting.)

Even if you are not a full-time methodology designer, you can borrow one lesson from this section about project interviews. Most of what I have learned about good development habits has come from interviewing project teams. The interviews are so informative that I keep on doing them.

This avenue of improvement is also available to you. Start your own project interview file, and discover good things that other people do that you can use yourself.

Author Sensitivity

A methodology's principles are not arrived at through an emotionally neutral algorithm but come from the author's personal background. To reverse the saying from The Wizard of Oz, "Pay great attention to the man behind the curtain."

Each person has had experiences that inform his present views and serve as his anchor points. Methodology authors are no different.

In recognition of this, Jim Highsmith has started interviewing methodology authors about their backgrounds. In Agile Software Development Ecosystems (Highsmith 2002), he presents not only each author's methodology but also his or her background.

A person's anchor points are not generally open to negotiation. They are fixed in childhood, early project experiences, or personal philosophy. Although we can renormalize a discussion with respect to vocabulary and scope, we cannot do that with personal beliefs. We can only accept the person's anchor points or disagree with them.

When Kent Beck quipped, "All methodology is based on fears," I first thought he was just being dismissive. Over time, I have found it to be largely true. One can almost guess at a methodology author's past experiences by looking at the methodology. Each element in the methodology can be viewed as a prevention against a bad experience the methodology author has had.

· Afraid that programmers make many little mistakes? Hold code reviews.

· Afraid that users don't know what they really want? Create prototypes.

· Afraid that designers will leave in the middle of the project? Have them write extensive design documentation as they go.

Of course, as the old saying goes, just because you are paranoid doesn't mean that they aren't after you. Some of your fears may be well founded. We found this in one project, as told to us over time by an adventuresome team leader. Here is the story as we heard it in our discussion group:

Don't Touch My Private Variables

A team leader wanted to simplify the complex design surrounding the use of not-quite-private methods that wrote to certain local variables.

Someone in our group proposed making all methods public. This would simplify the design tremendously.

The team leader thought for a moment and then identified that he was operating on a fear that the programmers would not follow the necessary programming convention to keep the software safe. He wanted the programmers to use those public methods only for the particular programming situation that was causing trouble.

He was afraid that in the frenzy of deadlines, they would use them all the time, which would cause maintenance problems. He was willing to try the experiment of making them public and just writing on the team's whiteboard the very simple rule restricting their use.

I said, "Maybe your fears are well founded. How about if you don't just trust the people to behave well, but also write a little script to check the actual use of those methods over time? This way you will discover whether your fears are well founded or not."

The team leader agreed. The team leader went on vacation for two weeks. When he returned, he ran the script and found that the programmers had, in fact, been using the new, public methods, ignoring the note on the whiteboard.

(One person at the table chimed in here, "Well, sure, those were the only documented methods!")

This story raises an interesting point about trust: As much as I love to trust people, a weakness of people is being careless. Sometimes it is important to simply trust people, but sometimes it is important to install a mechanism to find out whether people can be trusted on a particular topic.
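Such a checking mechanism can be very small. Here is a hedged sketch of the kind of script the team leader might have used, assuming the restricted methods can be recognized by name in the source tree; the method names and the file extension are invented for illustration:

    import re
    from pathlib import Path

    # Invented names standing in for the methods the team agreed
    # to call only in the one troublesome situation.
    RESTRICTED = ["setRawBalance", "writeLocalCache"]

    def report_uses(source_root):
        """Print every call site of a restricted method, file by file."""
        pattern = re.compile(r"\b(" + "|".join(RESTRICTED) + r")\s*\(")
        for path in Path(source_root).rglob("*.java"):
            for lineno, line in enumerate(path.read_text().splitlines(), 1):
                if pattern.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        report_uses("src")

Run periodically, a few lines like these turn "trust" from a hope into an observation.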

The final piece of personal baggage of the methodology authors is their individual philosophy. Some have a laissez-faire philosophy, some a military control philosophy. The philosophy comes with the person, shaping his experiences and being shaped by his experiences, fears, and wishes.

It is interesting to see how much of an author's methodology philosophy is used in his personal life. Does Watts Humphrey use a form of the Personal Software Process when he balances his checkbook? Does Kent Beck do the simplest thing that will work, getting incremental results and feedback as soon as he can? Do I travel light, and am I tolerant of other people's habits?

Here are some key bits of my background that either drive my methodology style or at least are consistent with it.

I travel light, as you might guess. I use a small laptop, carry a small phone, drive a small car, and see how little luggage I need when traveling. In terms of the eternal tug-of-war between mobility and armor, I am clearly on the side of mobility.

I have lived in many countries and among many cultures and keep finding that each works. This perhaps is the source of my sensitivity to development cultures and why I encourage tolerance in methodologies.

I also like to think very hard about consequences, so that I can give myself room to be sloppy. Thus, I balance the checkbook only when I absolutely have to, doing it in the fastest way possible, just to make sure checks don't bounce. I don't care about absolute accuracy. Once, when I built bookshelves, I worked out the fewest places where I had to be accurate in my cutting (and the most places where I could be sloppy) to get level and sturdy bookshelves.

When I started interviewing project teams, I was prepared to discover that process rigor was the secret to success. I was actually surprised to find that it wasn't. However, after I found that using light methodologies, communicating, and being tolerant were effective, it was natural that I would capitalize on those results.

Beware the methodology author. Your experiences with a methodology may have a lot to do with how well your personal habits align with those of the methodology author.

Seven Principles

Over the years, I have found seven principles that are useful in designing and evaluating methodologies:

1. Interactive, face-to-face communication is the cheapest and fastest channel for exchanging information.

2. Excess methodology weight is costly.

3. Larger teams need heavier methodologies.

4. Greater ceremony is appropriate for projects with greater criticality.

5. Increasing feedback and communication lowers the need for intermediate deliverables.

6. Discipline, skills, and understanding counter process, formality, and documentation.

7. Efficiency is expendable in non-bottleneck activities.

Following is a discussion of each principle.

Principle 1. Interactive, face-to-face communication is the cheapest and fastest channel for exchanging information.

The relative advantages and appropriate uses of warm and cool communications channels were discussed in the last chapter. Generally speaking, we should prefer to use warmer communication channels in software development, since we are interested in reducing the cost of detecting and transferring information.

Principle 1 predicts that people sitting near each other with frequent, easy contact will find it easier to develop software, and the software will be less expensive to develop. As the project size increases and interactive, face-to-face communications become more difficult to arrange, the cost of communication increases, the quality of communication decreases, and the difficulty of developing the software increases.


Figure 4-16. Effectiveness of different communication channels (Repeat of Figure 3-14).

The principle does not say that communication quality decreases to zero, nor does it imply that all software can be developed by a few people sitting in a room. It implies that a methodology author might want to emphasize small groups and personal contact if productivity and cost are key issues. The principle is supported by management research (Plowman 1995, Sillince 1996, among others). [double-check refs]

We also used Principle 1 in the story, "Videotaped Archival Documentation," on page ??? [insert cross ref], which describes documenting a design by videotaping two people discussing that design at a whiteboard.

The principle addresses one particular question: "How do forms of communication affect the cost of detecting and transferring information?"

One could ask other questions to derive other, related principles. For example, it might be interesting to uncover a principle to answer this question: "How do forms of communication affect a sponsor's evaluation of a team's conformance to a contract?" This question would introduce the issue of visibility in a methodology. It should produce a very different result, probably one emphasizing written documents.

Principle 2. Excess methodology weight is costly.

Imagine six people working in a room with osmotic communication, drawing on the printing whiteboard. Their communication is efficient, the bureaucratic load low. Most of their time is spent developing software, the usage manual, and any other documentation artifacts needed with the end product.

[Figure: "What size problem can a given number of people attack, using various methodology weights?" Axes: problem size versus methodology weight; curves for many people using a light, a heavier, and a very heavy methodology.]


Figure 4-17. Effect of adding methodology weight to a large team.

Now ask them to maintain additional intermediate work products: written plans, Gantt charts, requirements documents, analysis documents, design documents, and test plans. In the imagined situation, they are not truly needed by the team for the development. They take time away from development.

Productivity under those conditions decreases. As you add elements to the methodology, you add more things for the team to do, which pulls them away from the meat of software development.

[Figure: "What size problem can a given number of people attack, using various methodology weights?" Axes: problem size versus methodology weight; curves for a few people using a light methodology and a few people using a heavy methodology.]


Figure 4-18. Effect of adding methodology weight to a small team.

In other words, a small team can succeed with a larger problem by using a lighter methodology (Figure 4-18).

Methodology elements add up faster than people expect. A process designer or manager requests a new review or piece of paperwork that should "only take a half hour from time to time." Put a few of these together, and suddenly the designers lose an additional 15-20% of their already cramped week. The additional work items disrupt design flow. Very soon, the designers are trying to get their design thinking done in one- or two-hour blocks, which, as you saw earlier, does not work well.

This is something I often see on projects: designers unable to get the necessary quiet time to do their work because of the burden of paperwork and the high rate of distractions.

This principle contains a catch, though.

If you try to increase productivity by removing more and more methodology elements, you eventually remove those that address code quality. At some point the strategy backfires, and the team spends more time repairing bad work than making progress.

The key word, of course, is excess. Different methodology authors produce different advice as to where "excess" methodology begins. Based on the strengths of people we have discussed so far (being communicating beings and being good citizens), I find that a project can do with a lot less methodology than most managers expect. Jim Highsmith is more explicit about this. His suggestion would be that you start lighter than you think will possibly work!

There are two points to draw from this discussion:

· Adding a "small" amount of bureaucratic burden to a project adds a large cost.

· Some part of the methodology should address the quality of the output.

Principle 3. Larger teams need heavier methodologies.

With only four or six people on the team, it is practical to put them together in a room with printing whiteboards and allow the convection currents of information to bind the ongoing conversation in their cooperative game of invention and communication. After the team size exceeds 8 or 12 people, though, that strategy ceases to be so effective. As it reaches 30-40 people, the team will occupy a floor. At 80 or 100 people, the team will be spread out on multiple floors, in multiple buildings, or in multiple cities.

With each increase in size, it becomes harder for people to know what others are doing and how not to overlap, duplicate, or interfere with each other's work. As the team size increases, so does the need for some form of coordination and communication.
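A rough way to see why the coordination need grows is to count the potential person-to-person communication paths. The quadratic formula below is the standard counting argument, not a finding from the project interviews, but it matches the jumps just described:

    paths(n) = n(n - 1) / 2

    paths(6) = 15    paths(12) = 66    paths(40) = 780    paths(100) = 4,950

Going from a room of 6 to a building of 100 multiplies the potential paths by over 300, which is why some of that informal communication must be replaced by coordination structure.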


Figure 4-17 shows the effect of adding methodology to a large team. With very light methodologies, they work without coordination. As they start to coordinate their work, they become more effective (this is the left half of the curve). Eventually, for any size group, diminishing returns set in and they start to spend more time on the bureaucracy than on developing the system (the right half of the curve).

Principle 3 describes the left half of the curve: "Larger teams need heavier methodologies." The right half of the curve is described by Principle 2, "Excess methodology weight is costly."

Principle 4. Greater ceremony is appropriate for projects with greater criticality.

This principle addresses ceremony and tolerance, as discussed in the second section of this chapter.

A Portfolio of Projects

In the IT department of the Central Bank of Norway, we worked on many kinds of projects.

One was to allow people to order dinners from the cafeteria when they worked late.

One was to provide SQL programming support for staff who were investigating financial investments.

A third was to track all the bank-to-bank transactions in the country.

A fourth was to convert the entire NB system to be Year-2000 safe.

The cost of leaving a fault in the third and fourth systems was quite different from the cost of leaving a fault in the first two. I use the word criticality for this distinction. It was more critical to get the work correct in the latter two than in the former two projects.

Just as communications load affects the appropriate choice of methodology, so does criticality. I have chosen to divide criticality into four categories, according to the loss caused by a defect showing up in operation:

· Loss of comfort. The cafeteria produces lasagne instead of a pizza. At the worst, the person eats from the vending machine.

· Loss of discretionary moneys. Invoicing systems typically fall into this category. If a phone company sends out a billing mistake, the customer phones in and has the bill adjusted.

Many project managers would like to pretend that their project causes more damage than this, but in fact, most systems have good human backup procedures, and mistakes are generally fixed with a phone call. I was surprised to discover that the bank-to-bank transaction tracking system actually fit into this category. Although the numbers involved seemed large to me, they were the sorts of numbers that the banks dealt in all the time, and they had human backup mechanisms to repair computer mistakes.

· Loss of essential moneys. Your company goes bankrupt because of certain errors in the program. At this level of criticality, it is no longer possible to patch up the mistake with a simple phone call.

Very few projects really operate at this level. I was recently surprised to discover two.

One was a system that offered financial transactions over the Web. Each transaction could be repaired by phone, but there were 50,000 subscribers, estimated to become 200,000 in the following year, and a growing set of services being offered. The call-in rate was going to increase by leaps and bounds. The time cost of repairing mistakes already fully consumed the time of one business expert who should have been working on other things and took up almost half of another business expert's time. This company decided that it simply could not keep working as though mistakes were easily repaired.

The second was a system to control a multiton, autonomous vehicle. Once again, the cost of a mistake was not something to be fixed with a phone call and some money. Rather, every mistake of the vehicle could cause very real, permanent, and painful damage.

· Loss of life. Software to control the movement of the rods in a nuclear reactor falls into this category, as do pacemakers and the space shuttle. Typically, members of teams whose programs can kill people know they are working on such a project and are willing to take more care.

As the degree of potential damage increases, it is easy to justify greater development cost to protect against mistakes. In keeping with the second principle, adding methodology adds cost, but in this case, the cost is worth it. The cost goes into defect reduction rather than communications load.

Principle 4 addresses the amount of ceremony that should be used on a project. Recall that ceremony refers to the tightness of the controls used in development and the tolerance permitted. More ceremony means tighter controls and less tolerance.

Consider a team building software for the neighborhood bowling league. The people write a few sentences for each use case, on scraps of paper or in a word processor. They review the use cases by gathering a few people in a room and asking what they think.

Consider, in contrast, a different team building software for a power plant. These people use a particular tool, fill in very particular fields in a common template, keep versions of each use case, and adhere to strong writing style conventions. They review, baseline, change control, and sign off the use cases at several stages in the life cycle.

The second set of use cases is more expensive to develop. The team works that way, though, expecting that fewer mistakes will be made. The team justifies being less tolerant of variation by the added safety of the final result.

Principle 5. Increasing feedback and communication lowers the need for intermediate deliverables.

Recall that a deliverable is a work product that crosses decision boundaries. An intermediate deliverable is one that is passed across decision boundaries within the team. These might include the detailed project plan, refined requirement documents, analysis and design documents, test plans, inter-team dependencies, risk lists, etc.

I refer to them also as "promissory notes," as in:

"I promise that the system will look like these requirements describe."

"I promise that this analysis model will work as the core for the system's design."

"I promise that this design will work well over time." There are two ways to reduce the need for promissory notes:

1. Deliver a working piece of the system quickly enough that the sponsor can tell whether the team understood the requirements properly. Delivering a working piece of the system quickly leads to these other benefits:

· The requirements writers will be able to tell whether the requirements they wrote are actually going to meet the users' needs.

· The team needs fewer requirements reviews and can often simplify the requirements process in other ways.

· The designers can see the effects of their decisions early rather than after many other decisions have been built on top of a mistake.

· Test planning becomes much simpler. Sometimes another intermediate work product, the Test Plan, can be replaced by the running test cases.

2. Reduce the team size, putting everyone close enough together that they can simply tell each other what they are doing instead of writing internal documents to each other.

Note the word internal. The sponsors may still require written documentation of different sorts as part of the external communication needs.

Principle 6. Discipline, skills, and understanding counter process, formality, and documentation.

When Jim Highsmith says, "Don't confuse documentation with understanding," he means that much of the knowledge that binds the project is tacit knowledge, knowledge that people have inside them, not on paper anywhere.

The knowledge base of a project is immense, and much of that knowledge consists of knowing the team's rituals of negotiation, which person knows what information, who contributed heavily in the last release, what pieces of discussion went into certain design decisions, and so on. Even with the best documentation in the world, a new team cannot necessarily just pick up where the previous team left off. The new team will not start making progress until the team members build up their tacit knowledge base.

When referring to "documentation" for a project, be aware that the knowledge that becomes documentation is only a small part of what there is to know. People who specialize in technology transfer know this. As one IBM Fellow put it, "The way to get effective technology transfer is not to transfer the technology itself but to transfer the heads that hold the technology!" ("Jumping Gaps across Time," on page ??? [insert cross ref])

Jim continues, "Process is not discipline." Discipline involves a person choosing to work in a way that requires consistency. Process involves a person following instructions. Of the two, discipline is the more powerful. A person who is choosing to act with consistency and care will have a far better effect on the project than a person who is just following instructions.

The common mistake is in thinking that somehow a process will impart discipline.

Jim's third distinction is, "Don't confuse formality with skill."

Insurance companies are in an unusual situation. We fill in forms, send them to the insurance back office, and receive insurance policies. This is quite amazing. Probably as a consequence of their living in this unusual realm, I have several times been asked by insurance companies to design use case and object-oriented design forms. Their goal, I was told on each occasion, was to make it fool-proof to construct high-quality use cases and OO designs.

Sadly, our world is not built that way. A good designer will read a set of use cases and create an OO design directly, one that improves as he reworks the design over time. No amount of form filling yet replaces this skill. Similarly, a good user interface designer creates much better programs than a mediocre interface designer can.

Figure 4-22 shows a merging of Jim's and my thoughts on these issues.


Figure 4-22. Documentation is not understanding, process is not discipline, formality is not skill.

Jim distinguishes exploratory or adapting activities from optimizing activities. The former, he says, is exemplified by the search for new oil wells.

In searching for a new oil well, one cannot predict what is going to happen. After the oil well is functioning, however, the task is to keep reducing costs in a predictable situation.

In software development, we become more like the optimizing oil company as we become more familiar with the problem to be solved, the team, and the technologies being used. We are more like the exploratory company, operating in an adaptive mode, when we don't know those things.

Light methodologies draw on understanding, discipline, and skill more than on documentation, process, and formality. They are therefore particularly well suited for exploratory situations. The typical heavy methodology, drawing on documentation, process, and formality, is designed for situations in which the team will not have to adapt to changing circumstances but can optimize its costs.

Of the projects I have seen, almost all fit the profile of exploratory situations. This may explain why I have only once seen a project succeed using an optimizing style of methodology. In that exceptional case, the company was still working in the same problem domain and was using the same basic technology, process, and architecture as it had done for several decades.

The characteristics of exploratory and optimizing situations run in opposition to each other. Optimizing projects try to reduce the dependency on tacit knowledge, personal skill, and discipline and therefore rely more on documentation, process, and formality. Exploratory projects, on the other hand, allow people to reduce their dependency on paperwork, processes, and formality by relying more on understanding, discipline, and skill. The two sets draw away from each other.

Jim and I hypothesize that any methodology design will live on the track shown in the figure, drawing either to one set or the other, but not both.

Principle 7. Efficiency is expendable in non-bottleneck activities.

Principle 7 provides guidance in applying concurrent development, and is a key principle in tailoring the Crystal methodologies for different teams in different situations. It is closely related to Eliyahu Goldratt's ideas as expressed in The Goal (Goldratt 1992) and The Theory of Constraints (Goldratt 1990).

To get a start on the principle, imagine a project with five requirements analysts, five Smalltalk programmers, five testers, and one relational database designer (DBA), all of them good at their jobs. Let us assume, for the sake of this example, that the group cannot hire more DBAs. Figure 4-23 shows the relevant part of the situation, the five programmers feeding work to the single DBA.


Figure 4-23. The five Smalltalk programmers feeding work to the one DBA.

The DBA clearly won't be able to keep up with the programmers. This has nothing to do with his skills, it is just that he is overloaded. In Goldratt's terms, the DBA's activity is the bottleneck activity. The speed of this DBA determines the speed of the project.
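To make the arithmetic concrete, here is a toy model of the situation. The weekly rates are invented for illustration only:

    # Five producers feeding one bottleneck worker; invented rates.
    programmers = 5
    produced_per_week = 2       # design units each programmer hands over weekly
    dba_capacity_per_week = 3   # units the single DBA can absorb weekly

    for week in range(1, 5):
        arrived = programmers * produced_per_week * week
        processed = dba_capacity_per_week * week
        print(f"week {week}: backlog at the DBA = {arrived - processed} units")

On these numbers the backlog grows by seven units every week. No amount of extra programmer effort changes that; only the DBA's rate does.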

To make good progress, the team had better get things lined up pretty well for the DBA so that he has the best information possible to do his work. Every slowdown, every bit of rework he does, costs the project directly.

That is quite the opposite story from the Smalltalk programmers. They have a huge amount of excess capacity compared with the DBA.

Faced with this situation, the project manager can do one of two things:

· Send four of the programmers home so that the Smalltalk programmers and the DBA have matched capacities.

· Make use of the programmers' extra capacity.

If he is mostly interested in saving money, then he sends four of the programmers home and lives with the fact that the project is going to progress at the speed of those two solo developers.

If he is interested in getting the project done as quickly as possible, he doesn't send the four Smalltalk programmers home. He takes advantage of their spare capacity.

He has them revise their designs several times, showing the results to users, before they hand over their designs to the DBA. This way, they get feedback that enables them to change their designs before, not after, the DBA goes through his own work.

He also has them start earlier in the requirements-gathering process, so that they can show intermediate results to the users sooner, again getting feedback earlier. He has them spend a bit more time drawing their designs so that the DBA can read them easily.

He does this knowing that he is causing them extra work. He is drawing on their spare capacity.

Figure 4-24 diagrams this second work strategy. In Figure 4-24, you see only one requirements person submitting information to one Smalltalk programmer, who is submitting work to the one DBA. The top two curves are used five times, for the five requirements writers and the five programmers.


Figure 4-24. Bottleneck station starts work higher on the completeness and stability curve than do non-bottleneck stations.

Notice in Figure 4-24 that the Smalltalker starts work as soon as the requirements person has something to hand him, but the DBA waits until the Smalltalker's work is almost complete and quite stable before starting.

Notice also that the DBA is completing work faster than the others. This is a reflection of the fact that the other groups are doing more rework, and hence reaching completeness and stability more slowly. This is necessary because four other groups are submitting work to the DBA. In a balanced situation, the DBA reaches completion five times as fast as the others.

People on a bottleneck activity need to work as efficiently as possible and cannot afford to do much rework. (I say "much rework" because there is always rework in software development; the goal is to reduce the amount of rework.)

Principle 7 has three consequences.

Consequence 1. Do whatever you can to speed up the work at the bottleneck activity.

That result is fairly obvious, except that people often don't do it.

Every project has a bottleneck activity. It moves during the project, but there is always one. In the above example, it is the DBA's work. There are four ways to improve a bottleneck activity. Principle 7 addresses the fourth.

1. Get better people doing that work.

2. Get more people to do that work.

3. Get better tools for the people doing that work.

4. Get the work that feeds that activity to a more complete and stable state before passing it along.

Consequence 2. People at the nonbottleneck activities can work inefficiently without affecting the overall speed of the project!

This is not obvious.

Of course, one way for people to work inefficiently is to take long smoking breaks, surf the Web, and spend time at the water cooler. Those are relatively uninteresting for the project and for designing methodologies.

More interesting is the idea of spending efficiency, trading it for stability.

The nonbottleneck people can spend some of their extra capacity by starting earlier, getting results earlier, doing more rework and doing it earlier, and doing other work that helps the person at the bottleneck activity.

Spending excess capacity for rework is significant for software development because rework is one of the things that causes software projects to take so much time. The users see the results and change their requests; the designers see their algorithm in action and change the design; the testers break the program, and the programmers change the code. In the case of the above example, all of these will cause the DBA rework.

Applying Principle 7 and the diagram of concurrent development (Figure 4-14) to the problem of the five Smalltalkers and one DBA, the project manager can decide that the Smalltalk programmers can work "inefficiently," meaning "doing more rework than they might otherwise," in exchange for making their work more stable earlier. This means that the DBA, to whom rework is expensive, will be given more stable information at the start.

Principle 7 offers a strategy for when and where to use early concurrency, and when and where to delay it. Most projects work from a given amount of money and an available set of people. Principle 7 helps the team members adjust their work to make the most of the people available.

Principle 7 can be used on every project, not just those that are as out of balance as the sample project. Every project has a bottleneck activity. Even when the bottleneck moves, Principle 7 applies to the new configuration of bottleneck and nonbottleneck activities.

Consequence 3. Applying the principle of expendable efficiency yields different methodologies in different situations, even keeping the other principles in place.

Here is a first story to illustrate.

Winifred and Principle 7

Project Winifred did resemble the sample project above. It was the project on which I learned to apply the principle. In the middle of the project, there were about a dozen Smalltalk programmers, four COBOL programmers, and two DBAs. The Smalltalk programmers could revise their designs faster than any of the others. The two DBAs were overloaded, as in the example story. We arranged for the Smalltalkers to work very closely with the requirements writers, getting started as soon as there was information to work from. Applying osmotic and face-to-face communication, rather than documents between them, the Smalltalkers worked by word of mouth, changing their designs as they heard new information from the requirements writers.

The DBAs and COBOL programmers started their work only after the Smalltalkers had a "relatively stable" design that had passed its design review.

I described this use of the principle as the Gold Rush strategy in Surviving Object-Oriented Projects (Cockburn 1998). That book also describes the related use of the Holistic Diversity strategy and examines project Winifred more extensively.

Here is a second story, with a different outcome.

eBucks.com and Principle 7

The company eBucks.com had 15 developers and a dozen business specialists. They also had a backlog of six dozen work initiatives. The programmers were being distracted away from their work several times each day and consequently were making little headway against their backlog.

Gold Rush was exactly the wrong strategy to use in this situation. The programmers had no spare capacity. In fact, programming was the bottleneck activity.

We first took several steps to reduce the distractions hitting the programmers. That was still not enough, given their backlog.

We decided, therefore, that the business specialists would write use cases, business rules, and data descriptions to hand to the programmers.

Note that this strategy appears at first glance to go against a primary idea of this book: maximizing face-to-face communication. However, in this situation, these programmers could not keep information in their heads. They needed the information to reach them in a "sticky" form, so they could refer to it after the conversations.

After the programmers work through the backlog, the bottleneck activity will move, and the company may find it appropriate to move to a more concurrent, conversation-based approach.

Just what they do will depend on where the next bottleneck shows up.

Here is a third story.

Udall and Principle 7

Project Udall had become stuck, with dozens of developers and a large, unworkable design. Four of the senior developers decided to ignore all the other developers and simply restarted their work. They added people to their private workgroup slowly, inviting only the best people to join them.

They reasoned (correctly, as it turns out) that the two bottleneck activities were getting political alignment on design decisions and transferring information from the senior designers' heads to the others.

They decided that it would be more effective to let the others do anything at all, other than program on the system, than to spend key design resources convincing and training them.

This was a most surprising and effective application of the principle of expendable efficiency.

When I interviewed one of the team leads, I asked, "What about all those other people? What did they do?"

The team lead answered, "We let them do whatever they wanted to. Some did nothing, some did small projects to improve their technical skills. It didn't matter, because they wouldn't help the project more by doing anything else."

The restarted project did succeed. In fact, it became a heralded success at that company.

Consequences of the Principles

The above principles work together to help you choose an appropriate size for the team when given the problem, and to choose an appropriate size for the methodology when given the team. Look at some of the consequences of combining the principles:

Consequence 1. Adding people to a project is costly.

People who are supposed to know this sometimes seem unaware of it, so it is worth reviewing.

Imagine forty or fifty people working together. You create teams and add meetings, managers, and secretaries to orchestrate their work.

Although the managers and secretaries protect the programming productivity of the individual developers, their salaries add cost to the project. Also, adding these people adds communication needs, which call for additional methodology elements and overall lowered productivity for the group (Figure 4-19).


Figure 4-19. Reduced effectiveness with increasing communications needs (methodology size).

Consequence 2. Team size increases in large jumps.

The effects of adding people and adding methodology load combine, so that adding "a few" people is not as effective an approach as it might seem. Indeed, my experience hints that to double a group's output, one may need to almost square the number of people on the project! Here is a story to illustrate.

Mythical Man-Month Revisited

Fred Brooks, in the Mythical Man-Month, writes that one may have a project that cannot be delivered in time by even the ten best people in the world. As a consequence, he writes, one may have to use 200 or 300 people.

He explains that there are two effects driving the need for extra people. One is that more people are needed to handle the communications load on the project. The other is that it will not be possible to hire 200 people of the same caliber as the proposed 10. The communications load is compounded by a decrease in talent.
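A toy model shows the shape of the effect, if not Brooks's exact numbers. Suppose each person contributes one unit of output and every potential communication path costs a small overhead; the constants are invented, and only the shape of the curve matters:

    RATE = 1.0        # output contributed per person
    OVERHEAD = 0.02   # output lost per person-to-person path

    def effective_output(n):
        paths = n * (n - 1) / 2
        return n * RATE - OVERHEAD * paths

    for n in (10, 15, 20, 24):
        print(n, "people ->", round(effective_output(n), 1), "units")

Ten people yield 9.1 units, fifteen yield 12.9, and twenty yield 16.2; doubling the ten-person team's output takes roughly 24 people, not 20. On any such curve, useful team sizes come in large jumps.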

Here is a second, more recent story, with a similar outcome.

Six to 24 Programmers

At the start of one fixed-priced project, we estimated that we could deliver the project with six good Smalltalk programmers. That wasn't an option, though. At that time, we couldn't get our hands on six good Smalltalk programmers. To make matters worse, we were given ten novices to train and only two expert programmers to both train them and create code.

During our estimation process, we concluded we would need a staff of 24 programmers of mixed abilities.

Over the course of the project, we eventually had four experts and 20 other programmers with mixed experience. We got our 24 programmers.

We reviewed our assessment at several times during the project, and at the end. Yes, six good Smalltalk programmers would have been sufficient. No, 12 programmers, even 16 programmers of the mixed experience levels we were seeing would not have been sufficient.

The correct jump was from 6 good programmers to 24 programmers of mixed ability.

Consequence 3. Teams should be improved, not enlarged.

Here is a common problem: A manager has a ten-person team that sits close together and achieves high communication rates with little energy.

The manager needs to increase the team's output. He has two choices: add people or keep the team the same size and do something different within the team.

If he increases the team size from 10 to 15, the communications load, communications distances, training, meeting, and documentation needs go up. Most of the money spent on this new group will get spent on communications overhead, without producing more output.

This group is likely to grow again, to 20 people (which will add a heavier communications burden but will at least show improvement in output).

The second strategy, which seems less obvious, is to lock the team size at 10 people (the maximum that can be coordinated through casual coordination) and improve the people on the team.

To improve the individuals on the team, the manager can do any or all of the following:

· Send them to courses to improve their skills.

· Seat them closer together to reduce communications cost.

· Improve their amicability and teamwork.

· Replace some of the people on the team with more talented (and more highly paid) people.

Repeating the strategy over time, the manager will keep finding better and better people who work better and better together.

Notice that in the second scenario, the communications load stays the same, while the team becomes more productive. The organization can afford to pay the people more for their increased contribution. It can, in fact, afford to double their salaries, considering that these 10 are replacing 20! This makes sense. If the pay is good, bureaucratic burden is low, and team members are proud of their output, they will enjoy the place and stay, which is exactly what the organization wants them to do.

Consequence 4. Different methodologies are needed for different projects.

Figure 4-21 shows one way to examine projects to select an appropriate methodology. The attraction of the grid in this figure is that it works from fairly objective indices:

· The number of people being coordinated

· The system criticality

· The project priorities

You can walk into a project work area, count the people being coordinated, and ask for the system criticality and project priorities.

In the figure, the lettering in each box indicates the project characteristics. A "C6" project is one that has six people and may cause loss of comfort; a "D20" project is one that has 20 people and may cause the loss of discretionary monies.


Figure 4-21. Characterizing projects by communication load, criticality, and priorities.
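Because the labels are mechanical, they are easy to generate. Here is a minimal sketch using the four criticality categories named earlier; the function name is invented, and the letter L for life-critical projects does not appear in the examples above but follows the same pattern:

    # Map each criticality category to its grid letter.
    CRITICALITY_LETTER = {
        "comfort": "C",
        "discretionary money": "D",
        "essential money": "E",
        "life": "L",
    }

    def project_cell(criticality, people):
        """Return a grid label such as 'C6' or 'D20'."""
        return CRITICALITY_LETTER[criticality] + str(people)

    print(project_cell("comfort", 6))               # C6
    print(project_cell("discretionary money", 20))  # D20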

In using this grid, you should recognize several things:

· Communication load rises with the number of people. At certain points, it becomes incorrect to run the project in the same way: Six people can work in a room, 20 in close proximity, 40 on a floor, 100 in a building. The coordination mechanisms for the smaller-sized project no longer fit the larger-sized project.

· A project potentially causing a company to go out of business or causing loss of life needs more careful checking than systems only causing loss of comfort or discretionary monies.

· Projects that are prioritized with legal liability concerns will need more care and tracking in the work.

Here is how I once used the grid:

Changing Grid Cells Mid-Project

The banking project I was asked to coordinate at the Central Bank of Norway started as a three-person effort, using the same three people who had done the previous system. I characterized it as a D6 type of project, and planned to more or less just trust the programmers to do a good job.

After a month or so, though, it became clear that we were coordinating large amounts of money and that we should perhaps be more careful about the mistakes we let slip. I moved the project rating to E6, and we spent a week or two fixing the design with respect to fault tolerance, recovery, and race conditions.

After the architect and lead programmer went on paternity leave, we got two new programmers and two testers. At this point, we had seven people, two in Lillehammer, two on the first floor, and one each on the second, third, and fourth floors in Oslo (remember the cost of communicating across floors?). It turned out that this system was actually being developed by two companies, and our team was coordinating its work with a group of 35 developers at a different location in Oslo who were using a different (waterfall) methodology.

It was at this moment that the grid came in handy. I reclassified our project as an E20 project (some mix of the number of people and the geographic dispersion). Paying attention to the methodology principles, I did not add more paperwork to the project but stepped up personal communications, using phone calls and the video link, and increased personal study of the issues affecting the outcome of the project.

The grid characteristics can be used in reverse to help discuss the range of projects for which a particular methodology is applicable.

This is what I do with the Crystal methodology family in Chapter 6. I construct one methodology that might be suitable for projects in the D6 category (Crystal Clear), another that might be suitable for projects in the D20 range (Crystal Yellow), another for D40 category projects (Crystal Orange), and so on. Looking at methodologies in this way, you would say that Extreme Programming is suited for projects in the C4 to E14 categories.

Consequence 5. Lighter methodologies are better, until they run out of steam.

What we should be learning is that a small team with a light methodology can sometimes solve the same problem as a larger team with a heavier methodology. From a project cost point of view, as long as the problem can be solved with ten people in a room, that will be more effective than adding more people.

At some point, though, even the ten best people in the world won't be able to deliver the needed solution in time, and then the team size must jump drastically. At that point, the methodology size will have to jump also (Figure 4-20).


Figure 4-20. Small methodologies are good but run out of steam.

There is no escaping the fact that larger projects require heavier methodologies. What you can escape, though, is using a heavy methodology on a small project. What you can work towards is keeping the methodology as light and nimble as possible for the particular project and team.

"Agile" is a reasonable goal, as long as you recognize that a large-team agile methodology is heavier than a small-team agile methodology.

Consequence 6. Methodologies should be stretched to fit.

Look for the lightest, most "face-to-face" centric methodology that will work for the project. Then stretch the methodology. Jim Highsmith summarizes this with the phrase, "A little less than enough is better than a little more than enough."

A manager of a project with 50 people and the potential for "expensive" damage has two choices:

· He can choose a larger-category methodology (say, E100) and remove the excess weight from it. This is attractive to some managers because it gives them bragging rights: "Yeah, we had to use an E100 methodology for our project!" However, it is unlikely that the team will remove as much as it could, and so the project will go slower and be more expensive than it needs to be.

· He should choose a smaller-category methodology (say, D40) and adapt it up to the project. Although this gives him fewer bragging rights, the team is likely to add fewer irrelevant items to the methodology, and as a consequence the project is more likely to go faster and be less expensive.

XP was first used on D8 types of projects. Over time, people found ways to make it work successfully for more and more people. As a result, I now rate it for E14 projects.

More Principles

We should be able to uncover other principles.

One of the more interesting candidates I recently encountered is the "real options evaluation" model (Sullivan 1999).

In considering the use of financial options theory in software development, Sullivan and his colleagues highlight the "value of information" against the "value of flexibility" (VOI against VOF).

Value of information (VOI) deals with the choice: "Pay to learn, or don't pay if you think you know." The concept of VOI applies to situations in which it is possible to discover information earlier by paying more.

An application of the VOI concept is deciding which prototypes to build on a project.
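To make the VOI choice concrete, here is a hypothetical back-of-the-envelope calculation. The numbers are invented, and the options theory Sullivan's group uses is considerably richer than this simple expected-value view.

p_design_wrong = 0.3     # assumed chance the untested design is wrong
rework_cost    = 100.0   # cost of discovering that late, in some money unit
prototype_cost = 10.0    # cost of a prototype that settles the question now

expected_late_loss = p_design_wrong * rework_cost   # 30.0
# Pay to learn when the prototype costs less than the loss it is
# expected to avoid.
print("build the prototype:", prototype_cost < expected_late_loss)   # True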

Value of flexibility (VOF) deals with the choice: "Pay to not have to decide or don't pay, either because you are sure enough the decision is right, or because the cost of changing your decision later is low." The concept of VOF applies to situations in which it is not possible to discover information earlier.

An application of the VOF concept is deciding how to deal with competing (potential) standards, such as COM versus CORBA.

A second application, which they discuss in their article, is evaluating the use of a spiral development process. They say that using spiral development is a way of betting on a favorable future: if conditions improve at the end of the first iteration, the project continues; if the conditions worsen, the project can be dropped at a controlled cost.

I haven't yet seen these concepts tried explicitly, but they certainly fit well with the notion of software development as a resource-limited cooperative game. They may provide guidance to some process designers and yield a new principle for designing methodologies.

XP Under Glass

Extreme Programming (XP) is an agile methodology that illustrates the ideas in this book very well. Additionally, it is effective, well documented, and controversial. Thus, it makes a wonderful sample methodology to examine. At this point, we finally have enough vocabulary to put it under the methodology microscope.

The short story is that XP scores very high within its area of applicability. It (like all others) needs to be adjusted when applied outside its sweet spot.

XP in a Nutshell

The briefest of reviews of XP is in order, although much has been written about it elsewhere (Beck 1999, Jeffries 2000, XP URL).

Following is a summary, as brief as it would be if given as instructions over the phone or e-mail:

Use only 3-10 programmers. Arrange for one or several customers to be on site to provide ongoing expertise. Everyone works in one room or adjacent rooms, preferably with the workstations clustered, monitors facing outwards in a circle, half as many workstations as programmers.

Do development in three-week periods, or "iterations." Each iteration results in running, tested code of direct use to the customers. The compiled system is rolled out to its end users at the end of each release period, which may be every two to five iterations.

The unit of requirements gathering is the "user story," user-visible functionality that can be developed within one iteration. The customers write the stories for the iteration onto simple index cards. The customer(s) and programmers negotiate what will get done in the next iteration in the following way:

· The programmers estimate the time to complete each card.

· The customers prioritize, alter, and de-scope as needed so that the most valuable stories are most likely to get done in the allotted time period.

The programmers write the tasks for each story on flipcharts on the wall or a whiteboard, estimating the time they will need for each task. Over time, the customers and programmers can reprioritize or de-scope the tasks or stories.
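Mechanically, the negotiation just described is a small selection problem: take the customers' priorities and the programmers' estimates, and fill the iteration with the most valuable stories that fit. Here is a minimal sketch; all names, numbers, and the greedy selection rule are invented for illustration, since XP prescribes the conversation, not any particular code.

from dataclasses import dataclass

@dataclass
class Story:
    title: str
    estimate_days: float   # the programmers' estimate
    priority: int          # the customers' ranking; 1 = most valuable

def plan_iteration(stories: list[Story], capacity_days: float) -> list[Story]:
    chosen, used = [], 0.0
    for story in sorted(stories, key=lambda s: s.priority):
        if used + story.estimate_days <= capacity_days:
            chosen.append(story)
            used += story.estimate_days
    return chosen

cards = [
    Story("Customer searches orders", 4.0, 1),
    Story("Export monthly report", 3.0, 3),
    Story("Reprint an invoice", 2.0, 2),
]
for story in plan_iteration(cards, capacity_days=6.0):
    print(story.title)   # the two most valuable stories that fit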

Development on a story starts with the programmers discussing the story with the expert customer. Because this discussion is guaranteed to take place, the text written on the story card can be very brief, just enough to remind everyone of what the conversation is going to be about. The understanding of the requirements grows through those conversations and through any pictures or documents the people decide they need.

Programmers work in pairs. They follow strict coding standards that they set up at the beginning of the project. They create unit tests for everything they write and make sure that those tests run at 100% every time they check in their code to the mandatory versioning and configuration-management system. They develop in tiny increments of 15 minutes to a few hours long, integrating their code several times a day. At the end of each of these integrations, they ensure that the entire code base passes all unit tests.
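As an illustration of the testing discipline, here is what one tiny increment might look like, written test-first with Python's standard unittest module. The function under test is invented for the example.

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, grown one test at a time."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()   # the whole suite must pass before integrating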

At any time, any two programmers sitting together may change any line of code in the system. In fact, they are supposed to. Any time the two find a section of code that appears hard to understand or overly complex, they are to revise it, constantly simplifying and improving it. At all times, they are to keep the overall design as simple as they can and the code as clear as they can. This constant refactoring is possible because of the extensive unit test suites in place. It is also possible because the programmers rotate pair assignments every day or so, and so knowledge of the changes in the code structure passes through the group via the shifting partnerships.

While the programmers are working, the customers are doing three things: They visit with the programmers to clarify ideas, they write system acceptance tests to be run during and at the end of the iteration, and they select stories to be built for the next iteration. They may be on the project full time or not, as they decide.

The team holds a stand-up meeting every day, in which they describe what they are working on, what is working well for them, and what they might need help with. The meeting is held standing up to keep it short. At the end of each iteration, they hold another meeting in which they review what they did well and what they want to work on next time. They post this list for all to see during the next iteration.

XP prizes four values: communication, simplicity, feedback, and courage. The "courage" value is intended as the courage to go ahead and make improvements to the system at any time.

One person on the team is designated the "coach" for the team. This person reviews with the team members their use of the key practices: use of pair programming and testing, pair rotation, keeping design simple, communicating, and so on.

Dissecting XP

An XP team makes great use of osmotic communication, face-to-face communication, convection currents of information flow, and information radiators on the wall.

The consistent availability of experts means that the delay from question to answer is short. The time and energy cost to discover a needed piece of information is low; the rate of information dispersion is high.

Feedback is rapid. The customers get quick feedback as to the implementation implications of their requirements requests during the planning session. They see running code within days and can accordingly adjust their views on what should really be programmed. The programmers get immediate correction on the code they enter, because another person sitting next to them is watching what they type and because there are unit tests for each function they write. When changing the design, they get rapid feedback from the extensive unit and acceptance tests. They get fairly rapid feedback on their process, about every few weeks, through the iteration cycles.

XP uses the human strength of communication. Through pair work and rapid feedback, it compensates for the human tendency to make mistakes.

XP is a high-discipline methodology. It calls for tight adherence to strict coding and design standards, strong unit test suites that must pass at all times, good acceptance tests, constant working in pairs, vigilance in keeping the design simple, and aggressive refactoring.

These disciplines are protected through two mechanisms and are exposed in three places.

It turns out (much to the surprise of many) that most people like working in pairs. It provides pride-in-work, because they get more done in less time, with fewer errors, and usually end up with a better design than if they were working alone. They like this. As a result, they do it voluntarily. While in pairs, they help each other write tests and follow coding standards. Thus, pair programming helps hold unit-testing in place.

Having a coach helps keep the other disciplines in place. Reports from various groups indicate to me that even better than having a coach is having several very enthusiastic XP practitioners on the team. This is because the coach is an external force, while enthusiastic teammates create peer pressure: an internal, and hence more powerful, force.

The places where XP is still exposed with respect to being high-discipline are coding standards, acceptance tests, and aggressive refactoring. Of those, aggressive refactoring probably will remain the most difficult, because it requires consistency, energy, and courage, and no mechanisms in the methodology reinforce it.

There are some high-ceremony (low-tolerance) standards. The policy standards include the use of iterations. Design and programming are done in tiny increments of hours or a few days. Planning and development cycles are two to four weeks, releases one to four months. The testing policy standard is that all unit tests run at 100% for all checked-in code. A policy standard states that the team is to be colocated, with a strong recommendation toward the "caves and commons" seating (Auer 2001).

XP includes within its definition a selection of techniques that the people need to learn: the planning game, the daily stand-up meeting, refactoring, and test-first development.

XP is designed for small, colocated teams aiming to get quality and productivity as high as possible. It does this through the use of rich, short, informal communication paths with emphasis on skill, discipline, and understanding at the personal level, minimizing all intermediate work products.

Adjusting XP

Two traits of XP are controversial: absence of documentation and the restriction to small teams.

Absence of Documentation

We can explore the documentation issue in terms of the cooperative game. XP targets success at the primary goal: delivering software.

It targets succeeding at the secondary goal, setting up for the next game, solely through the tacit knowledge built up within the project team.

The knowledge that binds the group and the design is tacit knowledge: the sum of knowledge of all the people on the team. The tacit knowledge is communicated through osmotic communication, rotation in the pair programming, clear, simple code, and extensive unit tests. People joining the team gain this tacit knowledge by pair programming with experienced people in rotation.

While the attention to tacit knowledge is good, sometimes the sponsors want other deliverables besides the system in operation. They may want usage manuals or paperwork describing the system's design. Even if the customers don't need these things, the organization's executives are likely to want to protect themselves against the eventual disappearance of the team's tacit knowledge.

Although it is not likely that everyone will quit at one time, it is likely that the organization will reduce staff size after the main development period of the project. At that point the tacit knowledge starts to be in jeopardy: If several people leave in quick succession, the new people will not have had enough time to absorb the project details adequately. At that point, the project has neither documents nor tacit knowledge.

XP actually contains a mechanism to deal with this situation: the planning game. It just happens that XP projects to date have not made use of the planning game for this purpose.

In the planning game, the sponsors can write story cards that call for creating documentation instead of new program features. During the planning game, the developers estimate the time it will take to generate the documentation, and the customers prioritize those documentation cards against the stories specifying new features.

Using the planning game in this way, the sponsors can properly play the two competing subgames: that of delivering software quickly and that of protecting the group's knowledge.
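In terms of the planning sketch from "XP in a Nutshell" (reusing the illustrative Story and plan_iteration definitions from there), a documentation request is simply one more card competing on the same estimate-and-priority footing:

cards = [
    Story("Customer searches orders", 4.0, 1),
    Story("Write design overview for maintainers", 3.0, 2),
    Story("Export monthly report", 3.0, 3),
]
for story in plan_iteration(cards, capacity_days=7.0):
    print(story.title)   # the design overview wins a slot this iteration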

The above discussion is hypothetical. I have not seen it used. The reason may be, and this is the hazard to the scheme, that the people who are requesting new functionality have great allegiance to the current project and little or no allegiance to future, possible projects. In other words, they don't have a "duration of accountability" that permits them to adequately balance the priority of new functionality against documentation. Resolving this problem will probably remain difficult.

An XP team might consider less common and less expensive ways to document the system design, such as video documentation (as described in Chapter 3).

Restriction to Small Team

Many people exclaim: "XP doesn't scale!"

At this point, you should review, if you don't recall it, the graphs of problem size versus team size in the last section.

A well-structured, 10-programmer team using XP properly may be able to solve a larger problem than a 30-person team using a larger methodology. In fact, on the first official XP project, an 8-person XP team delivered in one year what the previous, 26-person team had failed to deliver in the previous year. So be aware of what the statement "XP doesn't scale" really means. XP scales quite well in problem size (up to its limit); at the same time, it does not scale in staff size.

XP, as written, has been demonstrated on projects with up to 12 programmers and four on-site customers. It may have trouble with larger teams due to its reliance on tacit knowledge. It is difficult to build extensive tacit knowledge without good osmotic communication, and that is hard to do with more people than conveniently fit in a room. A larger project team trying XP will have to adjust the teaming structures, interfaces, and use of documentation to accommodate the greater coordination needs of the larger group and the thinner communication lines.

I leave it as an exercise to the inventive practitioner to experiment with these modifications to XP.

Why Methodology at All?

At this point it is appropriate to review the reasons for spending so much energy on methodologies at all, because they are the cause of so much argument and frustration the world over.

A methodology addresses "how we work around here." As such, it can serve several uses:

1. Introducing new people to the process

"Hi, how do we work around here?" is a natural question for new team members to ask. It is helpful to have something available so they can learn their place in the organization. Methodology in a Drawer

On my first hardware design project, my team leader told me,

"We draw the gates and ICs on these D-sized sheets of paper, name at the bottom left. We use only symmetric clocks, triggering on the rising edge. We put our drawings in the drafting department's cabinet, second drawer from the top. Let me know before you do that, though, and we'll schedule a design review..."

Even experienced people coming onto the project need to know how to play into the process in action.

2. Substituting people

Although people are not plug-replaceable, they often do need to be replaced.

Methodology on the Job

A colleague was being hired by a contracting company he didn't know, to do some proposal work in a field he didn't know, for a client he didn't know.

The contract lead sat with him for two days reviewing the company's methodology: who produced what, how the work products were structured, what standards were needed, what his priorities should be, who he would talk to.

I found this an impressive use of methodology. My colleague walked onto the project and was useful in less than four hours, even though so much of the work was new to him. Contracting companies make the most use of this aspect of methodologies.

3. Delineating responsibilities

A methodology indicates what is not part of a person's job. Thus, XP states that decisions about a story's priority belong to the customer, not the programmer; design estimates are made by the programmers, not the customer.

4. Impressing the sponsors

This force drives construction of thick methodology manuals.

Consider two companies bidding to do work for you. The first says, "We have this carefully thought through and documented process that we have used many times. Here it is in these boxes of binders."

The second says, "We sit close together and trade notes, without writing anything down. In particular, we don't need to write down our methodology, because we are all responsible individuals."

Which would you hire?

The force plays on the natural reflex that a heavier and more precisely choreographed methodology is "safer." It is a non-negligible factor in the awarding of contracts, even if the process used on the job is not the same as the one that is outlined in the manuals.

5. Demonstrating visible progress

Related to impressing the sponsors, the purpose of the methodology may be to allow the contractors to show their sponsors what they have been doing. In the methodology my colleague was being taught, a key element was to produce something visible every single day so that the sponsors would know that they had been "making progress."

Exercise for the reader: Reconsidering XP in this light, ask yourself what an XP team could show every single day to demonstrate visible progress to the sponsors.

6. Curriculum for education

After a methodology names techniques and standards for people to use, courses can be found or designed that teach skills around those techniques and standards.

For example, the people can be sent, according to their job responsibilities, to develop skills in writing use cases, facilitating meetings, semantic modeling, programming, and the use of various tools.

People can be sent to learn standards that will be used. The organization might center a class around the subset of UML it expects to use, or perhaps a variation for real-time systems.

Evaluating a Methodology

In light of the above, how might you evaluate a methodology?

You would first ask why the methodology exists, and then what game you are playing. Based on those answers, you might evaluate the methodology for:

· How rapidly you can substitute or train people

· How great an effect it has on the sales process

· How much freedom it gives people (or how constraining it is)

· How fast it allows people to respond to changing situations

· How well it protects your organization from lawsuits or other damage

You have undoubtedly noticed that the principles of methodology design presented in this chapter are oriented toward designing methodologies whose priorities are being productive and responsive to change.

I leave as an exercise for another author to capture the principles for methodologies that enhance sales, substitutability, and safety from lawsuits.

And What Should I Do Tomorrow?

Start by recognizing that no single methodology definition can possibly suit all projects. Don't even think that the Rational Unified Process, Unified Process, or the Something Else methodology will fit your project out of the box. If you follow a methodology out of the box, you will have one that fits some project in the world, but probably not yours.

Practice noticing how the seven principles of methodology design relate to your project:

· Look for where your team could benefit from using warmer communications channels and where cooler ones are needed.

· Identify the bottleneck activities on your project.

· Track them as they change.

· Invent ways to utilize some other group's excess capacity to streamline the bottleneck group's work or to reduce uncertainty.

· Reduce the internal deliverables on your project:

· Arrange for higher-bandwidth communication channels between the developers and opportunities for rapid feedback, and you will find that some of the promissory notes are no longer really needed.

· Find the bottleneck activities, and see if you can trade efficiency elsewhere for increased productivity at the bottleneck station.

· Find a place where lightening the methodology would actually cause damage. Think about what might be an alternative.

· Review the list of purposes of a methodology. Evaluate the purpose of your group's methodology, and then rank its effectiveness with respect to that purpose.

· Practice naming the scope and elements of your methodology and other methodologies. Observe how much they differ due to addressing different scopes or different priorities.

· Look at the different methodologies in use on different projects, and evaluate them according to how they address their different project sizes.

· Experiment with the difference between problem size and project size.

· Can you think of a project that had more people than it needed?

· Can you think of a difficult problem that was turned into an easy problem through the application of some particular point of view?

Level 2 readers:

· Add these ideas to your bag of tricks.

· Learn where to apply, adjust, and drop them.

Level 3 readers: See if you can explain these ideas to someone else.
