
4.2 EFFORT ESTIMATION

At Infosys, estimation generally takes place after analysis. That is, when a project manager estimates the effort, the requirements are well understood. The business processes are organized to support this approach. For example, the requirement phase is sometimes executed as a separate project from the software development project.

At Infosys, multiple estimation approaches have been proposed, some of which are discussed here. A project manager can choose whichever estimation approach suits the nature of the work. Sometimes, a project manager may estimate using multiple methods, either to validate the estimate from the primary method or to reduce risk, particularly when past data from similar projects are limited.

4.2.1 The Bottom-up Estimation Approach

Because the types of projects undertaken at Infosys vary substantially, the bottom-up approach is preferred and recommended. The company employs a task unit approach,1 although some of the limitations of this strategy have been overcome through the use of past data and the process capability baseline (see Chapter 2).

In the task unit approach, the project manager first divides the software under development into major programs (or units). Each program unit is then classified as simple, medium, or complex based on certain criteria. For each classification unit, the project manager defines a standard effort for coding and self-testing (together called the build effort). This standard build effort can be based on past data from a similar project, from the internal guidelines available, or some combination of these.

Once the number of units in the three categories of complexity is known and the estimated build effort for each program is selected, the total effort for the build phase of the project is known. From the build effort, the effort required for the other phases and activities is determined as a percentage of the coding effort. From the process capability baseline or the process database, the distribution of effort in a project is known. The project manager uses this distribution to determine the effort for other phases and activities. From these estimates, the total effort for the project is obtained.

This approach lends itself to a judicious mixture of experience and data. If suitable data are not available (for example, if you're launching a new type of project), you can estimate the build effort by experience after you analyze the project and when you know the various program units. With this estimate available, you can obtain the estimate for other activities by working with the effort distribution data obtained from past projects. This strategy even accounts for activities that are sometimes difficult to enumerate early but do consume effort; in the effort distribution for a project, the "other" category is frequently used to handle miscellaneous tasks.

The procedure for estimation can be summarized as the following sequence of steps:

1.       Identify programs in the system and classify them as simple, medium, or complex (S/M/C). As much as possible, use either the provided standard definitions or definitions from past projects.

2.       If a project-specific baseline exists, get the average build effort for S/M/C programs from the baseline.

3.       If a project-specific baseline does not exist, use project type, technology, language, and other attributes to look for similar projects in the process database. Use data from these projects to define the build effort of S/M/C programs.

4.       If no similar project exists in the process database and no project-specific baseline exists, use the average build effort for S/M/C programs from the general process capability baseline.

5.       Use project-specific factors to refine the build effort for S/M/C programs.

6.       Get the total build effort using the build effort of S/M/C programs and the counts for them.

7.       Using the effort distribution given in the capability baseline or for similar projects given in the process database, estimate the effort for other tasks and the total effort.

8.       Refine the estimates based on project-specific factors.
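The arithmetic behind steps 6 and 7 is simple enough to sketch. The following is a minimal illustration in Python; the build-effort figures, program counts, and effort distribution are invented placeholders for numbers that would actually come from the process database or the process capability baseline.

```python
# Illustrative bottom-up estimation. All numbers are hypothetical stand-ins
# for data from the process database or process capability baseline.

avg_build_effort = {"simple": 2.0, "medium": 5.0, "complex": 9.0}  # person-days
program_counts = {"simple": 12, "medium": 8, "complex": 5}         # from step 1

# Hypothetical effort distribution (fractions of total project effort)
effort_distribution = {
    "design": 0.15,
    "build": 0.40,
    "testing": 0.20,
    "project management": 0.10,
    "others": 0.15,
}

# Step 6: total build effort from per-category build effort and counts
total_build = sum(avg_build_effort[c] * n for c, n in program_counts.items())

# Step 7: scale up to total effort using the build share, then distribute
total_effort = total_build / effort_distribution["build"]
phase_effort = {ph: frac * total_effort for ph, frac in effort_distribution.items()}

print(f"Build effort : {total_build:.0f} person-days")
print(f"Total effort : {total_effort:.0f} person-days")
for phase, effort in phase_effort.items():
    print(f"  {phase:<20}: {effort:.0f} person-days")
```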

This procedure uses the process database and process capability baseline, which are discussed in Chapter 2. As mentioned earlier, if many projects of a type are being executed, you can build a project-specific capability baseline. Such baselines are similar to the general baselines but use only data from specific projects. These baselines have been found to be the best for predicting effort for another project of that type. Hence, for estimation, their use is preferred.

Because many factors can affect the effort required for a project, it is essential that estimates account for project-specific factors. Instead of classifying parameters into different levels and then determining the effect on the effort requirement, the approach outlined here lets the project manager determine the impact of project-specific factors on the estimate. Project managers can make this adjustment using their experience, the experience of the team members, or data from projects found in the process database.

Note that this method of classifying programs into a few categories and using an average build effort for each category is followed for overall estimation. In detailed scheduling, however, in which a project manager assigns each unit to a team member for coding and budgets time for the activity, the characteristics of the unit are taken into account to give it more or less time than the average.

4.2.2 The Top-Down Estimation Approach

Like any top-down approach, the Infosys approach starts with an estimate of the size of the software in function points. The function points can be counted using standard function point counting rules. Alternatively, if the size estimate is known in terms of LOC, it can be converted into function points.

In addition to the size estimate, a top-down approach requires an estimate of productivity. The basic approach is to start with productivity levels of similar projects (data for which is available in the process database) or with standard productivity figures (data for which is available in the process capability baseline), and then to adjust those levels, if needed, to suit the project. The productivity estimate is then used to calculate the overall effort estimate. From the overall effort estimate, estimates for the various phases are derived by using the percentage distributions. (These distributions, as in the bottom-up approach, are obtained from the process database or the capability baseline.)

To summarize, the overall approach for top-down estimation involves the following steps:

1.       Get the estimate of the total size of the software in function points.

2.       Using the productivity data from the project-specific capability baseline, from the general process capability baseline, or from similar projects, fix the productivity level for the project.

3.       Obtain the overall effort estimate from the productivity and size estimates.

4.       Use effort distribution data from the process capability baselines or similar projects to estimate the effort for the various phases.

5.       Refine the estimates, taking project-specific factors into consideration.
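As a rough sketch, the top-down calculation reduces to a few lines of Python. The size, productivity, and distribution figures below are hypothetical stand-ins for data from the process database or capability baseline.

```python
# Illustrative top-down estimation; all inputs are hypothetical.
size_fp = 400            # step 1: estimated size in function points
productivity = 12.0      # step 2: FP per person-month, from baselines or similar projects

total_effort_pm = size_fp / productivity   # step 3: overall effort (person-months)

# Step 4: distribute across phases using baseline percentages (hypothetical)
phase_distribution = {"design": 0.20, "build": 0.45, "testing": 0.20, "others": 0.15}
for phase, frac in phase_distribution.items():
    print(f"{phase:<8}: {frac * total_effort_pm:.1f} person-months")

print(f"Total   : {total_effort_pm:.1f} person-months")
# Step 5, refinement for project-specific factors, is a judgment call.
```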

Like the bottom-up estimation, the top-down approach allows the estimates to be refined using project-specific factors. This allowance, without actually defining these factors, acknowledges that each project is unique and may have some characteristics that do not exist in other projects. It may not be possible to enumerate these characteristics or formally model their effects on productivity. Hence, it is left to the project manager to decide which factors should be considered and how they will affect the project.

4.2.3 The Use Case Points Approach

The use case points approach employed at Infosys is based on an approach from Rational and is similar to the function point method. This approach can be applied if use cases are used for requirements specification. The basic steps in this approach are as follows.

1.       Classify each use case as simple, medium, or complex. The basis of this classification is the number of transactions in a use case, including secondary scenarios. A transaction is defined as an atomic set of activities that is either performed entirely or not at all. A simple use case has three or fewer transactions, a medium use case has four to seven transactions, and a complex use case has more than seven transactions. A simple use case is assigned a factor of 5, a medium use case a factor of 10, and a complex use case a factor of 15. Table 4.1 gives this classification and the factors.

2.       Obtain the total unadjusted use case points (UUCPs) as a weighted sum of factors for the use cases in the application. That is, for each of the three complexity classes, first obtain the product of the number of use cases of a particular complexity and the factor for that complexity. The sum of the three products is the number of UUCPs for the application.

3.       Adjust the raw UUCP to reflect the project's complexity and the experience of the people on the project. To do this, first compute the technical complexity factor (TCF) by reviewing the factors given in Table 4.2 and rating each factor from 0 to 5. A rating of 0 means that the factor is irrelevant for this project; 5 means it is essential. For each factor, multiply its rating by its weight from the table and add these numbers to get the TFactor. Obtain the TCF using this equation:

TCF = 0.6 + (0.01 * TFactor)

Table 4.1. Use case classification and factors

Use Case Type | Description              | Factor
Simple        | 3 or fewer transactions  | 5
Medium        | 4 to 7 transactions      | 10
Complex       | More than 7 transactions | 15

Table 4.2. Technical complexity factors

No. | Factor                                         | Weight
1   | Distributed system                             | 2
2   | Response or throughput performance objectives  | 1
3   | End-user efficiency (online)                   | 1
4   | Complex internal processing                    | 1
5   | Code must be reusable                          | 1
6   | Easy to install                                | 0.5
7   | Easy to use                                    | 0.5
8   | Portable                                       | 2
9   | Easy to change                                 | 1
10  | Concurrent                                     | 1
11  | Includes special security features             | 1
12  | Provides direct access for third parties       | 1
13  | Special user training facilities required      | 1

4.       Similarly, compute the environment factor (EF) by going through Table 4.3 and rating each factor from 0 to 5. For experience-related factors, 0 means no experience in the subject, 5 means expert, and 3 means average. For motivation, 0 means no motivation on the project, 5 means high motivation, and 3 means average. For the stability of requirements, 0 means extremely unstable requirements, 5 means unchanging requirements, and 3 means average. For part-time workers, 0 means no part-time technical staff, 5 means all part-time staff, and 3 means average. For programming language difficulty, 0 means easy-to-use programming language, 5 means very difficult programming language, and 3 means average. The weighted sum gives the EFactor, from which the EF is obtained by the following equation:

EF = 1.4 - (0.03 * EFactor)

Table 4.3. Environment factors

No. | Factor                         | Weight
1   | Familiar with Internet process | 1.5
2   | Application experience         | 0.5
3   | Object-oriented experience     | 1
4   | Lead analyst capability        | 0.5
5   | Motivation                     | 1
6   | Stable requirements            | 2
7   | Part-time workers              | -1
8   | Difficult programming language | -1

5.       Using these two factors, compute the final use case points (UCP) as follows:

UCP = UUCP * TCF * EF

For effort estimation, assign, on average, 20 person-hours per UCP for the entire life cycle. This gives a rough estimate, which can be refined as follows. Count how many of the factor ratings are less than 3 and how many are greater than 3. If the number of factors rated below 3 is small, 20 person-hours per UCP is suitable; if it is large, use 28 person-hours per UCP. In other words, the range is 20 to 28 person-hours per UCP, and the project manager can decide which value to use depending on the various factors.
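The full chain of calculations is easy to mechanize. Here is a compact sketch implementing the method as described above; the weights come from Tables 4.2 and 4.3, and for concreteness the counts and ratings plugged in are the ones the ACIC project uses later in section 4.2.5.

```python
# Use case points, as described above. Weights follow Tables 4.2 and 4.3
# (note the negative weights on the last two environment factors).
TECH_WEIGHTS = [2, 1, 1, 1, 1, 0.5, 0.5, 2, 1, 1, 1, 1, 1]   # Table 4.2
ENV_WEIGHTS = [1.5, 0.5, 1, 0.5, 1, 2, -1, -1]                # Table 4.3

def use_case_points(n_simple, n_medium, n_complex, tech_ratings, env_ratings):
    uucp = 5 * n_simple + 10 * n_medium + 15 * n_complex      # step 2
    tfactor = sum(w * r for w, r in zip(TECH_WEIGHTS, tech_ratings))
    tcf = 0.6 + 0.01 * tfactor                                # step 3
    efactor = sum(w * r for w, r in zip(ENV_WEIGHTS, env_ratings))
    ef = 1.4 - 0.03 * efactor                                 # step 4
    return uucp * tcf * ef                                    # step 5

# Counts and ratings from the ACIC project (section 4.2.5)
ucp = use_case_points(5, 9, 12,
                      tech_ratings=[4, 3, 5, 3, 4, 5, 5, 0, 4, 1, 2, 0, 5],
                      env_ratings=[3, 1, 3, 4, 5, 5, 0, 3])
print(f"UCP = {ucp:.1f}")                                     # 218.3
print(f"Effort ~ {ucp * 20:.0f} to {ucp * 28:.0f} person-hours")
```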

4.2.4 Effectiveness of the Overall Approach

The common way to analyze the effectiveness of an estimation approach is to see how the estimated effort compares with the actual effort. As discussed earlier, this comparison gives only a general idea of the accuracy of the estimates; it does not indicate how optimal the estimates are. To gain that information, you must study the effects of estimates on programmers (for example, whether they were "stretched" or were "underutilized"). Nevertheless, a comparison of actual effort expended and estimated effort does give an idea of the effectiveness of the estimation method.

For completed projects, as discussed in Chapter 3, the process database includes information on the estimated effort as well as the actual effort. Figure 4.1 shows the scatter plot of estimated effort and actual effort for some of the completed development projects.

Figure 4.1. Actual versus estimated effort


As the plot shows, the estimation approach works quite well; most of the data points are close to the 45-degree line in the graph (if all estimates matched the actual effort, all points would fall on the 45-degree line). The data also show that more than 50% of the projects finished within 25% of the estimated effort. Nevertheless, the data indicate that the estimates are usually lower than the actual effort; note that most of the points are above the 45-degree line rather than below it. That is, people tend to underestimate more often than they overestimate, a tendency that afflicts the software industry in general. On average, the actual effort was 25% higher than the estimate. Overall, although there is room for improvement, the estimation approach is reasonably effective.

4.2.5 Effort Estimate of the ACIC Project

Here we illustrate the estimation approach by showing its application on the ACIC project. Two other examples can be found in my earlier book.10 The ACIC project employs the use-case-driven approach. Hence, the main decomposition is in terms of use cases and not in terms of modules. To classify the use cases, the project manager used the classification criteria. Table 4.4 lists the 26 use cases along with their complexity.

To estimate the build effort for the different types of use cases, the ACIC project manager used data from the Synergy project, whose process database entry is given in Chapter 3. The Synergy project had 21 simple, 11 medium, and 8 complex use cases. The detailed build data for these use cases was used to estimate the average build efforts. (The total build effort was about 143 person-days. With average build efforts of 1 person-day, 5 person-days, and 8 person-days, respectively, for simple, medium, and complex use cases, the total comes to 140, a number that is reasonably close to the actual.) Table 4.5 shows the average build effort for each type of use case and the total build effort.

Table 4.4. Use cases of the ACIC project

No. | Description                                           | Complexity
1   | Navigate Screen                                       | Complex
2   | Update Personal Details                               | Medium
3   | Add Address                                           | Medium
4   | Update Address                                        | Complex
5   | Delete Address                                        | Complex
6   | Add Telephone Number                                  | Medium
7   | Update Telephone Number                               | Complex
8   | Delete Telephone Number                               | Complex
9   | Add E-mail                                            | Medium
10  | Update E-mail                                         | Medium
11  | Delete E-mail                                         | Medium
12  | Update Employment Details of a Party                  | Medium
13  | Update Financial Details of a Party                   | Medium
14  | Update Details of an Account                          | Medium
15  | Maintain Activities of an Account                     | Complex
16  | Maintain Memos of an Account                          | Simple
17  | View History of Party Details                         | Complex
18  | View History of Account Details                       | Complex
19  | View History of Option Level and Service Options      | Simple
20  | View History of Activities and Memos                  | Simple
21  | View History of Roles                                 | Complex
22  | View Account Details                                  | Simple
23  | View Holdings of an Account                           | Complex
24  | View Pending Orders of an Account                     | Complex
25  | Close/Reactivate Account                              | Simple
26  | Make Intelligent Update to Business Partners of ACIC  | Complex

To estimate the effort distribution among the stages, the project manager used the distribution found in the Synergy project. Because the earlier project did not have a requirements phase, the distribution had to be modified. Table 4.6 gives the estimate for each phase and for the total.

Table 4.5. Build effort estimate for the ACIC project

Use Case Type     | Effort per Use Case (person-days) | Number of Units | Total Build Effort (person-days)
Simple use cases  | 1                                 | 5               | 5
Medium use cases  | 5                                 | 9               | 45
Complex use cases | 8                                 | 12              | 96
Total             |                                   |                 | 146

In this project, in addition to estimating in this bottom-up manner, the project manager employed the use case points methodology. As described earlier, first the UUCPs are determined from the use cases by assigning 5 points to each simple use case, 10 points to each medium use case, and 15 points to each complex use case. The number of simple, medium, and complex use cases was 5, 9, and 12, respectively, so this translates to

UUCP = 5 * 5 + 9 * 10 + 12 * 15 = 295

Table 4.6. Phase-wise effort estimate for the ACIC project

Activity                 | Person-days | % of Total Effort
Requirements             | 50          | 10
Design                   | 60          | 12
Build                    | 146         | 29
Integration testing      | 35          | 7
Regression testing       | 10          | 2
Acceptance testing       | 30          | 6
Project management       | 75          | 15
Configuration management | 16          | 3
Training                 | 50          | 10
Others                   | 40          | 6
Total estimated effort   | 501         | 100

To take the various factors into account, the ACIC project manager first assigned ratings to the factors related to the complexity of the technology and obtained the technical complexity factor. He chose the following ratings (in the order given in Table 4.2): 4, 3, 5, 3, 4, 5, 5, 0, 4, 1, 2, 0, and 5, resulting in a TFactor of 40 (8 + 3 + 5 + 3 + 4 + 2.5 + 2.5 + 0 + 4 + 1 + 2 + 0 + 5) and a TCF of 1.0. Next, he computed the environment factor. He assigned the following ratings to the environment factors: 3, 1, 3, 4, 5, 5, 0, and 3; the resulting EFactor was 22 (4.5 + 0.5 + 3 + 2 + 5 + 10 + 0 - 3), giving an EF of 0.74. From these, he calculated the total use case points as

UCP = 295 * 1.0 * 0.74 = 218.3

Using the standard effort figure of 20 person-hours per UCP, he got the effort estimate as

218 * 20 = 4,360 person-hours = 499 person-days (at 8.75 hrs/day)

or

513 person-days (at 8.5 hrs/day)

These estimates were remarkably close to the earlier estimate, increasing the project manager's confidence in the estimation. (As it turns out, the estimates for this project were indeed highly accurate, as you will see in the closure report given in Chapter 12. Furthermore, at all the milestones, the effort overrun, as compared to planned, was minuscule, as you will see in the milestone analysis given in Chapter 11.)

In this project, as mentioned earlier, the iterative process of RUP was used. Because the phases of design, analysis, and build were spread over many iterations, a phase-wise effort estimate, by itself, would not have provided a direct input for planning. For planning, the project manager had to estimate the effort for the various iterations. To obtain this, he started with the overall estimate as determined earlier. The estimate for requirements was broken into project initiation and inception phases. The effort for design, build, and test was broken into elaboration and construction, based on the use cases chosen in the various iterations and the guidelines given in the RUP methodology. The project management, CM, and other costs remained the same. Table 4.7 shows the distribution of effort by iterations.

Table 4.7. Effort estimate by iteration for the ACIC project

Iteration                       | Person-days | % of Total Effort
Project initiation              | 25          | 5
Inception phase                 | 24          | 5
Elaboration phase: Iteration 1  | 45          | 9
Elaboration phase: Iteration 2  | 34          | 7
Construction phase: Iteration 1 | 27          | 5
Construction phase: Iteration 2 | 24          | 5
Construction phase: Iteration 3 | 21          | 4
Transition phase                | 110         | 22
Project closure                 | 10          | 2
Project management              | 75          | 15
Configuration management        | 16          | 3
Training                        | 50          | 10
Others                          | 40          | 8
Total estimated effort          | 501         | 100



4.3 SCHEDULING

The scheduling activity at Infosys can be broken into two subactivities: determining the overall schedule (the project duration) with major milestones, and developing the detailed schedule of the various tasks.

4.3.1 Overall Scheduling

As discussed earlier in the chapter, you can gain some flexibility in determining the schedule by controlling the staffing level, but this flexibility is limited. Because the flexibility is limited, building strict guidelines for scheduling may not be desirable; strict guidelines would forfeit the flexibility that could otherwise be passed on to the project or the customer. Furthermore, the project schedule is usually determined in the larger context of business plans, which impose some schedule requirements. Whenever possible, you should exploit your schedule flexibility to satisfy such requirements. One method is to use scheduling guidelines more for checking the feasibility of a schedule than for determining the schedule itself.

Figure 4.2 shows the scatter plot of the schedule and effort for some of the completed development projects at Infosys, along with a nonlinear regression curve fit for the scatter plot.

Figure 4.2. Schedule as a function of effort


The equation of the curve in Figure 4.2 is

schedule = 23.46 * (effort)^0.313

Figure 4.3. Manpower ramp-up in a typical project


From the distribution of the points, it is evident that schedule is not a function solely of effort. The schedule computed from this equation can, however, be used as a guideline or a check of the reasonableness of a schedule, which might be decided based on other factors. Similarly, the schedule and effort data from similar projects can be used to check the reasonableness of any proposed schedule.

Project managers often use a rule of thumb, called the square root check, to check the schedule of medium-sized projects. The principle is that the proposed schedule should be around the square root of the total effort in person-months; the schedule can be met if

resources numbering about the square root of the effort (in person-months) are assigned to the project. For example, if the effort estimate is 50 person-months, a schedule of about 7 to 8 months will be suitable, with about 7 to 8 full-time resources.
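Both the regression fit and the square root rule make quick feasibility checks. A small sketch follows; the units for the regression (effort in person-days, schedule in calendar days) are an assumption on my part, chosen because they make the two checks roughly agree.

```python
import math

def sqrt_check(effort_person_months):
    """Square root rule: schedule in months (and team size) ~ sqrt(effort)."""
    return math.sqrt(effort_person_months)

def regression_schedule(effort_person_days):
    """Curve fit from Figure 4.2: schedule = 23.46 * effort^0.313.
    Units assumed here: effort in person-days, schedule in calendar days."""
    return 23.46 * effort_person_days ** 0.313

effort_pm = 50                       # the example from the text
months = sqrt_check(effort_pm)
print(f"sqrt check: ~{months:.1f} months with ~{months:.0f} full-time people")
print(f"regression: ~{regression_schedule(effort_pm * 22):.0f} days "
      f"(assuming ~22 working days per person-month)")   # ~210 days, ~7 months
```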

Because of the relationship between schedule and resources, a schedule is accepted only if the head of the business unit to which the project belongs agrees to provide the necessary resources. If the necessary resources are not available, the schedule must be adjusted. Dependencies of the project are also checked before a schedule is accepted. If the project execution depends on external factors (such as completion of another project or availability of certain software), the schedule must be adjusted to accommodate these factors.

Once the overall duration of the project is fixed, the schedule for the major milestones must be determined. To determine the milestones, you must first understand the manpower ramp-up that usually takes place in a project. The number of people in a software project tends to follow the Rayleigh curve.9,11 In the beginning and the end, few people work on the project; the peak team size (PTS) is reached somewhere near the middle of the project. This behavior occurs because only a few people are needed in the initial phases of requirements analysis and design. The human resources requirement peaks during coding and unit testing. Again, during system testing and integration, fewer people are required. In many cases, the staffing level cannot be changed continuously, so a step approximation of the Rayleigh curve is used: assigning a few people at the start, having the peak team during the build phase, and then leaving a few people for integration and system testing. If you consider design, build, and test as the three major phases (with requirements done beforehand), the manpower ramp-up in projects typically resembles the function shown in Figure 4.3.

At Infosys, this approach for assigning resources is generally followed. Fewer people are assigned to the starting and ending phases, with maximum people during the build phase. During the build phase, the PTS for the project is usually achieved.

For ease of scheduling, particularly for smaller projects, all the required people are often assigned together around the start of the project. This approach can lead to some people being unoccupied at the start and toward the end. This slack time is often used for training. Project-level training is generally needed in the technology being used and the business domain of the project, and this training consumes a fair amount of effort, as can be seen in the effort distribution given in the PCB. Similarly, the slack time available in the end can be utilized for documentation and other closure tasks.

The schedule distribution differs from the effort distribution. For these three major phases, the percentage of the schedule consumed in the build phase is smaller than the percentage of the effort consumed because this phase involves more people. Similarly, the percentage of the schedule consumed in the design and testing phases exceeds their effort percentages. The exact schedule depends on the planned manpower ramp-up. Given the effort estimate for a phase, you can determine the duration of the phase when you know the manpower ramp-up.

Generally speaking, design requires about 40% of the schedule (20% for high-level design and 20% for detailed design), build consumes about 40%, and integration and system testing consume the remaining 20%. The manpower ramp-up typically is around 1:2:1 for design, build, and integration and testing, respectively; because a phase's effort is its schedule multiplied by its staffing level, this gives an effort distribution among these phases of about 2:4:1. These types of guidelines provide a check for the milestones, which can be set based on other constraints.
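Since a phase's duration is its effort divided by its staffing level, these ratios can be checked mechanically. A sketch, with a made-up total effort and peak team size and the guideline ratios from above:

```python
# Duration = effort / staffing, applied per phase. Total effort and peak
# team size are hypothetical; the ratios come from the guidelines above.
total_effort = 400                     # person-days
peak_team = 8                          # people during build

staffing = {"design": peak_team // 2,  # 1:2:1 ramp-up
            "build": peak_team,
            "test": peak_team // 2}
effort_share = {"design": 2 / 7, "build": 4 / 7, "test": 1 / 7}  # 2:4:1

for phase in staffing:
    effort = total_effort * effort_share[phase]
    duration = effort / staffing[phase]
    print(f"{phase:<6}: {effort:5.0f} person-days / {staffing[phase]} people "
          f"= {duration:.0f} days")
# The resulting durations come out in roughly the 40:40:20 schedule split.
```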

It is important to recognize that even a person assigned full time to a project typically performs other tasks that consume time but do not contribute to the project. These tasks include leave, corporate activities, general (not project-specific) training, reviews in other projects, and so on.

4.3.2 The Effectiveness of the Approach

As with effort estimates, one way of checking the schedule estimates is to plot the actual schedule against the estimated schedule and see how close the points are to the 45-degree line. If all the points fall very close to the 45-degree line, the scheduling approach can be considered effective. Figure 4.4 shows this plot for previously completed development projects.

Figure 4.4. Actual versus estimated schedule


As you can see, the scheduling approach results in schedules that match reasonably well with the actual schedule. Keep in mind, however, that other factors (discussed in section 4.2) may determine whether the estimated schedule is met.

4.3.3 Detailed Scheduling

Once the milestones and the resources are fixed, it is time to develop the detailed schedule. The project manager breaks the tasks into small schedulable activities in a hierarchical manner. For each detailed task, the project manager estimates the time required to complete it and assigns a suitable resource so that the overall schedule is met. In assigning resources, she considers various factors such as the leave plans of the team members, their personal growth plans and career paths, their skill sets and experience, training and mentoring needs, the criticality of the task, and the future value that the experience acquired in a task may provide to the project.

At each level of refinement, the project manager checks the effort for the overall task in the detailed schedule against the effort estimate. If necessary, she adjusts the detailed estimates. For example, she will break down the detailed design phase into tasks such as developing the detailed design for each module, review of each detailed design, fixing of defects found, and so on, and she may break down each of these further. Then she schedules these activities and assigns resources for some duration.

If this detailed schedule is not consistent with the overall schedule and effort estimate for detailed design, she must change the detailed schedule. If she finds that the best detailed schedule cannot match the milestone effort and schedule, she must revise the earlier estimates. Thus, scheduling is an iterative process.

Generally, the project manager refines the tasks to a level so that the lowest-level activity can be scheduled to occupy no more than a few days from a single resource. She also adds general activities, such as project management, coordination, database management, and configuration management. These activities have less direct effect on the schedule because they are ongoing tasks rather than schedulable activities. Nevertheless, they consume resources and hence are included in the project schedule.

Rarely does the project manager complete the detailed schedule of the entire project all at once. Once the overall schedule is fixed, she may do the detailing for a phase only at the start of that phase.

For detailed scheduling, project managers frequently use Microsoft Project (MSP) or a spreadsheet. For each lowest-level activity, they stipulate the effort, duration, start date, end date, and resources. For each activity, they also specify the activity code (discussed further in Chapter 7), the program code, and the module code. They may also specify dependencies between activities, due either to an inherent dependency (for example, unit testing of a program can be done only after it has been coded) or to a resource-related dependency (the same resource is assigned two tasks).
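The per-activity record described here maps naturally onto a small data structure. A sketch follows; the field names are mine, not those of MSP or any Infosys tool, and the example values are one row from Table 4.9.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ScheduledActivity:
    """One lowest-level activity in the detailed schedule (field names are
    illustrative; the real schedule lives in MSP or a spreadsheet)."""
    task: str
    activity_code: str            # organization-wide activity code (Chapter 7)
    module_code: str
    program_code: str
    effort_days: float
    duration_days: float
    start: datetime
    end: datetime
    resources: list[str]
    predecessors: list[int] = field(default_factory=list)  # dependency task IDs
    percent_complete: int = 0

# Example row, taken from Table 4.9
uc17_test = ScheduledActivity(
    task="Test, UC 17", activity_code="PUT",
    module_code="History", program_code="UC17",
    effort_days=0.62, duration_days=0.89,
    start=datetime(2000, 7, 18, 8), end=datetime(2000, 7, 18, 17),
    resources=["SB"], percent_complete=100)
```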

A detailed project schedule is never static. Changes may be needed because the actual progress in the project may be different from what was planned, because newer tasks are added in response to change requests, or because of other unforeseen situations. Changes are done as and when the need arises.

The final schedule, as recorded in MSP or some other tool, is the most "live" project plan document. During the project, if plans must be changed and additional activities must be done, after the decision is taken, any changes are reflected in the detailed schedule. Hence, the detailed schedule becomes the main document that tracks the activities and schedule. The detailed schedule is also a key input in project monitoring, which is discussed in Chapter 11.

4.3.4 The Schedule of the ACIC Project

Let's consider the example of the ACIC project. (See my earlier book for the scheduling of a different case study.10) As discussed earlier, the effort estimates for the ACIC project were 501 person-days, or about 24 person-months. The customer gave approximately 5.5 months to finish the project (from May 15 to November 3). Because this is more than the square root of effort in person-months, and because requirement gathering had to be finished before the project started, this schedule was accepted. (The resource requirement for this schedule was also estimated and is given in the project management plan in Chapter 8.)

The milestones are determined by using the effort estimate for the phase and an estimate of the number of resources that can be fully occupied in this phase. In the ACIC project, the project manager listed the major activities in each phase and assigned them to resources. From this assignment, he determined the overall schedule and effort for the phase. If the total effort for the phase did not match the effort estimate, he revised the assignment until the total effort matched the effort estimate. Then the overall schedule, as obtained from the assignment of activities, was taken as the schedule for the phase. (The milestones are specified in the project management plan given in Chapter 8.) Table 4.8 shows the high-level schedule of the ACIC project. This schedule is obtained automatically from the final detailed schedule of the project.

In the table, the task ID is the sequence number assigned in the MSP. The task IDs show that the total number of tasks in the final schedule was more than 330 and that each of these high-level tasks had many schedulable activities under it. The first task was the overall project, with a duration of about 140 days and an effort of 560 person-days. (This schedule is from the final schedule of the project, which incorporated the changes to be done; the final two tasks in this table are the two major changes.)

Table 4.8. High-level schedule of the ACIC project

Task ID | Task                               | Duration (days) | Work (person-days) | Start Date | End Date
1       | ACIC development schedule          | 139.56          | 559.93             | 4/3/00     | 11/3/00
2       | Project initiation activities      | 33.78           | 24.2               | 5/4/00     | 6/23/00
29      | Regular activities                 | 87.11           | 35.13              | 6/5/00     | 10/16/00
74      | Training                           | 95.11           | 49.37              | 5/8/00     | 9/29/00
99      | Organization activities            | 76.89           | 12.9               | 5/22/00    | 9/15/00
104     | Knowledge sharing initiative       | 78.22           | 19.56              | 6/2/00     | 9/30/00
110     | Inception phase activities         | 26.67           | 22.67              | 4/3/00     | 5/12/00
114     | Elaboration Iteration 1            | 27.56           | 55.16              | 5/15/00    | 6/23/00
157     | Elaboration Iteration 2            | 8.89            | 35.88              | 6/26/00    | 7/7/00
198     | Construction Iteration 1           | 8.89            | 24.63              | 7/10/00    | 7/21/00
228     | Construction Iteration 2           | 6.22            | 28.22              | 7/20/00    | 7/28/00
256     | Construction Iteration 3           | 6.22            | 27.03              | 7/31/00    | 8/8/00
290     | Transition phase activities        | 56              | 179.62             | 8/9/00     | 11/3/00
323     | Window resized release of 2.0 code | 26.67           | 39.11              | 8/14/00    | 9/22/00
331     | Back-end mainframe work for 3.0    | 4.44            | 6.44               | 8/14/00    | 8/18/00

This high-level schedule is not suitable for assigning resources and detailed planning. During detailed scheduling, these tasks are broken into schedulable activities. In this way, the schedule also becomes a checklist of tasks for the project. As mentioned before, this "exploding" of top-level activities is not done fully at the start but rather takes place many times during the project.

Table 4.9 shows part of the detailed schedule of the construction-iteration 1 phase of the ACIC project. For each activity, the table specifies the module, the program, and the activity code, along with the duration, effort, and so on. The Module and Program columns represent the module and program on which work is being done. Activity Code represents the activity being performed. (The standard organization-wide activity codes are discussed further in Chapter 7.)

Sometimes, the predecessors of the activity (the activities upon which the task depends) are also specified, although they are omitted here. This information helps in determining the critical path and the critical resources.

For each task, how much is completed is given in the % Complete column. This information is used for activity tracking, which is discussed further in Chapter 11. The detailed schedule also specifies the resource to which the task is assigned.

The activity number has been omitted from Table 4.9. As Table 4.8 indicated, there were more than 330 line items in the final schedule of the ACIC project, the lowest-level tasks being the schedulable tasks.



4.4 SUMMARY

The basic goal of effort estimation is to generate reasonable estimates that will work most of the time. The following are key lessons from the estimation and scheduling approaches used at Infosys:

- Use past data to estimate. Prefer data from similar projects to general process capability data. Use a model to estimate, but allow flexibility for adjusting estimates to accommodate project-specific factors.

- Employ different models in different situations. Bottom-up estimation is effective when project details are known. Use the top-down approach if you can estimate the size and productivity, and the use case approach when using a use-case-based development approach.

Table 4.9. Portion of the detailed schedule of the ACIC project

Module        | Program | Activity Code | Task                                | Duration (days) | Effort (days) | Start Date    | End Date       | % Complete | Resources
              |         | PRS           | Requirements                        | 8.89            | 1.33          | 7/10/00 8:00  | 7/21/00 17:00  | 100%       | BB, BJ
              |         | PDDRV         | Design review                       | 1               | 0.9           | 7/11/00 8:00  | 7/12/00 9:00   | 100%       | BB, BJ, SB
              |         | PDDRW         | Rework after design review          | 1               | 0.8           | 7/12/00 8:00  | 7/13/00 9:00   | 100%       | BJ, SB
History       | UC17    | PCD           | View history of party details, UC17 | 2.67            | 1.87          | 7/10/00 8:00  | 7/12/00 17:00  | 100%       | HP
History       | UC17    | PCDRV         | Code walkthrough, UC17              | 0.89            | 0.27          | 7/14/00 8:00  | 7/14/00 17:00  | 100%       | BJ, DD
History       | UC19    | PCDRV         | Code walkthrough, UC19              | 0.89            | 0.27          | 7/14/00 8:00  | 7/14/00 17:00  | 100%       | BJ, DD
              |         | PCDRW         | Rework after code walkthrough       | 0.89            | 2.49          | 7/17/00 8:00  | 7/17/00 17:00  | 100%       | DD, SB, HP, BJ
              |         | PUTRW         | Rework after testing                | 0.89            | 0.71          | 7/18/00 8:00  | 7/18/00 17:00  | 100%       | BJ, SB, DD, HP
History       | UC17    | PUT           | Test, UC 17                         | 0.89            | 0.62          | 7/18/00 8:00  | 7/18/00 17:00  | 100%       | SB
History       | UC19    | PUT           | Test, UC 19                         | 0.89            | 0.62          | 7/18/00 8:00  | 7/18/00 17:00  | 100%       | HP
Configuration |         | PCM           | Reconciliation                      | 0.89            | 2.49          | 7/19/00 8:00  | 7/19/00 17:00  | 100%       | BJ, DD, SB, HP
Management    |         | PPMPT         | Scheduling and tracking             | 7.11            | 2.13          | 7/10/00 8:00  | 7/19/00 17:00  | 100%       | BB
Quality       |         | PPMPT         | Milestone analysis                  | 0.89            | 0.62          | 7/19/00 8:00  | 7/19/00 17:00  | 100%       | BB

- For the overall schedule and the high-level milestones, use the existing flexibility to meet proposed dates. Once the overall schedule and milestones are fixed, determine the resource requirement for each phase from the phase-wise effort estimate.

- Detailed scheduling is a dynamic task; take people issues into account while assigning tasks. It is not necessary to completely refine the schedule at the start. You can develop details for the tasks in the overall schedule as the need arises.

- The detailed schedule forms the planned activity list for the project. Capture all activities planned in the project in this document and use it later to track activities.

From the CMM standpoint, a proper effort and schedule estimation method is a requirement for the Software Project Planning KPA at level 2. At level 4, the use of past data for estimation is expected to increase, and the goals of the Quantitative Process Management KPA cannot be satisfied unless a good estimation procedure is in place. The Integrated Software Management KPA at level 3 also assumes that good estimation methods are available to projects for planning. The requirements related to estimation in these KPAs are satisfied by methods discussed in this chapter.



4.5 REFERENCES

1. B. Boehm. Software Engineering Economics. Prentice Hall, 1981.

2. S.D. Conte, H.E. Dunsmore, and V.Y. Shen. Software Engineering Metrics and Models. Benjamin/Cummings, 1986.

3. V.R. Basili. Tutorial on Models and Metrics for Software Management and Engineering. IEEE Press, 1980.

4. B. Boehm. Software engineering economics. IEEE Transactions on Software Engineering, 10(1), 1984.

5. C.F. Kemerer. An empirical validation of software cost estimation models. Communications of the ACM, 30(5), 1987.

6. J.E. Matson, B.E. Barrett, and J.M. Mellichamp. Software development cost estimation using function points. IEEE Transactions on Software Engineering, 20(4), 1994.

7. A.J. Albrecht and J.R. Gaffney. Software function, source lines of code, and development effort prediction: A software science validation. IEEE Transactions on Software Engineering, 9(6), 1983.

8. F. Brooks, Jr. The Mythical Man Month, Anniversary Edition. Addison-Wesley, 1995.

9. L.H. Putnam and W. Myers. Industrial Strength Software: Effective Management Using Measurement. IEEE Computer Society Press, 1997.

10. P. Jalote. CMM in Practice: Processes for Executing Software Projects at Infosys. Addison-Wesley, 2000.

11. L.H. Putnam. A general empirical solution to the macro software sizing and estimating problem. IEEE Transactions on Software Engineering, 4(4), 1978.



Until a few years ago, software engineering suffered from the same tragic notion of quality that manufacturing companies held much earlier: that quality is something to be checked at the end of the assembly/development process, just before the product is delivered. It was common to see quality-conscious project managers plan for system testing after development (other project managers did not even plan properly for system testing!) but fail to give any importance to quality control tasks during development. The result? System testing frequently revealed many more defects than anticipated. These defects, in turn, took much more effort to repair than planned, finally resulting in buggy software that was delivered late.

As the situation improved, project managers started planning for reviews and unit testing. But they did not know how to judge the effectiveness and implications of these measures. In other words, projects still lacked clear quality goals, convincing plans to achieve their goals, and mechanisms to monitor the effectiveness of quality control tasks such as unit testing.

With proper use of measurements and past data, it is possible to treat quality in the same way you treat the other two key parameters: effort and schedule. That is, you can set quantitative quality goals, along with subgoals that will help track the project's progress toward achieving the quality goal.

This chapter discusses how project managers at Infosys set the quality goals for their projects and how they develop a plan to achieve these goals using intermediate quality goals to monitor their progress. Before we describe Infosys's approach, we briefly discuss some general concepts of quality management.



5.1 QUALITY CONCEPTS

Ensuring that the final software is of high quality is one of the prime concerns of a project manager. But how is software quality defined? The concept of software quality is not easily definable because software has many possible quality characteristics.1 In practice, however, quality management often revolves around defects. Hence, we use delivered defect density, that is, the number of defects per unit size in the delivered software, as the definition of quality. This definition is currently the de facto industry standard.2 Using it signals that the aim of a software project is to deliver the software with as few defects as possible.

What is a defect? Again, there can be no precise definition of a defect that is general and widely applicable (does software that misspells a word have a defect?). In general, we can say a defect in software is something that causes the software to behave in a manner inconsistent with the requirements or the needs of the customer.

Before considering techniques to manage quality, you must first understand the defect injection and removal cycle. Software development is a highly people-oriented activity and hence error-prone. Defects can be injected in software at any stage during its evolution. That is, during the transformation from user needs to software to satisfy those needs, defects can be injected in all the transformation activities undertaken. These injection stages are primarily the requirements specification, the high-level design, the detailed design, and coding.

For high-quality software, the final product should have as few defects as possible. Hence, for delivery of high-quality software, active removal of defects is necessary; this removal takes place through the quality control activities of reviews and testing. Because the cost of defect removal increases as the latency of defects (the time gap between the introduction of a defect and its detection) increases,3 any mature process will include quality control activities after each phase in which defects can potentially be injected. The activities for defect removal include requirements reviews, design reviews, code reviews, unit testing, integration testing, system testing, and acceptance testing (we do not include reviews of plan documents, although such reviews also help in improving quality of the software). Figure 5.1 shows the process of defect injection and removal.

Figure 5.1. Defect injection and removal


The task of quality management is to plan suitable quality control activities and then to properly execute and control them to achieve the project's quality goals.

5.1.1 Procedural Approach to Quality Management

As noted earlier, you detect defects by performing reviews or testing. Whereas reviews are structured, human-oriented processes, testing is the process of executing software (or parts of it) in an attempt to identify defects. In the procedural approach to quality management, procedures and guidelines for the review and testing activities are established. In a project, these activities are planned (that is, it is established which activity will be performed and when); during execution, they are carried out according to the defined procedures. In short, the procedural approach is the execution of certain processes at defined points to detect defects.

The procedural approach does not allow claims to be made about the percentage of defects removed or the quality of the software following the procedure's completion. In other words, merely executing a set of defect removal procedures does not provide a basis for judging their effectiveness or assessing the quality of the final code. Furthermore, such an approach is highly dependent on the quality of the procedure and the quality of its execution. For example, if the test planning is done carefully and the plan is thoroughly reviewed, the quality of the software after performance of the testing will be better than if testing was done but the test plan was not carefully thought out and the review was done perfunctorily. A key drawback in the procedural approach is the lack of quantitative means for project managers to assess the quality of the software produced; the only factor visible to project managers is whether the quality control tasks are executed.

5.1.2 Quantitative Approaches to Quality Management

To better assess the effectiveness of the defect detection processes, an approach is needed that goes beyond asking, "Has the method been executed?" and looks at metrics data for evaluation. Based on this analysis of the data, you can decide whether more testing or reviews are needed. If controls are applied during the project based on quantitative data to achieve quantitative quality goals, then we say that a quantitative quality management approach is being applied.

Quantitative quality management has two key aspects: setting a quantitative quality goal and then managing the software development process quantitatively so that this quality goal is met (with a high degree of confidence).

A good quality management approach should provide warning signs early in the project and not only toward the end, when the options are limited. Early warnings allow for timely intervention. To achieve this goal, it is essential to predict the values of some parameters at various stages so that controlling them during project execution will ensure that the final product has the desired quality. If such predictions can be made, you can use the actual data gathered to judge whether the process has been applied effectively. With this approach, a defect detection process does not terminate with the declaration that the process has been executed; instead, the data from process execution are used to ensure that the process has been performed in a manner that exploited its full potential.

One approach to quantitatively control the quality of the software is to work with software reliability models. Most such models use the failure data during the final stages of testing to estimate the reliability of the software. These models can indicate whether the reliability is acceptable or more testing is needed. Unfortunately, they do not provide intermediate goals for the early phases of the project, and they have other limitations. Overall, such models are helpful in estimating the reliability of a software product, but they have a limited value for quality management. (More information is available on reliability models.4,5,6)

Another well-known quality concept in software is defect removal efficiency. For a quality control (QC) activity, we define the defect removal efficiency (DRE) as the percentage of existing total defects that are detected by the QC activity.5 The DRE for the full life cycle of the project, that is, for all activities performed before the software is delivered, represents the in-process efficiency of the process. If the overall defect injection rate is known for the project, then the DRE for the full life cycle also defines the quality (delivered defect density) of the software.

Although defect removal efficiency is a useful metric for evaluating a process and identifying areas of improvement, by itself it is not suitable for quality management. The main reason is that the DRE for a QC activity or the overall process can be computed only at the end of the project, when all defects and their origins are known. Hence, it provides no direct way to control quality during project execution.

Another approach to quantitative quality management is defect prediction. In this approach, you set the quality goal in terms of delivered defect density. You set the intermediate goals by estimating the number of defects that may be identified by various defect detection activities; then you compare the actual number of defects to the estimated defect levels.

This approach makes the management of quality closely resemble the management of effort and schedule, the two other major success parameters of a project. A target is first set for the quality of the delivered software. From this target, the values of chosen parameters at various stages in the project are estimated; that is, milestones are established. These milestones are chosen so that, if the estimates are met, the quality of the final software is likely to meet the desired level. During project execution, the actual values of the parameters are measured and compared to the estimated levels to determine whether the project is traveling the desired path or whether some actions need to be taken to ensure that the final software has the desired quality.

The effectiveness of this approach depends on how well you can predict the defect levels at various stages of the project. It is known that the defect rate follows the same pattern as the effort rate, with both following the Rayleigh curve.5,7,8 In other words, the number of defects found at the start of the project is small but keeps increasing until it reaches a peak (around unit testing time) before it begins to decline again. Because a process has defined points for defect detection, you can also specify this curve in terms of percentages of total defects detected at the various detection stages. And from the estimate of the defect injection rate and size, you can estimate the total number of defects. This approach for defect level prediction is similar to both the base defect model and the STEER approach of IBM's Federal Systems Division.5
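A sketch of the idea: with an assumed total defect count and peak time, a Rayleigh curve gives the expected cumulative defects detected by each point in the project. Both parameters below are invented for illustration; in practice they would be derived from the injection rate, size, and past defect profiles.

```python
import math

def cumulative_defects(t, total_defects=400.0, t_peak=5.0):
    """Cumulative defects detected by time t (months), Rayleigh profile.
    The detection *rate* peaks at t_peak (around unit testing)."""
    return total_defects * (1 - math.exp(-t * t / (2 * t_peak * t_peak)))

for month in range(0, 13, 2):
    print(f"month {month:2d}: ~{cumulative_defects(month):3.0f} defects detected")
```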

Yet another approach is to use statistical process control (SPC) for managing quality (Chapter 7 includes a brief discussion of SPC). In this approach, you set performance expectations of the various QC processes, such as testing and reviews, in terms of control limits. If the actual performance of the QC task is not within the limits, you analyze the situation and take suitable action. The control limits resemble prediction of defect levels based on past performance but can also be used for monitoring quality activities at a finer level, such as review or unit testing of a module.

When you use a performance prediction approach and the actual number of defects is less than the target, the approach has too many uncertainties for you to say with surety that the removal process was not executed properly. As a result, you must look at other indicators to determine the cause.5 In other words, if the actual data are out of range, the project manager will look at other indicators to decide what the actual situation is and what action, if any, is needed.



5.2 QUANTITATIVE QUALITY MANAGEMENT PLANNING

Now let's consider how project managers at Infosys use the defect prediction approach for quantitatively managing software quality. (As discussed in Chapters 10 and 11, projects also use SPC at the task level.) As discussed earlier, when you plan for quantitatively managing quality for a project, you face two key issues: first, setting the quality goal, and second, predicting defect levels at intermediate milestones that can be used for quantitatively monitoring progress toward the goal. In addition, if the project goals exceed the capability of existing processes, the project manager must plan suitable enhancements to the quality process. Let's look at how project managers perform these three tasks at Infosys.

5.2.1 Setting the Quality Goal

Project managers at Infosys set quality goals during the planning stages. The quality goal for a project generally is the expected number of defects found during acceptance testing. You can set the quality goal according to what is computed using past data; in this case, it is implied that you will use the standard process, and hence standard quality results will be expected. Two primary sources can be used for setting the quality goal: past data from similar projects and data from the PCB.

If you use data from similar projects, you can estimate the number of defects found during acceptance testing of the current project as the product of the number of defects found during acceptance testing of the similar projects and the ratio of the estimated effort for this project to the total effort of the similar projects.

If you use data from the PCB, you can use any of several methods to compute this value. If you set the quality target as the number of defects per function point, you estimate size in function points (as discussed earlier), and the expected number of defects is the product of the quality figure and the estimated size. The following sequence of steps is used:

1.       Set the quality goal in terms of defects per FP.

2.       Estimate the expected productivity level for the project.

3.       Estimate the size in FP as (expected productivity * estimated effort).

4.       Estimate the number of AT defects as (quality goal * estimated size).

Instead of setting the quality goal in terms of defects per function point, sometimes it is more useful to set the target in terms of the process's defect removal efficiency. In this situation, you can determine the number of defects to be expected during acceptance testing from the defect injection rate, the target in-process removal efficiency, and the estimated size. The sequence of steps is as follows:

1.       Set the quality goal in terms of defect removal efficiency.

2.       Estimate the total number of defects from the defect injection rate and the estimated size, or by the effort-based defect injection rate and the effort estimate.

3.       Estimate the number of AT defects from the total number of defects and the quality goal.
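A minimal sketch of both computations, with invented baseline numbers standing in for PCB data (and assuming, for the second method, that the in-process DRE covers the activities performed before acceptance testing):

```python
# Setting the quality goal: expected acceptance-test (AT) defects.
# All rates below are hypothetical stand-ins for PCB or similar-project data.

estimated_effort_pm = 40.0        # person-months
productivity_fp_pm = 12.0         # FP per person-month
size_fp = productivity_fp_pm * estimated_effort_pm        # step 3 (first method)

# First method: goal expressed as AT defects per function point
at_defects_per_fp = 0.05
at_defects_goal = at_defects_per_fp * size_fp             # step 4
print(f"Defects/FP method : ~{at_defects_goal:.0f} AT defects expected")

# Second method: goal expressed as in-process defect removal efficiency (DRE)
injection_rate_per_fp = 0.8       # defects injected per FP (hypothetical)
total_defects = injection_rate_per_fp * size_fp           # step 2
dre_goal = 0.95                   # fraction removed before acceptance testing
at_defects_goal2 = total_defects * (1 - dre_goal)         # step 3
print(f"DRE method        : ~{at_defects_goal2:.0f} AT defects expected")
```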

5.2.2 Estimating Defects for Other Stages

Once the project's quality goal is set, you should estimate defect levels for the various quality control activities so that you can quantitatively control the quality. The approach for estimating defect levels for other phases is similar to the approach for estimating the defects in acceptance testing. From the estimate of the total number of defects that will be introduced, you forecast the defect levels for the various testing stages by using the percentage distribution of defects as given in the PCB. Alternatively, you can forecast defects for the various phases based on past data from similar projects.
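For the intermediate milestones, the same total-defect estimate is simply spread across the detection stages. A sketch with an illustrative distribution (the percentages are invented, not Infosys data):

```python
# Distribute an estimated defect total across detection stages using a
# percentage distribution; the percentages here are purely illustrative.
total_defects = 384
stage_distribution = {
    "requirements review": 0.05,
    "design reviews": 0.10,
    "code reviews": 0.20,
    "unit testing": 0.40,
    "integration testing": 0.10,
    "system testing": 0.10,
    "acceptance testing": 0.05,
}
for stage, frac in stage_distribution.items():
    print(f"{stage:<20}: ~{total_defects * frac:.0f} defects expected")
```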

At a minimum, you estimate the defects uncovered in system and integration testing. System testing is singled out because it is the major testing activity and the final QC activity performed before the software is submitted to the customer. Estimating defects for system testing and then comparing that number to the actual number of defects found will help you determine whether the system testing has been sufficient and the software is ready for release.

For reviews, instead of making an explicit prediction of the defect levels, you can use norms given in the review baseline to evaluate the effectiveness of a review immediately after it has been executed. These norms are determined based on SPC and allow you to evaluate the effectiveness of each review rather than evaluating the effectiveness of a phase. Chapter 10 discusses quantitatively managing reviews in more detail. Similarly, norms are also provided for unit testing. Monitoring of unit testing is discussed further in Chapter 11.

5.2.3 Quality Process Planning

You can set a quality goal that is higher (or lower) than the quality level of a similar project, or you can aim for the levels achieved by the standard process. You can then determine the expected number of defects for the higher goal by using the quality goal set for the project. Alternatively, after determining the expected number of AT defects, you can set the quality goal by choosing a different number of AT defects as the target.

If the quality goal is based on the data from similar projects and the goal is higher than that of the similar projects, it is unreasonable to expect that following the same process as used in the earlier projects will achieve the higher quality goal. If the same process is followed, the reasonable expectation is that similar quality levels will be achieved. Hence, if a higher quality level is desired, the process must be suitably upgraded. Similarly, if the quality goal is set higher than the quality levels given in the PCB, it is unreasonable to expect that following the standard process will lead to the higher quality level. Hence, a new strategy will generally be needed: a combination of training, prototyping, testing, reviews, and, in particular, defect prevention. This strategy is explicitly stated in the quality plan for the project. Here we discuss testing and reviews, the two main quality control processes. Defect prevention planning is discussed later in this chapter.

Different levels of testing are deployed in a project. You can modify the overall testing by adding or deleting some testing steps (these steps show up as process deviations in the project management plan). In addition, you can enhance the approach to testing by, for example, performing a group review of the test plans and test results.

The choice of work products to be reviewed is generally made by the project manager. The set of documents reviewed may, in fact, change from project to project. It can be adjusted according to the quality goal. If a higher quality level is set, it is likely to be achieved by having a larger number of programs group reviewed, by including a group review of the test plans, by having a more critical review of detailed designs, and so on. If this approach is selected, it is mentioned as the strategy for meeting the quality goal. To further elaborate on the implications of this type of strategy, you specify in the project plan all documents that will be reviewed and the nature of those reviews.

You can use the data in the process capability baseline to estimate the effects of the proposed process changes on the effort and schedule planned for the project. Frequently, once the process changes are identified, their effects are predicted based on past experience. This tactic is usually acceptable because the changes are generally minor.



5.3 DEFECT PREVENTION PLANNING

Defect prevention (DP) activities are intended to improve quality and improve productivity. It is now generally accepted that some defects present in the system will not be detected by the various QC activities and will inevitably find their way into the final system. Consequently, the higher the number of defects introduced in the software during development, the higher the number of residual defects that will remain in the final delivered system.

This point can be stated in another way. The overall defect removal efficiency of a process is the percentage of total defects that are removed by the various QC activities before the software is delivered. For a stable process, the defect removal efficiency is also generally stable. Hence, the greater the total number of defects in the system, the greater the number of defects in the delivered system. In other words, the higher the defect injection rate, the poorer the quality. Clearly, for a given process and its removal efficiency, the quality of the final delivered software can be improved if fewer defects are introduced while the software is being built. This recognition serves as the quality motivation for defect prevention.
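In symbols, if \(\mathit{DRE}\) is the overall defect removal efficiency of the process and \(D_{\text{injected}}\) the total number of defects introduced during development, the number of defects in the delivered software is

\[
D_{\text{delivered}} = (1 - \mathit{DRE}) \times D_{\text{injected}},
\]

which grows linearly with the injection rate for a fixed \(\mathit{DRE}\).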

DP also has productivity benefits. As discussed earlier, the basic defect cycle during the building of software is that developers introduce defects and later identify and remove them. In other words, something is introduced that is later removed. Clearly, this defect injection and removal cycle is a waste of effort; it adds no value to the software. In this cycle, developers introduce something only to put in more effort to remove it (and they hope they remove all the errors). It therefore makes sense not to introduce defects in the first place; then the effort required to identify and remove them will not be needed. In other words, if you inject fewer defects, fewer defects must be removed; the effort required to remove defects, in turn, will be reduced, thereby increasing productivity. This concept serves as the cost motivation for DP.

How is DP done? The premise of DP is that there is some cause behind the injected defects. If the causes can be understood, efforts can be made to eliminate them or minimize their impact. In other words, DP generally entails collecting data on defects found in the past, analyzing the data to find the root causes for the injection of the defects, and then developing solutions that attack the root causes.

Like any other major project task, the DP activities must be planned. You actually analyze defects and find solutions, however, after some amount of defect data from the project is available. To implement DP, a project manager may start with the set of recommendations available at the organization level and then build project-specific recommendations based on analysis of the project's defect data. At Infosys, the following steps are taken for defect prevention activities at the project level:

- Identify a defect prevention team within the project.

- Have a kick-off meeting and identify existing solutions.

- Plan for defect prevention.
  - Set defect prevention goals for the project.
  - See that the DP team is trained on DP and causal analysis, if needed.
  - Define the frequency at which defect prevention activities will be carried out.

- Do defect prevention.
  - At defined points, collate defect data.
  - Identify the most common types of defects by doing Pareto analysis.
  - Perform causal analysis and prioritize the root causes.
  - Identify and develop solutions for the root causes.
  - Implement the solutions.
  - Review the status and benefits of DP at the project milestones.

- Capture learning.
  - Capture the learning and benefits obtained in the metrics analysis report and the body of knowledge (BOK).
  - Submit all outputs of DP as a part of the process assets.

During quality planning, your only activities are the planning activities. The activities under "Do defect prevention" or "Capture learning" are done while the project is executing or when it is finished. Here, we focus primarily on the tasks related to planning. Chapter 11 describes the tasks under "Do defect prevention" in a discussion of project monitoring.

Most of the activities relating to DP planning are self-explanatory. A project manager identifies a team that will perform the DP analysis (obviously, everyone in the project must implement the solutions identified to prevent defects). A kick-off meeting raises the awareness of team members and identifies the solutions that may be available in the organization. If needed, the DP team is trained on DP and causal analysis.

Setting the DP goals is a key planning activity. As mentioned earlier, DP is viewed as a strategy to achieve higher quality and productivity than the standard organization process can achieve. Hence, if a project manager has set quality and productivity goals that are higher than the organization level, she may use DP as part of her strategy to achieve them. In general, a project manager sets the DP goal in terms of reduction in the defect injection rate, typically about 10% to 20%. With this rate set, the impact on quality and productivity can be estimated.

The other key planning activity is to decide when and how often DP tasks will be performed. Although a project manager can make these decisions, the general guideline at Infosys is that DP activities should be carried out after about 20% of the programs have been coded, code reviewed, and unit tested, and again when about 50% of them have been coded, reviewed, and unit tested. That is, the first DP exercise is undertaken when the defect data from about 20% of the modules are available. The actions resulting from this exercise should reduce the defect injection rate for the rest of the project. Another DP exercise is then done at the 50% mark, where the project manager can determine whether the solutions are bearing results and whether further actions need to be taken. By this point of development, all the different types of defects should have been seen and their causes understood and acted on, so further analysis will yield little new information.



5.4 THE QUALITY PLAN OF THE ACIC PROJECT

Now let's discuss the quality planning of our case study, the ACIC project. (Other examples, including a different case study, can be found in my earlier book.9) To set its goals and defect estimates, the ACIC project manager used the data from the Synergy project, a similar project done earlier for the same client. The defect injection rate in Synergy was 0.036 defects per person-hour (obtained by dividing the total number of defects by the total effort, both available in the Synergy process database). The ACIC project manager wanted to do better than Synergy and expected a 10% reduction in the defect injection rate, considerably better than the organization norms. At the projected rate, the number of defects injected was expected to be around 501 * 8.75 * 0.033 (the product of effort in person-days, the hours in one person-day, and the expected defect injection rate). That is, the estimate of the total number of defects injected during the entire life cycle was around 145 defects.

Quality is defined as the defect density during acceptance testing, or the overall defect removal efficiency before acceptance testing. Synergy found 5% of the defects during acceptance testing. The ACIC project aimed to reduce it to 3%, giving the number of defects expected to be found during acceptance testing as 145 * 0.03 = 5 (approximately).
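Making the arithmetic behind these two estimates explicit:

\[
501 \times 8.75 \times 0.033 = 144.7 \approx 145 \text{ defects injected}
\]
\[
145 \times 0.03 = 4.35 \approx 5 \text{ defects in acceptance testing}
\]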

The productivity goal was a slight improvement over what was achieved in Synergy. The goal for the schedule was to deliver on time, and the expected cost of quality was 32%, which was the same as in Synergy and the organization capability baseline. Table 5.1 shows all these goals for the ACIC project.

For the purposes of monitoring and controlling the project, the ACIC project manager wanted estimates of the number of defects detected in the various stages. He could then compare these estimates to the actual number of defects found and use them to monitor the progress of the project. With the estimate of the total number of defects injected, he could obtain these per-stage estimates using defect distribution.

Table 5.1 Quality goals for the ACIC project

| Goals | Value | Basis for Setting Goals | Organization-wide Norms |
| ----- | ----- | ----------------------- | ----------------------- |
| Total number of defects injected | 145 | 0.033 defects/person-hour; this is 10% better than Synergy, which was 0.036 defects/person-hour | 0.052 defects/person-hour |
| Quality (acceptance defects) | 5 | 3% or less of total estimated number of defects | 6% of estimated number of defects |
| Productivity (in FP/person-month) | 57 | 3.4% productivity improvement over Synergy | 50 |
| Schedule | Delivery on time | | 10% |
| Cost of quality | 32% | 31.5% | 32% |

To obtain the defect distribution, he had the choice of using the capability baseline data or the Synergy data. Because Synergy did not have a requirements phase, he modified its distribution to suit the current life cycle. Essentially, he reduced the percentage of defects found in unit testing from 45% to 40%, reduced the percentage of defects in acceptance testing to 3% (because that was the quality goal), and increased the percentage of defects in requirements and design review to 20%. These percentages were also consistent with the distribution given in the capability baseline. Table 5.2 shows the estimates of defects to be detected in the various stages.

Because the quality goal was higher than that achieved in Synergy as well as the organization-wide norms, and because the productivity goal was also somewhat higher, the ACIC project manager had to devise a strategy to achieve these goals because following the standard process would not help achieve them. The basic strategy was threefold:

- To employ defect prevention

- To conduct group reviews of specifications and the first program written by programmers

- To use the RUP methodology

Table 5.2 Estimates of defects to be detected in the various stages

| Review/Testing Stage | Estimated Number of Defects to Be Detected | % of Defects to Be Detected | Basis for Estimation |
| -------------------- | ------------------------------------------ | --------------------------- | -------------------- |
| Requirements and design review | 29 | 20% | Similar project (Synergy) and PCB |
| Code review | 29 | 20% | Similar project (Synergy) and PCB |
| Unit testing | 57 | 40% | Similar project (Synergy) and PCB |
| Integration and regression testing | 25 | 17% | Similar project (Synergy) and PCB |
| Acceptance testing | 5 | 3% | Similar project (Synergy) and PCB |
| Total estimated number of defects to be detected | 143 | 100% | |

Table 5.3 Strategy for meeting the quality goals

| Strategy | Expected Benefits |
| -------- | ----------------- |
| Prevent defects using the standard defect prevention guidelines and process; use standards developed in Synergy for coding. | 10% to 20% reduction in defect injection rate and about 2% improvement in productivity. |
| Group review program specs for the first few and the logically complex use cases; group review design docs and first-time-generated code. | Improvement in quality because of improvement in overall defect removal efficiency; some benefits in productivity because defects will be detected early. |
| Introduce the RUP methodology and implement the project in iterations; conduct milestone analysis and a defect prevention exercise after each iteration. | Approximately 5% reduction in defect injection rate and 1% improvement in overall productivity. |

That is, as compared with the process used in Synergy, the ACIC process was changed to achieve the higher goals.

Based on data from other projects, the project manager expected defect prevention to reduce the defects by about 10% to 20%. This would reduce the rework effort after testing and reviews, giving approximately a 2% improvement in productivity (based on the rework effort percentage of Synergy). He expected that group review of the program specifications and of the first module coded by programmers would improve the overall defect removal efficiency and also provide some benefits on the productivity front. Based on literature and anecdotes, he expected the use of RUP to benefit quality and productivity. Table 5.3 shows the strategy and expected benefits. Note that although the expected benefits of each strategy item are mentioned separately, it is hard to monitor the effects separately.

Because reviews are a key aspect of the quality process, they were mentioned separately in the quality plan. The plan specified the points in the development life cycle when the review was to be done, the work product to be reviewed, and the nature of the review. Table 5.4 shows these reviews in the ACIC project.



5.5 SUMMARY

Ensuring that the final software has few defects is one of the prime concerns of a project manager. In the procedural approach to quality management, quality control procedures are planned and then properly executed. In the quantitative quality management approach, a quantitative quality goal is set for the project; to achieve this goal, the execution of the process is monitored quantitatively.

Table 5.4 Reviews planned for the ACIC project

| Review Point | Review Item | Type of Review |
| ------------ | ----------- | -------------- |
| End of project planning | Project plan | Group review |
| End of project planning | Defect control system set up | Software quality adviser review |
| End of project planning | Project schedule | Software quality adviser review |
| End of project planning | CM plan | Group review |
| End of 90% of requirements (this should be at the end of the first elaboration iteration) | Business analysis and requirements specification document; use case catalog | Group review |
| End of 90% of design (this should be at the end of the second elaboration iteration) | Design document; object model | Group review |
| Beginning of each iteration | Iteration plans | One-person review |
| End of detailed design | Complex and first-time-generated program specs, including test cases and interaction diagrams | Group review |
| After coding of first few programs | Code | Group review |
| After self-testing of a program | Code | One-person review |
| End of unit test plan | Unit test plan | One-person review |
| Beginning of integration test | Integration test plan | Group review |

Following are the key lessons from Infosys's approach to quantitative quality management through defect prediction:

- As with managing effort and schedule, you can manage quality by using the number of defects as the metric for quality.

- Set the quality goal for a project in terms of the number of defects during acceptance testing. Use past data on process capability to set this goal.

- Using past data, estimate the defect levels for the various defect detection stages in the process. Compare these estimates to the actual number of defects found during project execution to see whether the project is progressing satisfactorily toward achieving the goal or whether some correction is needed.

- In addition to testing, plan for reviews, clearly specifying the review points, review items, and review types.

- If the quality goal of the project is higher than past performance, it cannot be achieved using the same process as earlier projects. To achieve the higher goals, you must enhance the process.

- Use defect prevention as a strategy to achieve higher quality and productivity goals in a project. For defect prevention, identify the defect prevention team, the points at which defect analysis will be done, and the expected benefits.

The methods described in this chapter satisfy the quality planning requirements of the Software Product Engineering KPA and the planning requirements of the Peer Review KPA at level 3 of the CMM. They also satisfy the quantitative quality planning requirements of the Software Quality Management KPA at level 4. The defect prevention planning satisfies some requirements of the Defect Prevention KPA of level 5.



5.6 REFERENCES

1. International Standards Organization. Information Technology – Software Product Evaluation – Quality Characteristics and Guidelines for Their Use. ISO/IEC IS 9126, Geneva, 1991.

2. N.E. Fenton and S.L. Pfleeger. Software Metrics: A Rigorous and Practical Approach, second edition. International Thomson Computer Press, 1996.

3. B. Boehm. Software Engineering Economics. Prentice Hall, 1981.

4. A.L. Goel. Software reliability models: Assumptions, limitations and applicability. IEEE Transactions on Software Engineering, 11, 1985.

5. S.H. Kan. Metrics and Models in Software Quality Engineering. Addison-Wesley, 1995.

6. J.D. Musa, A. Iannino, and K. Okumoto. Software Reliability: Measurement, Prediction, Application. McGraw Hill, 1987.

7. L.H. Putnam and W. Myers. Measures for Excellence: Reliable Software on Time, within Budget. Yourdon Press, 1992.

8. L.H. Putnam and W. Myers. Industrial Strength Software: Effective Management Using Measurement. IEEE Computer Society Press, 1997.

9. P. Jalote. CMM in Practice: Processes for Executing Software Projects at Infosys. Addison-Wesley, 2000.



A software project is a complex undertaking. Unforeseen events may have an adverse impact on a project's cost, schedule, or quality. Risk management is an attempt to minimize the chances of failure caused by unplanned events. The aim of risk management is not to avoid getting into projects that have risks but rather to minimize the impact of risks in the projects that are undertaken. In the words of N.R. Narayana Murthy, CEO of Infosys, "Anything worth doing has risks. The challenge for a leader is not to avoid them but effectively manage them through de-risking strategies." Improper risk management, the result mainly of the common disease of optimism, is the source of many project failures.

Vasu was designated as the project manager of a large project undertaken by a prestigious multinational corporation. The project was to build parts of an integrated system for worldwide human resource management. For the final system, the software Vasu's team was to develop had to be integrated with a system that was being developed by another vendor. For use in the project, the customer provided proprietary tools whose new version was to be released shortly. Vasu's team of 35 people had a little more than a year to deliver the software. Although the project employed a good team and the project manager made reasonable estimates, the system was commissioned six months late, with Infosys footing the bill for the 50% effort escalation in the project.

Why did this project fail? There are two clear reasons. First, the software being developed by the other vendor was not delivered on time, and the interfaces provided to Vasu's team kept changing. Second, a new version of the customer's tools was released during development, requiring the software to be ported to this new version.

Both of these events are clear instances of risks that were not managed properly. These risks were evident at the start, although, as with any risk, it was not certain that they would materialize. When the project started, the project manager, his business manager, and the customer hoped that the other vendor would deliver its software on time and that the new version of the tools would not affect the project.

In hindsight, Vasu thinks that if a steering team comprising the project managers of the two projects and a customer representative had been set up to ensure proper coordination between the two projects and their deliveries, delays could have been minimized. For the second risk, he thinks that the software should have been developed first with the earlier version of the tools and then migrated to the new version later through a separate project. The first solution would have required minimal extra effort, and its cost implications were minor. The second one had clear cost implications for the customer. Perhaps to avoid displeasing the customer, this risk was not highlighted and its mitigation not planned.

This chapter discusses the general concept of risks and risk management before turning to Infosys's approach to risk assessment and risk control, the two major steps in risk management.



6.1 CONCEPTS OF RISKS AND RISK MANAGEMENT

Risks are those events or conditions that may occur, and whose occurrence, if it does take place, has a harmful or negative effect on a project. Risks should not be confused with events and conditions that require management intervention or action. A project manager must deal with and plan for those situations that are likely to occur but whose exact nature is not known beforehand; such situations, however, are not risks. For example, it is almost certain that defects will be found during software testing, so a reasonable project plan provides for fixing these defects when they are found. Similarly, it is almost certain that some change requests will come, so project management must be prepared for changes and plan accordingly to handle such normal events.

A risk, on the other hand, is a probabilistic event: it may or may not occur. For this reason, we frequently have an optimistic tendency to simply not see risks or to wish that they will not occur. Social and organizational factors also may stigmatize risks and discourage clear identification of them.1 This kind of attitude gets the project in trouble if the risk events materialize, something that is likely to happen in a large project. Not surprisingly, then, risk management is considered first among the best practices for managing large software projects.2

Risk management aims to identify the risks and then take actions to minimize their effect on the project. Risk management is a relatively new area in software management. It first came to the forefront with Boehm's tutorial on risk management.3 Since then, several books have targeted risk management for software.4,5

Before we discuss risks in the software context, let's examine the concept a bit more with the aid of an example. Consider a computer show for which an important goal is to provide uninterrupted computer services. For this goal, one clear risk is electric power failure. The power may or may not fail. If it does fail, even for a second, the computer services will be affected substantially (the machines will have to reboot, data will be lost, and so on). If this outcome is unacceptable (that is, if the cost of the power failure is high), an uninterruptible power supply (UPS) can be deployed to minimize its consequences. If it is suspected that the power may go out for a long period, a backup generator may be set up to minimize the problem. With these risk management systems in place, if the power does go out, even for a long period, the show-related goal will not be compromised.

The first thing to note from this example is that risk management entails additional cost. Here, the cost for the UPS and the generator is extra because these components would not be needed if the risk of power failure did not exist (for example, if the electric supply company guaranteed continuous power). Hence, risk management can be considered cost-effective only if the cost of risk management is considerably less than the loss incurred if the risk materializes.5 (Actually, the cost of risk management should be less than the expected value of the loss, a concept defined shortly.) For example, if the loss due to power failure is low, the cost of a UPS is not justified, a situation that prevails, for example, in homes.

Second, it is not easy to measure the value of risk management, particularly in hindsight. If the power fails for one-half hour during the show, the value provided by the UPS and generator might be calculated as the "savings" achieved by having the computers running while the power was out. Suppose, however, that the power supply does not fail even for a second and therefore the UPS and generator are not used. Does this mean that the expenditure on these components was a waste? No, because the power could have failed. If the risk does not materialize, the value of using risk management cannot be directly measured in terms of value or output produced. Because risk events are likely to occur infrequently, the chances are high that risk management systems will not be used during the project. It is this probabilistic nature of risks, and the inability to always realize the direct value of risk mitigation efforts, that makes it difficult to manage risk.

From this example, it is clear that the first step in risk management is to identify the possible risks (power failure in this example) and to assess the consequences (loss of face or clients). Once you have done risk assessment, you can develop a risk management plan (for example, having a UPS). Overall, then, risk management has two key components: risk assessment and risk control. Each component involves different tasks, as shown in Figure 6.1.3

Figure 6.1. Risk management activities


The purpose of the risk assessment task is to identify the risks, analyze them, and then prioritize them. In prioritizing risks, you identify the risks that should be managed. In other words, prioritization determines where the extra effort of risk management should be spent to get the maximum benefit. For this effort, two factors are important. First is the chance of a risk occurring; a more likely risk is a natural candidate for risk management. Second is the effect of the risk; a risk whose impact is very high is also a likely candidate.

One way to prioritize risks, therefore, is to estimate, for each risk, the probability of its occurrence and its consequences if it does occur. The product of these values, the expected value of the loss due to the risk, can be used for prioritization. This expected value is called risk exposure. If Prob(R) is the probability of a risk R occurring and Loss(R) is the total loss incurred if the risk materializes, then the risk exposure RE for the risk is given by the following equation3:

RE(R) = Prob(R) × Loss(R)
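For example (with hypothetical numbers), a risk with a 0.2 probability of occurring and a potential loss of 50 person-days has RE = 0.2 × 50 = 10 person-days; a mitigation step is then attractive only if it costs appreciably less than 10 person-days, which is the cost-effectiveness test mentioned earlier.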

Once the risks have been prioritized, you must decide what to do about them. Which ones will be managed is a management decision. Perhaps only the top few need to be handled in a project.

One approach is to take preventive or avoidance actions so that the perceived risk ceases to be a risk. For example, if new hardware is a risk, it could be avoided by implementing the project with proven hardware. Such actions, however, are not always feasible; for example, working with new hardware may be a requirement from the customer. In such situations, the risks to the project must be handled properly.

For each risk that will be handled, you must devise and then execute risk management plans. Because risk perception changes with time, you must also monitor both the risks and the execution of the plans to minimize their consequences. In a project, risk perceptions may evolve naturally, or the risk management plans put into action may reduce the risk. In either case, it is important to continually gauge the status of risks and their management plans.

Risk management can be integrated in the development process itself, as is done in the spiral model of software development.6 If you treat risk management as a separate process, you must understand its relationship with project execution, depicted in Figure 6.2. As shown in the figure, risk assessment and monitoring take information from project execution, along with other factors, to identify risks to be managed. The risk management activities, on the other hand, affect the project's process for minimizing the consequences of the risk.

Figure 6.2. Risk management and project execution


The remainder of this chapter describes how project managers at Infosys manage risks using simple, effective techniques. The activities for risk management are combined into two tasks: risk assessment and risk control. We discuss each separately.



6.2 RISK ASSESSMENT

In a project at Infosys, risk assessment consists of the two traditional components: risk identification and risk prioritization. The risk identification activity focuses on enumerating possible risks to the project. The basic activity is to try to envision all situations that might make things in the project go wrong. The risk prioritization activity considers all aspects of all risks and then prioritizes them (for the purposes of risk management). Although the two are distinct activities, they are often carried out simultaneously. That is, a project manager may identify and analyze the risks together.

6.2.1 Risk Identification

For a project, any condition, situation, or event that can occur and would jeopardize the success of the project constitutes a risk. Identifying risks is therefore an exercise in envisioning what can go wrong. Methods that can aid risk identification include checklists of possible risks, surveys, meetings and brainstorming, and reviews of plans, processes, and work products.5 Checklists of frequently occurring risks are probably the most common tool for risk identification. SEI has also provided a taxonomy of risks to aid in risk identification.7

At Infosys, the commonly occurring risks for projects have been compiled from a survey of previous projects. This list forms the starting point for identifying risks for the current project. Frequently, the risks in the current project will appear on the list.

A project manager can also use the process database to get information about risks and risk management on similar projects. Evaluating and thinking about previously encountered risks also help identify other risks that may be pertinent to this project but do not appear on the list.

Project managers can also use their judgment and experience to evaluate the situation to identify potential risks. Another alternative is to use the project management plan review and discussion meetings to elicit views on risks from others.

6.2.2 Risk Prioritization

The identified risks for a project merely give the possible events that can hinder it from meeting its goals. The consequences of various risks, however, may differ. Before you proceed with managing risks, you must prioritize them so that management energies can be focused on the highest risks.

Prioritization requires analyzing the possible effects of the risk event in case it actually occurs. That is, if the risk materializes, what will be the loss to the project? The loss could include a direct loss, a loss due to lost business opportunity or future business, a loss due to diminished employee morale, and so on. Based on the possible consequences and the probability of the risk event occurring, you can compute the risk exposure, which you can then use for prioritizing risks.

Table 6.1 Probability categories

| Probability | Range |
| ----------- | ----- |
| Low | 0.0–0.3 |
| Medium | 0.3–0.7 |
| High | 0.7–1.0 |

This approach requires a quantitative assessment of the risk probability and the risk consequences. Usually, little historical data are available to help you make a quantitative estimate of these parameters. Because risks are probabilistic events, they occur infrequently, and that makes it difficult to gather data about them. Furthermore, any such data must be interpreted properly because the act of managing the risks affects them. This fact implies that risk prioritization will be based more on experience than on hard data from the past. In this situation, categorizing both the probabilities and the consequences can serve to separate high-priority risk items from lower-priority items.5 At Infosys, the probability of a risk occurring is categorized as low, medium, or high. Table 6.1 gives the probability range for each of these categories.

To rank the effects of a risk on a project, you must select a unit of impact. To simplify risk management, Infosys project managers rate the risk impact on a scale of 1 to 10. Within this scale, the risk effects can be rated as low, medium, high, or very high. Table 6.2 gives the range for the consequences for each of these ratings.

With these ratings and ranges for each rating in hand, the following simple method for risk prioritization can be specified (a small sketch in code follows the steps):

1.       For each risk, rate the probability of its happening as low, medium, or high. If necessary, assign probability values in the ranges given for each rating.

Table 6.2 Ranges for levels of consequences (on a scale of 1 to 10)

| Level of Consequences | Range |
| --------------------- | ----- |
| Low | 0.0–3.0 |
| Medium | 3.0–7.0 |
| High | 7.0–9.0 |
| Very high | 9.0–10.0 |

2.       For each risk, assess its impact on the project as low, medium, high, or very high. If necessary, assign a weight on a scale of 1 to 10.

3.       Rank the risks based on the probability and effects on the project; for example, a high-probability, high-impact item will have higher rank than a risk item with a medium probability and high impact. In case of conflict, use your judgment (or assign numbers to compute a numeric value of risk exposure).

4.       Select the top few risk items for mitigation and tracking.
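A minimal sketch of this scheme in code. The representative numeric values chosen for each category and the example risks are my assumptions; any values within the ranges of Tables 6.1 and 6.2 would serve.

    # Rank risks by exposure = probability x impact, using representative
    # values (assumptions) drawn from the ranges in Tables 6.1 and 6.2.
    PROB = {"low": 0.15, "medium": 0.5, "high": 0.85}
    IMPACT = {"low": 1.5, "medium": 5.0, "high": 8.0, "very high": 9.5}

    risks = [  # (name, probability category, impact category) - hypothetical
        ("manpower attrition", "medium", "high"),
        ("unclear requirements", "high", "medium"),
        ("working on new technology", "low", "very high"),
    ]

    ranked = sorted(risks, key=lambda r: PROB[r[1]] * IMPACT[r[2]], reverse=True)
    for name, p, c in ranked:
        print(f"{name}: exposure = {PROB[p] * IMPACT[c]:.2f}")

Note how assigning numbers resolves ties such as (high, medium) versus (medium, high), the ambiguity discussed shortly.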

The main objective of risk management is to identify the top few risk items and then focus on them. For this purpose, using classification works well. Clearly, a risk that has a high probability of occurring and that has high consequences is a risk with high risk exposure and therefore one with a high priority for risk management.

When you work with classifications, a problem in prioritization can arise if the risk probability and risk effects ratings are either (high, medium) or (medium, high). In this case, it is not clear which risk should be ranked higher. An easy approach to handle this situation is to mitigate both the risks. If needed, you can differentiate between these types of risks by using actual numbers.

This approach for prioritizing risks helps focus attention on high risks, but it does not help you in making a cost-benefit analysis of risk mitigation options. That is, by stating the consequences in terms of a scale rather than in terms of money value, this method does not allow you to calculate the expected loss in financial terms. Hence, you cannot analyze whether a certain risk mitigation strategy, costing a certain amount, is worth employing. Such an analysis is generally not needed, however, because the focus of risk management is usually on managing risks at the lowest cost and not on whether risk management itself is beneficial. On the other hand, if you must make a decision about whether a risk should be managed or whether it is financially smarter to leave it unmanaged, you must understand the financial impact of the risk.



6.3 RISK CONTROL

Once a project manager has identified and prioritized the risks, the question becomes what to do about them. Knowing the risks is of value only if you can prepare a plan to keep their consequences minimal; that is the basic goal of risk management. You minimize the effects of risk in the second step of risk management: risk control. Essentially, this step involves planning the risk mitigation followed by executing the plan and monitoring the risks.

6.3.1 Risk Management Planning

Once risks are identified and prioritized, it becomes clear which risks a project manager should handle. To manage the risks, proper planning is essential. The main task is to identify the actions needed to minimize the risk consequences, generally called risk mitigation steps. As with risk identification, you refer to a list of commonly used risk mitigation steps for various risks and select a suitable risk mitigation step. The list used at Infosys appears in Table 6.3. This table is a starting point not only for identifying risks but also for selecting risk mitigation steps after the risks have been prioritized. As with identification, you are not restricted to the steps mentioned in Table 6.3. You can use the process database to identify the risks and the risk mitigation steps.

Most of the risks and mitigation steps in Table 6.3 are self-explanatory. As you can see, the top few risks are concerned with manpower and requirements. Many of the items here are similar to those in the top risk lists given in Boehm3 and Hall.5 Selecting a risk mitigation step is not just an intellectual exercise. The risk mitigation step must be executed (and monitored). To ensure that the needed actions are executed properly, you must incorporate them into the project schedule. In other words, you must update the project schedule, which lists the various activities and specifies when they will occur, to include the actions related to the chosen risk mitigation steps.

6.3.2 Risk Monitoring and Tracking

Risk prioritization and consequent planning are based on the risk perception at the time the risk analysis is performed. The first risk analysis takes place during project planning, and the initial risk management plan reflects the view of the situation at that time. Because risks are probabilistic events, frequently depending on external factors, the threat due to risks may change with time as factors change. Clearly, then, the risk perception may also change with time. Furthermore, the risk mitigation steps undertaken may affect the risk perception.

This dynamism implies that risks in a project should not be treated as static and must be reevaluated periodically. Hence, in addition to monitoring the progress of the planned risk mitigation steps, you must periodically revisit the risk perception for the entire project. The results of this review are reported in each milestone analysis report (see Chapter 11); you report the status of the risk mitigation steps along with the current risk perception and strategy. To prepare this report, you make a fresh risk analysis to determine whether the priorities have changed.

Table 6.3 Commonly occurring risks and their mitigation steps

1. Shortage of technically trained manpower
   - Make estimates with a little allowance for initial learning time.
   - Maintain buffers of extra resources.
   - Define a project-specific training program.
   - Conduct cross-training sessions.

2. Too many requirement changes
   - Obtain sign-off for the initial requirements specification from the client.
   - Convince the client that changes in requirements will affect the schedule.
   - Define a procedure to handle requirement changes.
   - Negotiate payment on actual effort.

3. Unclear requirements
   - Use experience and logic to make some assumptions, keep the client informed, and obtain sign-off.
   - Develop a prototype and have the requirements reviewed by the client.

4. Manpower attrition
   - Ensure that multiple resources are assigned to key project areas.
   - Have team-building sessions.
   - Rotate jobs among team members.
   - Keep extra resources in the project as backup.
   - Maintain proper documentation of each individual's work.
   - Follow the configuration management process and guidelines strictly.

5. Externally driven decisions forced on the project
   - Outline disadvantages with supporting facts and data and negotiate with the personnel responsible for forcing the decisions.
   - If the decision is inevitable, identify the actual risk and follow its mitigation plan.

6. Not meeting performance requirements
   - Define the performance criteria clearly and have them reviewed by the client.
   - Define standards to be followed to meet the performance criteria.
   - Prepare the design to meet performance criteria and review it.
   - Simulate or prototype performance of critical transactions.
   - Test with a representative volume of data where possible.
   - Conduct stress tests where possible.

7. Unrealistic schedules
   - Negotiate for a better schedule.
   - Identify parallel tasks.
   - Have resources ready early.
   - Identify areas that can be automated.
   - If the critical path is not within the schedule, negotiate with the client.
   - Negotiate payment on actual effort.

8. Working on new technology (hardware and software)
   - Consider a phased delivery.
   - Begin with the delivery of critical modules.
   - Include time in the schedule for a learning curve.
   - Provide training in the new technology.
   - Develop a proof-of-concept application.

9. Insufficient business knowledge
   - Increase interaction with the client and ensure adequate knowledge transfer.
   - Organize domain knowledge training.
   - Simulate or prototype the business transaction for the client and get it approved.

10. Link failure or slow performance
    - Set proper expectations with the client.
    - Plan ahead for the link load.
    - Plan for optimal link usage.



6.4 EXAMPLES

This section includes two risk management plans: one for the ACIC project and one from another project (here called XYZ). As you will see, the risk management plans tend to be small, usually a table that fits on a page. The activities mentioned in the mitigation plan become part of project activities and may even be explicitly scheduled.

6.4.1 The ACIC Project

The ACIC project manager chose to work with numbers for risk prioritization and analysis. As shown in Table 6.4, the top risk items have impact ratings ranging from 3 to 8. The risk mitigation steps are also shown for each risk. For example, the second risk is working with the RUP methodology. That is, the project incurs a risk because the methodology is new to the project personnel. Furthermore, it seems that other projects have not used RUP. One of the risk mitigation plans is the most obvious one: to plan for training in the RUP methodology. In addition, it is suggested that the people from the R&D labs be consulted because they have knowledge of RUP and related concepts, that the customer be kept in the loop continuously, and that any problems be escalated quickly.

Table 6.4 Risk management plan for the ACIC project

1. Risk: We will need support from the database architect and the customer's database administrator. (Probability: 0.5; Impact: 8; Risk exposure: 4.0)
   Mitigation plan:
   - Plan carefully for the time required from each of these groups and give enough prior notice.
   - Have an onsite coordinator work closely with these groups.

2. Risk: Because RUP is being used for the first time, the understanding of the team may not be complete. (Probability: 0.9; Impact: 3; Risk exposure: 2.7)
   Mitigation plan:
   - Work closely with experts in the Infosys R&D lab.
   - Keep the customer in the loop throughout the project and escalate any schedule or effort deviations.
   - Train the team on the RUP methodology.

3. Risk: Personnel attrition: team members might leave on short notice. (Probability: 0.3; Impact: 7; Risk exposure: 2.1)
   Mitigation plan:
   - Assign tasks so that more than one person is aware of the units and use cases in the project.

4. Risk: Working with the customer's mainframe DB2 over the link: the link may not be as efficient as expected. (Probability: 0.1; Impact: 8; Risk exposure: 0.8)
   Mitigation plan:
   - Do extra code reviews, desk checking, and so on to minimize reliance on the link.
   - Escalate as soon as the link goes down.

Note that once these options are accepted as the risk mitigation steps, they influence the detailed schedule of the project; the schedule must include time for appropriate training and proof-of-concept building activities. This need will arise with many risk mitigation steps. Because they represent actions and because the detailed project schedule represents most of the actions to be taken in the project, the risk mitigation steps will frequently change the detailed project schedule, adding to the project's overall effort requirement.

6.4.2 The XYZ Project

This project used the rating system for its risk management. Table 6.5 shows the various ratings and the risk mitigation steps. This risk management plan is a part of the project management plan for the project and has been extracted from it.

The method for performing the risk analysis at a milestone is essentially the same as described earlier, except that more attention is given to the risks listed in the project plan (that is, greater emphasis is placed on the output of earlier risk analyses for the project). During this risk analysis, project managers may reprioritize risks. In the XYZ project, when an analysis was done at a milestone about three months after the initial risk management plan was made, the risk perception had changed somewhat. Table 6.6 gives some of the risks for which the exposure had changed.

Based on the experience in the project to date, the project manager decided that the consequences of change reconciliation were considerably less dire. This situation might have arisen because, for example, the reconciliation problems encountered had been less difficult than expected. Similarly, the perception of the risk of manpower attrition had increased, perhaps because of experience with team members and because people leaving in the middle of the project was now perceived as a greater problem than at the start of the project. Whenever risks are analyzed, the risk mitigation plans may also change, depending on the current realities of the project and the nature of risks. In this project, there was no change in the risk mitigation plans.



6.5 SUMMARY

A risk for a project is a condition whose occurrence is not certain but that can adversely affect the project. Risk management requires that risks be identified and prioritized and, for the top few risks, that actions be taken to minimize their impact. The cost of risk mitigation may seem wasted if the risks do not materialize, but it must be incurred to minimize the loss in case a risk does materialize.

Table 6.5 Risk management plan for the XYZ project

1. Risk: Failure to meet the high performance requirements. (Probability: High; Consequences: High; Risk exposure: High)
   Mitigation plan:
   - Indicate expected performance to clients through requirements prototypes.
   - Use tips from the body of knowledge database to improve performance.
   - Make the team aware of the performance requirements.
   - Update the review checklist to look for performance pitfalls.
   - Study and improve performance constantly.
   - Follow guidelines from earlier performance studies.
   - Test the application for meeting performance expectations during integration and system testing.

2. Risk: Lack of availability of persons with the right skills. (Probability: Medium; Consequences: Medium; Risk exposure: Medium)
   Mitigation plan:
   - Train resources.
   - Review the prototype with the customer.
   - Develop coding practices.

3. Risk: Complexity of application requirements. (Probability: Medium; Consequences: Medium; Risk exposure: Medium)
   Mitigation plan:
   - Ensure ongoing knowledge transfer.
   - Deploy persons with prior experience with the application.

4. Risk: Manpower attrition. (Probability: Medium; Consequences: Medium; Risk exposure: Medium)
   Mitigation plan:
   - Train a core group of four people.
   - Rotate onsite assignments among people.
   - Identify backups for key roles.

5. Risk: Unclear requirements. (Probability: Medium; Consequences: Medium; Risk exposure: Medium)
   Mitigation plan:
   - Review a prototype.
   - Conduct a midstage review.

6. Risk: Difficulty of reconciling changes done in onsite maintenance during offshore development. (Probability: Medium; Consequences: Low; Risk exposure: Medium)
   Mitigation plan:
   - Create a management plan and adhere to a well-defined reconciliation approach.
   - Reconcile once per month (first Tuesday or next working day).
   - Do not reconcile changes done after a cut-off date.

Table 6.6 Changed risk perceptions in the XYZ project at a milestone

| Sequence Number | Risk | Current Probability | Current Consequences | Current Risk Exposure |
| --------------- | ---- | ------------------- | -------------------- | --------------------- |
| 2 | Manpower attrition | High | High | High |
| 3 | Difficulty of reconciling changes done in onsite maintenance during offshore development | Low | Low/Medium | Low |

Following are some of the key lessons from the Infosys approach to risk management:

- To help you identify risks, a list of commonly occurring risks is a good starting point. In addition, look ahead and try to visualize everything that can go wrong in the project.

- For risk prioritization, a simple and effective mechanism is to classify the probabilities of risks and their impacts into categories such as low, medium, and high, and then manage the risks that have high probabilities and impact.

- For the top few risks, plan the risk mitigation steps, and ensure that they are properly executed during the project.

- Monitor and reevaluate the risks periodically, perhaps at milestones, to see whether the risk mitigation steps are having an effect and to revisit risk perception.

With respect to the CMM, the Project Planning KPA of level 2 requires that a project have a risk management plan. Proper processes for risk management and monitoring are a requirement for the Integrated Software Management KPA at CMM level 3.



6.6 REFERENCES

1. R.N. Charette. Large-scale project management is risk management. IEEE Software, July 1996.

2. N. Brown. Industrial-strength management strategies. IEEE Software, July 1996.

3. B. Boehm. Tutorial: Software Risk Management. IEEE Computer Society, 1989.

4. R. Charette. Software Engineering Risk Analysis and Management. McGraw Hill, 1989.

5. E.M. Hall. Managing Risk: Methods for Software Systems Development. Addison-Wesley, 1998.

6. B. Boehm. A spiral model of software development and enhancement. IEEE Computer, May 1988.

7. M. Carr et al. Taxonomy-based Risk Identification. Technical Report, CMU/SEI-93-TR-006, 1993.



A project management plan is merely a document that can be used to guide the execution of a project. Unless actual performance is tracked against the plan, the plan has limited value. And for project tracking, the values of certain key parameters must be measured during the project. Tracking is a difficult task, and, as with other tasks, if you want to perform it properly, you must plan for it. During planning, you must decide issues such as how the tasks, the effort, and the defects will be tracked; what tools will be used; what reporting structure and frequency will be followed; and so on.

This chapter discusses how measurements are made in projects at Infosys. It also describes project tracking planning and the selection of thresholds for performance variation, which are used to trigger management actions. Actual project tracking is discussed in Chapters 10 and 11.



7.1 CONCEPTS IN MEASUREMENT

The basic purpose of measurements in a project is to effectively control the project. This section discusses some concepts related to metrics and measurement and the basic metrics that you should measure for controlling a project. One approach for process control is statistical process control. This section also discusses some concepts relating to SPC and the way SPC can be used for software.

7.1.1 Metrics and Measurements

Software metrics can be used to quantitatively characterize various aspects of the software process or software products. Process metrics quantify attributes of the software process or the development environment, whereas product metrics are measures for the software products.1,2 Product metrics remain independent of the process used to produce the product. Examples of process metrics include productivity, quality, resource metrics, defect injection rate, and defect removal efficiency. Examples of product metrics include size, reliability, quality (quality can be viewed as a product metric as well as a process metric), complexity of the code, and functionality.

The use of metrics necessarily requires that measurements be made to obtain data. For any metrics program, you must clearly understand the goals for collecting data as well as the models that are used for making judgments based on the data. In general, which metrics to use and which measurements to take will depend on the project and organization goals; you can use a framework, such as the goal-question-metric paradigm, to determine the metrics that need to be measured.3,4 In practice, however, a few metrics suffice for most situations, and special metrics are needed only for special situations. Schedule, size, effort, and defects are the basic measurements for projects and form a stable metrics set.5,6

Schedule is one of the most important metrics because most projects are driven by schedules and deadlines. It is also the easiest to measure because calendar time is usually used. Effort is the main resource consumed in a software project. Consequently, tracking of effort is a key activity during monitoring; it is essential for evaluating whether the project is executing within budget. That is, these data are needed to make statements such as "The cost of the project is likely to be about 30% more than projected earlier" or "The project is likely to finish within budget."

Because defects have a direct relationship to software quality, tracking of defects is critical for ensuring quality. A large software project may include thousands of defects that are found by different people at different stages. Often the person who fixes a defect is not the same person who finds or reports it. Generally, a project manager will want to remove most or all of the defects found before the final delivery of the software. In such a scenario, defect reporting and closure cannot be done informally. The use of informal mechanisms may lead to defects being found but later forgotten, so defects end up not being removed or extra effort must be spent in finding the defect again. Hence, at the very least, defects must be logged and their closure tracked. For this procedure, you need information, such as the manifestation of the defect, its location, and the names of the person who found it and the person who closed it. Once each defect found is logged (and later closed), analysis can focus on how many defects have been found so far, what percentage of defects are still open, and other issues. Defect tracking is considered one of the best practices for managing a project.7

Merely logging defects and tracking them is not sufficient to support other desirable analyses. To understand what percentage of defects are caught where, you also need to record information about the phases at which defects are detected. To understand the defect removal efficiency of various quality control tasks and thereby improve their performance, you must know not only where a defect is detected but also where it was injected. In other words, for each defect logged, you should also provide information about the phase in which the defect was introduced.
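A minimal defect log record capturing the fields discussed above might look like the following sketch; the field names and phases are illustrative, not an actual defect-tracking schema.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DefectRecord:
        """One logged defect; the fields follow the discussion above."""
        description: str            # manifestation of the defect
        location: str               # where in the software it was found
        found_by: str               # person who reported the defect
        phase_detected: str         # e.g., "unit testing"
        phase_injected: str         # e.g., "detailed design"
        closed_by: Optional[str] = None
        status: str = "open"

        def close(self, closed_by: str) -> None:
            """Record closure so that no found defect is forgotten."""
            self.closed_by = closed_by
            self.status = "closed"

With such records, questions like "how many defects have been found so far?" or "what percentage are still open?" reduce to simple queries over the log.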

Size is another fundamental metric because many data (for example, delivered defect density) are normalized with respect to size. Without size data, you cannot predict performance using past data. Also, without normalization with respect to a standard measure of size, you cannot benchmark performance for comparison purposes. The two common measures for size are lines of code (LOC) and function points. If you use lines of code as a measure, productivity differs with the programming language. Function points provide uniformity.

7.1.2 Process Monitoring through Statistical Process Control

Statistical process control has been used with great success in manufacturing, and its use in software is also increasing.8 Here we briefly discuss some general concepts of SPC; for more information, you can consult any textbook on statistical quality control.9,10 In Chapters 10 and 11 you will see how SPC concepts are used for project monitoring.

A process is used to produce output, and the quality of the output can be defined in terms of certain quality characteristics. A number of factors affect the variability in the value of these characteristics. These factors can be classified into two categories: natural (or inherent) causes of variability, and assignable (or special) causes. Natural causes are always present, and each contributes to the variability. It is not practical to control these causes unless the process itself is changed. Assignable causes, on the other hand, are those that occur once in a while, have a larger influence over variability in the process performance, and can be controlled. Figure 7.1 illustrates the relationship between causes and quality characteristics.

Figure 7.1. Assignable and natural causes


A process is said to be under statistical control if the variability in the quality characteristics is due to natural causes only. The goal of SPC is to keep the production process in statistical control.

Control charts are a favorite tool for applying SPC. To build a control chart, the output of a process is considered to be a stream of numeric values representing the values of the characteristic of interest. Subgroups of data are taken from this stream, and the mean values for the subgroups are plotted, giving an X-bar chart. A lower control limit (LCL) and an upper control limit (UCL) are established. If a point falls outside the control limits, the large variability is considered to be due to assignable causes. Another chart, called an R-chart, plots the range (the difference between the minimum and maximum values) of the chosen subgroups. Control limits are established for the R-chart, and a point falling outside these control limits is also considered as having assignable causes.

By convention, LCL and UCL are frequently set at 3-sigma around the mean, where sigma is the standard deviation for data with only normal variability (that is, variability due to natural causes). With these limits, the probability of a false alarm in which a point with natural variability falls outside the limits is only 0.27%.

When the production process does not yield the same item repeatedly, as is the case with software processes, forming subgroups may not make sense; individual values are therefore considered. For such processes, XMR charts9,10 can be used. In an XMR chart, a moving range of two consecutive values is considered as the range for the R-chart. For the X-bar chart, the individual values are plotted; the control limits are then determined using the average moving range.
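As a concrete illustration, the following sketch computes XMR control limits from a series of individual values, using the standard constants for moving ranges of size two (2.66 = 3/d2 with d2 = 1.128, and D4 = 3.267). The sample data are hypothetical:

def xmr_limits(values):
    """Compute X and mR chart control limits for individual values.

    Uses the standard XMR constants for moving ranges of size two.
    """
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = sum(values) / len(values)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "X UCL": x_bar + 2.66 * mr_bar,
        "X center": x_bar,
        "X LCL": x_bar - 2.66 * mr_bar,
        "mR UCL": 3.267 * mr_bar,  # the mR LCL is 0 for ranges of size two
    }

# Example: hypothetical defect injection rates from past executions
print(xmr_limits([0.8, 1.1, 0.9, 1.4, 1.0, 0.7, 1.2]))

A point plotted outside the computed limits is then a candidate for assignable-cause analysis, as described above.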

Note that control limits are different from specification limits. Specification limits specify, based on the requirements, the performance that is desired from the process. Control limits, on the other hand, are based on actual data from the process and determine the actual performance capability of the process; that is, what the process is actually capable of delivering. Clearly, if the control limits are within the specification limits (the specification limits are wider than the control limits), the process is capable of delivering output that will meet the specifications most of the time. On the other hand, if the specification limits fall within the control limits, the probability of the process producing an outcome that does not satisfy the requirements increases. Based on the relationship between the specification limits and the control limits, the capability of a process can be defined formally.9,10
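One common formalization, not spelled out in this chapter, is the process capability index Cp, which compares the specification spread to the 6-sigma process spread. A minimal sketch, assuming two-sided specification limits:

def capability_index(usl, lsl, sigma):
    """Cp: ratio of the specification width to the 6-sigma process width.

    Cp > 1 means the control limits fit within the specification limits,
    so the process can meet the specification most of the time.
    """
    return (usl - lsl) / (6 * sigma)

print(capability_index(usl=1.5, lsl=0.5, sigma=0.12))  # ~1.39: capable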

You use the control charts to continuously monitor the performance of the process and identify an out-of-control situation. Separately, you decide what action will be taken when a point representing an output falls outside the control limit. Generally, two types of actions are performed:

         Rework the output so that it has acceptable characteristics; that is, take corrective action.

         Conduct further analysis to identify the assignable causes and eliminate them from the process; that is, take preventive actions.

To employ control charts for software processes, you must first identify the processes to which SPC can be applied. One choice is the overall process, whose output is the software product to be delivered. The characteristics that can be studied for the output of this process include productivity, delivered defect density, and defect injection rate, among others. You can obtain the values of most of these characteristics for the output of the overall process only after the project ends, so SPC for the overall process has limited value for project monitoring and control. Its value lies primarily in understanding and improving the capability of the process.

To control a project, you can deploy SPC for "mini-processes" that are executed during the course of the project, such as the review process or testing process. Under SPC, as soon as the process is executed, its results can be analyzed. If required, you can then apply control in the form of corrective or preventive actions. Through corrective actions, the out-of-limit output is made acceptable; preventive actions help to improve execution of the remainder of the project. Chapters 10 and 11 discuss the use of SPC for monitoring projects.

Given the possibility of a large variation in performance in software processes, it is not an easy task to identify points having only natural variability so that you can determine the control limits. Hence, to compute the control limits from past performance data, you must use your judgment to determine which data points should be excluded. Furthermore, past data should not be used blindly; its use must always be tempered by discerning judgment. For example, you cannot assume that a process has failed just because the performance is out of the range computed from past data.2,11 A more suitable approach is to use the performance range to draw attention to a deviation and then analyze the reasons for the deviation.



7.2 MEASUREMENTS

Any quantitative control of a project depends critically on the measurements made during the project. To perform measurements during project execution, you must carefully plan what to measure, when to measure it, and how to measure it. Hence, measurement planning is a key element in project planning. This section discusses the way standard measurements are done in projects at Infosys. Project managers may add to these measurements if their projects require it.

7.2.1 Collecting Effort Data

To help a project manager monitor the effort, each employee records in a weekly activity report (WAR) system the effort spent on various tasks. This online system, developed in-house, stores all submitted WARs in a centralized database. Each person submits his WAR each week. On submission, the report goes to the individual's supervisor for approval. Once it is approved, the WAR submission is final and cannot be changed. Everyone submits a WAR, including the CEO, and if a WAR is not submitted within a given time period, leave is deducted.

A WAR entry consists of a sequence of records, one for each week. Each record is a list of items, with each item containing the following fields (a sketch of such a record as a data structure follows the list):

         Program code

         Module code

         Activity code

         Activity description

         Hours for Monday through Sunday
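A minimal sketch of one such record item as a Python data structure follows. The field names and types are illustrative assumptions, not the actual WAR system's schema:

from dataclasses import dataclass, field
from typing import List

@dataclass
class WARItem:
    """One line item in a weekly activity report (illustrative fields)."""
    program_code: str
    module_code: str
    activity_code: str          # e.g., "PCD" for coding and self unit testing
    activity_description: str
    hours: List[float] = field(default_factory=lambda: [0.0] * 7)  # Mon..Sun

item = WARItem("PGM01", "MOD02", "PCD", "Coding invoice module",
               [8, 8, 6, 8, 4, 0, 0])
print(f"Weekly total for {item.activity_code}: {sum(item.hours)} hours")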

The activity code characterizes the type of activity. The program code and module code permit separation of effort data with respect to modules or programs, a consideration that is important for component-level monitoring. To support analysis and project comparisons, it is important to standardize the activities against which effort is reported. Having a standardized set of activity codes helps to achieve this goal. Table 7.1 shows the activity codes used in Infosys projects. (These are different from the ones given in my earlier book because the codes were changed with the introduction of a new Web-based WAR system.)

In the activity codes, a separate code for rework effort is provided for many phases. This classification helps in computing the cost of quality. With this level of refinement, you can carry out a phase-wise analysis or a subphase-wise analysis of the effort data. The program code and module code, which are specified by the project, can be used to record effort data for different units in the project, thereby facilitating unit-wise analysis.
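For instance, the rework component of the cost of quality can be obtained by summing the effort reported against the rework codes. The sketch below assumes effort totals already aggregated per activity code; the numbers are hypothetical:

# Hypothetical effort totals (person-hours) aggregated by activity code
effort_by_code = {
    "PCD": 400, "PCDRW": 30, "PDD": 120, "PDDR": 10,
    "PST": 80, "PSTRW": 25, "PCDRV": 40,
}

# Rework codes from Table 7.1
rework_codes = {"PACRW", "PCDRW", "PDDR", "PERW", "PHDRW",
                "PITRW", "PRSRW", "PSTRW", "PUTRW"}

rework = sum(h for code, h in effort_by_code.items() if code in rework_codes)
total = sum(effort_by_code.values())
print(f"Rework effort: {rework} hours ({100 * rework / total:.1f}% of total)")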

To facilitate project-level analysis of planned versus actual effort spent, the WAR system is connected to the Microsoft Project (MSP) depiction of the project. Project staff can begin submitting WARs for a project only after the MSP for the project has been submitted (once the MSP is submitted, the system knows which people are supposed to be working on the project). Planned activities are defined as those listed in the MSP and assigned to an authorized person in the project. Unplanned activities are all other project activities.

When entering the WAR for a week, the user works with a screen that is divided into two sections: planned activities and unplanned activities. All activities that are assigned in the MSP to a particular person for this week show up in her planned activities section for that project. The user cannot add or modify activities that show up in this section. She can enter only the hours spent each day for the different activities provided. To log the time spent on activities not listed in the planned section, the user can enter a code, its description, and the hours spent each day on these activities in the unplanned section for the project.

7.2.2 Logging and Tracking Defects

In an Infosys project, defect detection and removal proceed as follows. A defect is found and recorded by a submitter. The defect is then in the state "submitted." Next, the project manager assigns the job of fixing the defect to someone, usually the author of the document or code in which the defect was found. This person does the debugging and fixes the reported defect, and the defect then enters the "fixed" state. A defect that is fixed is still not closed. Another person, typically the submitter, verifies that the defect has been fixed. After this verification, the defect can be marked "closed." In other words, the general life cycle of a defect has three states: submitted, fixed, and closed (see Figure 7.2). A defect that is not closed is also called open.
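This life cycle can be expressed as a small state machine. The sketch below is illustrative, not the actual DCS implementation; the reopen transition on a failed verification is an assumption, since the text describes only the forward path:

# Allowed transitions in the submitted -> fixed -> closed life cycle.
TRANSITIONS = {
    "submitted": {"fixed"},
    "fixed": {"closed", "submitted"},  # reopen on failed verification (assumed)
    "closed": set(),
}

def advance(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "submitted"
state = advance(state, "fixed")    # developer fixes the defect
state = advance(state, "closed")   # submitter verifies the fix
print(state)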

Figure 7.2. Life cycle of a defect


Table 7.1. Activity codes used in Infosys projects

Activity Code   Description
PAC             Acceptance
PACRW           Rework after acceptance testing
PCAL            Project catch-all
PCD             Coding and self unit testing
PCDRV           Code walkthrough/review
PCDRW           Rework after code walkthrough
PCM             Configuration management
PCOMM           Communication
PCSPT           Customer support activities
PDBA            Database administration activities
PDD             Detailed design
PDDRV           Detailed design review
PDDR            Rework after detailed design review
PDOC            Documentation
PERV            Review of models and drawings
PERW            Rework of models and drawings
PEXEC           Execution of modeling and drafting
PHD             High-level design
PHDRV           High-level design reviews
PHDRW           Rework after high-level design review
PIA             Impact analysis
PINS            Installation/customer training
PIT             Integration testing
PITRW           Rework after integration testing
PPI             Project initiation
PPMCL           Project closure activities
PPMPT           Project planning and tracking
PRES            Research on technical problems
PRS             Requirement specification activities
PRSRV           Review of requirements specifications
PRSRW           Rework after requirements review
PSP             Strategic planning activities
PST             System testing
PSTRW           Rework after system testing
PTRE            Project-specific trainee activities
PUT             Independent unit testing
PUTRW           Rework after independent unit testing
PWTR            Waiting for resources
PWY             Effort during warranty

A defect control system (DCS) is used in projects for logging and tracking defects. The system permits various types of analysis. Table 7.2 shows the information that is recorded for each defect logged in to the system.

Determining the defect injection stage requires analysis of the defect. Whereas defect detection stages consist of the review and testing activities, defect injection stages include the stages that produce work products, such as design and coding. Based on the nature of the defect, some judgment can be made about when it might have been introduced. Unlike the defect detection stage, which is known with certainty, the defect injection stage is more ambiguous; it is estimated from the nature of the defect and other related information. Using stage-injected and stage-detected information, you can compute defect removal efficiencies, percentage distributions, and other metrics.
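As an illustration, the removal efficiency of a stage can be computed from per-defect stage-injected and stage-detected records as the fraction of defects present at that stage that the stage removes. The sketch below uses that common definition; the stage names and defect log are hypothetical:

STAGES = ["requirements", "design", "coding", "unit test", "system test"]

# Hypothetical defect log: (stage_injected, stage_detected)
defects = [
    ("requirements", "design"), ("design", "design"),
    ("design", "unit test"), ("coding", "unit test"),
    ("coding", "unit test"), ("coding", "system test"),
]

def removal_efficiency(stage):
    """Defects removed at `stage` / defects present when `stage` begins."""
    idx = STAGES.index(stage)
    removed_here = sum(1 for _, det in defects if det == stage)
    # Present = injected at or before this stage and not yet removed
    present = sum(1 for inj, det in defects
                  if STAGES.index(inj) <= idx and STAGES.index(det) >= idx)
    return removed_here / present if present else 0.0

for s in STAGES:
    print(f"{s}: {removal_efficiency(s):.0%}")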

Sometimes it is desirable to understand the nature of defects without reference to stages, but rather in terms of the defect category. Such a classification can help you to understand the distribution of defects across categories. For this reason, the type of defect is also recorded. Table 7.3 shows the types of defects possible, along with some examples. A project can also define its own type classification.

Table 7.2. Information logged for each defect

Data            Description                                           Mandatory/Optional
Project code    Code of the project for which defects are captured    M
Description     Description of the defect                             M
Module code     Code of the module in which the defect was found      O
Program name    Name of the program in which the defect was found     O
Stage detected  Stage in which the defect was detected                M
Stage injected  Stage at which the defect was injected (origin)       M
Type            Classification of the defect                          M
Severity        Severity of the defect                                M
Review type     Type of review                                        O
Status          Current status of the defect                          M
Submitter       Name of the person who detected the defect            M
Owner           Name of the person who owns the defect                M
Submit date     Date on which the defect was submitted to the owner   M
Close date      Date on which the submitted defect was closed         M

Finally, the severity of the defect is recorded. This information is important for the project manager. For example, if a defect is severe, you will likely schedule it so that it gets fixed soon. Also, you might decide that minor or unimportant defects need not be fixed for an urgent delivery. Table 7.4 shows the classification used at Infosys.

From this information, various analyses are possible. For example, you can break down the defects with respect to type, severity, or module; plot trends of open and closed defects with respect to modules, severity, or total defects; determine the weekly defect injection rate; determine defect removal efficiency; determine defect injection rates in different phases, and so on. In Chapter 11 you will see some uses of this data for monitoring the quality dimension and for preventing defects. That chapter also describes an example of the defect data entered in the case study.
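A few of these breakdowns are simple to compute once the defect records are available. The snippet below sketches counts by type and severity and the open percentage over a hypothetical defect list:

from collections import Counter

# Hypothetical records: (type, severity, status)
defects = [
    ("Logic", "Major", "closed"), ("Standards", "Minor", "closed"),
    ("Logic", "Critical", "submitted"), ("User interface", "Cosmetic", "fixed"),
    ("Logic", "Minor", "closed"),
]

print("By type:", Counter(d[0] for d in defects))
print("By severity:", Counter(d[1] for d in defects))
open_count = sum(1 for d in defects if d[2] != "closed")
print(f"Open: {open_count / len(defects):.0%}")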

Table 7.3. Defect types with examples

Defect Type                 Example
Logic                       Insufficient or incorrect logic; errors in algorithms used; wrong conditions, test cases, or design documents
Standards                   Problems with coding/documentation standards such as indentation, alignment, layout, modularity, comments, hard-coding, and misspelling
Redundant code              Same piece of code used in many programs or in the same program
User interface              Specified function keys not working; improper menu navigation
Performance                 Poor processing speed; system crash because of file size; memory problems
Reusability                 Inability to reuse the code
Design issue                Specific design-related matters
Memory management defects   Defects such as core dump, array overflow, illegal function call, system hang, or memory overflow
Document defects            Defects found while reviewing documents such as the project plan, configuration management plan, or specifications
Consistency                 Failure to update or delete records in the same order throughout the system
Traceability                Lack of traceability of program source to specifications
Portability                 Code not independent of the platform

Table 7.4. Severity classification of defects

Severity Type   Explanation for Categorization
Critical        The defect may be very critical in terms of affecting the schedule, or it may be a showstopper; that is, it stops the user from using the system further.
Major           The same type of defect has occurred in many programs or modules, and everything needs to be corrected (for example, coding standards are not followed in any program). Alternatively, the defect stops the user from proceeding in the normal way, but a workaround exists.
Minor           The defect is isolated or does not stop the user from proceeding, but it causes inconvenience.
Cosmetic        A defect that in no way affects the performance of the software product; for example, esthetic issues and grammatical errors in messages.

7.2.3 Measuring Schedule

Measuring schedule is straightforward because you use calendar time. The detailed activities and the schedule are usually captured in the MSP schedule, so the estimated dates and duration of tasks are given in the MSP. Knowing the actual dates, you can easily determine the actual duration of a task.

7.2.4 Measuring Size

If the bottom-up estimation technique is used, size is estimated in terms of the number of programs of different complexities. Although this metric is useful for estimation, it does not permit a standard definition of productivity that can be meaningfully compared across projects. The same problem arises if lines of code (LOC) are used as a size measure; productivity differs with the programming language. To normalize and employ a uniform size measure for the purposes of creating a baseline and comparing performance, function points are used as the size measure.

The size of delivered software is usually measured in terms of LOC through the use of regular editors and line counters. This count is made when the project is completed and ready for delivery. From the size measure in LOC, as discussed before, size in function points is computed using published conversion tables.12
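A sketch of such a conversion appears below. The LOC-per-FP factors are illustrative placeholders; real projects should take the values from the published conversion tables:12

# Illustrative average LOC per function point by language; use the
# published conversion tables for real values.
LOC_PER_FP = {"COBOL": 107, "C": 128, "Java": 53}

def function_points(loc, language):
    return loc / LOC_PER_FP[language]

print(f"{function_points(26_500, 'Java'):.0f} FP")  # ~500 FP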



7.3 PROJECT TRACKING

The main goal of tracking is for project managers to get visibility into the project execution so that they can determine whether any action needs to be taken to ensure that the project goals are met. Because meeting the established project goals is the basic motive, all aspects of project execution that can affect the attainment of the goals must be monitored, and this monitoring must be planned. At Infosys, project managers typically plan for the following tracking:

         Activities tracking

         Defect tracking

         Issues tracking

Activities tracking looks at which planned activities have been completed. If the granularity of an activity is small, then at the lowest level you consider it to be in one of two states: not done or fully done. For higher-level tasks, you can compute the percentage completed from the state of the lowest-level tasks and their estimates.
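For example, with binary completion states at the lowest level, a higher-level task's percentage complete can be weighted by the effort estimates of its subtasks. A minimal sketch, with hypothetical estimates:

# Lowest-level subtasks: (estimated effort in hours, done?)
subtasks = [(16, True), (24, True), (8, False), (40, False)]

done = sum(effort for effort, finished in subtasks if finished)
total = sum(effort for effort, _ in subtasks)
print(f"Task completion: {100 * done / total:.0f}%")  # 45%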

Defect tracking is done in connection with defect logging, as discussed earlier.

Issues tracking ensures that clarifications and other problems that have the potential to delay the project do not go out of control. Chapter 11 explains how these tracking activities are performed at Infosys. During planning, project managers specify what type of tracking they plan to do and what tools or methods they will use.

In addition, to keep track of the project's status along the effort, schedule, and quality dimensions, project managers also plan for the following:

         Activity-level monitoring

         Status reports

         Milestone reports

Activity-level monitoring ensures that each activity has been done properly. You monitor activities quantitatively through the use of statistical process control. Based on past performance, you establish limits for key performance parameters of certain tasks. Then you compare actual performance of an activity to the established limits. If the performance is not within acceptable limits, you might take certain actions. At Infosys, reviews and unit testing, discussed in Chapters 10 and 11, are the two main activities that employ this approach.

Status reports are usually prepared weekly to help you take stock of what has happened and what needs to be done. At project milestones, you conduct a more elaborate exercise to quantitatively check the actual versus estimated data along the effort, schedule, and defect dimensions. In addition, you monitor risks, training, reviews, customer complaints, and so on. The milestone analysis plays an important role in controlling the project. To ensure that the milestone analysis is done often enough to allow timely intervention, you plan internal milestones if the milestones required by the customer are too far apart, so that an analysis is done every three to five weeks.

At milestones, you analyze actual versus estimated for effort, schedule, and defects. If the deviation is significant, some corrective action is called for. To differentiate the normal from the significant, you specify the acceptable deviation limits; to set these limits, you employ concepts of control charts. Control limits have been defined at Infosys for effort and schedule deviation (between the actual and estimated). These values, originally based on judgment and experience, are now based on past data and are computed in the same manner as other control limits. The earlier limits were 35% for effort deviation and 15% for schedule; with improvements in the process, these limits have been reduced to 20% and 10%, respectively. Figures 7.3 and 7.4 give the control charts showing the deviation in schedule and effort.

Figure 7.3. Control chart for schedule deviation


Figure 7.4. Control chart for effort deviation


If the deviation at a milestone exceeds these limits, the project may run into trouble and fail to meet its objectives; under time pressure, the project team might then start taking undesirable shortcuts. This situation calls for project managers to understand the reasons for the variation and to apply corrective and preventive actions if necessary.

These limits are based on past data and experience. During planning, you must set your project's own limits, which may differ from the organization-wide limits.
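Applying the limits is simple arithmetic. The sketch below checks hypothetical milestone actuals against the organization-level limits quoted above (20% for effort, 10% for schedule):

LIMITS = {"effort": 0.20, "schedule": 0.10}  # organization-wide limits

def check_deviation(dimension, actual, estimated):
    deviation = (actual - estimated) / estimated
    flag = "INVESTIGATE" if abs(deviation) > LIMITS[dimension] else "ok"
    return f"{dimension}: {deviation:+.0%} ({flag})"

print(check_deviation("effort", actual=580, estimated=450))   # +29%: investigate
print(check_deviation("schedule", actual=66, estimated=62))   # +6%: ok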



7.4 THE ACIC MEASUREMENT AND TRACKING PLAN

In the ACIC project the standard metrics of size, effort, defect, and schedule were measured. The plan was to use line counters for size, the WAR system for effort, a defect control system called BugsBunny for defects, and MSP for schedule.

The project manager planned to use MSP for activity tracking and to have regular meetings to monitor the status of the various activities. The issues were classified into onsite, customer, business manager, and support services categories and tracked separately. Customer feedback (complaints as well as compliments) was logged.

Status reports were sent weekly to the business manager as well as the customer. The deviation limits for the first five milestones were set to 10% for effort and schedule and 20% for defects. For the rest of the milestones, the limits were set to 5% for effort and schedule and 20% for defects. Milestone reports were also sent to the business manager and the customer.

The final outcome of tracking and measurement planning is included in the project management plan of the ACIC project, given in Chapter 8.



7.5 SUMMARY

During project planning, you must decide how you plan to monitor the progress of the project. Progress monitoring is essential to ensure that the project is progressing toward the goals and to allow you to take corrective actions if the situation warrants. Project monitoring usually requires measurements.

Following are some of the lessons from the measurement and tracking planning at Infosys:

         Plan to measure size, schedule, effort, and defects. These suffice for most software projects.

         Classify effort in a few categories, and collect effort data using an automated system with activity codes for each category. To avoid inaccuracies due to poor memory recall, log effort data frequently.

         Log defects and track them to closure. For a defect, also record its type, detection stage, injection stage, and severity to support analyses such as defect removal efficiency, delivered quality, and defect injection rate.

         For performance analysis at milestones, establish acceptable limits for performance variation from planned for effort, schedule, and defects. During project execution, if the performance goes beyond these limits, management intervention may be warranted.

Although project tracking and measurement are required by the Software Project Tracking and Oversight KPA at level 2, by the Integrated Project Management KPA at level 3, and by both KPAs at level 4, the CMM does not explicitly state the need for planning these measurements. Given the general underlying principle in the CMM that major activities to be performed must be planned, planning for measurement is implied by these KPAs.



7.6 REFERENCES

1. S.D. Conte, H.E. Dunsmore, and V.Y. Shen. Software Engineering Metrics and Models. Benjamin/Cummings, 1986.

2. S.H. Kan. Metrics and Models in Software Quality Engineering. Addison-Wesley, 1995.

3. V.R. Basili and D.M. Weiss. A methodology for collecting valid software engineering data. IEEE Transactions on Software Engineering, 10(6), 1984.

4. V.R. Basili, G. Caldiera, and H.D. Rombach. Goal question metric paradigm. In Encyclopedia of Software Engineering, John J. Marciniak, editor. John Wiley and Sons, 1994.

5. R. Grady and D. Caswell. Software Metrics: Establishing a Company-wide Program. Prentice Hall, 1987.

6. Carnegie Mellon University/Software Engineering Institute. The Capability Maturity Model: Guidelines for Improving the Software Process. Addison-Wesley, 1995.

7. N. Brown. Industrial-strength management strategies. IEEE Software, July 1996.

8. W.A. Florac and A.D. Carleton. Measuring the Software Process: Statistical Process Control for Software Process Improvement. Addison-Wesley, 1999.

9. D.C. Montgomery. Introduction to Statistical Quality Control, third edition. John Wiley and Sons, 1996.

10. D.J. Wheeler and D.S. Chambers. Understanding Statistical Process Control, second edition. SPC Press, 1992.

11. W. Humphrey. Managing the Software Process. Addison-Wesley, 1989.

12. C. Jones. Applied Software Measurement: Assuring Productivity and Quality, second edition. McGraw-Hill, 1996.



The project management plan (PMP) document is the culmination of all planning activities undertaken by project managers. The outputs of the various planning activities appear in this document, which becomes the baseline document guiding the overall execution of the project. It should not be confused with the detailed project schedule, which represents only the schedule and assignment of activities.

Documenting the planning outputs enables the project plan to be reviewed for deficiencies. At Infosys, project plans are usually reviewed by a group that includes project managers, members of the SEPG, and senior management. In many instances, a project plan review has revealed glaring shortcomings that, if not corrected, could have spelled trouble for the project. A thorough review of the management plan is one of the best ways to nip potential problems in the bud, a practice of huge value to project managers, particularly those who are less experienced.

The document also serves an important communication purpose. It gives senior management an overall view of the project goals and commitments and describes how the project will be managed to meet them. It gives the project team a comprehensive view of the project and the roles of the individual team members.

Although we have discussed most of the planning activities, we have not yet discussed team and communication issues; they are covered in this chapter. Here, we also discuss the structure of the template used at Infosys to document the plan and examine the project management plan of the ACIC project.