The PERIL Database
(Excerpt from chapter 2 of Identifying and Managing Project Risk, First Edition, © 2003 by Tom Kendrick. AMACOM)
Over the past ten years, in the context of a series of workshops on risk management, hundreds of project leaders have described typical past project problems, defining both what went wrong and the amount of impact it had on their projects. This data is collected in the Project Experience Risk Information Library (PERIL) database, and it serves as the basis of the following analysis of high-tech project risk.
One useful dichotomy in risk management is between the "known" risks, the risks that occur frequently enough to be analyzed in advance, and the "unknown" risks, those that result from the uniqueness of the work and are difficult or impossible to anticipate. While the PERIL database contains a few unusual situations that are unlikely to recur, nearly all of the data represent situations that are typical of technical projects, so PERIL also provides a template for identifying risk situations that might otherwise fall into the "unknown" category.
First, what the information in PERIL is not. It is not comprehensive. It represents a small fraction of the tens of thousands of projects undertaken during this time by the project managers from whom it was collected, and it does not represent all the problems encountered even for the projects that are included. It is not unbiased. Several sources of potential bias are obvious: the data were not collected randomly, they are self-reported, and the information comes from a constituency at least interested enough in project and risk management to invest time attending the workshop. Another bias is toward more significant risks; few minor risks are reported here, as the point of the exercise was to collect data on major problems. Having said all this, the risk information collected represents a wide range of risks typical of current projects, and even with its flaws a number of patterns emerge. Some of the bias may even make the data more useful, as a focus on more significant problems is consistent with accepted strategies for risk management. However, in extending this analysis to other situations, be aware that "your mileage may vary."
Now, what the PERIL database is. The information collected covers a wide range of projects. Slightly more than half are product development projects, and the rest are information technology, customer solution, or process improvement projects. The projects are worldwide, with a majority from the Americas (primarily United States, Canada, and Mexico). The rest of the cases are from Asia (Singapore and India) and from Europe and the Middle East (from a number of nations, but mainly Germany and the United Kingdom). Whatever the type or location, most of these projects share a strong dependence on new or relatively new technology and significant investment in software development. There are both longer and shorter projects represented, but most had durations between six months and one year. While there are some very large programs in PERIL, typical staffing on these projects was between 10 and 25 people.
The raw project numbers in the PERIL database are:
In order to normalize the data for analysis and comparison purposes, a consistent measure for "impact" is used. The most typical serious impact reported was deadline slip (in weeks), so whenever the impact was primarily unplanned overtime, scope reduction, or some other project change, I estimated an equivalent slippage in weeks. In cases where the deadline was mandatory, the data reported is the equivalent duration that would have been required if the overtime had been worked on a standard schedule, or the duration that would have been required to restore any deletions from the project scope. Where these transformations were necessary, very conservative estimates were used. The average impact for all records was slightly over five weeks, representing about a 20 percent slip for a typical nine-month project. The averages by project type and by region were consistently close to the average for all of the data, ranging from about four and a half weeks up to six and a half weeks.
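The normalization described above can be sketched in a few lines of code. This is only an illustration of the idea, not the book's actual procedure: the 40-hour standard week and the simple additive conversion are assumptions made here for clarity.

```python
# Hedged sketch of normalizing mixed impact reports to one measure:
# weeks of equivalent deadline slip. The 40-hour standard week and
# the additive combination are illustrative assumptions, not figures
# taken from the PERIL database.

def equivalent_slip_weeks(reported_slip_weeks=0.0,
                          overtime_hours=0.0,
                          scope_restore_weeks=0.0,
                          hours_per_week=40.0):
    """Convert overtime and scope reduction into equivalent weeks of
    schedule slip, then combine with any directly reported slip."""
    overtime_as_weeks = overtime_hours / hours_per_week
    return reported_slip_weeks + overtime_as_weeks + scope_restore_weeks

# A project that met a mandatory deadline only through 120 hours of
# unplanned overtime is treated as the equivalent of a 3-week slip:
print(equivalent_slip_weeks(overtime_hours=120))  # 3.0
```

In practice each conversion was a conservative judgment call rather than a formula, but a consistent single measure is what makes the averages and category comparisons that follow possible.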
Categorizing risks is a useful way to identify specific problems. Categories suggested by the project triple constraint—scope, schedule, and resources—are used to organize the PERIL database. The relative occurrence and impact of risks of various types provide the basis for improved risk discovery, and for more selective and cost-effective risk management. The resource, schedule, and scope risks in PERIL are further subdivided into categories and subcategories based on the sources of the risks.
For most of the risks, the categorization was fairly obvious. For others, the risk spanned a number of factors, and the categorization was a judgment call. In each case, however, the risk was grouped under the project parameter where it had the largest effect, and then by its primary perceived root cause. While schedule risks are most numerous, they seem slightly less damaging, on average, than the other risks (but they still typically caused nearly a month of project slip). Scope risks represent the most impact on project delivery, followed by resource risks. The data are shown here:
Count | Cumulative Impact (Weeks) | Average Impact (Weeks)
Scope risks dominate the data, but all categories are significant. A Pareto chart summarizing this total impact by category is in Figure 2-3.
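A Pareto summary of this kind is straightforward to compute: sort the categories by cumulative impact, largest first, and track the running share of the total. The sketch below uses placeholder numbers, not the actual PERIL figures.

```python
# Minimal sketch of the Pareto ordering behind a chart like Figure 2-3.
# The impact values below are placeholders for illustration only; they
# are not the PERIL database totals.

def pareto_summary(impacts):
    """Return (category, weeks, cumulative % of total), sorted so the
    largest-impact category comes first."""
    total = sum(impacts.values())
    rows, running = [], 0.0
    for category, weeks in sorted(impacts.items(), key=lambda kv: -kv[1]):
        running += weeks
        rows.append((category, weeks, round(100 * running / total, 1)))
    return rows

placeholder = {"Scope": 300, "Resource": 200, "Schedule": 150}
for row in pareto_summary(placeholder):
    print(row)
```

Ordering categories this way is the point of a Pareto chart: it shows at a glance which risk categories account for most of the cumulative damage, which is where risk management attention pays off first.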
Figure 2-3: Risks in the PERIL database
Each of these three categories is further characterized by root cause, and a summary of this data is in Figure 2-4. The PERIL database offers insight into the sources and magnitudes of technical project risk, and detailed descriptions of the analysis for each category are spread through the next three chapters, with scope risks discussed in Chapter 3, schedule risks in Chapter 4, and resource risks in Chapter 5.
Figure 2-4: Subcategory risks in the PERIL database