
Software Reviews

Draft Version 1.2, September 30, 2007
Gregory Swanson


  1. Introduction
    1.1 Components
    1.2 Cost
  2. Overview of Review Types
    2.1 Fagan's Software Review
    2.2 Active Design Review
    2.3 Two-Person Review
    2.4 N-Fold Review
    2.5 Phased Review
    2.6 IEEE Standard 1028 For Software Reviews
    2.7 Informal Reviews
      2.7.1 Walkthrough
      2.7.2 Pair Programming
      2.7.3 Peer Check
      2.7.4 Pass-Around
  3. Review Process
    3.1 Review Task
    3.2 Reading Technique
    3.3 Planning On-going Reviews
  4. Sample Forms
    4.1 Technical Review Summary Report
    4.2 Design / Specification Grade Sheet
  5. References


Introduction

One of the key components of a well-run software process is the software review. The need for software reviews arises from fundamental shortcomings in the way software is developed:

Michael Fagan recognized these problems and in 1972 introduced the software review at IBM. Since then, several review process structures have appeared in the literature. Wong (2006) provides an overview of them as well as a bibliographical reference. Each review process has strengths and weaknesses (discussed below) that will determine its appropriateness in a particular organization.

Software reviews have the primary objective of finding errors before a lot of time and effort is wasted building them into the software. In addition to finding errors, software reviews have the following objectives (based on Humphrey, 1989 p. 173, and McConnell, 1993):


Components

Depending on the formality of the process, some of the following review components may not be needed (based on Humphrey, 1989; Humphrey refers to software reviews as inspections):


Cost

Several factors affect the cost of inspections.

McConnell (1993) estimates that about 15 percent of a project's cost will go to inspections when both design and code inspections are done (p. 577). When you consider that 60 to 90 percent of a product's defects can be found with inspections (ibid., p. 576), that's a bargain.
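A quick back-of-the-envelope calculation shows why those numbers make inspections a bargain. The project budget, defect count, and per-defect rework costs below are hypothetical illustrations; only the 15 percent cost and 60-90 percent detection figures come from McConnell.

```python
# Back-of-the-envelope inspection ROI using McConnell's figures:
# inspections cost ~15% of project effort but find 60-90% of defects.
# Project size and per-defect costs below are hypothetical.

project_cost = 100_000   # total project budget (hypothetical)
inspection_cost = 0.15 * project_cost

total_defects = 200      # defects injected during development (hypothetical)
fix_cost_late = 500      # cost to fix a defect found late, e.g. in test (hypothetical)
fix_cost_early = 50      # cost to fix the same defect in review (hypothetical)

for detection_rate in (0.60, 0.90):
    found = detection_rate * total_defects
    savings = found * (fix_cost_late - fix_cost_early)
    net = savings - inspection_cost
    print(f"{detection_rate:.0%} detection: net saving {net:,.0f}")
```

Even at the low end of the detection range, the rework avoided comfortably exceeds the cost of the inspections themselves.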

Overview of Review Types

Some authors have found it sufficient to classify reviews based mostly on formality. For example, McConnell (1993) uses the following categories:

There are several distinct review types that the above classification scheme glosses over. They are covered well in Wong (2006):

Fagan's Software Review

Michael Fagan had two purposes in mind: 1) improve software quality, and 2) increase software developer productivity. The review process he describes includes six main steps (Wong, p. 17):

Some steps may not be appropriate for a particular software artefact; for example, the overview step is less useful for a code review.

Wong found that research results do not support some review practices:

Active Design Review

Active Design Review (ADR) was described by Parnas and Weiss (1985). The rationale for ADR is as follows (from Wong, p. 20):

  1. "When reviewers are overloaded with information they are unable to find defects effectively."
  2. "Reviewers are often not familiar with the objective of the design and they are often unable to understand detailed levels of the artefact."
  3. "Large group review meetings often fall short of their objectives."

The ADR process has three steps (Wong, p. 20):

  1. "The author presents an overview of the design artefact"
  2. Defect detection - "the author provides an open-ended questionnaire to help reviewers find defects in the design artefact"
  3. Defect collection - a "segmented review meeting strategy allows reviewers to concentrate" on small dimensions, minimizing information overload.

ADR has two roles: author and reviewer.

Two-Person Review

Two-Person Review (TPR) was described by Bisant and Lyle (1989). It is similar to ADR in that there are two roles: author and reviewer. TPR "adopts Fagan's formal review method but removes the role of the moderator" (Wong, p. 21).

N-Fold Review

N-Fold Review was described by Martin and Tsai (1990). Martin and Tsai's premise is that multiple review teams "working in parallel sessions" will find "a large number of defects" in an artefact, whereas a single team will likely find a small number (Wong, p. 21).

Tasks are divided so that teams do not duplicate each other's efforts. Fagan's six-step review process is followed, with participants (three or four per team) filling the roles of author, moderator, and reviewer (ibid.).

Wong cites research showing low defect redundancy (where more than one team finds the same defect), and defect discovery of 35% for a single team versus 78% for all teams in the study (ibid.).

The success of N-Fold Review depends on 1) adequate availability of expertise, and 2) ability to meet the additional costs required by multiple review teams.
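The cited rates are roughly what you would expect if teams find defects independently. The sketch below computes the probability that at least one of n teams finds a given defect; the three-team count is an assumption for illustration, not a figure from the study.

```python
# If a single team finds a given defect with probability p, and teams
# work independently, n teams collectively miss it with probability
# (1 - p) ** n. p = 0.35 matches the single-team rate Wong cites;
# the team counts are assumptions for illustration.

p_single = 0.35
for n_teams in (1, 2, 3):
    p_any = 1 - (1 - p_single) ** n_teams
    print(f"{n_teams} team(s): {p_any:.0%} of defects found")
```

Three independent teams would find about 73% of defects, close to (slightly below) the 78% observed, which is consistent with the low redundancy between teams that Wong reports.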

Phased Review

Phased Review (PR) was described by Knight and Myers (1993). PR adopts ideas from ADR, Fagan's Software Review, and N-Fold Review. It follows Fagan's six-step process, in a series of "mini-reviews" called "phases." A phase is a full examination of one property of the artefact, such as reusability or maintainability (Wong, p. 22).

All work (including rework) must be completed for a review phase before progressing to the next phase. PRs can be performed with either a single reviewer or multiple reviewers. In the multiple reviewer approach, reviewers examine the artefact using copies of a checklist, then meet to discuss defects (ibid).
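The sequencing rule above can be sketched as a simple gate: no phase starts until the previous phase's rework is done. The phase names and the stub functions here are illustrative, not taken from Knight and Myers.

```python
# Minimal sketch of Phased Review sequencing: each phase examines one
# property of the artefact, and all rework must finish before the next
# phase starts. Phase names and the review/rework stubs are illustrative.

phases = ["correctness", "maintainability", "reusability"]

def review(phase: str) -> list[str]:
    """Return the defects found in this phase (stub for illustration)."""
    return []

def rework_complete(defects: list[str]) -> bool:
    """True once every defect from the phase has been fixed (stub)."""
    return not defects

for phase in phases:
    defects = review(phase)
    # Gate: do not progress until this phase's rework is done.
    assert rework_complete(defects), f"phase '{phase}' blocked on rework"
    print(f"phase '{phase}' complete")
```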

Wong finds that PRs are not widely used in practice, perhaps because they cost more than other review processes.

IEEE Standard 1028 For Software Reviews

The standard describes a systematic, seven-step review process (Wong, p. 24):

  1. Introduction - describes the objective and provides an overview of the procedures.
  2. Responsibilities - defines roles and responsibilities.
  3. Inputs - describes the input requirements.
  4. Entry criteria - describes the criteria to be met before starting a review, including authorization and initiating events.
  5. Procedures - details the procedures, which include planning the review, overview of procedures, preparation, examination/evaluation/recording of results, rework, and follow-up.
  6. Exit criteria - describes the criteria to be met before the review can be considered complete.
  7. Output - describes the minimum set of deliverables to be produced.
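The seven clauses above describe each review type as a structured document. A minimal sketch of that structure as a record type, assuming placeholder field contents (only the clause names come from the standard as summarized by Wong):

```python
# The seven clause headings of an IEEE 1028-style review description,
# modeled as a simple record. All example values are placeholders.

from dataclasses import dataclass, field

@dataclass
class ReviewProcedure:
    introduction: str = ""
    responsibilities: dict[str, str] = field(default_factory=dict)
    inputs: list[str] = field(default_factory=list)
    entry_criteria: list[str] = field(default_factory=list)
    procedures: list[str] = field(default_factory=list)
    exit_criteria: list[str] = field(default_factory=list)
    output: list[str] = field(default_factory=list)

inspection = ReviewProcedure(
    introduction="Find and remove defects in a work product.",
    responsibilities={"moderator": "leads the inspection"},
    inputs=["the work product", "a checklist"],
    entry_criteria=["work product is complete and authorized for review"],
    procedures=["planning", "overview", "preparation",
                "examination", "rework", "follow-up"],
    exit_criteria=["all defects recorded and dispositioned"],
    output=["defect list", "review summary report"],
)
```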

The IEEE standard describes a process similar to Fagan's. However, Wong finds that it has three shortcomings (ibid. p. 24):

  1. It does not provide "explicit suggestions" for adopting a sustainable review approach.
  2. Relationships between input, process, and output are missing, because the guidelines only conceptualize the input-output relations.
  3. Reviewer characteristics (e.g. experience) are ignored.

Informal Reviews


Walkthrough

A walkthrough is where the author describes the artefact to a group of peers and seeks comments. A walkthrough differs from a formal software review in that

  1. the process is largely unstructured
  2. it does not follow a defined procedure
  3. it does not require management reporting and does not generate measurement metrics

The walkthrough approach is appropriate "where the primary review objective is to educate others about the product" (ibid.). A "structured" walkthrough process is described in detail in Yourdon (1989).

Pair Programming

Pair programming is where "two programmers work simultaneously on the same program at a single workstation" (ibid p. 25). The result is continuous, incremental review of the working product.

Peer Check

Often presented as the least expensive approach to software review, Peer Check is where "only one person besides the author examines the software artefact" (ibid.). It is similar to an individual software review but has "no consistent set of practices." Usually informal, Peer Check can be formalized. Its drawback is that it relies entirely on the knowledge and skill of the single reviewer.


Pass-Around

Pass-around is where the author distributes copies of the artefact to reviewers for concurrent review (Wong, p. 26). Reviewers can see comments others have already written, which reveals differences in interpretation and helps minimize redundancy. There is no group meeting.

Review Process

This section describes the tasks and techniques for conducting a software review, as well as what to look for.

Review Task


Tasks are categorized by the four kinds of artefact, the work products that are reviewed (adapted from Wong, 2006):

  1. Requirements artefact
  2. Design artefact
  3. Code artefact
  4. Testing artefact

What To Look For

The following sections describe what to look for in the artefact (adapted from Yourdon (1989) and McConnell (1993)).

Reading Technique

Wong (ibid.) describes a reading technique as the mechanism an individual reviewer uses to detect defects. A reading technique is "a series of procedures a reviewer can use to obtain an understanding of the artefact…providing a systematic guideline for finding defects." (Techniques 3 and 4 below increase reading coverage and defect detection performance.)

  1. Ad-hoc - there are no explicit instructions.
  2. Checklist-based reading (CBR) - the checklist is usually no longer than one page of items. The checklist guides reviewers by pointing out particular concerns that have been trouble spots in the past. If you are implementing reviews for the first time there is no history of trouble spots, so you can use a generic checklist to start. When you finish your first review you may notice certain recurring problems, e.g. unused variables declared in different places. These are added to the checklist, and other items that were less relevant are removed. CBR also has several shortcomings.
  3. Stepwise abstraction - especially helpful for code documents, because the reading instructions are more structured and precise.
  4. Scenario-based reading - "provides systematic guidance for reviewers on what and how to find defects." A scenario "may contain detailed descriptive questions...or how to review an artefact." Effectiveness depends on the design and content of the scenario questions, and typically requires comprehensive training. Research indicates about 35% greater defect detection with this technique.
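Some checklist items can even be partially automated. As a sketch of the "unused variables" item mentioned above, the following scans Python source for names that are assigned but never read; it is an illustration, not a production linter (it ignores function parameters and name scoping, among other things).

```python
# Sketch of automating one checklist item: find names that are
# assigned but never read. Illustrative only, not a full linter.

import ast

def unused_variables(source: str) -> set[str]:
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    return assigned - used

code = """
def total(prices):
    tax = 0.08        # assigned but never used
    subtotal = sum(prices)
    return subtotal
"""
print(unused_variables(code))   # the set should contain 'tax'
```

A reviewer armed with such a script can spend the checklist time on the concerns that genuinely need human judgment.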

Planning On-going Reviews

With on-going inspections scheduled in advance, everyone knows which artefact is next on the agenda for review.

Sample Forms

Freedman and Weinberg (1990) describe a variety of evaluation criteria in their thorough (if dated) book. The forms are all paper-based, but today they would more likely be a simple spreadsheet, or even Web-based forms backed by a database.

Technical Review Summary Report

Adapted from Freedman and Weinberg (p. 183). The report is in a Q&A format: it describes what was reviewed, who did the reviewing, and what their conclusion was. Fields would include:
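Such a report could be captured as a small record in a spreadsheet or database. The field names below are plausible guesses based on the description above (what was reviewed, who reviewed it, their conclusion), not the exact fields from Freedman and Weinberg's form.

```python
# A review summary report as a record type. Field names and the sample
# values are hypothetical, inferred from the description in the text.

from dataclasses import dataclass

@dataclass
class ReviewSummaryReport:
    review_id: str
    date: str
    work_product: str          # what was reviewed
    participants: list[str]    # who did the reviewing
    conclusion: str            # e.g. "accepted", "accepted with rework"

report = ReviewSummaryReport(
    review_id="R-042",
    date="2007-09-30",
    work_product="payment module design spec",
    participants=["alice", "bob", "carol"],
    conclusion="accepted with rework",
)
print(report.conclusion)
```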

Design / Specification Grade Sheet

Adapted from Freedman and Weinberg (p. 324). Fields could include:



Bisant, D. B., and Lyle, J. R., 1989, A Two Person Inspection Method to Improve Programming Productivity: IEEE Transactions on Software Engineering, 15 (10), 1294-1304.

Freedman, D. P., and Weinberg, G. M., 1990, Handbook of Walkthroughs, Inspections, and Technical Reviews: Evaluating Programs, Projects, and Products, 3rd Edition: Dorset House. ISBN: 0-932633-19-6

Humphrey, Watts S., 1989, Managing the Software Process: Addison-Wesley, 494 pages. ISBN: 0-201-18095-2

Martin, J., and Tsai, W. T., 1990, N-Fold Inspection: A Requirements Analysis Technique: Communications of the ACM, 33(2), 225-232.

Knight, J. C., and Myers, A. E., 1993, An Improved Inspection Technique: Communications of the ACM, 36(11), 50-69.

McConnell, Steve, 1993, Code Complete: A Practical Handbook of Software Construction: Microsoft Press, 880 pages. ISBN: 1-55615-484-4

McConnell, Steve, 1996, Rapid Development: Taming Wild Software Schedules: Microsoft Press, 660 pages. ISBN: 1-55615-900-5

Parnas, D. L., and Weiss, D. M., 1985, Active Design Reviews: Principles and Practices: Proceedings of ICSE '85, Aug. 28-30, pp. 132-136. IEEE Computer Society.

Wong, Y. K., 2006, Modern Software Review: Techniques And Technologies: IRM Press, 324 pages. ISBN: 159904014X

Yourdon, Edward, 1989, Structured Walkthroughs, fourth edition: Yourdon Press/Prentice Hall, 193 pages. ISBN: 0-13-855289-4
