Draft Version 0.6, April 25, 2015
A lifecycle model, a.k.a. software development lifecycle model (SDLC), describes software engineering processes used to build software. Software engineering processes are complex, and a complete model of complex processes must be complex (Humphrey 1989, p. 249). A complete model includes views for four aspects of the system: functional, behavioral, structural, and conceptual (ibid. p. 252).
A lifecycle model is a kind of process model; however, there are many kinds of lifecycle models that use different strategies for building software. Your choice of lifecycle model should be based on how a particular model will work in your environment or for a specific project; e.g., factors that affect the choice include how constrained the project schedule is, how well-known the requirements are, how sophisticated the team is, and so on. Choosing the most appropriate model requires that you know several models and the pros and cons of each: a particular model's effectiveness depends on the context in which it is used (McConnell 1996, p. 154).
Lifecycle models are described and used at different levels of detail (Humphrey 1989, p. 249). Humphrey defines three levels: U (universal), W (worldly), and A (atomic).
Most of the time when we talk about SDLCs, we really mean the W-level models that fit within the context of a U-level process model such as the Rational Unified Process or the Capability Maturity Model. That is the point of view this article takes: lifecycle models at the W-level, where the actual work of building software occurs.
This section presents a summary of the characteristics of several lifecycle models (from Boehm (1988), Humphrey (1989), and McConnell (1996)). In general, there are two ways lifecycle models are presented:
Models are given names that indicate the fundamental strategy that the model uses ("evolutionary prototyping", "staged delivery", etc.) or the appearance of the diagram associated with the model ("spiral", "sashimi", etc.). Probably the best discussion of these is in McConnell's book (1996).
Models are constructed using process cell diagrams. Process cells represent development procedures such as requirements gathering, detailed design, implementation, etc. Process elements are added to address problems, and connected as needed (Humphrey, 1989). Probably the best discussion of these is in Humphrey's book.
Lifecycle models are not mutually exclusive; in fact, you should be prepared to change models as a project evolves, so you can make use of model features that apply to a specific situation (McConnell).
Boehm traces the roots of this model back to the first lifecycle model, which appeared in 1956 (p. 63).
The waterfall model is often used to introduce the concept of lifecycle models, and then to point out the problems of a model that confines activities to rigid and inflexible phases. It is a formal process for developing software in a top-down sequence (DeGrace and Stahl, p. 59). It is very document-oriented, and works best when a project can be well defined.
Because of these characteristics, the waterfall model is suitable for projects where "quality requirements dominate cost and schedule requirements" (McConnell, p. 137).
Traditional implementations of the waterfall model include a sequence of 6 to 9 phases. Each phase begins with a document that describes the work about to start and concludes by producing an output document that serves as input to the next phase. While backing up to correct a mistake made in a previous phase is allowed, it is difficult and typically very costly.
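The document-in, document-out sequence can be sketched as a simple pipeline. This is a minimal illustration only; the phase names below are one common variant, not a fixed standard from any of the cited sources.

```python
# A minimal sketch of the waterfall's document-driven flow: each phase
# consumes the document produced by the previous phase and emits its own.
# These seven phase names are an illustrative variant, not a standard.
PHASES = [
    "requirements analysis",
    "specification",
    "architectural design",
    "detailed design",
    "implementation",
    "testing",
    "maintenance",
]

def run_waterfall(initial_document):
    document = initial_document
    for phase in PHASES:
        # Each phase's output document becomes the next phase's input;
        # there is no path back to an earlier phase in the pure model.
        document = f"{phase} document (based on: {document})"
    return document

print(run_waterfall("project charter"))
```

The strictly linear loop is the point: correcting a mistake means unwinding this chain of documents by hand, which is what makes backing up so costly.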
To give you an idea of the amount of documentation this model can produce, here is a list from DeGrace and Stahl (p. 54):
The waterfall model treats activities as sequential and disjoint, and expects that the project is well-defined (McConnell, p. 143). It assumes that requirements can be complete, which is seldom the case (DeGrace and Stahl, p. 68; McConnell, p. 137). It was introduced at a time when computers and computer time were very expensive relative to the cost of personnel (DeGrace and Stahl, p. 70).
The Sashimi model originated in Japan as an improvement based on experience with the waterfall model (DeGrace and Stahl, p. 154). There is greater overlap between phases, there are fewer phases, and several activities are merged into the phases rather than kept separate. The amount of documentation is reduced because less is needed when there is personnel continuity between phases and activities (McConnell, p. 144).
Greater overlap between phases makes it difficult to determine milestones and reduces your ability to track progress (McConnell, p. 144). The potential for miscommunication and mistaken assumptions means that team members need a higher level of sophistication to avoid these pitfalls.
This model solves the problem with the waterfall model in which implementation of well-understood parts of a system is not allowed until the design of the difficult parts is complete (McConnell, p. 145).
Implementing a waterfall with subprojects requires the architecture to have the system broken into subsystems that can be implemented as separate projects (McConnell, p. 145).
The main risk is unforeseen interdependencies between the subsystems (ibid.).
This model addresses the problem with the waterfall model where you are required to fully define requirements before architectural design can proceed (although you can apply the technique to more than just the requirements phase) (McConnell, p. 146).
DeGrace and Stahl use a similar technique they call the whirlpool model (p. 97). In this model, a loop (or spiral; the whirlpool) is added to address risk in a particular activity such as requirements analysis. This spiral might encompass other activities, so that an iteration of these steps can occur to work out problems.
DeGrace and Stahl's model has two spirals. One is between the design/implementation/test phase and installation/delivery phase. This loop is called the "verification loop" because it reconciles the system's functionality with the requirements as the developers understood them (p. 98). The second spiral is between the installation/delivery phase and the initiation phase, called the "validation loop" because it reconciles the system with the expectations of the users (or those who initiated the project) (p. 101).
This is one of the "best practices" McConnell describes as key to attaining the most reliable reduction in development time (ibid.). Evolutionary prototyping addresses the problem of poorly understood or changing requirements by allowing the system concept to evolve as development progresses (McConnell, p. 147). The customer is shown a prototype of some aspect of the system and provides feedback, and the functionality is adjusted for the next prototype. At some point, agreement on functionality is reached and the final development phase begins, in which all remaining work is completed.
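The prototype-feedback cycle described above can be sketched as a loop. This is a hypothetical illustration; the `get_feedback` callback stands in for a real customer review, and the names are invented for the sketch.

```python
# A sketch of the evolutionary-prototyping cycle: build a prototype,
# collect customer feedback, and refine until agreement is reached.
# `get_feedback` is a stand-in for an actual customer review session.
def evolve_prototype(initial_features, get_feedback, max_iterations=10):
    features = list(initial_features)
    for iteration in range(max_iterations):
        feedback = get_feedback(features)
        if not feedback:           # no change requests: agreement reached
            return features, iteration
        features.extend(feedback)  # adjust functionality for next prototype
    return features, max_iterations

# Simulated customer who asks for "undo" once, then is satisfied.
requests = [["undo"], []]
final, rounds = evolve_prototype(["edit", "save"], lambda f: requests.pop(0))
```

The `max_iterations` cap reflects a practical concern with this model: without some limit, the feedback loop has no guaranteed end point.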
Though there are many potential risks associated with this model, most are easily managed. Even so, the possibility of reducing development time by 45 to 80 percent (ibid. p. 441) should outweigh them.
McConnell suggests using a modification of staged delivery that includes aspects of evolutionary prototyping to manage this risk.
McConnell suggests that you design a prototype so that it can easily be evolved into production-quality code; also, if you create a throwaway prototype, be sure to throw it away (ibid.).
McConnell suggests adding code to slow down the prototype if its performance is unrealistically fast, and explaining that the final product will be written in a language that processes strings more efficiently (ibid.).
Prototyping generally produces better designs, but there are a few factors that can interfere (ibid. p. 439).
This risk is managed in the same way it is managed with any lifecycle approach. With evolutionary prototyping, you need to be aware of the tendency and be ready to take precautions.
Staged delivery addresses the problem with the waterfall model where there is no visible progress of the project from the end user's perspective, because nothing is delivered until everything is finished (ibid. p. 148).
With staged delivery, software is delivered in "successive stages" as the project progresses. Unlike evolutionary prototyping, staged delivery requires that you know what you are building: the requirements analysis has been done and the system concept is well defined. This model works well with software that is customized for each customer from a base product. The customer can begin using the system while development of the customizations continues.
Staged delivery does not reduce development time as much as evolutionary prototyping does, but it does improve the visibility of development progress (ibid. p. 550). If there are problems, you will know about them sooner.
Other benefits provided by staged delivery:
Feature creep: it is typical for users to find functionality that they want added once they have a system to use (ibid.).
Evolutionary delivery is a combination of evolutionary prototyping and staged delivery (McConnell, p. 425). The degree to which one balances the other is flexible. This model works well for customized software for situations where the customer needs to use the software before deciding if modifications are required.
Evolutionary delivery can be balanced more toward evolutionary prototyping; this provides the customer with "highly visible signs of progress" (ibid. p. 426), provides flexibility to change the system based on user requests, and provides less control for management in terms of project schedule. On the other hand, evolutionary delivery can be balanced more toward staged delivery; this also provides the customer with "highly visible signs of progress", but provides little flexibility to change the system based on user requests, and provides more control for management.
Successful use of evolutionary delivery requires that you begin with a basic idea of the system, and you use that to build a system architecture and core (ibid., p. 427). The architecture needs to be flexible so it can change as the system evolves.
While pure staged delivery does not allow the system architecture to evolve, evolutionary delivery does, at the expense of some control over the project schedule. You can regain control by balancing the project toward staged delivery. For example, McConnell describes deciding at the outset on a set of four evolutionary deliveries, where each successive delivery incorporates features that evolved out of customer feedback (ibid. p. 428).
When an evolutionary delivery model is balanced more towards evolutionary prototyping, it takes on the same risks associated with that model. When an evolutionary delivery model is balanced towards staged delivery, it takes on the risks associated with staged delivery (ibid. p. 429).
The spiral model was designed to reduce risks that stem from a lack of understanding of requirements, architecture, the technology used, etc. (ibid. p. 141).
Each layer of the spiral (one complete loop) is an iteration that includes steps for resolving a risk, with a deliverable. This might be prototyping to determine performance capabilities, delivering a prototype to evaluate vague requirements, etc. The final loop uses the waterfall approach, after the risks have been considered and reduced to acceptable levels (DeGrace and Stahl, p. 116; McConnell, p. 142).
McConnell (1996) recommends that you answer several questions about the project and then use those answers to choose from a matrix that shows how lifecycle models work under different circumstances (p. 154).
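A matrix of that kind can be approximated as a lookup keyed on project characteristics. The entries below are a small hypothetical fragment assembled from claims made earlier in this article, not a reproduction of McConnell's actual table, which should be consulted for real decisions.

```python
# A hypothetical fragment of a lifecycle-selection matrix, keyed on
# (project characteristic, lifecycle model). Ratings here restate
# points from this article; McConnell's full table is authoritative.
MATRIX = {
    ("poorly understood requirements", "waterfall"): "poor",
    ("poorly understood requirements", "evolutionary prototyping"): "excellent",
    ("visible progress to customer", "waterfall"): "poor",
    ("visible progress to customer", "staged delivery"): "excellent",
}

def rate(criterion, model):
    """Look up how well a model fits a project characteristic."""
    return MATRIX.get((criterion, model), "unrated")
```

Answering McConnell's questions amounts to selecting the criteria that matter for your project and comparing each candidate model's ratings across them.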
Boehm, Barry W., 1988, A Spiral Model of Software Development and Enhancement. Computer, May: p. 61-72.
DeGrace, Peter, and Stahl, Leslie Hulet, 1990, Wicked Problems, Righteous Solutions: A Catalogue of Modern Software Engineering Paradigms: Prentice-Hall, 244 pages. ISBN: 0-13-590126-X
Humphrey, Watts S., 1989, Managing the Software Process: Addison-Wesley, 494 pages. ISBN: 0-201-18095-2
McConnell, Steve, 1996, Rapid Development: Taming Wild Software Schedules: Microsoft Press, 660 pages. ISBN: 1-55615-900-5