System accident

Unanticipated interaction of multiple failures in a complex system

A system accident (or normal accident) is an "unanticipated interaction of multiple failures" in a complex system.[1] This complexity can be technological or organizational, and is frequently both. A system accident can be easy to see in hindsight but extremely difficult to foresee, because there are simply too many action pathways to seriously consider all of them. Charles Perrow first developed these ideas in the mid-1980s.[2] Safety systems themselves are sometimes the added complexity that leads to this type of accident.[3]

Pilot and author William Langewiesche used Perrow's concept in his analysis of the factors at play in a 1996 aviation disaster. He wrote in The Atlantic in 1998: "the control and operation of some of the riskiest technologies require organizations so complex that serious failures are virtually guaranteed to occur."[4][a]

Characteristics and overview

In 2012 Charles Perrow wrote, "A normal accident [system accident] is where everyone tries very hard to play safe, but unexpected interaction of two or more failures (because of interactive complexity), causes a cascade of failures (because of tight coupling)." Perrow uses the term normal accident to emphasize that, given the current level of technology, such accidents are highly likely over a number of years or decades.[5] James Reason extended this approach with human reliability[6] and the Swiss cheese model, now widely accepted in aviation safety and healthcare.

These accidents often resemble Rube Goldberg devices in the way that small errors of judgment, flaws in technology, and insignificant damage combine to form an emergent disaster. Langewiesche writes of "an entire pretend reality that includes unworkable chains of command, unlearnable training programs, unreadable manuals, and the fiction of regulations, checks, and controls."[4] Greater formality and effort to get things exactly right can, at times, make failure more likely.[4][b] For example, when the organizational procedures for adjusting to changing conditions are complex, difficult, or laborious, employees are more likely to delay reporting changes, problems, and unexpected conditions.

A contrasting idea is that of the high reliability organization.[7] Scott Sagan, for example, has discussed in multiple publications the robust reliability of complex systems, especially regarding nuclear weapons. His The Limits of Safety (1993) provided an extensive review of close calls during the Cold War that could have resulted in an accidental nuclear war.[8]

System accident examples