In one of the strangest reviews I've read in my long career as someone who reads book reviews instead of books, D.L. Sims says of Charles Perrow's Normal Accidents:
Much in this book irritates. The sociological theory adds nothing to the argument. The style is infuriating; the long, opening example is tedious and frivolous. There are many assertions which are not technically sustained, particularly the analysis of marine accidents, which is naïve and alarmist. Perrow is too reluctant to use quantitative arguments; there is an implied assumption that dread or hopeless means an infinite disbenefit.
This book is important. The principles are sound.... With its publication, the science of risk assessment ceases to belong to the shamans.
If you're wondering how he got from the first quotation to the second--from reading an irritating, tedious, frivolous, naïve, alarmist book that does its readers infinite disbenefit to an important and sound study whose publication steals the science of risk assessment back from the shamans (who probably didn't much want it anyhow)--be prepared to keep wondering, because those are the final two paragraphs (sans some minor technical detail) of Sims' review. Not that any of this is germane to what I want to discuss, which is the thesis of Perrow's Normal Accidents, not its incomprehensible reception.
Because I'm not the most adept summarizer in the world--what with my innate and inane tendency to mock anything so uncomplicated even I can understand it--I'll let Bill Williams handle it:
Perrow offers the following hypothesis: in technological systems the propensity to accident is built in: the more complex the system the higher the probability--the more tightly coupled the higher the probability. Complex systems are those with many pathways and links between the parts, the full extent of which we often do not understand. Tightly coupled systems are those where consequence follows cause rapidly and inexorably, with little time or opportunity for corrective action...the less complex the system, the less likely to occur are the complicated consequences which may be serious, unforeseen or difficult to diagnose. The less tightly coupled, the more opportunity there is to think, and to try possible corrections.
But the obverse is also true: the more tightly coupled, the less opportunity there is to think and try possible corrections. And, as you probably already surmised, systems have been becoming increasingly tightly coupled for a couple of decades now. I first ran across Perrow in William Langewiesche's article on the downing of ValuJet Flight 592, then again in Jerome Groopman's article on the increasing popularity of patient simulators in medical schools.
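If you like your sociology back-of-the-envelope, the coupling argument can be caricatured with a toy probability sketch. This is my illustration, not Perrow's math--the `accident_probability` function and all the numbers in it are invented--but it shows why shrinking the window for corrective action matters so much:

```python
# Toy model of the complexity/coupling argument (my invention, not Perrow's).
# Each of n interacting components fails independently with probability p_fail.
# Operators get `window` chances to catch and correct a failure before it
# cascades; each chance succeeds with probability p_catch. Tighter coupling
# means a smaller window, so more failures slip through uncorrected.

def accident_probability(n_components, p_fail, p_catch, window):
    # Probability a single failure survives every corrective attempt.
    slip_through = (1 - p_catch) ** window
    # Probability a given component produces an uncorrected failure.
    p_uncorrected = p_fail * slip_through
    # Probability at least one of the n components fails uncorrected.
    return 1 - (1 - p_uncorrected) ** n_components

# Loosely coupled system: time for ten corrective attempts per failure.
loose = accident_probability(n_components=500, p_fail=0.01, p_catch=0.5, window=10)
# Tightly coupled system: time for exactly one attempt.
tight = accident_probability(n_components=500, p_fail=0.01, p_catch=0.5, window=1)

print(f"loose coupling: {loose:.3f}")   # well under 1 percent
print(f"tight coupling: {tight:.3f}")   # over 90 percent
```

Same components, same failure rates, same operators: the only thing that changes is how much time the system leaves for thinking and trying corrections, and the accident probability goes from negligible to near-certain.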
There's one part of Perrow's thesis I've neglected to mention: the more tightly coupled a system, the less culpable any one actor in that system is when something horrible happens. Any accident not caused by gross incompetence must be considered "normal" because no amount of precaution could have prevented it from happening. I'll let Langewiesche explain:
The failure of one part--whether material, psychological, or organizational--may coincide with the failure of an entirely different part, and this unforeseeable combination will cause the failure of other parts, and so on. If the system is large, the possible combinations of failures are practically infinite. Such unravelings seem to have an intelligence of their own: they expose hidden connections, neutralize redundancies, bypass "fire-walls," and exploit chance circumstances that no engineer could have planned for.
That sounds awful, and it is, especially when you consider that Langewiesche is talking about flying airplanes and Perrow about nuclear power plants. But it gets worse. According to Perrow, not only will tightly coupled systems inevitably fail without some basic modification to their design--how's that for naïve alarmism?--but instead of designing systems that are less tightly coupled, most industries are designing ones that are more so. Perrow again:
Rather than technology determining organizational structure, it would appear that machines and equipment are designed so that they reinforce existing [i.e. craptastic] structures and reproduce these structures in new settings.
He offers some examples:
Fifteen-foot banks of identical switches with small code numbers underneath them in nuclear power plants, sophisticated army weapons that personnel cannot aim or even fire.
But it gets even worse. As workers become less skilled and more reliant on technology, the probability of "normal accidents" occurring becomes even greater:
For want of a robot, an operator is used. This perspective, ingrained in students by engineering schools and common in top management, pervades the culture of the design engineer. It leads to equipment that at best is only to be monitored by an operator, and thus leads to a social structure of incentives, punishments, physical layouts, output measures, etc., that reinforce the perspective of designing out the "man" in the loop.
For example, increasingly sophisticated navigation and collision-avoidance aids have not reduced marine accidents. Gaffney (1982) noted that the response of the U.S. marine community has been more automation, with ships' officers doing more monitoring of equipment. The response of European marine communities, after it became apparent that phrases such as "radar-assisted collisions" and inertial navigation system groundings were real phenomena, not clever jokes, was to look at the organizational context.
The basic lesson being that the more we rely on technology, the more that technology is designed to increase our reliance on it, and the more it's designed to increase our reliance on it, the more likely we are to accomplish fuck all in the event of an emergency.
Why am I writing about this? Simple: because I'm a homologist at heart and I spy a homology with the current state of literary studies. Like technology, theory is our friend. But as theory becomes more ingrained in our "critical" faculties, it begins to replace supple, responsive thought with empty, shallow couplings of extant critical technologies. Am I saying that theorists consider the people who read their works the way design engineers think of operators? Am I saying that Spivak or Bhabha sits around thinking "for want of a robot" and writes accordingly? I am. Theorists want us to be robots, and as regular readers of this fine blog already know, robots eat babies. Therefore...