Searching for Safety
Aaron Wildavsky

About this book

Protecting ourselves against the risks associated with modern technologies has emerged as a major public concern throughout the industrialized world. Searching for Safety is unique in its exposition of a theory that explains how and why risk taking makes life safer, and it exposes the high risk of avoiding change. The book covers a wide range, including how the human body, as well as plants, animals, and insects, copes with danger. Wildavsky asks whether piling on safety measures actually improves safety. While he agrees that society should sometimes try to prevent large-scale harm, he explains why a strategy of resilience—learning from error how to bounce back in better shape—is usually better. His intention is to shift the debate about risk from passive prevention of harm to an active search for safety. This book will be of special interest to those concerned with risk involving technology, health, safety, environmental protection, regulation, and more.


SECTION I
STRATEGIES

1

Trial and Error Versus Trial Without Error

There are two bedrock approaches to managing risk—trial and error, and trial without error. According to the doctrine of “trial without error,” no change whatsoever will be allowed unless there is solid proof that the proposed substance or action will do no harm. All doubts, uncertainties, and conflicting interpretations will thus be resolved by disallowing trials. Taking “error” here to mean damage to life, this—prohibiting new products unless they can be proven in advance to be harmless—is an extraordinarily stringent prohibition. Surely no scientist (or businessman or politician or citizen) can guarantee in the present that future generations will be better off because of any individual action.
True, without trials there can be no new errors; but without these errors, there is also less new learning. Science, its historians say, is more about rejecting than accepting hypotheses. Knowledge grows through critical appraisal of the failure of existing theory to explain or predict events. Learning by criticizing implies that existing theory is in error—not necessarily absolutely, but relative to better knowledge. Rules for democracy say little about what one does in office, but much more about getting officials out of office. “Throwing the rascals out” is the essence of democracy. Similarly, in social life it is not the ability to avoid error entirely (even Goncharov’s Oblomov, who spends his life in bed, cannot do that), but learning how to overcome it that is precious. As Joseph Morone and Edward Woodhouse say: “This is the classic trial-and-error strategy for dealing with complex problems: (1) establish a policy, (2) observe the effects, (3) correct for undesired effects, (4) observe the effects of the correction, and (5) correct again.”1
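Read purely as procedure, the Morone-Woodhouse loop is ordinary feedback control: act, measure, adjust, repeat. The sketch below is an illustrative abstraction of that loop, not anything from the book; every name in it is hypothetical.

```python
# Illustrative only: a generic feedback loop in the shape of the
# Morone-Woodhouse strategy. Every name below is hypothetical.

def trial_and_error(policy, observe_effects, correct,
                    max_rounds=10, tolerance=0.01):
    """Repeatedly try a policy, observe its undesired effects,
    and correct, until the effects are tolerably small."""
    for _ in range(max_rounds):
        effects = observe_effects(policy)   # steps 2 and 4: observe
        if abs(effects) <= tolerance:       # effects acceptable: stop
            return policy
        policy = correct(policy, effects)   # steps 3 and 5: correct
    return policy                           # best policy reached so far

# Toy run: the "policy" is a single number, the undesired effect is
# its distance from an unknown safe value (0.25), and the correction
# is damped so that each round removes half of the observed error.
final = trial_and_error(
    policy=1.0,
    observe_effects=lambda p: p - 0.25,
    correct=lambda p, e: p - 0.5 * e,
)
print(round(final, 3))  # approaches 0.25
```

The point of the toy run is only structural: the safe value is unknown in advance, yet repeated correction homes in on it, which is the sense in which error is a source of learning rather than something to be ruled out beforehand.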
The current debate on risk, however, which proposes a radical revision of this strategy, results in the opposite doctrine: no trials without prior guarantees against error. I do not mean to imply that proponents of trial without error would never permit error. They see “no errors” as the goal (albeit one that cannot be fully realized) only for certain classes of situations. In this perspective, trial and error is all right in its circumscribed place. But that place would be limited to conditions in which possible consequences are quite modest (as distinguished from catastrophic) and where feedback is fast. This limitation implies a certain foreseeability of the possible sorts of error and the extent of their consequences. Yet this presumption itself may be erroneous; that is, it ignores the most dangerous source of error, namely, the unexpected. When large adverse consequences probably will occur, and when preventive measures are likely to make things better (without, in other ways, making them worse), of course no one disputes that trials should be regulated. The difficulty, as usual, lies in reaching agreement about whether and when a catastrophe is coming. One side wants special reasons to stop experimentation, and the other wants special conditions to start. The question, then, is which bias is safest.
The outcome of analysis depends in large part on how the criterion of choice is defined. Some prominent environmental economists, such as Allen Kneese, would opt for the standard of efficiency called Pareto optimality, under which actions are justified if they make some people better off without harming others. But this criterion assumes, erroneously, that it is possible to separate harmful from beneficial effects. Thus, a vaccine that saves millions but kills a few would not be justified, even though the health of society as a whole, and of almost all of its members, would be improved. Indeed, the pursuit of Pareto optimality can strangle growth and change, because any new developments are likely to hurt someone, somewhere, sometime. Lindblom’s criticism is justified:
Economists often blunder into the conclusion that policy makers should choose Pareto efficient solutions because they help some persons and hurt no others. Not so. If, as is typically the case—and perhaps always the case—there are still other solutions that bring substantial advantages to large numbers of persons and these advantages are worth seeking even at loss to other persons—for example, protecting civil liberties of minorities even if doing so is greatly irritating and obstructive to others—then, there remains a conflict as to what is to be done. The Pareto efficient solution is not necessarily the best choice.2
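For readers who want the criterion spelled out: in the standard notation of welfare economics (the symbols are generic, not Wildavsky's or Lindblom's), a move from state \(y\) to state \(x\) is a Pareto improvement only if

\[ u_i(x) \ge u_i(y) \ \text{for every person } i, \quad \text{and} \quad u_j(x) > u_j(y) \ \text{for at least one person } j. \]

The vaccine example fails the first condition for the few who are harmed, however large the aggregate gain, which is precisely why the criterion can strangle beneficial change.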
In discussing trial without error with participants in the risk debate, I often sense an air of disbelief, as if no reasonable person would support such a practice. But people do; I shall show that trial without error is indeed the prevailing doctrine among the risk-averse and that in important respects it is government policy. For illustrative purposes, I have deliberately chosen the most persuasive exponents of this doctrine.

No Trials Without Prior Guarantees Against Error

Trial without error is proposed as a criterion of choice by David W. Pearce, who wishes to prevent technologies from being introduced “without first having solved the problems they create. This ‘reverse solution’ phenomenon characterizes the use of nuclear power, where waste disposal problems remain to be solved even though the source of the waste, the power stations themselves, forms part of whole energy programs.”3 There is nothing unusual today about this way of introducing new technologies. In the past, however, it was common practice to solve the problems associated with novelty as they surfaced following adoption of the innovation. One could well ask whether any technology, including the most benign, would ever have been established if it had first been forced to demonstrate that it would do no harm.
In 1865, to take but a single instance, a million cubic feet of gas exploded at the London Gas-Works, killing ten people and burning twenty. The newspapers screamed that the metropolis faced disaster.
If half London would be blown to pieces by the explosion of the comparatively small quantity of gas stored at Blackfriars, it might be feared that if all the gasholders in the metropolis were to ‘go off,’ half the towns in the kingdom would suffer, and to be perfectly secure, the source of danger must be removed to the Land’s End.4
Could anyone who planned to introduce gas heating or lighting have certified in advance that there would be no explosions, no danger of blowing up the city? I think not. Nonetheless, the gas industry, without such guarantees, did flourish.
But Pearce sees otherwise. In order to guard against potential harm from new technology, he suggests amassing information from experts on both sides, with attention being paid to the possibility of refusing to go ahead with a particular technology. By funding the opposition and by bringing in wider publics, Pearce hopes to insure that “surveillance of new technology is carried out in such a way that no new venture is embarked upon without the means of control being ‘reasonably’ assured in advance.”5 This, I say, is not trial and error but a new doctrine: no trials without prior guarantees against error.
The most persuasive and most common argument is that trial and error should not be used unless the consequences of worst-case errors can be known in advance to be sufficiently benign to permit new trials. For if irreversible damage to large populations resulted, no one might be around to take on the next trial. A strong statement of this view comes from Robert E. Goodin:
Trial and error and learning by doing are appropriate, either for… discovering what the risks are or for the adaptive task of overcoming them only under very special conditions. These are conspicuously lacking in the case of nuclear power. First, we must have good reasons for believing that the errors, if they occur, will be small. Otherwise the lessons may be far too costly. Some nuclear mishaps will no doubt be modest. But for the same reasons small accidents are possible, so too are large ones, and some of the errors resulting in failure of nuclear reactor safeguards may be very costly indeed. This makes trial and error inappropriate in that setting. Second, errors must be immediately recognizable and correctable. The impact of radioactive emissions from operating plants or of leaks of radioactive waste products from storage sites upon human populations or the natural environment may well be a ‘sleeper’ effect that does not appear in time for us to revise our original policy accordingly.6
Past practice had encouraged people to act unless there were good reasons for not doing so. Goodin reverses that criterion, explicitly replacing it with a requirement for “very special conditions” before trying anything new. His justification, like Pearce’s, is the potential danger of nuclear energy, or of any other technology that might lead to irreversible damage.
Yet the argument against taking any irreversible actions is not as broadly applicable as it may appear. On this ground many policies and practices that make up the warp and woof of daily life would actually have to be abandoned. Maurice Richter makes the case well:
Our legal system makes it relatively easy for people to commit themselves to specified courses of action “irreversibly” through the signing of contracts; a contractual agreement that is too easily reversible may thereby lose much of its value. The movement away from irreversibility in marriage is widely regarded as a social problem. Why, then, should irreversibility, which is sought in so many other contexts, be considered a defect when it appears in material technology? There may be a good reason, but the burden of proof falls on those who insist that reversibility in technology is a valid general principle, and they have hardly proved their case.7
Put a little differently, we might want reversibility in some areas of life (say, alternation in political office), but not in others (say, diversion of social security funds to other purposes).
Returning to the effects of nuclear radiation, there are extraordinarily sensitive means available for measuring radiation, down to the decay of single atoms. Moreover, human exposure (consider Hiroshima and Nagasaki) has been so intensively studied that it is possible to accurately estimate the health risk of exposure to a given dose, including long-range risk. This comparatively great understanding of radiation notwithstanding, however, no reasonable person could say with complete certainty that any particular dose—for given individuals, or, still more remote, large populations—would never produce irreversible consequences. And there is still doubt about the long-term effects of very small doses. Even when the best estimates of risk (the magnitude of the hazard/error times the probability of occurrence) approach zero, one can always imagine some concatenation of events (viz., Chernobyl) that makes it impossible to rule out potential catastrophe. Presumably, then, the only safe action, according to the “trial-without-error” school, is no trials at all.
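The parenthetical definition can be written out explicitly. Using generic symbols (not the book's own notation), the standard point estimate of risk is

\[ \text{risk} = M \times p, \]

where \(M\) is the magnitude of the hazard (the size of the error) and \(p\) is its probability of occurrence. The rhetorical trap identified here follows directly: however small the estimated \(p\), a critic who treats \(M\) as effectively unbounded can keep the product above any threshold, so an estimate that merely approaches zero never closes the argument.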
Though agreeing that there has been useful learning about nuclear energy, Goodin draws a pessimistic conclusion:
Sometimes, once we have found out what is going wrong and why, we can even arrange to prevent it from recurring. Precisely this sort of learning by doing has been shown to be responsible for dramatic improvements in the operating efficiency of nuclear reactors. That finding, however, is as much a cause for concern as for hope. It is shocking that there is any room at all left for learning in an operational nuclear reactor, given the magnitude of the disaster that might result from ignorance or error in that setting.8
Heads, I win; tails, you lose. Here (as elsewhere) correcting error actually did prove to be an effective route to increased safety. So, since trial and error is exactly what Goodin wishes to prevent, he needs a stronger argument for its inadvisability.
Goodin does argue that nuclear power plants are different because “we would be living not merely with risk but also with irresolvable uncertainties.”9 But I hold that this is not good enough; after all, every technology, viewed in advance, has “irresolvable” uncertainties. Only experience can tell us which among all imaginable hazards will in fact materialize and hence justify measures to reduce them. “Irresolvable” uncertainty about the future is a condition of human life. One thing no one can have for sure is a guarantee that things will always turn out all right in the future.
Turning to the only recent and comprehensive study of trial and error as a strategy for securing safety (it covers toxic chemicals, nuclear power, the greenhouse effect, genetic engineering, and threats to the ozone layer), Morone and Woodhouse “…were pleasantly surprised to find how much learning from error has occurred. In part because the ecosystem (so far) has been more forgiving than reasonably might have been expected, trial-and-error has been an important component of the system for averting catastrophe.”10 They conclude:
For years, regulation of toxic substances proceeded by trial and error. Chemicals were regulated only after negative consequences became apparent. This type of decision process is a well-known, thoroughly analyzed strategy for coping with complex problems. But we had assumed that long delays before obtaining feedback, coupled with severe consequences of error, would make trial and error inappropriate in managing hazardous chemicals. Contrary to our expectations, there proved to be numerous channels for feedback about the effects of chemicals, as demonstrated in detail for pesticides. Regulators were able to take repeated corrective actions in response to the feedback.11
There are many historical examples also of feedback from affe...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Dedication
  6. Table of Contents
  7. Acknowledgments
  8. Introduction: The Jogger’s Dilemma or What Should We Do When The Safe and the Dangerous are Inextricably Intertwined?
  9. Section I: Strategies
  10. Section II: Conditions
  11. Section III: Principles
  12. Notes
  13. Index