Risk in the Technological Society
About this book
In this book, representatives of government, industry, universities, and public interest groups consider the emerging art of risk assessment and discuss the issues and problems involved. They look at two failures in technological risk management–Three Mile Island and Love Canal; examine the dimensions of technological risk; tackle the difficult question of how safe is "safe enough"; and offer a set of research priorities.
1. Introduction
Christoph Hohenemser, Jeanne X. Kasperson
Flood, drought, famine, and infectious disease, all of them natural hazards, were once the principal hazards faced by society. Today they have been replaced by hazards arising from technology.1 Though the benefits of technology are widely acknowledged, we are bombarded daily by media accounts describing new, previously unsuspected technological threats to human health and well-being. Some believe that we have reached a state of crisis, that some technologies threaten survival, and that for many others the benefits no longer outweigh the costs.
Ironically, technology is in many ways its own worst accuser. Some hazards are detected only because of our technical ability to track minute concentrations of toxic substances; and other hazards are recognized because of our technical ability to imagine and describe theoretically a range of technological catastrophes.
Few media accounts fail to put at least some of the blame for technological failures on industry or government. The reader, listener, or viewer thus receives the impression that technology managers have erred, and may have deliberately sought private profit at public expense, or succumbed to powerful special interests. Rarely are the problems described in detail sufficient to indicate that solutions may not be easy, that science provides uncertain answers, or that one person's risk implies another's benefit. In effect "our private capacity to generate hazards to health has outstripped our public ability to evaluate and control hazards."2
The conflicts that underlie the public discussion of hazards include conflicts about facts, conflicts about perception, conflicts about risks versus benefits, and conflicts driven by divergent views about individual versus societal responsibilities. In many cases it is unclear to the very proponents of particular views what their implicit assumptions are.
For example, most members of the American public consider it self-evident that smoking and automobile seatbelt use should be matters of individual choice. Many Americans are also appalled by the extraordinary cost of health care, particularly for sufferers of chronic diseases and permanent major disabilities. Yet few people realize that smoking and auto accidents account for 350,000 deaths annually3,4 and directly consume a major portion of the high cost of health care.
The importance of hidden implicit assumptions comes home in the juxtaposition of two newspaper accounts of two different, yet ironically complementary, citizen protests against the siting of electric power plants.5 One story details the efforts of a middle-aged farming couple, the Shadises, to close down Maine Yankee, an 840-megawatt nuclear power plant in Wiscasset, Maine, two miles from their dairy herd. The second story describes a neighborhood protest by Bostonians seeking to stop Harvard University's new oil-fired diesel cogeneration plant because of the air pollution it will produce. In each case, the opposition wants to keep power plants out of its "own back yard" all the while showing little concern for the other "back yard" where the power is generated.
Along with a certain myopia in relation to total technological systems, it seems that the public has become more demanding. Thus, public health standards continue to become more stringent. Official assurances of safety or "no immediate danger" are met, especially in the aftermath of Three Mile Island and Love Canal (see Part 2, this volume), with suspicion, skepticism, and outright disbelief. This does not necessarily imply a crisis of public confidence in technology per se, but rather a growing doubt among a significant portion of the general public that technology is going to be managed adequately.6
The issues that confound the handling of technological risk have become the concern of a massive regulatory apparatus that operates at all levels of government and reaches deeply into the activities of industry and consumers. One recent study cited 12 major federal regulatory agencies and 179 laws concerned with the management of technological risk.7 Annual expenditures by government are estimated at more than $30 billion per year, and expenditures by government and the private sector combined may be as high as $130 billion (1979), or about 5% of the gross national product.8
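As a rough consistency check, the implied share of national output can be recomputed directly. The following is a minimal sketch; the $30 billion and $130 billion figures come from the text, while the 1979 U.S. GNP of roughly $2.5 trillion is an outside assumption the text does not supply:

```python
# Back-of-the-envelope check of the regulatory-expenditure figures cited
# above. The spending figures come from the text; the 1979 U.S. GNP of
# roughly $2.5 trillion is an assumption used only for this sanity check.
government_spend = 30e9   # annual government expenditure (USD, 1979)
combined_spend = 130e9    # government plus private sector (USD, 1979)
gnp_1979 = 2.5e12         # approximate U.S. gross national product, 1979

print(f"Combined spending: {combined_spend / gnp_1979:.1%} of GNP")  # ~5.2%
```

The result, about 5.2%, is consistent with the "about 5% of the gross national product" cited above.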
Much of the regulatory process involves narrow issues, hammered out through long, tedious processes constrained by the uncertainties and fundamental assumptions of the enterprise. Thus the Environmental Protection Agency (EPA) is engaged in a continuing battle to improve air quality by setting attainment standards, regulating sources, and occasionally ordering specific action by local governments. On one side, the logic of its action is constrained by science, according to which nonacute health effects of air pollution are ill-defined at best.9 On the other side, the scope of its action is constrained by politics and law, according to which it cannot deal with the ultimate causes of air pollution risks, such as the level of human wants and the choice of technology for achieving them. As a result, the EPA and similar agencies seek compromises and narrow technological fixes, which leave both consumers and affected industries dissatisfied and which, in a holistic view of the world, may involve a nonoptimal use of resources.
In some cases, government and the public make demands for risk regulation that are close to contradictory. Consider the Food and Drug Administration (FDA), charged with safeguarding food and regulating additives to food. Its actions are subject to the Delaney Amendment, a congressional action that stipulates that additives known to be carcinogens must be excluded from food. "Known carcinogens" are almost wholly defined by high-dose animal experiments, and extrapolation to humans is made on the conservative assumption that high-dose animal carcinogenesis implies potential human cancer (see Chapter 11). The logic of absolute prohibition, which sounds quite reasonable at first, begins to unravel when it is realized that some animal carcinogens occur as natural substances in food and food processing. Since the enactment of the Delaney Amendment in 1958, the growing technical ability to detect traces of chemicals has made the meaning of a zero threshold increasingly questionable; for if it means anything, "zero threshold" means "not detectable." The logic of absolute prohibition unwinds completely with the realization that some animal carcinogens have beneficial functions for which no real substitutes exist.
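The detection-limit point can be made concrete with a small sketch. The substance, its concentration, and the detection limits below are all hypothetical, invented for illustration; the point is only that the same unchanged food flips from "zero" to "prohibited" as analytical methods improve:

```python
# Why a "zero threshold" read as "not detectable" grows stricter over
# time: the food's actual concentration never changes, but improving
# detection limits eventually render it measurable, hence prohibited.
# All numbers here are hypothetical.
true_concentration_ppb = 0.8  # natural carcinogen in some food (assumed)

detection_limit_ppb = {1960: 1000.0, 1970: 10.0, 1980: 0.1}  # assumed

for year, limit in sorted(detection_limit_ppb.items()):
    if true_concentration_ppb >= limit:
        print(f"{year}: limit {limit:g} ppb -> detectable, hence prohibited")
    else:
        print(f"{year}: limit {limit:g} ppb -> 'zero' (below detection)")
```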
In response to this and similar situations, regulators and researchers have begun to address the problem of risk evaluation in ways that are generic rather than case-specific. A common thread of such approaches has been the desire to conduct "comparative risk assessment," a process that implies a broad range of activities, including an evaluation of the scientific basis of risk, the social context of technology choice, the dimensions of benefits arising therefrom, and the social dimensions of risk consequences. Not surprisingly, the risk analysts who are beginning to work in government, industry, and the universities come from many branches of natural and social sciences and frequently bring with them their respective disciplinary traditions. These impinge crucially on the very definition of "risk," which is no mere matter of semantics, since it determines what will be studied, and what will not.
A number of physicists and economists have defined risk simply as the per capita frequency or probability that a particular result (e.g., an untimely death) will occur. They have further proposed that the central task of risk assessment is a compilation that expresses all relevant risks in terms of such numbers.10,11 Although such compilations allow gross scaling of risky technologies and activities and certainly permit risk comparisons, they fail in a fundamental way because they do not reflect other dimensions of hazardousness that may have equal or greater social value (see Chapter 9). In contrast, a number of social scientists have been at work elucidating the complex, multivariate, and subjective character of risk judgments. One of their techniques has been to question ordinary people in order to determine the cognitive content of subjective risk. Using this approach, Slovic and his associates (Chapter 10) have found that people rate risks differently when an event kills many people simultaneously rather than one at a time; when activities and technologies are voluntary rather than involuntary; or when they are new rather than old.
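A minimal sketch of the sort of compilation these analysts have in mind, reducing each hazard to a single per capita annual death rate. The combined smoking-and-auto figure of 350,000 deaths is from the text; the split between the two hazards and the late-1970s U.S. population of about 225 million are assumptions made here purely for illustration:

```python
# A comparative-risk compilation in the single-number style described
# above: every hazard collapsed to annual deaths per 100,000 people.
# The 350,000 combined total is from the text; the per-hazard split and
# the population figure are illustrative assumptions.
population = 225e6  # approximate U.S. population, late 1970s (assumed)

annual_deaths = {
    "smoking": 300_000,        # assumed share of the text's 350,000 total
    "motor vehicles": 50_000,  # assumed share of the text's 350,000 total
}

for hazard, deaths in sorted(annual_deaths.items(), key=lambda kv: -kv[1]):
    print(f"{hazard:15s} {deaths / population * 1e5:6.1f} per 100,000 per year")
```

Such a table permits the gross scaling described above, but it also shows what is lost: catastrophic potential, voluntariness, and novelty all vanish into a single number.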
Given the divergent views of the meaning of "risk," it should not be surprising that "risk" assessment is a pursuit for which basic goals and definitions are widely debated and not easily agreed upon. Risk assessment is, in short, a field in its infancy.
Beyond defining and measuring risk, an important question remains: which risks are acceptable? Or, equivalently, how safe is safe enough?12 Implicit in this question is the assumption that a risk-free environment is an elusive goal. In deciding which risks are acceptable, agreement on fundamental approaches is even less well established than in the case of risk definition and measurement. As in the case of defining the concept of "risk" in the first place, the underlying difficulty has to do with the incorporation of multiple human values, including the value to be placed on human life.
Acceptable risk has been approached through a variety of principles and practices, including: (1) setting quantitative risk standards above which risks are deemed unacceptable; (2) comparing risks to benefits in commensurate terms and demanding that benefit/risk ratios exceed unity; (3) comparing the cost-effectiveness of various risk-control strategies in units of cost per life saved or cost per year of longevity; and (4) defining rules of aversion for those cases where negligible benefits accrue to risk taking. As discussed in detail in Chapters 12-16, each of these approaches has its own uses, advantages, and problems, not the least of which is that they are all based in one way or another on a rather narrow definition of risk as "conditional probability of harm."
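To make approach (3) concrete, the sketch below ranks risk-control strategies by cost per statistical life saved. Every strategy name and figure is hypothetical, invented for illustration:

```python
# Cost-effectiveness comparison in the style of approach (3): rank
# control strategies by dollars spent per statistical life saved.
# All strategies and numbers below are hypothetical.
strategies = [
    # (strategy, annual cost in USD, estimated lives saved per year)
    ("mandatory seatbelt use",   50e6,  2_000),
    ("stack-gas scrubbers",     500e6,    100),
    ("hazardous-waste liners",  200e6,     10),
]

for name, cost, lives in sorted(strategies, key=lambda s: s[1] / s[2]):
    print(f"{name:25s} ${cost / lives:>12,.0f} per life saved")
```

On this criterion the hypothetical strategies differ by three orders of magnitude, precisely the kind of disparity such comparisons are meant to expose; but the ranking inherits the narrow "conditional probability of harm" definition of risk noted above.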
Risk measurement and evaluation thus concoct a rich and often inconsistent brew. Far from resolving problems of risk management by society, particularly government, the difficulties we have mentioned translate into a number of generic issues that block progress at present. In recent work, the groups at Clark University and Decision Research have identified seven such issues:13
Incomplete knowledge. Characteristically, hazard managers would like to assume that causality is or will soon be defined. Yet current knowledge is often insufficient. We know, for example, that the burning of fossil fuels affects climate; we know there are risks, but we cannot say much more and are very far indeed from formulating policy.
Forgoing benefits. Controlling risks has impacts on the benefits of technology. Benefits are often shared by people other than those exposed to risk, and they tend to be as clear and tangible as risks are ambiguous and elusive. All this makes for inevitable conflict, rancorous debate, and, in the end, ineffective societal action. Examples of this problem are pollution of urban air by automobiles, acid precipitation attributable to the burning of fossil fuels, and the catastrophic risks of nuclear power. For each, effective risk control involves serious impacts on massive benefits, with the result that satisfactory risk management remains an elusive goal.
A limited capacity to react. A myriad of risks confronts us. The list is much longer than our strand of worry...
Table of contents
- Cover
- Half Title
- Series Page
- Title
- Copyright
- Contents
- About the Editors and Authors
- Acknowledgments
- 1 Introduction
- PART 1. FAILURES IN MANAGING TECHNOLOGICAL RISK
- PART 2. THE STRUCTURE OF TECHNOLOGICAL RISK
- PART 3. DEFINING TOLERABLE RISK LEVELS
- PART 4. AGENDA FOR RESEARCH
- Index