Behind Human Error
eBook - ePub

David Woods, Sidney Dekker, Richard Cook, Leila Johannesen, Nadine Sarter

292 pages
English
ePub

About This Book

Human error is cited over and over as a cause of incidents and accidents. The result is a widespread perception of a 'human error problem', and solutions are thought to lie in changing the people or their role in the system. For example, we should reduce the human role with more automation, or regiment human behavior through stricter monitoring, rules, or procedures. But in practice, things have proved not to be this simple. The label 'human error' is prejudicial and hides much more than it reveals about how a system functions or malfunctions. This book takes you behind the human error label. Divided into five parts, it begins by summarising the most significant research results. Part 2 explores how systems thinking has radically changed our understanding of how accidents occur. Part 3 explains the role of cognitive system factors - bringing knowledge to bear, changing mindset as situations and priorities change, and managing goal conflicts - in operating safely at the sharp end of systems. Part 4 studies how the clumsy use of computer technology can increase the potential for erroneous actions and assessments in many different fields of practice. And Part 5 shows how hindsight bias always enters into attributions of error: what we label human error is actually the result of a social and psychological judgment process by which stakeholders in the system in question focus on only one facet of a set of interacting contributors. If you think you have a human error problem, recognize that the label itself is no explanation and no guide to countermeasures. The potential for constructive change, for progress on safety, lies behind the human error label.


Information

Publisher: CRC Press
Year: 2017
ISBN: 9781317175537

PART I
AN INTRODUCTION TO THE SECOND STORY

There is a widespread perception of a “human error problem.” “Human error” is often cited as a major contributing factor or “cause” of incidents and accidents. Many people accept the term “human error” as one category of potential causes for unsatisfactory activities or outcomes. The accompanying belief is that the human element is unreliable, and that solutions to the “human error problem” reside in changing the people or their role in the system.
This book presents the results of an intense examination of the human contribution to safety. It shows that the story of “human error” is remarkably complex. One way to discover this complexity is to make a shift from what we call the “first story,” where human error is the cause, to a second, deeper story, in which the normal, predictable actions and assessments (which some call “human error” after the fact) are the product of systematic processes inside the cognitive, operational, and organizational world in which people work. Second stories show that doing things safely – in the course of meeting other goals – is always part of people’s operational practice. People, in their different roles, are aware of potential paths to failure, and develop failure-sensitive strategies to forestall these possibilities. People are a source of the adaptability required to cope with the variation inherent in a field of activity.
Another result of the Second Story is the idea that complex systems have a sharp end and a blunt end. At the sharp end, practitioners directly interact with the hazardous process. At the blunt end, regulators, administrators, economic policy makers, and technology suppliers control the resources, constraints, and multiple incentives and demands that sharp-end practitioners must integrate and balance. The story of both success and failure is one of how practitioners at the sharp end adapt to cope with the complexities of the processes they monitor, manage, and control, and of how their strategies are shaped by the resources and constraints provided by the blunt end of the system.
Failure, then, represents breakdowns in adaptations directed at coping with complexity. Indeed, the enemy of safety is not the human: it is complexity. Stories of how people succeed and sometimes fail in their pursuit of success reveal different sources of complexity as the mischief makers – cognitive, organizational, technological. These sources form an important topic of this book.
This first part of the book offers an overview of these and other results of the deeper study of “human error.” It presents 15 premises that recur frequently throughout the book:
1. “Human error” is an attribution after the fact.
2. Erroneous assessments and actions are heterogeneous.
3. Erroneous assessments and actions should be taken as the starting point for an investigation, not an ending.
4. Erroneous actions and assessments are a symptom, not a cause.
5. There is a loose coupling between process and outcome.
6. Knowledge of outcome (hindsight) biases judgments about process.
7. Incidents evolve through the conjunction of several failures/factors.
8. Some of the contributing factors to incidents are always in the system.
9. The same factors govern the expression of expertise and of error.
10. Lawful factors govern the types of erroneous actions or assessments to be expected.
11. Erroneous actions and assessments are context-conditioned.
12. Enhancing error tolerance, error detection, and error recovery together produce safety.
13. Systems fail.
14. Failures involve multiple groups, computers, and people, even at the sharp end.
15. The design of artifacts affects the potential for erroneous actions and paths towards disaster.
The rest of the book explores four main themes that lie behind the label of human error:
– how systems thinking is required, because accidents in modern systems are produced by multiple factors, each necessary but only jointly sufficient (Part II);
– how operating safely at the sharp end depends on cognitive-system factors as situations evolve and cascade – bringing knowledge to bear, shifting mindset in pace with events, and managing goal conflicts (Part III);
– how the clumsy use of computer technology can increase the potential for erroneous actions and assessments (Part IV);
– how what is labeled human error results from social and psychological attribution processes as stakeholders react to failure, and how these oversimplifications block learning from accidents and learning before accidents occur (Part V).

1
THE PROBLEM WITH “HUMAN ERROR”

Disasters in complex systems – such as the destruction of the reactor at Three Mile Island, the explosion onboard Apollo 13, the destruction of the space shuttles Challenger and Columbia, the Bhopal chemical plant disaster, the Herald of Free Enterprise ferry capsizing, the Clapham Junction railroad disaster, the grounding of the tanker Exxon Valdez, crashes of highly computerized aircraft at Bangalore and Strasbourg, the explosion at the Chernobyl reactor, AT&T’s Thomas Street outage, as well as many more serious incidents that have captured only localized attention – have left many people perplexed. From a narrow, technology-centered point of view, incidents seem more and more to involve mis-operation of otherwise functional engineered systems. Small problems seem to cascade into major incidents. Systems with minor problems are managed into much more severe incidents. What stands out in these cases is the human element.
“Human error” is cited over and over again as a major contributing factor or “cause” of incidents. Most people accept the term human error as one category of potential causes for unsatisfactory activities or outcomes. Human error as a cause of bad outcomes is used in engineering approaches to the reliability of complex systems (probabilistic risk assessment) and is widely cited as a basic category in incident reporting systems in a variety of industries. For example, surveys of anesthetic incidents in the operating room have attributed between 70 and 75 percent of the incidents surveyed to the human element (Cooper, Newbower, and Kitz, 1984; Chopra, Bovill, Spierdijk, and Koornneef, 1992; Wright, Mackenzie, Buchan, Cairns, and Price, 1991). Similar incident surveys in aviation have attributed over 70 percent of incidents to crew error (Boeing, 1993). In general, incident surveys in a variety of industries attribute high percentages of critical events to the category “human error” (see for example, Hollnagel, 1993). The result is the widespread perception of a “human error problem.”
One aviation organization concluded that to make progress on safety:
We must have a better understanding of the so-called human factors which control performance simply because it is these factors which predominate in accident reports. (Aviation Daily, November 6, 1992)
The typical belief is that the human element is separate from the system in question and hence that problems reside either in the human side or in the engineered side of the equation. Incidents attributed to human error then become indicators that the human element is unreliable. This view implies that solutions to a “human error problem” reside in changing the people or their role in the system. To cope with this perceived unreliability of people, the implication is that one should reduce or regiment the human role in managing the potentially hazardous system. In general, this is attempted by enforcing standard practices and work rules, by exiling culprits, by policing practitioners, and by using automation to shift activity away from people. Note that this view assumes that the overall tasks and system remain the same regardless of the extent of automation (that is, the allocation of tasks to people or to machines) and regardless of the pressures managers or regulators place on the practitioners.
For those who accept human error as a potential cause, the answer to the question, what is human error, seems self-evident. Human error is a specific variety of human performance that is so clearly and significantly substandard and flawed when viewed in retrospect that there is no doubt that it should have been viewed by the practitioner as substandard at the time the act was committed or omitted. The judgment that an outcome was due to human error is an attribution that (a) the human performance immediately preceding the incident was unambiguously flawed and (b) the human performance led directly to the negative outcome.
But in practice, things have proved not to be this simple. The label “human error” is very controversial (e.g., Hollnagel, 1993). When precisely does an act or omission constitute an error? How does labeling some act as a human error advance our understanding of why and how complex systems fail? How should we respond to incidents and errors to improve the performance of complex systems? These are not academic or theoretical questions. They are close to the heart of tremendous bureaucratic, professional, and legal conflicts and are tied directly to issues of safety and responsibility. Much hinges on being able to determine how complex systems have failed and on the human contribution to such outcome failures. Even more depends on judgments about what means will prove effective for increasing system reliability, improving human performance, and reducing or eliminating bad outcomes.
Studies in a variety of fields show that the label “human error” is prejudicial and unspecific. It retards rather than advances our understanding of how complex systems fail and the role of human practitioners in both successful and unsuccessful system operations. The investigation of the cognition and behavior of individuals and groups of people, not the attribution of error in itself, points to useful changes for reducing the potential for disaster in large, complex systems. Labeling actions and assessments as “errors” identifies a symptom, not a cause; the symptom should call forth a more in-depth investigation of how a system comprising people, organizations, and technologies both functions and malfunctions (Rasmussen et al., 1987; Reason, 1990; Hollnagel, 1991b; 1993).
Consider this episode, which apparently involved a “human error” and which was the stimulus for one of the earliest developments in the history of experimental psychology. In 1796 the astronomer Maskelyne fired his assistant Kinnebrook because the latter’s observations did not match his own. This incident was one stimulus for another astronomer, Bessel, to examine empirically individual differences in astronomical observations. He found that there were wide differences across observers, given the methods of the day, and developed what was named the personal equation in an attempt to model and account for these variations (see Boring, 1950). The full history of this episode foreshadows the latest results on human error. The problem was not that one person was the source of errors. Rather, Bessel realized that the standard assumptions about inter-observer accuracies were wrong. The techniques for making observations at this time required a combination of auditory and visual judgments. These judgments were heavily shaped by the tools of the day – pendulum clocks and telescope hairlines – in relation to the demands of the task. In the end, the constructive solution was not dismissing Kinnebrook, but rather searching for better methods for making astronomical observations, re-designing the tools that supported astronomers, and re-designing the tasks to change the demands placed on human judgment.
The results of the recent intense examination of the human contribution to safety and to system failure indicate that the story of “human error” is markedly complex. For example:
– the context in which incidents evolve plays a major...
