Behind Human Error

David Woods, Sidney Dekker, Richard Cook, Leila Johannesen, Nadine Sarter

About This Book

Human error is cited over and over as a cause of incidents and accidents. The result is a widespread perception of a 'human error problem', and solutions are thought to lie in changing the people or their role in the system. For example, we should reduce the human role with more automation, or regiment human behavior by stricter monitoring, rules or procedures. But in practice, things have proved not to be this simple. The label 'human error' is prejudicial and hides much more than it reveals about how a system functions or malfunctions. This book takes you behind the human error label. Divided into five parts, it begins by summarising the most significant research results. Part 2 explores how systems thinking has radically changed our understanding of how accidents occur. Part 3 explains the role of cognitive system factors – bringing knowledge to bear, changing mindset as situations and priorities change, and managing goal conflicts – in operating safely at the sharp end of systems. Part 4 studies how the clumsy use of computer technology can increase the potential for erroneous actions and assessments in many different fields of practice. And Part 5 tells how the hindsight bias always enters into attributions of error, so that what we label human error is actually the result of a social and psychological judgment process by which stakeholders in the system in question focus on only one facet of a set of interacting contributors. If you think you have a human error problem, recognize that the label itself is no explanation and no guide to countermeasures. The potential for constructive change, for progress on safety, lies behind the human error label.

Information

Publisher: CRC Press
Year: 2017
ISBN: 9781317175537
Pages: 292

PART I
AN INTRODUCTION TO THE SECOND STORY

There is a widespread perception of a “human error problem.” “Human error” is often cited as a major contributing factor or “cause” of incidents and accidents. Many people accept the term “human error” as one category of potential causes for unsatisfactory activities or outcomes. A common belief is that the human element is unreliable, and that solutions to the “human error problem” reside in changing the people or their role in the system.
This book presents the results of an intense examination of the human contribution to safety. It shows that the story of “human error” is remarkably complex. One way to discover this complexity is to make a shift from what we call the “first story,” where human error is the cause, to a second, deeper story, in which the normal, predictable actions and assessments (which some call “human error” after the fact) are the product of systematic processes within the cognitive, operational, and organizational world in which people work. Second stories show that doing things safely – in the course of meeting other goals – is always part of people’s operational practice. People, in their different roles, are aware of potential paths to failure, and develop failure-sensitive strategies to forestall these possibilities. People are a source of the adaptability required to cope with the variation inherent in a field of activity.
Another result of the Second Story is the idea that complex systems have a sharp end and a blunt end. At the sharp end, practitioners directly interact with the hazardous process. At the blunt end, regulators, administrators, economic policy makers, and technology suppliers control the resources, constraints, and multiple incentives and demands that sharp-end practitioners must integrate and balance. The story of both success and failure consists of how sharp-end practitioners adapt to cope with the complexities of the processes they monitor, manage, and control, and how the strategies of the people at the sharp end are shaped by the resources and constraints provided by the blunt end of the system.
Failure, then, represents breakdowns in adaptations directed at coping with complexity. Indeed, the enemy of safety is not the human: it is complexity. Stories of how people succeed and sometimes fail in their pursuit of success reveal different sources of complexity as the mischief makers – cognitive, organizational, technological. These sources form an important topic of this book.
This first part of the book offers an overview of these and other results of the deeper study of “human error.” It presents 15 premises that recur frequently throughout the book:
1. “Human error” is an attribution after the fact.
2. Erroneous assessments and actions are heterogeneous.
3. Erroneous assessments and actions should be taken as the starting point for an investigation, not an ending.
4. Erroneous actions and assessments are a symptom, not a cause.
5. There is a loose coupling between process and outcome.
6. Knowledge of outcome (hindsight) biases judgments about process.
7. Incidents evolve through the conjunction of several failures/factors.
8. Some of the contributing factors to incidents are always in the system.
9. The same factors govern the expression of expertise and of error.
10. Lawful factors govern the types of erroneous actions or assessments to be expected.
11. Erroneous actions and assessments are context-conditioned.
12. Enhancing error tolerance, error detection, and error recovery together produce safety.
13. Systems fail.
14. Failures involve multiple groups, computers, and people, even at the sharp end.
15. The design of artifacts affects the potential for erroneous actions and paths towards disaster.
The rest of the book explores four main themes that lie behind the label of human error:
• how systems thinking is required because accidents in modern systems arise from multiple factors, each necessary but only jointly sufficient (Part II);
• how operating safely at the sharp end depends on cognitive-system factors as situations evolve and cascade – bringing knowledge to bear, shifting mindset in pace with events, and managing goal conflicts (Part III);
• how the clumsy use of computer technology can increase the potential for erroneous actions and assessments (Part IV);
• how what is labeled human error results from social and psychological attribution processes as stakeholders react to failure, and how these oversimplifications block learning from accidents and learning before accidents occur (Part V).

1
THE PROBLEM WITH “HUMAN ERROR”

Disasters in complex systems – such as the destruction of the reactor at Three Mile Island, the explosion onboard Apollo 13, the destruction of the space shuttles Challenger and Columbia, the Bhopal chemical plant disaster, the Herald of Free Enterprise ferry capsizing, the Clapham Junction railroad disaster, the grounding of the tanker Exxon Valdez, crashes of highly computerized aircraft at Bangalore and Strasbourg, the explosion at the Chernobyl reactor, AT&T’s Thomas Street outage, as well as the far more numerous serious incidents that have captured only localized attention – have left many people perplexed. From a narrow, technology-centered point of view, incidents seem more and more to involve the mis-operation of otherwise functional engineered systems. Small problems seem to cascade into major incidents. Systems with minor problems are managed into much more severe incidents. What stands out in these cases is the human element.
“Human error” is cited over and over again as a major contributing factor or “cause” of incidents. Most people accept the term human error as one category of potential causes for unsatisfactory activities or outcomes. Human error as a cause of bad outcomes is used in engineering approaches to the reliability of complex systems (probabilistic risk assessment) and is widely cited as a basic category in incident reporting systems in a variety of industries. For example, surveys of anesthetic incidents in the operating room have attributed between 70 and 75 percent of the incidents surveyed to the human element (Cooper, Newbower, and Kitz, 1984; Chopra, Bovill, Spierdijk, and Koornneef, 1992; Wright, Mackenzie, Buchan, Cairns, and Price, 1991). Similar incident surveys in aviation have attributed over 70 percent of incidents to crew error (Boeing, 1993). In general, incident surveys in a variety of industries attribute high percentages of critical events to the category “human error” (see for example, Hollnagel, 1993). The result is the widespread perception of a “human error problem.”
One aviation organization concluded that to make progress on safety:
We must have a better understanding of the so-called human factors which control performance simply because it is these factors which predominate in accident reports. (Aviation Daily, November 6, 1992)
The typical belief is that the human element is separate from the system in question and, hence, that problems reside either in the human side or in the engineered side of the equation. Incidents attributed to human error then become indicators that the human element is unreliable. This view implies that solutions to a “human error problem” reside in changing the people or their role in the system. To cope with this perceived unreliability of people, the implication is that one should reduce or regiment the human role in managing the potentially hazardous system. In general, this is attempted by enforcing standard practices and work rules, by exiling culprits, by policing practitioners, and by using automation to shift activity away from people. Note that this view assumes that the overall tasks and system remain the same regardless of the extent of automation – that is, the allocation of tasks to people or to machines – and regardless of the pressures managers or regulators place on the practitioners.
For those who accept human error as a potential cause, the answer to the question “What is human error?” seems self-evident. Human error is a specific variety of human performance that is so clearly and significantly substandard and flawed when viewed in retrospect that there is no doubt that it should have been viewed by the practitioner as substandard at the time the act was committed or omitted. The judgment that an outcome was due to human error is an attribution that (a) the human performance immediately preceding the incident was unambiguously flawed, and (b) the human performance led directly to the negative outcome.
But in practice, things have proved not to be this simple. The label “human error” is very controversial (e.g., Hollnagel, 1993). When precisely does an act or omission constitute an error? How does labeling some act as a human error advance our understanding of why and how complex systems fail? How should we respond to incidents and errors to improve the performance of complex systems? These are not academic or theoretical questions. They are close to the heart of tremendous bureaucratic, professional, and legal conflicts and are tied directly to issues of safety and responsibility. Much hinges on being able to determine how complex systems have failed and on the human contribution to such outcome failures. Even more depends on judgments about what means will prove effective for increasing system reliability, improving human performance, and reducing or eliminating bad outcomes.
Studies in a variety of fields show that the label “human error” is prejudicial and unspecific. It retards rather than advances our understanding of how complex systems fail and the role of human practitioners in both successful and unsuccessful system operations. The investigation of the cognition and behavior of individuals and groups of people, not the attribution of error in itself, points to useful changes for reducing the potential for disaster in large, complex systems. Labeling actions and assessments as “errors” identifies a symptom, not a cause; the symptom should call forth a more in-depth investigation of how a system comprising people, organizations, and technologies both functions and malfunctions (Rasmussen et al., 1987; Reason, 1990; Hollnagel, 1991b; 1993).
Consider this episode, which apparently involved a “human error” and which was the stimulus for one of the earliest developments in the history of experimental psychology. In 1796 the astronomer Maskelyne fired his assistant Kinnebrook because the latter’s observations did not match his own. This incident was one stimulus for another astronomer, Bessel, to examine empirically individual differences in astronomical observations. He found that there were wide differences across observers given the methods of the day, and developed what was named the personal equation in an attempt to model and account for these variations (see Boring, 1950). The full history of this episode foreshadows the latest results on human error. The problem was not that one person was the source of errors. Rather, Bessel realized that the standard assumptions about inter-observer accuracies were wrong. The techniques for making observations at this time required a combination of auditory and visual judgments. These judgments were heavily shaped by the tools of the day – pendulum clocks and telescope hairlines – in relation to the demands of the task. In the end, the constructive solution was not dismissing Kinnebrook, but rather searching for better methods for making astronomical observations, re-designing the tools that supported astronomers, and re-designing the tasks to change the demands placed on human judgment.
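Boring’s (1950) account makes the logic of the personal equation straightforward to state. As a rough illustrative sketch (the notation below is ours, not Bessel’s): if observers A and B time the same stellar transits and record times $t_A$ and $t_B$, their relative personal equation is the mean offset between their readings, and an individual observer’s reports can in principle be corrected by subtracting a characteristic offset:

$$p_{AB} = \overline{t_A - t_B}, \qquad t_{\text{corrected}} = t_{\text{observed}} - p.$$

The significance of the episode lies less in the formula than in the stance it embodies: the offset is treated as a lawful, measurable property of the observer working with the instruments and task demands of the day – something to be modeled and designed around, not a personal failing to be punished.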
The results of the recent intense examination of the human contribution to safety and to system failure indicate that the story of “human error” is markedly complex. For example:
• the context in which incidents evolve plays a major...
