The Field Guide to Human Error Investigations

About this book

This title was first published in 2002. This field guide assesses two views of human error - the old view, in which human error becomes the cause of an incident or accident, and the new view, in which human error is merely a symptom of deeper trouble within the system. The two parts of this guide concentrate on each view in turn, leading towards an appreciation of the new view, in which human error is the starting point of an investigation rather than its conclusion. The second part of this guide focuses on the circumstances that unfold around people and cause their assessments and actions to change. It shows how to "reverse engineer" human error, which, like any other component, needs to be put back together in a mishap investigation.

Information

Publisher
Routledge
Year
2017
Print ISBN
9781138704299
eBook ISBN
9781351786034
PART I
The Old View of Human Error:
Human error is a cause of accidents
To explain failure, investigations must seek failure
They must find people’s inaccurate assessments, wrong decisions and bad judgments
1. The Bad Apple Theory
There are basically two ways of looking at human error. The first view could be called “the bad apple theory”. It maintains that:
• Complex systems would be fine, were it not for the erratic behavior of some unreliable people (bad apples) in it;
• Human errors cause accidents: humans are the dominant contributor to more than two thirds of them;
• Failures come as unpleasant surprises. They are unexpected and do not belong in the system. Failures are introduced to the system only through the inherent unreliability of people.
This chapter is about the first view, and the following five are about the problems and confusion that lie at its root.
Every now and again, nation-wide debates about the death penalty rage in the United States. Studies find a system fraught with vulnerabilities and error. Some states halt proceedings altogether; others scramble to invest more in countermeasures against executions of the innocent.
The debate is a window on people’s beliefs about the sources of error. Says one protagonist: “The system of protecting the rights of the accused is good. It’s the people who are administering it who need improvement: the judges that make mistakes and don’t permit evidence to be introduced. We also need improvement of the defense attorneys.”1 The system is basically safe, but it contains bad apples. Countermeasures against miscarriages of justice begin with them. Get rid of them, retrain them, discipline them.
But what is the practice of employing the least experienced, least skilled, lowest-paid public defenders in many death penalty cases other than systemic? What are the rules for judges’ permission of evidence other than systemic? What is the ambiguous nature of evidence other than inherent to a system that often relies on eyewitness accounts to make or break a case?
Each debate about error reveals two possibilities. Error is either the result of a bad apple, where disastrous outcomes could have been avoided if somebody had paid a bit more attention or made a little more effort. In this view, we wonder how we can cope with the unreliability of the human element in our systems.
Or errors are the inevitable by-product of people doing the best they can in systems that themselves contain multiple subtle vulnerabilities; systems where risks and safety threats are not always the same; systems whose conditions shift and change over time. These systems contain inherent contradictions between operational efficiency on the one hand and safety (for example: protecting the rights of the accused) on the other. In this view, errors are symptoms of trouble deeper inside a system. Like debates about human error, investigations into human error mishaps face the same choice: the choice between the bad apple theory in one of its many versions, and what has become known as the new view of human error.
A Boeing 747 Jumbo Jet crashed while attempting to take off from a runway that was under construction and being converted into a taxiway. The weather at the time was terrible—a typhoon was about to hit the island: winds were high and visibility low. The runway under construction was close and parallel to the intended runway, and bore all the markings, lights and indications of a real runway, even though it had been used as a taxiway for quite a while and was going to be officially converted at midnight the next day—ironically only hours after the accident. Pilots had complained about potential confusion for years, saying that by not indicating that the runway was not really a runway, the airport authorities were “setting a trap for a dark and stormy night”. The chief of the country’s aviation administration, however, claimed that “runways, signs and lights were up to international requirements” and that “it was clear that human error had led to the disaster.” Human error, in other words, was simply the cause, and that was that. There was no deeper trouble of which the error was a symptom.
The ultimate goal of an investigation is to learn from failure. The road towards learning—the road taken by most investigations—is paved with intentions to follow the new view. Investigators intend to find the systemic vulnerabilities behind individual errors. They want to address the error-producing conditions that, if left in place, will repeat the same basic pattern of failure.
In practice, however, investigations often return disguised versions of the bad apple theory—in both findings and recommendations. They sort through the rubble of a mishap to:
• Find evidence for erratic, wrong or inappropriate behavior;
• Bring to light people’s bad decisions; inaccurate assessments; deviations from written guidance;
• Single out particularly ill-performing practitioners.
Investigations often end up concluding how front-line operators failed to notice certain data, or did not adhere to procedures that appeared relevant after the fact. They recommend the demotion or retraining of particular individuals; the tightening of procedures or oversight. The reasons for regression into the bad apple theory are many. For example:
• Resource constraints on investigations. Findings may need to be produced in a few months’ time, and money is limited;
• Reactions to failure, which make it difficult not to be judgmental about seemingly bad performance;
• The hindsight bias, which confuses our reality with the one that surrounded the people we investigate;
• Political distaste for deeper probing into sources of failure, which may de facto limit access to certain data or discourage certain kinds of recommendations;
• Limited human factors knowledge on the part of investigators. While wanting to probe the deeper sources behind human errors, investigators may not really know where or how to look.
In one way or another, The Field Guide will try to deal with these reasons. It will then present an approach for how to do a human error investigation—something for which there is no clear guidance today.
UNRELIABLE PEOPLE IN BASICALLY SAFE SYSTEMS
This chapter discusses the bad apple theory of human error. In this view, progress on safety is driven by one unifying idea:
COMPLEX SYSTEMS ARE BASICALLY SAFE
THEY NEED TO BE PROTECTED FROM UNRELIABLE PEOPLE
Charges are brought against the pilots who flew a VIP jet with a malfunction in its pitch control system (which makes the plane go up or down). Severe oscillations during descent killed seven of their unstrapped passengers in the back. Significant in the sequence of events was that the pilots “ignored” the relevant alert light in the cockpit as a false alarm, and that they had not switched on the fasten seatbelt sign from the top of descent, as recommended by the jet’s procedures. The pilot oversights were captured on video, shot by one of the passengers who died not much later. The pilots, wearing seatbelts, survived the upset.2
To protect safe systems from the vagaries of human behavior, recommendations typically propose to:
• Tighten procedures and close regulatory gaps. This reduces the bandwidth in which people operate. It leaves less room for error.
• Introduce more technology to monitor or replace human work. If machines do the work, then humans can no longer make errors doing it. And if machines monitor human work, they can snuff out any erratic human behavior.
• Make sure that defective practitioners (the bad apples) do not contribute to system breakdown again. Put them on “administrative leave”; demote them to a lower status; educate or pressure them to behave better next time; instill some fear in them and their peers by taking them to court or reprimanding them.
In this view of human error, investigations can safely conclude with the label “human error”—by whatever name (for example: ignoring a warning light, violating a procedure). Such a conclusion and its implications supposedly get to the causes of system failure.
AN ILLUSION OF PROGRESS ON SAFETY
The shortcomings of the bad apple theory are severe and deep. Progress on safety based on this view is often a short-lived illusion. For example, focusing on individual failures does not take away the underlying problem. Removing “defective” practitioners (throwing out the bad apples) fails to remove the potential for the errors they made.
As it turns out, the VIP jet aircraft had been flying for a long time with a malfunctioning pitch feel system (‘Oh that light? Yeah, that’s been on for four months now’). These pilots inherited a systemic problem from the airline that operated the VIP jet, and from the organization charged with its maintenance.
In other words, trying to change your people by setting examples, or changing the make-up of your operational workforce by removing bad apples, has little long-term effect if the basic conditions that people work under are left unamended.
Adding more procedures
Adding procedures or enforcing existing ones does not guarantee compliance. A typical reaction to failure is procedural overspecification—patching observed holes in an operation with increasingly detailed or tightly targeted rules that respond specifically to just the latest incident. Is this a good investment in safety? It may seem like it, but by inserting more rules, more detailed rules, or more conditioned rules, procedural overspecification is likely to widen the gap between procedures and practice rather than narrow it. Rules will increasingly grow at odds with the context-dependent and changing nature of practice.
The reality is that mismatches between written guidance and operational practice always exist. Think about the work-to-rule strike, a form of industrial action historically employed by air traffic controllers, customs officials and other professions deeply embedded in rules and regulations. What does it mean? It means that if people don’t want to or cannot go on strike, they say to one another: “Let’s follow all the rules for a change!” Systems come to a grinding halt. Gridlock is the result. Follow the letter of the law, and the work will not get done. It is as good as, or better than, going on strike.
Seatbelt sign on from the top of descent in a VIP jet? The layout of furniture in these machines, and the way in which their passengers are pressured to make good use of their time by meeting, planning, working and discussing, do everything to discourage people from strapping in any earlier than strictly necessary. Pilots can blink the light all they want; you can understand that over time it may become pointless to switch it on from 41,000 feet on down. And who typically employs the pilot of a VIP jet? The person in the back. So guess who can tell whom what to do. And why have the light on only from the top of descent? Only in the VIP jet upset discussed here was that timing relevant, because loss of control happened to occur during descent; other incidents with in-flight deaths have occurred during cruise. Procedures are insensitive to this kind of natural variability.
New procedures can also get buried in masses of regulatory paperwork. Mismatches between procedures and practice grow not necessarily because of people’s conscious non-adherence, but because of the sheer volume of procedures and their increasingly tight constraints.
The vice president of a large airline commented recently on how he had seen several of his senior colleagues retire over the past few years. Almost all had told him how they had gotten tired of updating their aircraft operating manuals with the new procedures that came out—one after the other—often for no other reason than to close just the next gap that had been revealed in the latest little incident. Faced with a growing pile of paper in their mailboxes, they had just not bothered. Yet these captains all retired alive and probably flew very safely during their last few years.
Adding a bit more technology
More technology does not remove the potential for human error, but relocates or changes it.
A warning light does not solve a human error problem; it creates new ones. What is this light for? How do we respond to it? What do we do to make it go away? It lit up yesterday and meant nothing. Why listen to it today?
What is a warning light, really? It is a threshold crossing device: it starts blinking when some electronic or electromechanical threshold is exceeded. If particular values stay below the threshold, the light is out. If they go above, the light comes on. But what is its significance? After all, the aircraft has ...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. Preface
  7. PART I Human error as a cause of mishaps
  8. PART II Human error as symptom of trouble deeper inside the system
  9. Acknowledgements
  10. Subject Index