The Field Guide to Understanding 'Human Error'

Sidney Dekker

248 pages · English · ePub
About This Book

When faced with a 'human error' problem, you may be tempted to ask 'Why didn't these people watch out better?' Or, 'How can I get my people more engaged in safety?' You might think you can solve your safety problems by telling your people to be more careful, by reprimanding the miscreants, by issuing a new rule or procedure and demanding compliance. These are all expressions of 'The Bad Apple Theory', where you believe your system is basically safe if it were not for those few unreliable people in it.

Building on its successful predecessors, the third edition of The Field Guide to Understanding 'Human Error' will help you understand a new way of dealing with a perceived 'human error' problem in your organization. It will help you trace how your organization juggles inherent trade-offs between safety and other pressures and expectations, suggesting that you are not the custodian of an already safe system. It will encourage you to start looking more closely at the performance that others may still call 'human error', allowing you to discover how your people create safety through practice, at all levels of your organization, mostly successfully, under the pressure of resource constraints and multiple conflicting goals.

The Field Guide to Understanding 'Human Error' will help you understand how to move beyond 'human error'; how to understand accidents; how to do better investigations; how to understand and improve your safety work. You will be invited to think creatively and differently about the safety issues you and your organization face. In each of these areas, you will find possibilities for a new language, for different concepts, and for new leverage points to influence your own thinking and practice, as well as that of your colleagues and organization. If you are faced with a 'human error' problem, abandon the fallacy of a quick fix. Read this book.

1. Two Views of 'Human Error'
There are basically two ways of looking at 'human error.' The first view is known as the Old View, or The Bad Apple Theory. It maintains that:
• Complex systems would be fine, were it not for the erratic behavior of some unreliable people (Bad Apples) in them.
• 'Human errors' cause accidents: more than two-thirds of them.
• Failures come as unpleasant surprises. They are unexpected and do not belong in the system. Failures are introduced to the system through the inherent unreliability of people.
The Old View maintains that safety problems are the result of a few Bad Apples in an otherwise safe system. These Bad Apples don't always follow the rules, and they don't always watch out carefully. They undermine the organized and engineered system that other people have put in place. This, according to some, creates safety problems:1
"It is now generally acknowledged that human frailties lie behind the majority of accidents. Although many of these have been anticipated in safety rules, prescriptive procedures and management treatises, people don't always do what they are supposed to do. Some employees have negative attitudes to safety which adversely affect their behaviors. This undermines the system of multiple defenses that an organization constructs" to prevent injury and incidents.
This embodies all of the tenets of the Old View:
• Human frailties lie behind the majority of accidents. 'Human errors' are the dominant cause of trouble.
• Safety rules, prescriptive procedures and management treatises are supposed to control erratic human behavior.
• But this control is undercut by unreliable, unpredictable people who still don't do what they are supposed to do.
• Some Bad Apples have negative attitudes toward safety, which adversely affect their behavior. So not attending to safety is a personal problem, a motivational one, an issue of individual choice.
• The basically safe system, of multiple defenses carefully constructed by the organization, is undermined by erratic or unreliable people.
Notice also what solutions are implied here. In order to not have safety problems, people should do as they are told. They should be compliant with what managers and planners have figured out for them. Indeed, managers and others above them are smart: they have put in place those treatises, those prescriptive procedures, those safety rules. All the dumb operators or practitioners need to do is follow them, stick to them! How hard can that be? Apparently it can be really hard. But the reason is also clear: it is because of people's negative attitudes, which adversely affect their behaviors. So more work on their attitudes (with poster campaigns and sanctions, for example) should do the trick.
This view, the Old View, is limited in its usefulness. In fact, it can be deeply counterproductive. It has been tried for decades, without noticeable effect. Safety improvement comes from abandoning the idea that errors are causes, and that people are the major threat to otherwise safe systems. Progress on safety comes from embracing the New View.
A Boeing 747 Jumbo Jet crashed when taking off from a runway that was under construction and being converted into a taxiway. The weather at the time was bad. A typhoon was about to hit the country: winds were high and visibility low. The runway under construction was close and parallel to the intended runway, and bore all the markings, lights and indications of a real runway. This even though it had been used as a taxiway for quite a while and was going to be officially converted at midnight the next day, ironically only hours after the accident.
Pilots had complained about potential confusion for years, saying that the failure to indicate that the runway was not really a runway was "setting a trap for a dark and stormy night." Moreover, at the departure end there was no sign that the runway was under construction. The first barrier stood a kilometer down the runway, and behind it a mass of construction equipment, all of it hidden in mist and heavy rain. The chief of the country's aviation administration, however, claimed that "runways, signs and lights were up to international requirements" and that "it was clear that 'human error' had led to the disaster." So 'human error' was simply the cause. To him, there was no deeper trouble of which the error was a symptom.
Bad People In Safe Systems, Or Well-Intentioned People In Imperfect Systems?
At first sight, stories of error seem so simple:
• somebody did not pay enough attention;
• if only somebody had recognized the significance of this indication, or of that piece of data, then nothing would have happened;
• somebody should have put in more effort;
• somebody thought that taking a shortcut was no big deal.
So telling other people to try harder, to watch out more carefully, is thought to deal with the 'human error' problem:
The Ministry of Transport in Tokyo issued an order to all air traffic controllers to step up their vigilance after an incident involving a JAL flight in which 42 people were injured.
Given what you know after the fact, most errors seem so preventable. It might prompt you, or your organization, to do the following things:
• get rid of Bad Apples;
• put in more rules, procedures and compliance demands;
• tell people to be more vigilant (with posters, memos, slogans);
• get technology to replace unreliable people.
But does that help in the long run, or even the short run? It doesn't. In fact, these countermeasures are not just neutral (or useless, if you want to put it that way). They have additional negative consequences:
• Getting rid of Bad Apples tends to send a signal to other people to be more careful with what they do, say, report or disclose. It does not make 'human errors' go away, but it does tend to make the evidence of them go away; evidence that might otherwise have been available to you and your organization so that you could learn and improve.
• Putting in more rules, procedures and compliance demands runs into the problem that there is always a gap between how work is imagined (in rules or procedures) and how work is done. Pretending that this gap does not exist is like sticking your head in the sand. And trying to force the gap to close with more compliance demands and threats of sanctions will drive real practice from view.
• Telling people to be more vigilant (with posters, memos, slogans) does nothing to remove the problem, certainly not in the medium or longer term. What it does do is put your ignorance about the problem on full display. If all you are seen to be able to do is ask everybody else to try harder, what does that say about you? You obviously have made up your mind about what the source of the problem is (it's those operators or practitioners who don't try hard enough). Such preconceived judgments generally do not help your credibility or your standing among your people. First you should do the hard work to understand why it made sense for your people to do what they did, given the conditions in which they worked. And you need to ask what your role and your organization's role have been in creating those conditions.
• Getting technology to replace unreliable people is an attractive idea, and it is widespread. But technology introduces new problems as well as new capacities. Rather than replacing human work, it changes human work. New technology may lead to new kinds of 'human errors' and new pathways to system breakdown.
So the apparent simplicity of 'human error' is misleading. Underneath every seemingly obvious, simple story of error, there is a second, deeper story. A more complex story.
A most colorful characterization of this comes from James Reason: "Rather than being the main instigators of an accident, operators tend to be the inheritors of system defects created by poor design, incorrect installation, faulty maintenance and bad management decisions. Their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking."2
This second story is inevitably an organizational story, a story about the system in which people work, about its management, technology, governance, administration and operation:
• Safety is never the only goal. Organizations exist to provide goods or services (and often to make money from it).
• People do their best to reconcile different goals simultaneously (for example, service or efficiency versus safety).
• A system isn't automatically safe: people actually have to create safety through practice at all levels of the organization.
• The tools or technology that people work with create error opportunities and pathways to failure.
Production expectations and pressures to be efficient influence people's trade-offs, making normal or acceptable what was previously perhaps seen as irregular or unsafe. In fact, this may include practices or things that you would never have believed your people would do. When you discover such things, be careful not to jump on them and remind your people to comply, not to take shortcuts, and to always be careful and vigilant. Such reminders can sound so hollow if you haven't first looked at yourself and your organization: at the many expectations (some communicated very subtly, not written down), the resource constraints and the goal conflicts that you help push into people's everyday working life. Remember that the shortcuts and adaptations people have introduced into their work often do not serve their own goals, but yours or those of your organization!
Underneath every simple, obvious story about 'human error,' there is a deeper, more complex story about the organization.
The second story, in other words, is a story of the real complexity in which people work. Not a story about the apparent simplicity. Systems are not basically safe. People have to create safety despite a system that places other (sometimes contradictory) expectations and demands on them.
Two hard disks with classified information went missing from the Los Alamos nuclear laboratory, only to reappear under suspicious circumstances behind a photocopier a few months later. Under pressure to assure that the facility was secure and such lapses extremely uncommon, the Energy Secretary attributed the incident to "human error, a mistake." The hard drives were probably misplaced out of negligence or inattention to security procedures, officials said. The Deputy Energy Secretary added that "the vast majority are doing their jobs well at the facility, but it probably harbored 'a few Bad Apples' who had compromised security out of negligence."
But this was never about a few bad individuals. Under pressure to perform daily work in a highly cumbersome context of checking, double-checking and registering the use of sensitive materials, such "negligence" had become a feature of the entire laboratory. Scientists routinely moved classified material without witnesses or signing logs. Doing so was not a sign of malice, but a way to get the work done given all its constraints, pressures and expectations. The practice had grown over time, accommodating the production pressures to which the laboratory owed its existence.
Table 1.1 Contrast between the Old View and New View of 'human error'
• Old View: Asks who is responsible for the outcome. New View: Asks what is responsible for the outcome.
• Old View: Sees 'human error' as the cause of trouble. New View: Sees 'human error' as a symptom of deeper trouble.
• Old View: 'Human error' is random, unreliable behavior. New View: 'Human error' is systematically connected to features of people's tools, tasks and operating environment.
• Old View: 'Human error' is an acceptable conclusion of an investigation. New View: 'Human error' is only the starting point for further investigation.
People who work in these systems learn about the pressures and contradictions, the vulnerabilities and pathways to failure. They develop strategies to keep failures from happening. But these strategies may not be completely adapted. They may be outdated. They may be thwarted by the complexity and dynamics of the situation in which people find themselves. Or vexed by their rules, or nudged by the feedback they get from their management about what "really" is important (often production and efficiency). In this way, safety is made and broken the whole time.
These insights have led to the New View of 'human error.' In this view, errors are symptoms of trouble deeper inside a system. Errors are the other side of people pursuing success in an uncertain, resource-constrained world. The Old View, or the Bad Apple Theory, sees systems as basically safe and people as the major source of trouble. The New View, in contrast, understands that systems are not basically safe. It understands that safety needs to be created through practice, by people.
People Do Not Come To Work To Do A Bad Job
The psychological basis for the New View is the "local rationality principle." This is based on a lot of research in cognitive science.3 It says that what people do makes sense to them at the time – g...
