The Field Guide to Understanding 'Human Error'

Sidney Dekker

Book information

When faced with a 'human error' problem, you may be tempted to ask 'Why didn't these people watch out better?' Or, 'How can I get my people more engaged in safety?' You might think you can solve your safety problems by telling your people to be more careful, by reprimanding the miscreants, by issuing a new rule or procedure and demanding compliance. These are all expressions of 'The Bad Apple Theory', in which you believe your system is basically safe if it were not for those few unreliable people in it. Building on its successful predecessors, the third edition of The Field Guide to Understanding 'Human Error' will help you understand a new way of dealing with a perceived 'human error' problem in your organization. It will help you trace how your organization juggles inherent trade-offs between safety and other pressures and expectations, suggesting that you are not the custodian of an already safe system. It will encourage you to start looking more closely at the performance that others may still call 'human error', allowing you to discover how your people create safety through practice, at all levels of your organization, mostly successfully, under the pressure of resource constraints and multiple conflicting goals. The Field Guide to Understanding 'Human Error' will help you understand how to move beyond 'human error'; how to understand accidents; how to do better investigations; how to understand and improve your safety work. You will be invited to think creatively and differently about the safety issues you and your organization face. In each of these areas, you will find possibilities for a new language, for different concepts, and for new leverage points to influence your own thinking and practice, as well as that of your colleagues and organization. If you are faced with a 'human error' problem, abandon the fallacy of a quick fix. Read this book.

Publication details

Publisher: CRC Press
Year: 2017
ISBN: 9781317031833
1 Two Views of ‘Human Error’
There are basically two ways of looking at ‘human error.’ The first view is known as the Old View, or The Bad Apple Theory. It maintains that:
• Complex systems would be fine, were it not for the erratic behavior of some unreliable people (Bad Apples) in them.
• ‘Human errors’ cause accidents: more than two-thirds of them.
• Failures come as unpleasant surprises. They are unexpected and do not belong in the system. Failures are introduced to the system through the inherent unreliability of people.
The Old View maintains that safety problems are the result of a few Bad Apples in an otherwise safe system. These Bad Apples don’t always follow the rules, and they don’t always watch out carefully. They undermine the organized and engineered system that other people have put in place. This, according to some, creates safety problems:1
“It is now generally acknowledged that human frailties lie behind the majority of accidents. Although many of these have been anticipated in safety rules, prescriptive procedures and management treatises, people don’t always do what they are supposed to do. Some employees have negative attitudes to safety which adversely affect their behaviors. This undermines the system of multiple defenses that an organization constructs” to prevent injury and incidents.
This embodies all of the tenets of the Old View:
• Human frailties lie behind the majority of accidents. ‘Human errors’ are the dominant cause of trouble.
• Safety rules, prescriptive procedures and management treatises are supposed to control erratic human behavior.
• But this control is undercut by unreliable, unpredictable people who still don’t do what they are supposed to do.
• Some Bad Apples have negative attitudes toward safety, which adversely affect their behavior. So not attending to safety is a personal problem, a motivational one, an issue of individual choice.
• The basically safe system, of multiple defenses carefully constructed by the organization, is undermined by erratic or unreliable people.
Notice also what solutions are implied here. In order to not have safety problems, people should do as they are told. They should be compliant with what managers and planners have figured out for them. Indeed, managers and others above them are smart—they have put in place those treatises, those prescriptive procedures, those safety rules. All the dumb operators or practitioners need to do is follow them, stick to them! How hard can that be? Apparently it can be really hard. But the reason is also clear: it is because of people’s negative attitudes which adversely affect their behaviors. So more work on their attitudes (with poster campaigns and sanctions, for example) should do the trick.
This view, the Old View, is limited in its usefulness. In fact, it can be deeply counterproductive. It has been tried for decades, without noticeable effect. Safety improvement comes from abandoning the idea that errors are causes, and that people are the major threat to otherwise safe systems. Progress on safety comes from embracing the New View.
A Boeing 747 Jumbo Jet crashed when taking off from a runway that was under construction and being converted into a taxiway. The weather at the time was bad—a typhoon was about to hit the country: winds were high and visibility low. The runway under construction was close and parallel to the intended runway, and bore all the markings, lights and indications of a real runway. Yet it had been used as a taxiway for quite a while and was going to be officially converted at midnight the next day—ironically only hours after the accident.
Pilots had complained about potential confusion for years, saying that the failure to indicate that the runway was no longer a real runway was “setting a trap for a dark and stormy night.” Moreover, at the departure end there was no sign that the runway was under construction. The first barrier stood a kilometer down the runway, and behind it a mass of construction equipment—all of it hidden in mist and heavy rain. The chief of the country’s aviation administration, however, claimed that “runways, signs and lights were up to international requirements” and that “it was clear that ‘human error’ had led to the disaster.” So ‘human error’ was simply the cause. To him, there was no deeper trouble of which the error was a symptom.
Bad People In Safe Systems, Or Well-Intentioned People In Imperfect Systems?
At first sight, stories of error seem so simple:
• somebody did not pay enough attention;
• if only somebody had recognized the significance of this indication, or of that piece of data, then nothing would have happened;
• somebody should have put in more effort;
• somebody thought that making a shortcut was no big deal.
So telling other people to try harder, to watch out more carefully, is thought to deal with the ‘human error’ problem:
The ministry of transport in Tokyo issued an order to all air traffic controllers to step up their vigilance after an incident involving a JAL flight in which 42 people were injured.
Given what you know after the fact, most errors seem so preventable. It might prompt you, or your organization, to do the following things:
• get rid of Bad Apples;
• put in more rules, procedures and compliance demands;
• tell people to be more vigilant (with posters, memos, slogans);
• get technology to replace unreliable people.
But does that help in the long run—or even the short run? It doesn’t. In fact, these countermeasures are not just neutral (or useless, if you want to put it that way). They have additional negative consequences:
• Getting rid of Bad Apples tends to send a signal to other people to be more careful with what they do, say, report or disclose. It does not make ‘human errors’ go away, but does tend to make the evidence of them go away; evidence that might otherwise have been available to you and your organization so that you could learn and improve.
• Putting in more rules, procedures and compliance demands runs into the problem that there is always a gap between how work is imagined (in rules or procedures) and how work is done. Pretending that this gap does not exist is like sticking your head in the sand. And trying to force the gap to close with more compliance demands and threats of sanctions will drive real practice from view.
• Telling people to be more vigilant (with posters, memos, slogans) does nothing to remove the problem, certainly not in the medium or longer term. What it does do is put your ignorance about the problem on full display. If all you are seen to be able to do is ask everybody else to try harder, what does that say about you? You obviously have made up your mind about what the source of the problem is (it’s those operators or practitioners who don’t try hard enough). Such preconceived judgments generally do not help your credibility or your standing among your people. First you should do the hard work to understand why it made sense for your people to do what they did, given the conditions in which they worked. And you need to ask what your role and your organization’s role has been in creating those conditions.
• Getting technology to replace unreliable people is an attractive idea, and is widespread. But technology introduces new problems as well as new capacities. Rather than replacing human work, it changes human work. New technology may lead to new kinds of ‘human errors’ and new pathways to system breakdown.
So the apparent simplicity of ‘human error’ is misleading. Underneath every seemingly obvious, simple story of error, there is a second, deeper story. A more complex story.
A most colorful characterization of this comes from James Reason: “Rather than being the main instigators of an accident, operators tend to be the inheritors of system defects created by poor design, incorrect installation, faulty maintenance and bad management decisions. Their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking.”2
This second story is inevitably an organizational story, a story about the system in which people work, about its management, technology, governance, administration and operation:
• Safety is never the only goal. Organizations exist to provide goods or services (and often to make money from it).
• People do their best to reconcile different goals simultaneously (for example, service or efficiency versus safety).
• A system isn’t automatically safe: people actually have to create safety through practice at all levels of the organization.
• The tools or technology that people work with create error opportunities and pathways to failure.
• Production expectations and pressures to be efficient influence people’s trade-offs, making normal or acceptable what was previously perhaps seen as irregular or unsafe.
In fact, this may include practices or things that you would never have believed your people would do. When you discover such things, be careful not to jump on them and remind your people to comply, to not make shortcuts, to always be careful and vigilant. Such reminders can sound so hollow if you haven’t first looked at yourself and your organization—at the many expectations (some communicated very subtly, not written down), the resource constraints and goal conflicts that you help push into people’s everyday working life. Remember that the shortcuts and adaptations people have introduced into their work often do not serve their own goals, but yours or those of your organization!
Underneath every simple, obvious story about ‘human error,’ there is a deeper, more complex story about the organization.
The second story, in other words, is a story of the real complexity in which people work. Not a story about the apparent simplicity. Systems are not basically safe. People have to create safety despite a system that places other (sometimes contradictory) expectations and demands on them.
Two hard disks with classified information went missing from the Los Alamos nuclear laboratory, only to reappear under suspicious circumstances behind a photocopier a few months later. Under pressure to assure that the facility was secure and such lapses extremely uncommon, the Energy Secretary attributed the incident to “human error, a mistake.” The hard drives were probably misplaced out of negligence or inattention to security procedures, officials said. The Deputy Energy Secretary added that “the vast majority are doing their jobs well at the facility, but it probably harbored ‘a few Bad Apples’ who had compromised security out of negligence.”
But this was never about a few bad individuals. Under pressure to perform daily work in a highly cumbersome context of checking, double-checking and registering the use of sensitive materials, such “negligence” had become a feature of the entire laboratory. Scientists routinely moved classified material without witnesses or signing logs. Doing so was not a sign of malice, but a way to get the work done given all its constraints, pressures and expectations. The practice had grown over time, accommodating the production pressures to which the laboratory owed its existence.
Table 1.1 Contrast between the Old View and New View of ‘human error’

Old View: Asks who is responsible for the outcome
New View: Asks what is responsible for the outcome

Old View: Sees ‘human error’ as the cause of trouble
New View: Sees ‘human error’ as a symptom of deeper trouble

Old View: ‘Human error’ is random, unreliable behavior
New View: ‘Human error’ is systematically connected to features of people’s tools, tasks and operating environment

Old View: ‘Human error’ is an acceptable conclusion of an investigation
New View: ‘Human error’ is only the starting point for further investigation
People who work in these systems learn about the pressures and contradictions, the vulnerabilities and pathways to failure. They develop strategies to not have failures happen. But these strategies may not be completely adapted. They may be outdated. They may be thwarted by the complexity and dynamics of the situation in which they find themselves. Or vexed by their rules, or nudged by the feedback they get from their management about what “really” is important (often production and efficiency). In this way, safety is made and broken the whole time.
These insights have led to the New View of ‘human error.’ In this view, errors are symptoms of trouble deeper inside a system. Errors are the other side of people pursuing success in an uncertain, resource-constrained world. The Old View, or the Bad Apple Theory, sees systems as basically safe and people as the major source of trouble. The New View, in contrast, understands that systems are not basically safe. It understands that safety needs to be created through practice, by people.
People Do Not Come To Work To Do A Bad Job
The psychological basis for the New View is the “local rationality principle.” This is based on a lot of research in cognitive science.3 It says that what people do makes sense to them at the time…
