Reliability and Safety In Hazardous Work Systems

Approaches To Analysis And Design

Bernhard Wilpert, Thoralf Qvale

272 pages · English · ePub
About This Book

This volume contains a selection of original contributions from internationally reputed scholars in the field of risk management in socio-technical systems with high hazard potential. Its first major section addresses fundamental psychological and socio-technical concepts in the field of risk perception, risk management, and learning systems for safety improvement. The second section deals with the variety of procedures for system safety analysis. It covers strategies for analysing automation problems and safety culture, as well as the analysis of social dynamics in field settings and of field experiments. Its third part then illustrates the utilisation of basic concepts and analytic approaches by way of case studies of designing man-machine systems in various industrial sectors such as intensive care wards, aviation, offshore oil drilling, and the chemical industry. In linking basic theoretical conceptual notions and analytic strategies to detailed case studies in the area of hazardous work organisations, the volume differs from and complements more theoretical works such as Human Error (J. Reason, 1990) and more general approaches such as New Technologies and Human Error (J. Rasmussen, K. Duncan, J. Leplat, Eds.).


Information

Publisher: Routledge
Year: 2013
ISBN: 9781134833214
Edition: 1
Part I
Conceptual Issues in Risk Management
Introduction
Conceptual Issues of Risk Management
Although the three chapters in this part of the book reflect human factors specialists’ perspectives on risk and safety, they begin from different research positions. However, all three reflect general developments in recent thinking about safety and productivity in developed industries. Thus, while analysis of accidents per se cannot be the sole approach to identifying risks and initiating control measures, safety and reliability issues are seen as inseparable from organisational economic performance indicators.
In different ways each author argues in favour of bringing safety and reliability issues out of the realm of specialised, single discipline-based research into a cross-disciplinary, integrated general organisational approach. They all deal, explicitly or implicitly, with safety in large, modern production facilities and with the role of people in maintaining safety and reliability.
Reason elaborates on his concept of the healthy organisation. His biological model of the organisation, analogous with general systems theory (Bertalanffy, 1950), emphasises organisations as whole systems, or gestalts, in which there are dynamic interrelationships between people across all disciplines, levels, and functions. However, a traditional medical approach of studying illness (in Reason’s terms, “resident pathogens”, i.e. pathological or dysfunctional aspects of the organisation) provides only a partial understanding of its health and is of limited utility for proactive or remedial use. An important feature of Reason’s chapter, “Managing the management risk”, is to create a bridge between safety research and contemporary organisational notions such as “total quality management” and “high performance systems”.
Brehmer begins from the individual’s role in safety of complex production systems. In reviewing various theories about unsafe acts, “human error”, accident proneness, and human characteristics that have sought to establish causality between individuals and system safety, he contends that this path has borne little fruit. Although many accidents have been caused by operator errors, little can be done to improve safety by changing decision making at this level. As far as shop-floor attitudes and motivation are concerned, group norms and habits are probably more important than those of individuals. To achieve systematic improvements one has to move up the organisation and study changes in decision making there.
In a work organisation whose main purpose is to produce something, does achieving control of safety also lead to high performance? Control theory specifies four prerequisites for control: clear goals, observability, action possibilities, and an adequate model of the system. These prerequisites are the same whether the objectives are those of safety or productivity. In explaining these concepts, Brehmer comes very close to linking safety research to the current “empowerment” trend in management thinking, which is beginning to compete with the traditional “hierarchical control” paradigm in modern industry.
The third chapter “Learning from Experience”, by Rasmussen, develops from an analysis of some current trends in our technological development. Although public attention tends to be focused on industrial installations and associated hazards, there are parallel developments in most other sectors, for example, integrated large scale systems appearing in aerospace, air traffic control, consumer goods distribution, information, power supply, and financial operations. There is growing potential for loss and damage in case of technical faults in equipment and of human errors made during operation and maintenance. Therefore, a “defence in depth” design philosophy has evolved. The nuclear power generating industry is an example of the application of this philosophy because here the approach is particularly well developed.
The “defence in depth” approach to system design implies a priori detailed expert analysis and design. The general idea is one of “total design”—once the technology and organisation are in place, the system should operate in a completely prescribed and predictable mode. Several fallacies, however, are present. The probabilistic risk analysis and associated analysis of chain of events cannot be complete and are based on numerous unverifiable assumptions. Further, the organisational philosophy tends to remain unaffected by the safety philosophy and learning in work and adaptation by individuals and organisations are not taken into account. Hence, the production system may have or obtain a number of inherent errors that jeopardise safety.
As learning from experience and modification of work procedures seem mandatory in order to maintain safety, there is a need to organise such ongoing redesign in a systematic and safe way. Different methods for analysis can be used and the limited value of any single method is demonstrated. The need for cross-disciplinary safety and risk studies, linkage to higher level strategical planning, and system design and working with “healthy” organisations is stressed in order to obtain sustained high degrees of safety and reliability in large, complex production systems.
Reference
Bertalanffy, L. von (1950). The theory of open systems in physics and biology. Science, 111, 23–29.
Chapter One
Managing the Management Risk: New Approaches to Organisational Safety
James Reason
University of Manchester, Manchester, UK
Introduction
This chapter needs a rather more personal introduction than is customary. Over the past few years, my interests have shifted away from an academic study of human error towards a far more practical concern with the safety of complex technological systems. In some ways, this is a perfectly natural progression. Human errors of one kind or another lie at the roots of most, if not all, major disasters. But in other respects, the move has demanded radical changes in both thinking and methodology. Whereas research into human error for its own sake could be carried on within the familiar confines of cognitive psychology, the practical study of organisational safety is a multidisciplinary venture that takes me well beyond my traditional professional boundaries. My only comfort is that the same is true for most people now working in this challenging area.
Stated very simply, it could be said that there have been three overlapping ages of safety concerns. The first was the technical age in which the main focus was upon operational and engineering methods for combating hazards. Then came the human error age, which had its origins in the 1930s when it became apparent that human beings are capable of circumventing even the most advanced engineered safety devices. This age has continued up to the 1980s, fuelled by such incidents as Browns Ferry, Tenerife, and Three Mile Island. But over the past few years we have moved into a third age, the sociotechnical age. This is mainly the product of a series of major accidents occurring within a wide range of complex, well-defended technologies: Bhopal, Chernobyl, Zeebrugge, King’s Cross, Piper Alpha, and Clapham Junction, to name but a few. Although general systems theory and the notion of sociotechnical systems have been with us for quite some time, decades passed before most of us began fully to realise their implications for accident prevention and safety, namely to recognise that the major residual safety problems do not belong exclusively to either the technical or the human domains. Rather, they emerge from as yet little understood interactions between technical and social aspects of systems.
Most accidents have their origins within the managerial and organisational spheres. Unsafe acts are shaped and provoked by fallible decisions of those removed in both time and space from the human–system interface. The main questions facing the organisational reliability community are these: Where and how within the system are unsound and deficient decisions translated into unsafe acts capable of breaching the system’s defences? How can we identify these latent failures before they turn into serious accidents? By what means may we thwart potential accident pathways by neutralising the effects of delayed-action failures? In sharp contrast to both the technical and the human error eras, the sociotechnical age has neither established theories nor well-tried methods at its disposal. Finding answers to these questions constitutes our most pressing challenge, particularly as the risks of serious consequences of organisational accidents continue to increase and multiply with the rapid development of modern technology.
This chapter describes a personal progress of ideas regarding the nature and remediation of organisational accidents. It also discusses the influences that have shaped these developments and touches upon some of the issues yet to be resolved. Inevitably, given the limitations upon length and the idiosyncratic perspective, this will be a very fragmented and incomplete account. But, as the gestalt psychologists have told us, lack of closure serves as a spur rather than as a curb to further thought.
The Theory-Building Ground
On the face of it, recent major disasters appear to be singular events, each involving a different technology and having a unique set of causes and consequences. It could be said at a purely definitional level that they all shared the properties of an accident: the unintended release of mass and/or energy in the presence of victims. But this vacuous commonality offers little prospect of conceptual advancement. In order to develop a useful theoretical framework, we need to work in the middle ground between unique surface details and the very general characteristics common to all accidents.
At this intermediate level, we can note a number of interesting similarities between widely differing events: (1) all of the accidents listed above occurred within complex systems with considerable efforts and devices for their defence in depth (see Chapter 3 of this volume); (2) each arose from the adverse conjunction of several human failures, the most significant of which were committed long before an accident sequence was apparent; (3) they were all organisational accidents whose origins had more to do with the character of the sociotechnical system as a whole than with the erroneous actions of individual operators; (4) perhaps the greatest threat now facing hazardous technologies stems not so much from the breakdown of a major component or from isolated operator errors as from the insidious accumulation of delayed-action human failures within organisations.
The Resident Pathogen Metaphor
It has been suggested (Reason, 1988; Reason, Shotton, Wagenaar, Hudson, & Groeneweg, 1989) that latent failures in technical systems are analogous to resident pathogens in the human body, which combine with local triggering factors (e.g. life stresses or toxic chemicals) to overcome the immune system and produce disease. Like cancers and cardiovascular disorders, accidents in defended systems do not arise from single causes. They occur because of the adverse conjunction of several factors, each one necessary, but none sufficient to breach the defences alone. As in the case of the human body, no technical system can ever be entirely free of pathogens.
This view leads to a number of very general assertions about accident causation:
1. The likelihood of an accident is a function of the number of pathogens within the system. The more abundant they are, the greater is the probability that some of these pathogens will encounter just that combination of local triggers necessary to complete a latent accident sequence. Note that this view demands quite a different calculus than that employed in conventional probabilistic risk assessment.
2. The more complex and opaque the system, the more pathogens it will contain.
3. Simpler, less well-defended systems need fewer pathogens to bring about an accident.
4. The higher a person’s position within the decision-making structure of the organisation, the greater is his or her potential for spawning pathogens.
5. Local triggers are hard to anticipate. Who, for example, could have anticipated that the Assistant Bosun of the Herald of Free Enterprise would be asleep at the time he was required to close the bow doors, or that the Chief Officer would have mistaken someone else walking towards the bow doors on the car deck just before sailing time for the Assistant Bosun? But it would have been possible to establish in advance that the ship was undermanned and the crew poorly tasked.
6. The key assumption, then, is that resident pathogens can be identified proactively, given adequate access and system knowledge.
7. It also follows that efforts directed at identifying and neutralising pathogens (latent failures) are likely to have more and wider ranging safety benefits than those directed at minimising active failures.
8. A practical consequence of this view is that it directs researchers to establish diagnostic organisational signs, analogous to white cell counts and blood pressure, that give general indications of the health or “morbidity” of a high-hazard technical system.
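Assertion 1 can be illustrated with a small probability sketch. This is not part of Reason’s text: it assumes, purely for illustration, that each of N resident pathogens independently meets its local trigger with probability p in a given period, and that an accident requires at least k such conjunctions to breach the defences. Under those (deliberately simplified) assumptions, accident likelihood rises steadily with the pathogen count:

```python
from math import comb

def accident_probability(n_pathogens: int, p_trigger: float, k_needed: int) -> float:
    """Probability that at least k_needed of n_pathogens latent failures
    each meet their local trigger (probability p_trigger, assumed
    independent) in the same period -- the binomial upper tail.
    An illustrative toy model, not Reason's own calculus."""
    return sum(
        comb(n_pathogens, k) * p_trigger**k * (1 - p_trigger) ** (n_pathogens - k)
        for k in range(k_needed, n_pathogens + 1)
    )

# Holding the trigger probability and the defences fixed, the accident
# likelihood grows with the number of resident pathogens (assertion 1):
for n in (5, 10, 20, 40):
    print(n, accident_probability(n, p_trigger=0.05, k_needed=3))
```

Note how this differs from a conventional probabilistic risk assessment of a single fault tree: the quantity of interest is not the failure rate of any one branch, but the chance that enough independently dormant failures line up at once.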
Criticisms of The Pathogen View
The resident pathogen metaphor has a number of interesting features, but it is far from being a workable theory of organisational accidents. Its terms are still unacceptably vague and its substance has been criticised on at least two grounds.
The resident pathogen metaphor was derived in the first instance from a case study analysis of five major accidents (Reason, 1988). Some critics have argued that latent failures can always be identified with hindsight. But can they be established before the event? It is indeed true that any major accident investigation will uncover a large number of system pathogens. Accident investigators are, naturally enough, biased to discover prior deviations from some optimal state. (For a more detailed discussion of opportunities and limits of various analytic approaches see Chapters 3 and 8.) It is my contention, however, that pathogens may be identified proactively as well as retrospectively. If this were not the case, then the metaphor would have little or no value. At a later point, I will discuss some of the methods available to pathogen hunters.
A second criticism suggests that the resident pathogen idea shares many features with the now largely discredited accident proneness theory. Accident proneness theory failed because it was found that unequal accident liability was a “club” with a rapidly changing membership. Moreover, attempts to identify a clearly definable, accident-prone personality proved largely fruitless.
For the pathogen metaphor to have any remedial value—and that is the only real test of theory in this field—it is necessary to establish an a priori set of signs and indicators relating to system “morbidity”, and then to demonstrate clear causal linkages between these “symptoms” and accident liability across a wide range of complex technological systems.
An Elaborated Model Based on Productive Elements
What follows is a model that was worked out in collaboration with John Wreathall and seeks to establish the basic structural elements of a productive system (see Reason et al., 1989; Reason, 1990 for a more complete description). These elements can be considered to be the benign components of a technical system upon which pathogens exert their malignant effects. The model also identifies a direction for accident-causing influences.
All complex technologies are involved in some form of production, be it energy, services, chemical substance...
