A Guide To Practical Human Reliability Assessment

B. Kirwan

About This Book

Human error is here to stay. This perhaps obvious statement has a profound implication for society when faced with the types of hazardous system accidents that have occurred over the past three decades. Such accidents have been strongly influenced by human error, yet many system designs in existence or being planned and built do not take human error into consideration. "A Guide to Practical Human Reliability Assessment" is a practical and pragmatic guide to the techniques and approaches of human reliability assessment (HRA). It offers the reader explanatory and practical methods which have been applied and have worked in high-technology and high-risk assessments - particularly, but not exclusively, in potentially hazardous industries such as process control, nuclear power, and the chemical and petrochemical industries.
A Guide to Practical Human Reliability Assessment offers the practitioner a comprehensive tool-kit of different approaches, along with guidance on selecting different methods for different applications. It covers the risk assessment and HRA process, as well as methods of task analysis, error identification, quantification and representation of errors in the risk analysis, followed by error reduction analysis, quality assurance and documentation. There are also a number of detailed case studies from nuclear, chemical, offshore, and marine HRAs, exemplifying the use of the techniques and the impact of HRA in existing and design-stage systems.

1 Introduction
Human error is here to stay. This perhaps obvious statement has a more profound implication for society when we consider the types of hazardous system accidents that have occurred over the past three decades, such as Three Mile Island, Chernobyl and Bhopal. Such system accidents, and others such as those reviewed in Appendix I, have all been strongly influenced by human error. Yet many similar systems are in existence, or are being built, or are being planned. Since human error is endemic to our species, there are really only two alternatives for modern society: either get rid of such systems (as advocated on risk grounds by Perrow (1984)) or try to deal with and minimise the human error problems as far as is possible.
Some of the accidents that have occurred were almost impossible to predict prior to the event, whereas others undoubtedly could have been predicted and prevented by techniques dealing with human error assessment. This book does not attempt to decide whether complex, hazardous systems should be allowed to continue, but rather attempts to define a set of useful tools for the analysis and reduction of those errors which could lead to system accidents. This book therefore deals with the subject of Human Reliability Assessment (HRA), as used in the study and assessment of risks involved in large, complex and hazardous systems.
The term human error has been pragmatically defined by Swain (1989) as follows: ‘any member of a set of human actions or activities that exceeds some limit of acceptability, i.e. an out of tolerance action [or failure to act] where the limits of performance are defined by the system.’ The effects of human error on system performance have been demonstrated most vividly by large-scale accidents (e.g. see Appendix I), and current accident experience suggests that the so-called high-risk industries (and some so-called low-risk ones too) are still not particularly well-protected from human error. This in turn suggests the need both for a means of properly assessing the risks attributable to human error and for ways of reducing system vulnerability to human error impact. These are the primary goals of Human Reliability Assessment (HRA), achieved by its three principal functions of identifying what errors can occur (Human Error Identification), deciding how likely the errors are to occur (Human Error Quantification), and, if appropriate, enhancing human reliability by reducing this error likelihood (Human Error Reduction).
HRA can also enhance the profitability and availability of systems via human error reduction/avoidance, although the main drive for the development and application of HRA techniques has so far come from the risk assessment and reduction domain. This book therefore concentrates primarily on HRA in the risk assessment context, and is aimed at both the practitioner and the student, as well as managers who may wish to understand what HRA has to offer. It attempts to present a practical and unbiased framework approach to HRA, called the HRA process, which encompasses a range of existing HRA tools of error identification, quantification, etc. These tools are documented, along with some practitioner insights and additional useful data, to aid would-be assessors in understanding the techniques’ different rationales, advantages and disadvantages – as well as how the various tools all fit together within the HRA process.
As this book is primarily practical in nature, a theoretical discussion and introduction on the nature and genesis of human error is left to other more competent texts on this subject (e.g. Reason, 1990), although the practical implications of such theory are embedded explicitly in appropriate sections of this book (e.g. the section on Human Error Identification). The following sections of the introduction therefore merely aim to set the terms of reference for what follows. This entails a brief discussion of the role of human error in complex systems, and how the functions of HRA have developed over the past three decades, leading to the current definition of the HRA process. Following these brief introductory sections, and two sections on the scope of the book and how to use it, the remainder of the book is concerned with practical approaches to HRA, set in the framework of the HRA process, backed up by a number of appendices containing both relevant data and real case studies.
1.1 Human errors in complex systems
Human error is extremely commonplace, with almost everyone committing at least some errors every day – whether ‘small’ ones such as mispronouncing a word, or larger errors or mistakes such as deciding to invest in a financial institution which later goes bankrupt. Most human errors in everyday life are recoverable, or else have a relatively small impact on our lives. However, in the work situation, and especially in complex systems, this may not be the case. A human operator in the central control room of a chemical or nuclear power plant, or the pilot of a large commercial airplane, cannot afford to make certain errors, or else accidents involving fatalities, possibly including the life of the operators themselves, may be the result.
Human error in complex and potentially hazardous systems therefore involves human action (or inaction) in unforgiving systems. For human error to have a negative system impact, there must first be an opportunity for an error, or a requirement for reliable human performance – often in response to some event. The error must then occur and fail to be corrected or compensated for by the system, and it must have negative consequences, either alone or in conjunction with another human, ‘hardware’, or environmental event. As a simple example, if the main (foot-operated) brakes on a car fail while the car is in motion, the operator (the driver) must bring the vehicle to a safe stop by using the handbrake or the gears (or both), or an inclined surface, or, if necessary, an object of sufficient inertia to stop the vehicle’s motion. The ‘demanding event’ requiring reliable human performance is the brake (hardware) failure. The ‘error’ is in this case a failure to achieve a safe stop, and the consequences of such an error will be dependent on local population density, and the speed of the vehicle at the time of the incident, etc. The event could be compounded by other simultaneous hardware failures (e.g. the handbrake cable snapping under the sudden load), as well as by environmental circumstances (e.g. rain, ice, fog, etc.).
Two main facets of the above example are worth expanding upon. Firstly, in the event of this incident occurring, very few people would ‘blame’ the driver if the error occurred (assuming that the brake failure was not caused by the driver failing to have the car regularly serviced, etc.), since a sudden spontaneous brake failure is something few people are prepared or trained for. In HRA, the concept of blame is usually counter-productive and obscures the real reasons why an accident occurred. To pass a complex series of events off as a mere operator error, and then actually to replace the operator (or else just admonish the operator for his or her actions or inactions), not only crudely simplifies the causes of the event but in many cases actually allows the event (or a similar one) to recur.
Unfortunately, in large-scale accidents involving multiple fatalities, legal and natural social processes tend to encourage the desire to attribute blame to particular individual parties. Whilst there is no intention in this book of embarking on the difficult area of the ethics of culpability, it is important that the reader understand that the term ‘human error’, in this book as in the greater part, if not the whole, of the field of Human Reliability Assessment, has nothing to do with the concept of blame. Blame therefore does not occur within the HRA glossary. It is simply not a useful concept.
A second aspect of the above example is that the events in it are all very immediately apparent to the operator (driver). In most complex systems, however, the complexity of the system itself means that what is actually happening in the system, as a consequence both of human errors and of other failures or events, may be relatively opaque to the operator; or, at the least, this information may be delayed or reduced – or both. This is one major cost associated with the use of advanced technology, and it is one of the major factors preventing the achievement of high human reliability in complex systems.
It can be argued that human error, or rather human variability, a natural and adaptive process essential to our evolution, is so endemic to the species that high-risk systems which are vulnerable to human error should not be allowed to exist. This argument is certainly strengthened by the knowledge that many industries today are highly complex in the ways in which their potentially hazardous processes are controlled. Although this creates a very high degree of efficiency in such processes, this same level of complexity also means that estimating the ways in which systems can fail becomes difficult because there are so many interfacing and interacting components, the human operator being the most sophisticated and significant one. The argument against high-risk complex systems has been most powerfully presented by Perrow in his (1984) study of many so-called ‘normal accidents’, which were a natural outcome of human error in non-benign systems, and hence were very difficult to predict prior to their occurrence.
For the very reason that this book concerns itself with Human Reliability Assessment in complex, high-risk industries as mentioned above, the reader will surmise that it does not necessarily advocate the actual elimination of all such industries. Instead, it advocates a detailed assessment of risks due to human error, using the best tools available at this time – and the better ones that will develop in the future. The reason this brief discussion on errors in high-risk systems is included is to make a very simple but important point: human behaviour is intrinsically complicated and difficult to predict accurately. HRA is therefore conceptually a rather ambitious approach, particularly since it deals with the already-complex subject of human error in the additionally complex setting of large-scale systems. HRA must therefore not be used complacently, and cannot afford to be shallow in its approach to assessment. Complex systems often require correspondingly complex assessment procedures.
1.2 The roots of HRA
The study of HRA is approximately 30 years old and has always been a hybrid discipline, involving reliability engineers (HRA first arose within the field of system-reliability assessment), engineers and human-factors specialists or psychologists. HRA is inherently inter-disciplinary for two reasons. Firstly, it requires an appreciation of the nature of human error, both in terms of its underlying psychological basis and mechanisms, and in terms of the various human factors, such as training and the design of the interface, affecting performance. Secondly, it requires both some understanding of the engineering of the system, so that the intended and unintended human-system interactions can be explored for error potential and error impact, and an appreciation of reliability- and risk-estimation methods, so that HRA can be integrated into the ‘risk picture’ associated with a system. This risk picture can then act as a summary of the impacts of human error and hardware failure on system risk, and can be used to decide which aspects of risk are most important. The matter of how HRA is integrated into risk assessment is dealt with in more detail in Chapter 4 (see also Cox & Tait, 1991).
The development of HRA tools has arguably been somewhat slow and sporadic, although since the Three Mile Island incident (1979) efforts have been more sustained and better directed, largely in the nuclear-power domain. This has led to the existence of a number of practical HRA tools (see Kirwan et al, 1988; and Swain, 1989). There remains, however, a relative paucity of texts which attempt to bring together the more useful tools and place them in a coherent and practicable framework. This is partly because over the past three decades the focus in HRA has been purely on the quantification of human error probabilities. The human error probability (HEP) is simply defined as follows:
\[ \text{HEP} = \frac{\text{number of errors occurred}}{\text{number of opportunities for error}} \]
Thus, when buying a cup of coffee from a vending machine, if tea is inadvertently purchased 1 time in 100, the HEP is 0.01.
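As a minimal worked illustration of this definition (the function below is a sketch and is not from the original text), the HEP can be computed directly from observed counts, reproducing the vending-machine example:

```python
def human_error_probability(errors_observed: int, opportunities: int) -> float:
    """Estimate an HEP as the ratio of errors observed to opportunities for error."""
    if opportunities <= 0:
        raise ValueError("at least one opportunity for error is required")
    return errors_observed / opportunities

# Vending-machine example from the text: tea inadvertently purchased
# on 1 occasion out of 100 coffee-buying attempts.
hep = human_error_probability(errors_observed=1, opportunities=100)
print(hep)  # 0.01
```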
The focus on quantification has occurred because HRA is used within risk assessments, which are themselves probabilistic – i.e. they involve defining the probabilities of accidents of differing consequence severities associated with a particular system design (e.g. a probability of 1 public fatality per 1,000,000 years). Such estimates are then compared against governmental criteria for that industry, and the risks are deemed either acceptable or unacceptable. If they are not acceptable, then either the risk must be reduced in some way or the proposed or existing plant must be cancelled or shut down (see Cox & Tait, 1991). Risk assessment is therefore profoundly important for any system. Because risk assessment is firmly quantitative in nature, HRA must also be quantitative if it is to fit into the probabilistic safety or risk assessment (PSA/PRA) framework; otherwise the impact of human error will be excluded from the ‘risk picture’. This has led to a demand for human error probabilities above all else.
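To make the acceptability comparison concrete, the sketch below uses purely hypothetical frequencies and a criterion assumed for illustration (none of these figures, nor the simple multiplicative combination, come from the book) to show how an HEP might feed into an accident-frequency estimate that is then judged against a risk criterion:

```python
# Hypothetical values for illustration only; not data from the book.
demand_frequency_per_year = 0.01      # frequency of hazardous demands on the operator
hep = 0.05                            # probability the operator fails to respond correctly
safeguard_failure_probability = 0.02  # probability that engineered safeguards also fail

# Simple multiplicative combination, assuming independent failures.
accident_frequency = demand_frequency_per_year * hep * safeguard_failure_probability

criterion_per_year = 1e-6             # e.g. a target of 1 public fatality per 1,000,000 years

if accident_frequency <= criterion_per_year:
    print(f"{accident_frequency:.1e}/yr meets the criterion: risk acceptable")
else:
    print(f"{accident_frequency:.1e}/yr exceeds the criterion: risk reduction or shutdown required")
```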
In early HRAs it appeared that what could go wrong (i.e. what errors could occur) was fairly easy to predict (e.g. operators could fail to do something with enough precision, or fail to do it at all), whereas what was difficult to predict was the human error probability. The most obvious approach to HRA in the 1960s was to copy the approach used in reliability assessment, namely to collect data on the failure rates of operators in carrying out particular tasks, much as reliability engineers collected data on, for example, how often a valve or pump failed. This led to the attempted creation of various data banks (see Topmiller et al, 1984, for a comprehensive review), none of which, however, remain in use today. This is largely for the now-obvious reason that whereas hardware components such as valves and pumps have very specific inputs and outputs, and very limited functions, humans do not. Humans are autonomous. They can decide what to do from a vast array of potential outputs, and can interpret inputs in many different ways according to the goals they are trying to achieve. Human performance is also influenced by a very large number of interacting factors in the work environment, and human behaviour relies on skills, knowledge and strategies stored in the memory. In short, humans are not, and never will be, the same as simple components, and should never be treated as such.
The early data-bank drive was therefore effectively a failure, and with a few exceptions it has, until recently, remained a relatively unfruitful pursuit for various reasons (see Williams, 1983; and Kirwan et al, 1990). This failure led to two alternative approaches: firstly, the use of semi-judgemental databases (i.e. ones partly based on what scant data were available and partly tempered by experienced practitioners’ interpretations), which had little empirical justification but were held by practitioners to be reasonable; and secondly, the use of techniques of expert-judgement elicitation and aggregation (e.g. using plant operatives with 30 years’ experience of their own and others’ errors) to generate HEPs (as described later in Section 5.5). Both approaches therefore ultimately relied on expert judgement, their main product being the HEP.
With Three Mile Island and other more recent accidents, however, the earlier assumpt...
