Risk Management

Volume II: Management and Control

Edited by Gerald Mars and David T. H. Weir

About this book

First published in 2000, Risk Management is a two-volume set comprising the most significant and influential articles by the leading authorities in the study of risk management. The volumes include a full-length introduction from the editor, an internationally recognized expert, which provides an authoritative guide both to the selection of essays and to the wider field itself. The collected essays are international and interdisciplinary in scope and provide an entry point for investigating the myriad areas of study within the discipline.


1 Estimating Engineering Risk

Sir Bernard Crossland, F.R.S. (Chairman), Dr P.A. Bennett, Dr A.F. Ellis, Dr F.R. Farmer, Dr J. Gittus, P.S. Godfrey, Esq., Dr E.C. Hambly, Dr T.A. Kletz, Professor F.P. Lees

2.1 The Approach to Risk Estimation

Risks may be classified as falling into at least three classes, in a way similar to that suggested by Cohen and Pritchard (1980):
(a) Risks for which statistics of identified casualties are available.
(b) Risks for which there may be some evidence, but where the connection between suspected cause and injury to any one individual cannot be traced (e.g. cancer long after exposure to radiation or a chemical).
(c) Experts' best estimates of probabilities of events that have not yet happened.
Additionally, there are risks that were not foreseen, for which causal connections are sought after new effects or casualties appear.
Many examples of substantial risk have an engineering content. Engineers are involved in the design and construction of systems and the components or sub-systems that form part of them, and they additionally have professional responsibility for the safety of these systems. All systems have a probability of failure, and the complete avoidance of all risk of calamitous failure is not possible; the objective of engineers must therefore be to reduce the probability to an acceptable individual and societal risk. Engineers attempt to quantify the risk through a physical appreciation and analysis of possible failure mechanisms or modes. This requires quantification of the reliability of the components, and examination of the systematic failure of software in Programmable Electronic Systems used, for instance, in the control of processes, so as to establish the overall reliability of the complete system, based on experience verified by analysis, testing and inspection.
An example of the examination of past events in building up an understanding of failure modes is provided by the investigation of box-girder bridges (Merrison 1971). In contrast to this, individual engineering projects involving new techniques give rise to the problem of setting and achieving suitable target levels of risk. Flint (1981) discusses this in the context of civil engineering, pointing out that it has for some time been the practice to express design criteria in terms of events having a prescribed probability of occurrence in the lifetime of the structure, e.g. wave-loading for off-shore platforms, and flood level for the Thames barrier. In view of the potential for major consequences involved in engineering failures, it is not acceptable to wait for disasters to occur so as to build up a body of case histories as a basis for policy decisions. An anticipatory approach based on judgement and experience is required, and risk estimation attempts to provide this by methods based on the systematic analysis of complex plant into its component sub-systems, and the use of predictive techniques and modelling. Further analysis of failure mechanisms follows, and then the risk is synthesized by drawing together models of the individual sub-systems. This procedure requires access to a wide range of data on failures that have occurred in the past, a substantial body of scientific knowledge about the various processes that are intended to occur or that could occur in the system, and a similar breadth of knowledge concerning the behaviour in the environment of materials that could be released and the response of people, structures, etc. suffering exposure to those materials.
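As a concrete illustration of drawing together models of individual sub-systems, the sketch below (not from the original text; all component names and probabilities are hypothetical) combines independent sub-system failure probabilities through the AND/OR logic of a simple fault tree:

```python
# A minimal sketch of synthesizing a top-event probability from sub-system
# models. All component names and probabilities are hypothetical, and the
# events are assumed independent.

def p_or(*probs):
    """OR gate: probability that at least one independent event occurs."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def p_and(*probs):
    """AND gate: probability that all independent events occur."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Illustrative annual failure probabilities for three sub-systems.
pump_failure = 1e-3
relief_valve_failure = 5e-4
protection_fails_on_demand = 1e-2

# Top event: an initiating failure occurs AND the protection system
# then fails to act on demand.
initiating_event = p_or(pump_failure, relief_valve_failure)
top_event = p_and(initiating_event, protection_fails_on_demand)
print(f"Estimated top-event probability per year: {top_event:.1e}")
```

In a real study each input probability would itself be synthesized from failure data and models of the kind described above, carrying with it the uncertainties discussed in the next paragraph.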
It is clear that the results of such a procedure will be subject to substantial uncertainties arising from inadequacies in the data and from insufficient depth or accuracy of the scientific knowledge applied. It is therefore important to recognize that there are other, more traditional, methods in widespread use which are essentially deterministic in nature.
The essentially deterministic approach can be illustrated by the factor of safety treatment in the design of a loaded structure. Such a structure will be deemed to have failed if the load or stress to which it is subjected exceeds the yield strength of the materials. Various factors such as wear, corrosion and misuse may increase the stress, and quality variations in manufacture, defects and fatigue may reduce the strength. Similarly the loadings will vary according to the use and environmental conditions that apply at the time. The traditional method of allowing for such variations is the application of a design safety margin, which is the difference between the design strength and design stress; the design safety factor being their ratio. This approach depends on estimation of mean values for strength and stress.
In practice there will be a distribution of stresses and of strengths, both having mean values with a spread about those means. On the reasonable assumption that the mean stress will be smaller than the mean strength, it is clear that where the upper end of the stress distribution encounters the lower end of the strength distribution there will be structural failure. This leads to definitions of safety factors and safety margins in probabilistic terms (Lees 1980, pp. 114-116).
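The calculation below is a minimal sketch of this stress-strength idea (all numerical values are illustrative, and normality and independence of the two distributions are assumptions not specified in the text): the deterministic margin and factor are computed from the means, and the probability of failure is the probability that stress exceeds strength.

```python
# A minimal sketch of the probabilistic safety margin. Stress and strength
# are assumed independent and normally distributed; all values are
# illustrative, not taken from the text.
import math
from scipy.stats import norm

mean_strength, sd_strength = 500.0, 40.0  # e.g. MPa
mean_stress, sd_stress = 350.0, 30.0      # e.g. MPa

# Traditional deterministic measures, based on mean values only:
safety_margin = mean_strength - mean_stress   # design safety margin
safety_factor = mean_strength / mean_stress   # design safety factor

# Probabilistic measure: failure occurs when stress exceeds strength.
# The difference (strength - stress) is normal with the combined variance,
# so P(failure) = P(strength - stress < 0).
sd_difference = math.sqrt(sd_strength**2 + sd_stress**2)
p_failure = norm.cdf(0.0, loc=safety_margin, scale=sd_difference)

print(f"safety margin = {safety_margin:.0f} MPa")
print(f"safety factor = {safety_factor:.2f}")
print(f"P(failure)    = {p_failure:.1e}")
```

The deterministic factor of about 1.4 says nothing about how often the tails overlap; the probabilistic figure (here roughly 1 in 750) is exactly the quantity that the deterministic approach leaves implicit.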
The deterministic approach incorporates the concept of variability of stress and strength, but implies that there is a level of probability of failure that is acceptable for design purposes, without quantifying that level. The probabilistic approach, in contrast, includes the low-probability events in the overall assessment. However, it requires sufficient relevant data to be available, which is by no means always practical or economic.
In terms of decision making this means that the deterministic approach incorporates implicit value judgements as to what is an acceptable standard of practice, and is largely derived from an extension of past practice and experience, which may not be entirely adequate to deal with rapidly changing technology. The probabilistic approach, in contrast, describes the hazard in terms of the risks of failure and their associated consequences, thereby allowing the decision on acceptability to be made outside the design process, which assists in making the judgements needed.
The introduction of new technology has led to increasing use of computers that include software and which fulfil protection and/or control functions in safety-related or safety-critical systems. The term Programmable Electronic System (PES) is used to describe such systems (Health and Safety Executive 1987). The nature of software is such that it is not subject to random failure, but only to systematic failure. PESs, particularly their software, are therefore not readily amenable to the demonstration of risk reduction based on quantified reliability values. Wider, qualitative arguments need to be applied, and this has led to the development of the concept of Safety Integrity Levels.
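For orientation, the sketch below shows how Safety Integrity Levels came to be expressed numerically. The bands are those later codified in IEC 61508 for low-demand safety functions (an assumption on our part: the text itself gives no figures), each SIL corresponding to a band of average probability of failure on demand (PFDavg):

```python
# A minimal sketch of the Safety Integrity Level concept. The PFDavg bands
# follow the later IEC 61508 standard for low-demand safety functions; the
# original text gives no figures, so treat these as illustrative.
SIL_BANDS = {  # SIL: (lower, upper) bound of average probability of failure on demand
    4: (1e-5, 1e-4),
    3: (1e-4, 1e-3),
    2: (1e-3, 1e-2),
    1: (1e-2, 1e-1),
}

def sil_for_pfd(pfd_avg):
    """Return the SIL whose band contains the given PFDavg, or None."""
    for sil, (lo, hi) in SIL_BANDS.items():
        if lo <= pfd_avg < hi:
            return sil
    return None

print(sil_for_pfd(5e-4))  # -> 3
```

For the reason given above, however, a SIL claimed for a software-based system rests on qualitative evidence about the development process rather than on measured failure rates.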
Software is a relatively new area of engineering which is still developing rapidly. Software has an immense potential to fulfil existing functions better or more cheaply or to fulfil otherwise impractical functions. Software is used for control and/or protection in a multitude of applications such as nuclear reactors (Health and Safety Executive 1992), oil production platforms, anti-lock braking systems, fly-by-wire aircraft, train control and central heating boilers.
The use of PESs in safety related systems and the risks associated with such use are introduced and discussed by the Institution of Electrical Engineers (1992) and by Bennett (1991), while Metz (1991) notes the additional human problems posed by computer control in process plants.
During the lifetime of systems the risk must be minimized by maintenance, inspection and re-appraisal, such as the regular inspection of bridges, dams, components of nuclear power plants, airframes and engines, and PESs and their associated transducers. In the estimation of risk it is necessary to examine the reproducibility and practicability of inspection systems. An example is the non-destructive examination of the components of the pressure circuits of nuclear power plant, where human access is greatly restricted or impossible, so reliance must be placed on remotely operated inspection techniques. The validity and reliability of these procedures must be established. It should be recognized that excessively complex safety systems may be self-defeating, as they cannot be adequately validated and updated.
The risk of failure and the calamitous consequences that may result will be greatly influenced by management. The absence or lack of adequate management and auditing of safety were seen as an important contributory factor in several recent major disasters including the Herald of Free Enterprise (Steel 1987), the King's Cross Underground fire (Fennell 1988), the Clapham Junction railway accident (Hidden 1989) and the Piper Alpha disaster (Cullen 1990). Effective management and auditing of safety involves many of the principles of Total Quality Management (TQM) to ensure the maintenance of safe practices laid down in the safety case. Management of safety also involves the training of staff to observe, record and report, and as importantly to react to the onset of a potential disaster, and to organize evacuation and rescue procedures in the event of a disaster. The importance of effective management and auditing of safety in reducing risk cannot be too strongly emphasized.
One of the dilemmas for engineers in risk assessment is defining what constitutes an acceptable risk. As can be seen from Chapter 5, Risk Perception, and Chapter 6, Risk Management, there is a problem with this concept. For example, from the statistics in Chapter 4, it appears that the individuals involved in rock climbing or hang gliding or motor cycling accept a very high risk, whereas they would probably expect a much lower risk when travelling on public transport, and an even lower risk from nuclear power plants. Though engineers involved in the assessment of risk are sensitive to the public perception of risk, it is necessary for them to quantify what is an acceptable risk in particular circumstances in order to have a target for their risk assessment exercise. This does not imply that the engineer will not try to reduce the risk further if this can be achieved at an acceptable cost. However, it is necessary to be realistic and allocate limited financial resources on the basis of cost-benefit assessment.

2.2 The Techniques of Risk Estimation

2.2.1 Hazard Identification

A vital component of risk estimation is the identification of hazards. The effectiveness of this requires a thorough understanding of the process or system, which is clearly dependent on the knowledge, experience, engineering judgement and imagination of the study team to whom the task is assigned. It must include the human element (cultural, organizational, group and individual), which is frequently a contributing cause of disasters. A substantial body of experience has been accumulated and documented, for instance in codes of practice and procedures adopted. Reference to this literature reduces the likelihood of omitting significant hazards, and many of the techniques have been systematized to a useful degree. Lees (1980) and Kletz (1992) review hazard identification techniques, including safety audits, hazard surveys, hazard indices and hazard and operability studies. These are briefly described in the annex to this chapter.
The exercise of hazard identification is a useful discipline in its own right in drawing attention to some areas of unacceptable risk, which can be eliminated or greatly reduced by modification of the design or the safety system. It is also a potential source of error, as a consequence of a failure to identify all the potential hazards or the ways in which they can arise. Having identified the hazards, it is necessary to quantify the risk; the process of quantification may be considered as falling into two phases, namely reliability and failure analysis, and consequence modelling.

2.2.2 Reliability and Failure Analysis

Reliability can be defined as the probability that a component will perform a required specified function. This may depend on the component's success in commencing to operate when required, continuing to operate subsequently, not operating before demand, and not continuing after the demand has ceased. The reliability of a multi-component system depends on the incidence of failures of its components. Data on such failures and their precursors may usefully be fitted to statistical distribution functions for use in reliability analysis. The choice of appropriate distribution functions is discussed by Lees (1980).
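As a minimal sketch of that fitting step (the failure times below are invented for illustration, and the choice of a Weibull distribution is an assumption of ours, not the text's), the following estimates distribution parameters from component failure data and evaluates the reliability function R(t):

```python
# A minimal sketch of fitting component failure-time data to a Weibull
# distribution and evaluating the reliability (survival) function
# R(t) = 1 - F(t). The failure times below are invented for illustration.
import numpy as np
from scipy.stats import weibull_min

failure_hours = np.array([412.0, 587.0, 803.0, 945.0, 1120.0,
                          1390.0, 1675.0, 2010.0, 2440.0, 3050.0])

# Fit the shape (beta) and scale (eta) parameters, fixing the location
# at zero as is usual for lifetime data.
beta, loc, eta = weibull_min.fit(failure_hours, floc=0)

mission_time = 1000.0  # hours
reliability = weibull_min.sf(mission_time, beta, loc=loc, scale=eta)

print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} h")
print(f"R({mission_time:.0f} h) = {reliability:.3f}")
```

A fitted shape parameter below 1 would indicate infant-mortality failures, a value near 1 a constant failure rate (the exponential case), and a value above 1 wear-out, which is one reason the choice of distribution matters for the analysis that follows.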

Table of contents

  1. Cover
  2. Half Title
  3. Title
  4. Copyright
  5. Series Page
  6. Original Title
  7. Original Copyright
  8. Contents
  9. Acknowledgements
  10. Series Preface
  11. Introduction
  12. 1 The Royal Society (1992), ‘Estimating Engineering Risk’, Risk: Analysis, Perception and Management, Report of a Royal Society Study Group, London: The Royal Society, pp. 13-34.
  13. 2 T. Horlick-Jones and G. Peters (1991), ‘Measuring Disaster Trends Part One: Some Observations on the Bradford Fatality Scale’, Disaster Management, 3, pp. 144-48.
  14. 3 T. Horlick-Jones, J. Fortune and G. Peters (1991), ‘Measuring Disaster Trends Part Two: Statistics and Underlying Processes’, Disaster Management, 4, pp. 41-45.
  15. 4 Kevin Keasey and Robert Watson (1991), ‘Financial Distress Prediction Models: A Review of Their Usefulness’, British Journal of Management, 2, pp. 89-102.
  16. 5 Zachary Sheaffer, Bill Richardson and Zehava Rosenblatt (1998), ‘Early-Warning-Signals Management: A Lesson from the Barings Crisis’, Journal of Contingencies and Crisis Management, 6, pp. 1-22.
  17. 6 Thierry C. Pauchant, Ian I. Mitroff and Patrick Lagadec (1991), ‘Toward a Systemic Crisis Management Strategy: Learning from the Best Examples in the US, Canada and France’, Industrial Crisis Quarterly, 5, pp. 209-32.
  18. 7 Jaak Jurison (1995), ‘The Role of Risk and Return in Information Technology Outsourcing Decisions’, Journal of Information Technology, 10, pp. 239-47.
  19. 8 Neil Ritson (1998), ‘Close-Coupled Disasters: How Oil Majors are De-integrating and then Managing Contractors’, Proceedings, 3rd International Conference Managing Innovative Manufacturing, pp. 183-91.
  20. 9 Diane Vaughan (1990), ‘Autonomy, Interdependence, and Social Control: NASA and the Space Shuttle Challenger’, Administrative Science Quarterly, 35, pp. 225-57.
  21. 10 Jos A. Rijpma (1997), ‘Complexity, Tight-Coupling and Reliability: Connecting Normal Accidents Theory and High Reliability Theory’, Journal of Contingencies and Crisis Management, 5, pp. 15-23.
  22. 11 Clive Smallman and D.T.H. Weir (1995), ‘Culture and Communications: Countering Conspiracies in Organisational Risk Management’, New Avenues in Crisis Management, pp. 147-55.
  23. 12 William Richardson (1993), ‘Identifying the Cultural Causes of Disasters: An Analysis of the Hillsborough Football Stadium Disaster’, Journal of Contingencies and Crisis Management, 1, pp. 27-35.
  24. 13 Bill Keepin and Brian Wynne (1984), ‘Technical Analysis of IIASA Energy Scenarios’, Nature, 312, pp. 691-95.
  25. 14 Christine M. Pearson and Ian I. Mitroff (1993), ‘From Crisis Prone to Crisis Prepared: A Framework for Crisis Management’, Academy of Management Executive, 7, pp. 48-59.
  26. 15 Peter Nijkamp (1994), ‘Global Environmental Change: Management Under Long-range Uncertainty’, Journal of Contingencies and Crisis Management, 2, pp. 1-9.
  27. 16 Gerald Mars and Steve Frosdick (1997), ‘Operationalising the Theory of Cultural Complexity: A Practical Approach to Risk Perceptions and Workplace Behaviours’, International Journal of Risk, Security and Crime Prevention, 2, pp. 115-29.
  28. 17 Michael P. Hottenstein and James W. Dean Jr (1992), ‘Managing Risk in Advanced Manufacturing Technology’, California Management Review, 34, pp. 112-26.
  29. 18 Karlene H. Roberts, Denise M. Rousseau and Todd R. La Porte (1994), ‘The Culture of High Reliability: Quantitative and Qualitative Assessment Aboard Nuclear-Powered Aircraft Carriers’, The Journal of High Technology Management Research, 5, pp. 141-61.
  30. 19 John Robertson and Roger W. Mills (1988), ‘Company Failure or Company Health? - Techniques for Measuring Company Health’, Long Range Planning, 21, pp. 70-77.
  31. 20 Matthew Bishop (1996), ‘Corporate Risk Management: A New Nightmare in the Boardroom’, The Economist, 10 February, pp. 3-6, 9-10, 15-22.
  32. 21 Steve Frosdick (1995), ‘“Safety Cultures” in British Stadia and Sporting Venues: Understanding Cross-organizational Collaboration for Managing Public Safety in British Sports Grounds’, Disaster Prevention and Management, 4, pp. 13-21.
  33. 22 Katarina Svensson Kling, Michael J. Driver and Rikard Larsson (1999), ‘The Human Side of the Banks’ Credit Management of Small Firms - A Cognitive Approach to Corporate Evaluation’, Conference Proceedings of the 3rd International Stockholm Seminar on Risk Behaviour and Risk Management, Stockholm, Sweden, pp. 115-54.
  34. 23 Katarina Svensson and Per-Ola Ulvenblad (1995), ‘Management of Bank Loans to Small Firms in a Market with Asymmetric Information - An Integrated Concept’, Scandinavian Institute for Research in Entrepreneurship (SIRE), Working Paper 1995:2, pp. ii, 1-23.
  35. 24 Michael Regester (1987), ‘Prevention is Better than Cure’ in ‘Crisis Management: How to Turn a Crisis into an Opportunity’, Hutchinson Business, pp. 143-53.
  36. 25 Alexander Goulielmos and Ernestos Tzannatos (1997), ‘The Man-Machine Interface and its Impact on Shipping Safety’, Disaster Prevention and Management, 6, pp. 107-17.
  37. 26 Erik H. Bax (1995), ‘Organization and the Management of Safety Risks in the Chemical Process Industry’, Journal of Contingencies and Crisis Management, 3, pp. 165-80.
  38. 27 Barry A. Turner (1991), ‘The Development of a Safety Culture’, Chemistry and Industry, 1, pp. 241-43.
  39. 28 Nick Pidgeon (1997), ‘The Limits to Safety? Culture, Politics, Learning and Man-Made Disasters’, Journal of Contingencies and Crisis Management, 5, pp. 1-14.
  40. 29 Brian Wynne (1983), ‘Redefining the Issues of Risk and Public Acceptance. The Social Viability of Technology’, Futures, 15, pp. 13-32.
  41. 30 B. Toft and B.A. Turner (1987), ‘The Schematic Report Analysis Diagram: A Simple Aid to Learning from Large-scale Failures’, International CIS Journal, 1, pp. 12-23.
  42. 31 Michael Thompson and Michael Warburton (1985), ‘Decision Making Under Contradictory Certainties: How to Save the Himalayas When You Can’t Find Out What’s Wrong With Them’, Journal of Applied Systems Analysis, 12, pp. 3-34.
  43. 32 David A. Bella (1987), ‘Organizations and Systematic Distortion of Information’, Journal of Professional Issues in Engineering, 113, pp. 360-70.
  44. 33 Heather Höpfl (1994), ‘Safety Culture, Corporate Culture, Organizational Transformation and the Commitment to Safety’, Disaster Prevention and Management, 3, pp. 49-58.
  45. 34 B. Toft (1992), ‘The Failure of Hindsight’, Disaster Prevention and Management, 1, pp. 48-60.
  46. 35 Jon Elster (1979), ‘Risk, Uncertainty and Nuclear Power’, Social Science Information, 18, pp. 371-400.
  47. 36 Ian I. Mitroff (1994), ‘Crisis Management and Environmentalism: A Natural Fit’, California Management Review, 36, pp. 101-13.
  48. Name Index