The Failure of Risk Management

Why It's Broken and How to Fix It

Douglas W. Hubbard

About This Book

A practical guide to adopting an accurate risk analysis methodology

The Failure of Risk Management provides effective solutions to significant faults in current risk analysis methods. Conventional approaches to managing risk lack accurate quantitative analysis methods, yielding strategies that can actually make things worse. Many widely used methods have no systems to measure performance, resulting in inaccurate selection and ineffective application of risk management strategies. These fundamental flaws propagate unrealistic perceptions of risk in business, government, and the general public. This book provides expert examination of essential areas of risk management, including risk assessment and evaluation methods, risk mitigation strategies, common errors in quantitative models, and more. Guidance on topics such as probability modelling and empirical inputs emphasizes the efficacy of appropriate risk methodology in practical applications.

Recognized as a leader in the field of risk management, author Douglas W. Hubbard combines science-based analysis with real-world examples to present a detailed investigation of risk management practices. This revised and updated second edition includes updated data sets and checklists, expanded coverage of innovative statistical methods, and new cases of current risk management issues such as data breaches and natural disasters.

  • Identify deficiencies in your current risk management strategy and take appropriate corrective measures
  • Adopt a calibrated approach to risk analysis using up-to-date statistical tools
  • Employ accurate quantitative risk analysis and modelling methods
  • Keep pace with new developments in the rapidly expanding risk analysis industry

Risk analysis is a vital component of government policy, public safety, banking and finance, and many other public and private institutions. The Failure of Risk Management: Why It's Broken and How to Fix It is a valuable resource for business leaders, policy makers, managers, consultants, and practitioners across industries.

Information

Publisher: Wiley
Year: 2020
ISBN: 9781119522041
Edition: 2

PART ONE
An Introduction to the Crisis

CHAPTER 1
Healthy Skepticism for Risk Management

It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.
—CARL SAGAN
Everything's fine today, that is our illusion.
—VOLTAIRE
What is your single biggest risk? How do you know? These are critical questions for any organization regardless of industry, size, structure, environment, political pressures, or changes in technology. Any attempt to manage risk in these organizations should involve answering these questions.
We need to ask hard questions about new and rapidly growing trends in management methods, especially when those methods are meant to help direct and protect major investments and inform key public policy. The application of healthy skepticism to risk management methods was long past due when I wrote the first edition of this book more than a decade ago.
The first edition of this book came out on the tail end of the Great Recession in 2008 and 2009. Since then, several major events have resulted in extraordinary losses both financially and in terms of human health and safety. Here are just a few:
  • Deepwater Horizon offshore oil spill (2010)
  • Fukushima Daiichi nuclear disaster (2011)
  • Flint, Michigan, water system contamination (starting 2014)
  • Samsung Galaxy Note 7 battery failures (2016)
  • Multiple large data breaches (Equifax, Anthem, Target, etc.)
  • Amtrak derailments/collisions (2018)
Events such as these and other natural, geopolitical, technological, and financial disasters in the beginning of the twenty-first century periodically accelerate (maybe only temporarily) interest in risk management among the public, businesses, and lawmakers. This continues to spur the development of several risk management methods.
The methods used to determine risks vary greatly among organizations. Some of these methods—used to assess and mitigate risks of all sorts and sizes—are recent additions in the history of risk management and are growing in popularity. Some are well established and highly regarded. Some take a very soft, qualitative approach and others are rigorously quantitative. If some of these methods are better than others, or if some are fundamentally flawed, then we should want to know.
Actually, there is very convincing evidence about the effectiveness of different methods and this evidence is not just anecdotal. As we will see in this book, this evidence is based on detailed measurements in large controlled experiments. Some points about what works are even based on mathematical proofs. This will all be reviewed in much detail but, for now, I will skip ahead to the conclusion. Unfortunately, it is not good news.
I will make the case that most of the widely used methods are not based on any proven theories of risk analysis, and there is no real, scientific evidence that they result in a measurable improvement in decisions to manage risks. Where scientific data does exist, the data show that many of these methods fail to account for known sources of error in the analysis of risk or, worse yet, add error of their own.
Most managers would not know what they need to look for to evaluate a risk management method and, more likely than not, can be fooled by a kind of “analysis placebo effect” (more to come on that).1 Even under the best circumstances, where the effectiveness of the risk management method itself was tracked closely and measured objectively, adequate evidence may not be available for some time.
A more typical circumstance, however, is that the risk management method itself has no performance measures at all, even in the most diligent, metrics-oriented organizations. This widespread inability to make the sometimes-difficult differentiation between methods that work and methods that don't work means that ineffectual methods are likely to spread. Once certain methods are adopted, institutional inertia cements them in place with the assistance of standards and vendors that refer to them as “best practices.” Sometimes they are even codified into law. Like a dangerous virus with a long incubation period, methods are passed from company to company with no early indicators of ill effects until it's too late.
The consequences of flawed but widely adopted methods are inevitably severe for organizations making critical decisions. Decisions regarding not only the financial security of a business but also the entire economy and even human lives are supported in large part by our assessment and management of risks. The reader may already start to see the answer to the first question at the beginning of this chapter, "What is your single biggest risk?"

A “COMMON MODE FAILURE”

The year 2017 was remarkable for safety in commercial air travel. There was not a single fatality worldwide from an accident. Air travel had already been the safest form of travel for decades. Even so, luck had some part to play in the 2017 record, but that luck would not last. That same year, a new variation of the Boeing 737 MAX series passenger aircraft was introduced: the 737 MAX 8. Within twelve months of the initial rollout, well over one hundred MAX 8s were in service.
In 2018 and 2019, two crashes of the MAX 8, totaling 346 fatalities, showed that a particular category of failure was still very possible in air travel. Although the details of the two 737 crashes were still emerging as this book was written, they appear to be an example of a common mode failure. In other words, the two crashes may be linked to the same cause. This is a term familiar in systems risk analysis in some areas of engineering, where several failures can have a common cause. This would be like a weak link in a chain, but where the weak link was part of multiple chains.
I had an indirect connection to another common mode failure in air travel thirty years before this book came out. In July 1989, I was the commander of the Army Reserve unit in Sioux City, Iowa. It was the first day of our two-week annual training, and I had already left for Fort McCoy, Wisconsin, with a small group of support staff. The convoy with the rest of the unit was going to leave that afternoon, about five hours behind us. But just before the main body was ready to depart, the rest of my unit was deployed for a major local emergency.
United Airlines flight 232 to Philadelphia was being redirected to the small Sioux City airport because of serious mechanical difficulties. It crashed, killing 111 passengers and crew. Fortunately, the large number of emergency workers available and the heroic airmanship of the crew helped make it possible to save 185 onboard. Most of my unit spent the first day of our annual training collecting the dead from the tarmac and the nearby cornfields.
During the flight, the DC-10's tail-mounted engine failed catastrophically, causing the fast-spinning turbine blades to fly out like shrapnel in all directions. The debris from the turbine managed to cut the lines to all three redundant hydraulic systems, making the aircraft nearly uncontrollable. Although the crew was able to guide the aircraft in the direction of the airport by varying the thrust to the two remaining wing-mounted engines, the lack of tail control made a normal landing impossible.
Aviation officials would refer to this as a “one-in-a-billion” event2 and the media repeated this claim. But because mathematical misconceptions are much more common than one in a billion, if someone tells you that something that had just occurred had merely a one-in-a-billion chance of occurrence, you should consider the possibility that they calculated the odds incorrectly.
This event, as may be the case with the recent 737 MAX 8 crashes, was an example of a common mode failure because a single source caused multiple failures. If the failures of the three hydraulic systems were entirely independent of each other, then the loss of all three systems on the DC-10 would be extremely unlikely. But because all three hydraulic systems had lines near the tail engine, a single event could damage all of them. The common mode failure wiped out the benefits of redundancy. Likewise, a single software problem may be the common cause behind multiple 737 MAX 8 crashes.
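To make the arithmetic behind that point concrete, here is a minimal Python sketch using purely hypothetical failure rates (not figures from Boeing, United, or the NTSB). It compares the chance of losing all three redundant hydraulic systems when failures are independent with the chance once a single shared hazard can disable all three at once.

    # Minimal sketch with hypothetical numbers: why a common cause defeats redundancy.

    # Assume each hydraulic system fails on a given flight with probability 1e-5.
    p_single = 1e-5

    # If the three systems can only fail independently, all three are lost
    # together only when each fails on its own: p**3.
    p_all_independent = p_single ** 3            # 1e-15, effectively never

    # Now add a shared hazard, e.g. an uncontained engine failure that severs
    # all three hydraulic lines at once; assume it occurs with probability 1e-7.
    p_common_cause = 1e-7

    # All three systems are lost if the shared event occurs, or (failing that)
    # if each system happens to fail independently on the same flight.
    p_all_with_common_mode = p_common_cause + (1 - p_common_cause) * p_all_independent

    print(f"independent failures only: {p_all_independent:.1e}")       # ~1.0e-15
    print(f"with a common mode hazard: {p_all_with_common_mode:.1e}")  # ~1.0e-07

Redundancy makes the independent-failure case astronomically unlikely, but the combined risk is dominated almost entirely by the shared hazard; under these assumptions, the extra hydraulic systems buy essentially nothing against it.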
Now consider that the cracks in the turbine blades of the DC-10 would have been detected except for what the National Transportation Safety Board (NTSB) called “inadequate consideration given to human factors” in the turbine blade inspection process. Is human error more likely than one in a billion? Absolutely. And human error in large complex software systems like those used on the 737 MAX 8 is almost inevitable and takes significant quality control to avoid. In a way, human error was an even-more-common common mode failure in the system.
But the common mode failure hierarchy could be taken even further. Suppose that the risk management method itself was fundamentally flawed. If that were the case, then perhaps problems in design and inspection procedures, whether in hydraulics or software, would be very hard to discover and much more likely to materialize. In effect, a flawed risk management method is the ultimate common mode failure.
And suppose these methods are flawed not just in one airline but in most organizations. The effects of disasters like Katrina, the financial crisis of 2008/2009, Deepwater Horizon, Fukushima, or even the 737 MAX 8 could be inadequately planned for ...
