The Black Swan Problem

Risk Management Strategies for a World of Wild Uncertainty

Hakan Jankensgard

About this book

An incisive framework for companies seeking to increase their resilience

In The Black Swan Problem: Risk Management Strategies for a World of Wild Uncertainty, renowned risk and finance expert HÄkan JankensgÄrd delivers an extraordinary and startling discussion of how firms should navigate a world of uncertainty and unexpected events. The book examines three fundamental, high-level strategies for creating resilience in the face of "black swan" risks (highly unlikely but devastating events): insurance, buffering, and flexibility.

The author also presents:

  • Detailed case studies, stories, and examples of major firms that failed to anticipate Black Swan Problems and, as a result, were either wiped out or experienced a major strategy disruption
  ‱ An extension of the usual academic focus on individual biases to an analysis of Swans from an organizational perspective, priming organizations for proactive rather than reactive action
  • Practical applications and tactics to mitigate Black Swan risks and protect corporate strategies against catastrophic losses and the collateral damage that they cause
  • Strategies and tools for turning Black Swan events into opportunities, reflecting the fact that resilience can be used for strategic advantage

An expert blueprint for companies seeking to anticipate, mitigate, and process tail risks, The Black Swan Problem is a must-read for students and practitioners of risk management, executives, founders, managers, and other business leaders.

Information

Publisher
Wiley
Year
2022
ISBN
9781119868163
Edition
1

CHAPTER ONE
The Swans Revisited

The Black Swan of the popular imagination is one that swoops down from a clear blue sky, creating massive disorder in a very short amount of time. We expect it to be sudden and dramatic. The archetypical Black Swan is perhaps the 9/11 attack on the Twin Towers in New York in 2001. Virtually nobody could have imagined such a thing. It was simply not on the mental map that something like that should even exist. Yet it happened, and in a single stroke, the world was a different place. The path we were on changed. The attack led to a whole new security apparatus, the war on terror, and the war in Iraq, to mention but a few of its consequences.
Actually, the ‘out of the blue’ aspect is not part of the original framework. Some Swans cited by Taleb take years if not centuries to play out. According to Taleb, Black Swans have just three attributes, none of which refers to suddenness. First, they are highly improbable. Second, they are highly consequential. Third, they make perfect sense after the fact.1 When people talk about Black Swans it is usually the first two aspects they focus on, as if the term were essentially shorthand for low probability high impact risks. Simplifying in this way is wholly consistent with the reason that the Black Swan problem exists in the first place, reflecting as it does our tendency to reduce the number of dimensions of the phenomenon before us down to something more tractable and convenient.
Equating Black Swans with ‘mere’ low probability high impact risk, however, is to do the concept significant injustice. In reality, the Black Swan framework is valuable because it represents an altogether different way of approaching the world. Taleb asks us to reconsider some of our core assumptions about the very nature of the randomness we face as decision‐makers and the inferences we make based on what we can observe. Furthermore, he brings our attention to the crucial role of expectations and attitudes in dealing with uncertainty. The problem, Taleb explains, is one of not being humble enough with respect to the limitations of our knowledge. If we believe the world consists of a certain kind of randomness and that we can have mastery over it, we may be in for some pretty bad surprises if those beliefs do not conform with reality. We can try to impose crisp and stylized ideas that appeal to our aesthetic sensibilities as much as we want, but the chaotic world we live in refuses to bend. This insistence on abstract beauty is what Taleb has in mind when he labels something as ‘Platonic’, after the famed Greek philosopher who saw loveliness in order and maintained that it could be superimposed on the messy reality we can observe with our senses (Taleb, 2007, p.19).

THE NATURE OF RANDOMNESS

Randomness refers to unpredictability. It applies whenever the outcome for some variable, such as the number of visitors to the Louvre on a given weekday, cannot be known with certainty beforehand. It is a function of our inability to know and predict the future. Try as we might, we never seem able to build those perfect forecasting algorithms that get it right all the time. In fact, as Taleb is at pains to point out, our overall track record in forecasting is awful (more on this later).
Why is there a general failure to predict what the future will bring? To answer this question, first consider that one very basic source of randomness is the physical world itself, which is constantly changing through processes that we do not fully comprehend. Science marches on, chipping away at the ignorance that produces apparent randomness. But despite the many laws of nature that have been uncovered, we never know where the next lightning will strike or how ocean currents will respond to changes in melting ice sheets. In the end, there are too many variables and too many complicated feedback loops in these highly dynamic systems. On top of that there is human civilization itself. While once rudimentary and mostly local, over time society has become complex beyond imagination. Technical innovations have made possible advanced systems that increasingly connect people across different parts of the globe. It is fundamentally unknowable what outcomes these vast and interconnected systems of interacting people and technologies will produce. Human agency by itself explains why the future keeps bringing so many surprises, as the 9/11 attack illustrates. It should be clear that we are up against a complexity that is beyond our ability to predict successfully.
The difficulties we face in predicting the future are related to the problem of induction, a classic problem in philosophy. While data can certainly teach us a great deal about the workings of the world, the philosopher and sceptic David Hume made us realize that we cannot arrive at secure knowledge on the basis of empirical observations. The problem of induction says that no matter how many observations you obtain, you cannot know for sure that the observed pattern is going to hold in the future. This inherent limitation is at the heart of the Black Swan concept. Any knowledge obtained through observation, Taleb says, is fragile. It is what the Black Swan metaphor itself is meant to convey. Recall that millions of observations on white swans had seemingly verified the notion that all swans are white, and it only took one observation of a black one to falsify it. Along the same lines, Peter Bernstein (1996) observed in his epic story about risk that ‘
 history repeats itself, but only for the most part’2 (emphasis added). This sentence really sums it all up and explains why induction is treacherous ground for making assumptions about the future.
Once we capitulate to the fact that we cannot predict the future, the next best thing would be to be able to characterize randomness itself, i.e. describe it. In that way, we would have some idea about the scope for deviations from what we expect. A description of randomness would involve some degree of quantification of things like the range within which the values of a variable can be assumed to fall and how the outcomes are distributed within that range (frequencies). We might occasionally find such descriptions of random processes to be practically relevant insofar as they help us make informed decisions and our future wellbeing depends on the outcome of the variable in question. They are potentially helpful, for example, in coming up with a reasonable analysis of the trade‐off between risk and return in different kinds of investment situations.
When characterizing randomness, a useful first distinction is between uncertainty and known odds.3 Uncertainty simply means that the odds are not known, indeed cannot be known. When randomness is of this sort, there is no way of knowing with certainty the range of outcomes and their respective probabilities. Known odds, in contrast, means that we have fixed the range of outcomes and the associated probabilities. The go‐to example is the roll of a die, in which the six possible outcomes have equal probabilities. Drawing balls with different colours out of an urn is another favourite textbook example of controlled randomness.
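To make the contrast concrete, here is a minimal Python sketch (my own illustration, not from the book): for a fair die the odds are known exactly, one in six per face, while any finite run of rolls only ever approximates them.

```python
# Known odds vs. observed frequencies for a fair die (illustrative sketch).
import random
from collections import Counter

random.seed(42)  # fixed seed so the example is reproducible

rolls = [random.randint(1, 6) for _ in range(10_000)]
counts = Counter(rolls)

for face in range(1, 7):
    observed = counts[face] / len(rolls)
    print(f"face {face}: known probability {1/6:.4f}, observed frequency {observed:.4f}")
```

Real-world variables offer no such luxury: there is no underlying ‘die’ whose faces and weights we can inspect before the fact.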
Uncertainty, it turns out, is what the world has to offer. In fact, known odds hardly exist outside man‐made games. This is the case for exactly the same reasons that forecasting is generally unsuccessful: there are some hard limits to our theoretical knowledge of the world.4 There is ample data, for sure, which partly makes up for it. But the world generates only one observable outcome at a time, out of an infinite number of possibilities, through mechanisms and interactions that are beyond our grasp. There is nothing to say that we should be able to objectively pinpoint the odds of real‐world phenomena. Whenever a bookie, for example, offers you odds on the outcome of the next presidential election, it is a highly subjective estimate (tweaked in favour of the bookie).
Whenever data exists, it is of course possible to try to use it to come up with descriptions of the randomness in a stochastic process. Chances are that we can ‘fit’ the data to one of the many options available in our library of theoretical probability distributions. Once we have done so, we have seemingly succeeded in our quest to describe randomness, or to turn it into something resembling known odds. This is the frequentist approach to statistical inference, in which observed frequencies in the data provide the basis for probability approximations. Failure rates for a certain kind of manufacturing process, for example, can serve as a reasonably reliable indication of the probability of failure in the future.
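As a rough sketch of this frequentist logic (the figures below are hypothetical, not taken from the book), a failure probability can be estimated from historical counts, together with an interval that captures sampling error only:

```python
# Frequentist estimate of a failure probability from historical counts
# (hypothetical numbers for illustration).
import math

n_units = 50_000     # units produced historically (assumed figure)
n_failures = 175     # failures observed among them (assumed figure)

p_hat = n_failures / n_units
se = math.sqrt(p_hat * (1 - p_hat) / n_units)          # normal-approximation standard error
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se    # approximate 95% interval

print(f"estimated failure probability: {p_hat:.4%}")
print(f"approximate 95% interval: [{lower:.4%}, {upper:.4%}]")
# Note: the interval reflects sampling noise under the assumption that the
# process is stable; it says nothing about a regime change or a Black Swan.
```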
It is important to see, however, that even when we are able to work with large quantities of data, we are still in the realm of uncertainty. The data frequencies typically only approximate one of the theoretical distributions. What is more, the way we collect, structure, and analyse these data points determines how we end up characterizing the random process and therefore the probabilities we assign to different outcomes. To the untrained eye, they might seem like objective and neutral probabilities because they are data‐driven and obtained by ‘scientists’. However, there is always some degree of subjectivity involved in the parameterization. The model used to describe the process could end up looking different depending on who designs it. Hand a large dataset over to ten scientists and ask them what the probability of a certain outcome is, and you may well get ten different answers. Because of the problem of induction, as discussed, there is always the possibility that the dataset, i.e. history, is a completely misleading guide to the future. Whenever we approximate probabilities using data, we assume that the data points we use are representative for describing the future.
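A small numerical sketch can make this subjectivity tangible (again my own example, not the author's): two analysts fitting the same simulated dataset, one assuming a normal distribution and one a heavier-tailed Student-t, arrive at very different probabilities for the same extreme outcome.

```python
# Same data, two distributional assumptions, two very different tail probabilities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-in for ~10 years of daily returns, with occasional large moves.
data = rng.standard_t(df=3, size=2500) * 0.01

mu, sigma = stats.norm.fit(data)          # Analyst A: normal distribution
df_t, loc_t, scale_t = stats.t.fit(data)  # Analyst B: Student-t distribution

threshold = -0.05  # a 5% one-day loss
p_normal = stats.norm.cdf(threshold, mu, sigma)
p_student = stats.t.cdf(threshold, df_t, loc_t, scale_t)

print(f"P(loss worse than 5%) under the normal fit:    {p_normal:.2e}")
print(f"P(loss worse than 5%) under the Student-t fit: {p_student:.2e}")
```

Both analysts use the same data and standard tools; the divergence comes entirely from the modelling choice.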

THE MOVING TAIL

At this point, we are ready to conclude that the basic nature of randomness is uncertainty. Known odds, probabilities in the purest sense of the word, are an interesting man‐made exception to that rule. If we accept that uncertainty is what we are dealing with, a natural follow‐up question is: What is uncertainty like? A distinction we will make in this regard is between ‘benign’ and ‘wild’ uncertainty.5 Benign uncertainty means that we do not have perfect knowledge of the underlying process that generates the outcomes we observe, but the observations nonetheless behave as if they conform to some statistical process that we are able to recognize. Classic examples of this are the distributions of things like height and IQ in a population, which the normal distribution seems to approximate quite well.
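For a sense of what ‘benign’ means in practice, the following sketch (with assumed, illustrative parameters) shows how quickly extreme deviations become negligible under a normal distribution:

```python
# Tail probabilities under a normal distribution (assumed illustrative parameters).
from scipy import stats

mean_height, sd_height = 175.0, 7.5  # adult height in cm, assumed values

for h in (190, 205, 220):
    p = stats.norm.sf(h, mean_height, sd_height)  # P(height > h)
    print(f"P(height > {h} cm) = {p:.2e}")
```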
While the normal di...
