Evidence-Based Decision-Making

How to Leverage Available Data and Avoid Cognitive Biases

Andrew D. Banasiewicz

About This Book

Evidence-Based Decision-Making: How to Leverage Available Data and Avoid Cognitive Biases examines how a wide range of factual evidence, primarily derived from a variety of data available to organizations, can be used to improve the quality of business decision-making, by helping decision makers circumvent the various cognitive biases that adversely impact how we all think.

The book is built on the following premise: During the past decade, a new 'data world' emerged, in which the rush to develop competencies around business analytics and data science can be characterized as nothing less than a new commercial arms race. The ever-expanding volume and variety of data are well known, as are the great advances in data processing/analytics, data visualization, and related information production-focused capabilities. Yet comparatively little effort has been devoted to how the informational products of business analytics and data science are 'consumed', or used, in organizational decision-making processes; the available evidence shows that only some of that information is used to drive some business decisions some of the time.

Evidence-Based Decision-Making details an explicit process describing how the universe of available and applicable evidence, which includes organizational and other data, industry benchmarks, scientific studies, and professional experience, can be assessed, amalgamated, and funneled into an objective driver of key business decisions.

Introducing key concepts relating to data and evidence, and the history of evidence-based management, this new and extremely topical book will be essential reading for researchers and students of data analytics, as well as those working in the private, public, and voluntary sectors.


Information

Publisher: Routledge
Year: 2019
ISBN: 9781351050050
Edition: 1
Pages: 270
Language: English
Subject: Business & Management

PART I

Decision-Making Challenges

1

Subjective Evaluations

It is well known that the human brain has essentially the same basic structure as other mammalian brains; yet, somehow, it gives rise to capabilities that enable humans to do so much more. And although manifestations of those capabilities span the spectrum from tragic to triumphant, the intellectual prowess that emanates from the roughly three pounds of squidgy matter that is the human brain seems limitless. From breathtaking works of art to astounding scientific and technological achievements, the sense of purpose, the need to understand and the need to believe, the ability to admire, to marvel, to dream, to imagine, are all born of this seemingly unremarkable structure.
Yet, brilliance is not omnipotence. When confronted with the task of quickly making sense of a large and diverse set of situational stimuli, the brain often makes use of sensemaking heuristics, or shortcuts, producing what is commonly referred to as intuition. Defined as the ability to understand something immediately, without the need for conscious reasoning, those nearly instantaneous conclusions feel very natural, and typically very 'right', but can ultimately turn out to be unwarranted or outright incorrect. Over the past few decades, psychologists and neuroscientists have documented and described numerous manifestations of cognitive bias, or instances in which sensemaking conclusions deviate from rational judgment. And yet, in spite of voluminous and convincing evidence pointing to more and more potential reasoning pitfalls, deference to intuition-inspired decision-making shows few signs of relenting.
The focus of this chapter is on the root causes, mechanics, and ultimately the impact of cognitive biases, in their many forms, on individual decision-making. Set in the illustrative context of machine vs. human information processing, the mechanics of human learning and remembering are examined, followed by an in-depth analysis of intuitive choice-making. The goal is to show how our – that is, human – information processing ‘mechanics’ can be a source of a persistent evaluative bias, ultimately leading to suboptimal choices and decisions.

Thinking and Games

Card counting is a common casino strategy, used primarily in the blackjack family of games, that helps the player decide whether the next hand is likely to give the advantage to the player or the dealer. The most basic variation of card counting in blackjack is rooted in the idea that high cards, most notably aces and 10s, benefit the player more than the dealer, whereas low cards, particularly 5s, but also 3s, 4s, and 6s, benefit the dealer more than the player. The strategy takes advantage of basic statistics: A high concentration of aces and 10s tends to diminish the inherent house advantage by increasing the player's chances of hitting a natural blackjack, whereas a high concentration of 5s and other low cards further compounds the house advantage. Those who are able to count cards, typically characterized as skilled players, can therefore modulate their betting, altering bet sizes based on the composition of the remaining cards. Thus, even though, on average, a blackjack player can expect to win only about 48% of the hands dealt and lose the remaining 52% (ignoring ties, which can be expected in about 9% of hands), varying the size of bets based on card count-adjusted outcome probability – that is, placing higher bets when counts are advantageous to the player and lower bets otherwise – can result in a player beating the house, as measured by monetary outcomes. As famously portrayed in the 2008 movie '21', card counting blackjack teams have been known to win big – millions of dollars big. Although legal in all major gaming jurisdictions, card counting with the mind is, not surprisingly, frowned upon by casinos, many of which invest considerable resources in technological and human countermeasures.
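The scheme just described matches the widely used Hi-Lo count. The minimal sketch below (the card values, true-count adjustment, and bet ramp are standard Hi-Lo conventions plus illustrative assumptions, not details taken from the text) shows how a running count, adjusted for the decks remaining, can drive bet sizing:

```python
# Minimal sketch of a Hi-Lo card count (a standard system consistent with,
# but not named in, the text): low cards (2-6) raise the count, high cards
# (10s and aces) lower it, so a positive count signals a player-favorable shoe.

HI_LO_VALUES = {
    '2': +1, '3': +1, '4': +1, '5': +1, '6': +1,
    '7': 0, '8': 0, '9': 0,
    '10': -1, 'J': -1, 'Q': -1, 'K': -1, 'A': -1,
}

def true_count(cards_seen, total_decks=6):
    """Running count adjusted for the number of decks still in the shoe."""
    running = sum(HI_LO_VALUES[card] for card in cards_seen)
    decks_remaining = max(total_decks - len(cards_seen) / 52, 0.5)
    return running / decks_remaining

def bet_size(tc, base_bet=10, max_bet=100):
    """Illustrative bet ramp: wager more only when the count favors the player."""
    if tc <= 1:
        return base_bet                        # no edge: bet the minimum
    return min(base_bet * int(tc), max_bet)    # scale the wager with the edge

seen = ['2', '5', '6', '3', 'K', '4']          # mostly low cards have left the shoe
tc = true_count(seen)
print(f"true count: {tc:.2f}, suggested bet: {bet_size(tc)}")
```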
At its core, counting cards offers blackjack players the ability to reduce decision ambiguity by relying on objective, in this case statistical, evidence. An unskilled player is likely to make betting choices based on his intuition alone – as such, that player is depending almost entirely on chance, which, as noted earlier, favors the house. A skilled player, on the other hand, enhances his intuition by taking into account empirical evidence, which chips away at the house advantage. We certainly cannot dismiss the possibility of an unskilled player winning, and possibly even winning big – after all, virtually all lottery jackpot winners are just lucky pickers of essentially random numbers. However, there is a considerable difference between picking a favorable outcome in a single-trial event, such as a lottery drawing, and winning a game comprised of a series of sequentially dependent decisions. The truly interesting point here is that the decision-guiding precision of card counting-derived information is relatively low, ultimately just enabling players to develop more refined expectations regarding the likely composition of the mix of cards in the shoe – in essence, it just tightens the estimated probability ranges. And yet that seemingly small amount of information is enough to deliver impressive and repeated player wins, and certainly enough to compel casinos to invest in a variety of countermeasures, including decreasing deck penetration, preferential shuffling, and large wager increase-triggered shuffling, to name just a few. In short, a skilled player's mind can translate relatively small infusions of objective insights into disproportionately large benefits, as the simple calculation below illustrates.
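A back-of-the-envelope calculation makes the point concrete. The numbers below are illustrative assumptions, not measured blackjack statistics, but they show how a small, occasionally exploitable edge can flip a losing game:

```python
# Back-of-envelope illustration (the 20% favorable-hand share and the 0.52
# edge are assumptions for illustration): flat betting at a 48/52 disadvantage
# loses, but raising bets only on count-favorable hands can win overall.

def expected_value(p_win, bet):
    """Expected profit of a single even-money hand, ignoring ties."""
    return p_win * bet - (1 - p_win) * bet

flat = 100 * expected_value(0.48, 10)   # 100 hands, always betting 10
# Suppose the count marks ~20% of hands as player-favorable (p_win = 0.52):
modulated = 80 * expected_value(0.48, 10) + 20 * expected_value(0.52, 50)
print(f"flat betting: {flat:+.0f}, count-modulated betting: {modulated:+.0f}")
# flat betting: -40, count-modulated betting: +8
```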
Although the exact mechanics of how that happens are still shrouded in mystery, some possible clues might be offered by considering a different game, this time not a game of chance but what might be considered the ultimate game of strategy: chess. This centuries-old game (it is believed to have originated in India around the 6th century AD) is a true test of cerebral fitness. To prevail, a player needs to consider a wide range of available strategies and tactical moves, all while recognizing and adapting to the opponent's moves. Given its highly analytical, zero-sum (one player's gain is the other player's loss), perfect information (all positions are fully visible to both players), and, perhaps most importantly, combinatorial (each successive move generates a typically large set of possibilities) nature, chess naturally lends itself to machine play in the form of computer-based chess-playing systems. Thus, not surprisingly, the history of those systems roughly parallels the history of what is known as 'artificial intelligence' (AI).
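The properties listed above, zero-sum, perfect information, and combinatorial, are precisely what classical game-tree search exploits. Below is a minimal negamax sketch; it is a generic illustration of the technique, not Deep Blue's actual engine, and `evaluate`, `legal_moves`, and `apply_move` are hypothetical stand-ins for a real engine's components:

```python
# Minimal negamax game-tree search, the classic algorithm family behind
# computer chess. A generic illustration, not Deep Blue's engine;
# evaluate(), legal_moves(), and apply_move() are hypothetical stand-ins.

def negamax(position, depth, evaluate, legal_moves, apply_move):
    """Return the best achievable score for the side to move.

    Zero-sum + perfect information means a single number captures both
    players' interests: whatever is good for me is exactly as bad for you.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    best = float('-inf')
    for move in moves:                        # combinatorial branching:
        child = apply_move(position, move)    # each move spawns a subtree
        score = -negamax(child, depth - 1, evaluate, legal_moves, apply_move)
        best = max(best, score)
    return best
```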
AI is a scientific endeavor of growing interest (and controversy) that aims to answer a basic question: Can a machine be made to think like a person? Almost from the start, that question was tied to the question of whether a machine could be made to play chess, as the strategic nature of that game embodies, in many regards, what we tend to view as a uniquely human combination of reason and creativity. Building on the work of Alan Turing, John von Neumann, Claude Shannon, and other early 20th-century information theory pioneers, designers of computerized chess-playing systems forecast that machines would come to dominate humans as early as the 1960s, but it took three more decades of advances in algorithmic design and computing power for that forecast to come true (computerized chess systems now routinely beat the best human players, so much so that those systems play other such systems for the 'best of the best' bragging rights). The tipping point was reached in the famous 1997 match, which pitted the then reigning world chess champion, Garry Kasparov, against IBM's supercomputer known as Deep Blue. That event marked the first time a computer defeated a reigning human chess champion under regular time controls (on average, 3 minutes per move). While the news of Deep Blue's victory caused a worldwide sensation, looking back at those events we should perhaps be more astounded that it took that long for a computer to better the best human player. After all, the game of chess is combinatorial in nature; moreover, given ever-increasing computer processing power and advances in evaluation algorithms, the eventual dominance of computerized chess-playing systems was an inescapable consequence of technological and scientific progress. The only true question was 'when'.
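A rough calculation hints at why 'when' took until the 1990s. Using the commonly cited average branching factor of chess, about 35 legal moves per position (an approximation, not a figure from the text), the game tree outgrows even very fast hardware within a few moves:

```python
# Rough combinatorial-explosion arithmetic. The ~35 average branching factor
# is a commonly cited approximation for chess, not a figure from the text,
# and 200 million positions/second is Deep Blue's oft-quoted evaluation speed.

BRANCHING = 35
POSITIONS_PER_SECOND = 200_000_000

for depth in (2, 4, 6, 8):
    positions = BRANCHING ** depth         # positions in a full tree of this depth
    seconds = positions / POSITIONS_PER_SECOND
    print(f"depth {depth}: ~{positions:.2e} positions, ~{seconds:,.1f} s to enumerate")

# Depth 8 (four moves per side) already holds ~2.3e12 positions and would take
# hours to enumerate, which is why engines prune aggressively rather than
# exhaustively search.
```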
Thus, the truly fascinating aspect of Kasparov's duel with Deep Blue was the astounding evidentiary asymmetry: Capable of evaluating more than 200 million chess positions per second, Deep Blue could, in a timed match and on a turn-by-turn basis, assess millions of possible move sequences, whereas its human opponent was only able to carefully consider a small handful, likely fewer than ten. Further adding to that disparity was the fact that although the designers of Deep Blue had access to hundreds of Kasparov's games, Kasparov himself was denied access to Deep Blue's recent games. Thus, the contest ultimately pitted a human chess player, relying primarily on an ends-and-means heuristic to intuitively determine the optimal outcomes of a few move sequences, against a computer programmed with the best strategies human chess players – as a group – have devised, with lightning fast access to a dizzying number of alternatives and an equally lightning fast decision engine powered by the most advanced algorithms devised and programmed by leading scientists. And let us not lose sight of the fact that Deep Blue, like all machines, was not hindered by factors such as fatigue or recall decay (forgetting). In view of the enormous informational and computational disparity, it is nothing short of amazing that the human champion convincingly won the initial (1996) bout, 4–2, and was only narrowly edged out in the (1997) rematch, in which, of six games, three were draws, two were won by Deep Blue, and one by Kasparov. Or at least that is how it would appear to a casual observer, one curious enough to ponder those matters but not necessarily knowledgeable enough to grasp the true essence of the answer. Let us take a closer look at the storage and processing speed aspects of human thinking, as those two considerations are at the core of perceived machine processing superiority.

Mind vs. Machine

Most of us take for granted that even the slowest electronic computers are orders of magnitude faster than our own 'mental computing'. While that is indeed the case when it comes to the speed with which even a computationally gifted human can, for instance, find the product of two large numbers, the opposite is actually true when we compare the speed of the brain's computational functions, taken as a whole, with that of an electronic computer: it turns out that even today's fastest supercomputers lag far behind our brain's computational prowess.
In a physical sense, the human brain can be described as approximately three pounds of very soft and highly fatty (at least 60% fat, the most of any human organ) tissue, made up of some 80–100 billion nerve cells known as neurons, the totality of which comprises what scientists refer to as 'gray matter'. Individual neurons are networked together via axons, wire-like connectors numbering in the trillions (it is believed that each neuron can form several thousand connections, which in aggregate translates into a staggering 160+ trillion synaptic connections), jointly referred to as 'white matter'. Functionally, gray matter performs the brain's computational work, whereas white matter enables communication among the different regions of the brain that are responsible for different functions (as in physical and mental processes) and where different types of information are stored; together, this axon-connected network of neurons forms a single-functioning storage, analysis, and command center, which can be thought of as our biological computer. It is also where the earlier mentioned ends-and-means heuristic, along with countless other processes, is executed, and thus to understand the efficacy of that seemingly simple process it is instructive to consider two distinct but closely intertwined aspects of the human brain: storage and processing speed.
Although billions of cells linked by trillions of connections make for a very large network (160+ trillion synaptic connections, as noted earlier), if each neuron were only capable of storing a single memory, the entire human brain network would only offer a few gigabytes of storage space, about the size of a small flash drive. However, research suggests that individual neurons 'collaborate' with each other, combining so that each individual cell helps with many memories at a time, which exponentially increases the brain's storage capacity, bringing it to around 2.5 petabytes, or about 2.5 million gigabytes. The mechanics of that 'collaboration' are complex and not yet fully understood, but they appear to be rooted in neurons' geometrically complex structure, characterized by multiple receptive mechanisms, known as dendrites, and a single, though highly branched, outflow (an axon) that can extend over relatively long distances. To put all of that in less abstract terms, the resultant storage makes it possible for the brain to pack enough footage to record roughly 300 years of nonstop TV programming, or about 3 million individual shows, which is more than enough space to retain every second of one's life (including countless chess strategies). Moreover, according to the emerging neuroscientific research consensus, the brain's storage capacity can grow, but it does not decrease. What drops off, at times precipitously, is retrieval strength, especially when memories – including the semantic and procedural knowledge critical to abstract thinking – are not reinforced (more on that later). And although electronic computers have, in principle and in practice, an infinite amount of storage, as more and more external storage can be added, the brain's storage capacity is limited but, at the same time, sufficient.
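Those headline figures can be sanity-checked with simple arithmetic; the video bitrate below is an illustrative assumption (roughly standard-definition quality), not a number from the text:

```python
# Sanity-checking the chapter's storage figures. The ~1 GB per hour of video
# is an illustrative assumption (roughly SD quality), not a number from the text.

brain_gb = 2.5 * 1_000_000        # 2.5 petabytes expressed in gigabytes
gb_per_hour = 1.0                 # assumed bitrate of 'TV programming'

hours = brain_gb / gb_per_hour
years = hours / (24 * 365)
print(f"~{years:,.0f} years of nonstop footage")   # ~285, i.e. roughly 300 years

# The 'few gigabytes if each neuron stored a single memory' claim is of the
# same order: ~100 billion neurons at one bit apiece is ~12.5 GB.
print(100e9 / 8 / 1e9, "GB at one bit per neuron")
```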
When expressed in terms of raw computing power, measured as the number of calculations performed in a unit of time, electronic computing devices appear to be orders of magnitude faster than humans. However, that conclusion is drawn from a biased comparison of the speed of rudimentary machine operations with the speed of higher-order human thinking. Setting aside deep learning-related applications (which entail layered, multidimensional machine learning), machine-executed computational steps can be characterized as one-dimensional and explicit, which is to say they tend to follow a specific step-by-step logic built around sequential input-output processes. In contrast, the brain's computation is predominantly non-explicit and multidimensional, which is to say that we are unaware of the bulk of the computations running in our mental background, and our cognitive problem solving takes place at a higher level of abstraction. And so while it is tempting to compare the speed with which a human can consciously execute a specific computational task to the speed with which the same task can be accomplished by a machine – as was the case with the earlier Kasparov vs. Deep Blue chess move evaluation comparison – doing so effectively compares the speed of high-order human reasoning with rudimentary machine-based computation. It is a bit like comparing the amount of time required for an author to write a captivating novel to the amount of time required by a skilled typist to retype the content of that novel.
Let us then take a closer look at how the rudimentary speed of our biological computer stacks up against an electronic computer. First, some important qualitative considerations: When Deep Blue was evaluating chess move sequences, it could devote close to 100% of its computational resources to the task at hand. When Kasparov was pondering his next move, his brain had to allocate considerable resources to a myriad of physiological functions, such as maintaining appropriate body temperature and blood pressure, controlling the heart rate and breathing, and controlling the entire musculoskeletal structure to allow Kasparov to remain in a particular position and engage in specific movements, in addition, of course, to mental activities such as thinking. And though we rarely consciously think about it, just a single one of those functions entails a staggering amount of computational work on the part of our brain. To that end, in a 2014 experiment, a group of clever Japanese and German researchers managed to simulate a single second of human brain activity using what was then the fourth fastest supercomputer in the world (the K Computer, powered by nearly 83,000 processors)...
