Chapter 1
Better, Not Perfect
In April 2018, I was scheduled to be interviewed at an Effective Altruism conference at the Massachusetts Institute of Technology, about three miles from my home in Cambridge, Massachusetts.1 Unable to attend the whole conference, I arrived about an hour before my interview. I entered a large room filled with a few hundred attendees, most of them under the age of thirty, and had the somewhat random, and definitely lucky, opportunity of hearing the speaker before me, Bruce Friedrich. I had not met Bruce before, but his talk rocked my world, both personally and academically. A lawyer and the CEO of the Good Food Institute (gfi.org), Bruce introduced me to a new way of thinking about reducing animal suffering. He noted in his talk that the growth of vegetarianism (a commitment to eating no meat or fish) has been very limited. One clear reason for this is that preaching to your friends about the virtues of vegetarianism is not an effective way to change their behavior or maintain your relationships with them. So, what can a vegetarian do to help others also leverage the benefits of lower consumption of animals and improve society (by improving the environment and human health, making our food production more efficient so that we can feed the world's hungry, and reducing the risks of a growing antibiotic crisis)?
Bruce answered this question by introducing a world of entrepreneurs, investors (some amazingly wealthy), and scientists who are working with the Good Food Institute to create and encourage the consumption of new "meats" that taste very similar to meat, without requiring the pain, suffering, or death of any animals. These alternative meats included new plant-based products already on the market (such as Beyond Meat and the Impossible Burger), as well as "cultivated" (also called "clean" or "cell-based") meat that will be grown from the cells of real animals in a lab and produced without the need for more animal deaths. Bruce argued that producing meat alternatives that are tasty, affordable, and readily available in grocery stores and restaurants is a much more fruitful means of reducing animal suffering than preaching about the negative effects of meat consumption. It's a profitable enterprise, too: within a year of Bruce's talk, at its initial public offering, the relatively new company Beyond Meat was worth $3.77 billion. Months later, the company's value soared billions higher.
Many management scholars define leadership as the ability to change the hearts and minds of followers. But note that Bruce's strategy had little to do with changing people's values and everything to do with motivating them to change their behavior, with little or no sacrifice required. This is just one example of how we can adjust our own behavior, and encourage others to do the same, in ways that will create more net good. We'll explore many more examples in this book.
THE SPACE BETWEEN
I have spent my career as a business school professor. Business schools aim to offer practical research and instruction on how to do things better. I often give my students prescriptions for doing better, from making better decisions to negotiating more effectively to being better more broadly. By contrast, ethicists tend to be either philosophers who highlight how they think people should behave, or behavioral scientists who describe how people actually behave. We will aim to carve out a space between the philosophical and behavioral science approaches where we can prescribe action to be better. First, we need a clear understanding of the foundations on which we are building.
Philosophy's Normative Approach
Scholars from a range of disciplines have written about ethical decision making, but by far the most dominant influence has come from philosophers. For many centuries, philosophers have debated what constitutes moral action, offering alternative normative theories of what people should do. These normative theories generally differ on whether they argue for the maximization of aggregate good (utilitarianism), the protection of human rights and basic autonomy (deontology), or the protection of individual freedom (libertarianism). More broadly, moral philosophies differ in the trade-offs they make between creating value versus respecting people's rights and freedoms. However, they share an orientation toward recommending norms of behavior, a "should" focus. That is, philosophical theories tend to have very clear standards for what constitutes moral behavior. I am confident that I fail to achieve the standards of ethical behavior for most moral philosophies (particularly utilitarianism) on a regular basis, and that if I attempted to be purely ethical from a philosophical perspective, I would still fail.
Psychology's Descriptive Approach
In recent decades, particularly after the collapse of Enron at the beginning of the millennium, behavioral scientists entered the ethical arena to create the field of behavioral ethics, which documents how people behave; that is, it offers descriptive accounts of what we actually do.2 For example, psychologists have documented how we engage in unethical acts based on our self-interest, without being aware that we're doing so. People think they contribute more than they actually do, and see their organization and those close to them as more worthy than reality dictates. More broadly, behavioral ethics identifies how our surroundings and our psychological processes cause us to engage in ethically questionable behavior that is inconsistent with our own values and preferences. The focus of this descriptive research has not been on the truly bad guys we read about in the newspaper (such as Madoff, Skilling, or Epstein), but on research evidence showing that most good people do some bad things on a pretty regular basis.3
Better: Toward a Prescriptive Approach
We'll depart from both philosophy and psychology to chart a course that is prescriptive. We can do better than the real-world, intuition-based behavior observed and described by behavioral scientists, without requiring ourselves or others to achieve the unreasonably high standards demanded by utilitarian philosophers. We will go beyond diagnosing what is ethical from a philosophical perspective and where we go wrong from a psychological perspective to finding ways to be more ethical and do more good, given our own preferences. Rather than focusing on what a purely ethical decision would be, we can change our day-to-day decisions and behavior to ensure they add up to a more rewarding life. As we move toward being better, we'll lean on both philosophy and psychology for insights. A carefully orchestrated mix of the two yields a down-to-earth, practical approach to help us do more good with our limited time on this planet, while offering insight into how to be more satisfied with our life's accomplishments in the process. Philosophy will provide us with a goal state; psychology will help us understand why we remain so far from it. By navigating the space between, we can each be better in the world we actually inhabit.
ROAD MAPS FROM OTHER FIELDS
Using normative and descriptive accounts to generate a new prescriptive approach aimed at improving decisions and behavior is novel in the realm of ethics, but we've seen this evolution play out in other fields, namely negotiation and decision making.
Better Negotiations
For decades, research and theory in the field of negotiation were divided into two parts: normative (how people should behave) and descriptive (how people actually behave). Game theorists from the world of economics offered a normative account of how humans should behave in a world where all parties were completely rational and had the ability to anticipate full rationality in others. In contrast, behavioral scientists offered descriptive accounts of how people actually behave in real life. These two worlds had little interaction. Then Harvard professor Howard Raiffa came along with a brilliant (but terribly titled) concept that merged the two: an asymmetrically prescriptive/descriptive approach to negotiation.4 Raiffa's core insight was to offer the best advice possible to negotiators, without assuming that their counterparts would act completely rationally. Stanford professor Margaret Neale and I, along with a cohort of excellent colleagues, went on to augment Raiffa's prescriptions by describing how negotiators who are trying to behave more rationally can better anticipate the behavior of other, less-than-fully-rational parties.5 By adopting the goal of helping negotiators make the very best possible decisions, while accepting more accurate descriptions of how people behave, Raiffa, Neale, our colleagues, and I were able to pave a useful path that has changed how negotiation is taught at universities and practiced the world over.
Better Decisions
A similar breakthrough occurred in the field of decision making. Until the start of the new millennium, economists studying decision making offered a normative account of how rational actors should behave, while the emerging area of behavioral decision research described people's actual behavior. Implicit in the work of behavioral decision researchers was the assumption that if we can figure out what people do wrong and tell them, we can "debias" their judgment and prompt them to make better decisions. Unfortunately, this assumption turned out to be wrong; research has shown time and again that we do not know how to debias human intuition.6 For example, no matter how many times people are shown the tendency to be overconfident, they continue to make overconfident choices.7
Luckily, we have managed to develop approaches that help people make better decisions despite their biases. To take one example, the distinction between System 1 and System 2 cognitive functioning, beautifully illuminated in Daniel Kahneman's book Thinking, Fast and Slow, captures the two main modes of human decision making.8 System 1 refers to our intuitive system, which is typically fast, automatic, effortless, implicit, and emotional. We make most decisions in life using System 1 thinking: which brand of bread to buy at the supermarket, when to hit the brakes while driving, what to say to someone we've just met. In contrast, System 2 refers to reasoning that is slower, conscious, effortful, explicit, and logical, such as when we think about costs and benefits, use a formula, or talk to some smart friends. Lots of evidence supports the conclusion that System 2, on average, leads to wiser and more ethical decisions than System 1. While System 2 doesn't guarantee wise decisions, showing people the benefits of moving from System 1 to System 2 when making important decisions, and encouraging them to do so, moves us in the direction of better, more ethical decisions.9
Another prescriptive approach to decision making came from Richard Thaler and Cass Sunstein's influential 2008 book, Nudge.10 While we do not know how to fix people's intuition, Thaler and Sunstein argued that we can anticipate when gut instincts might cause a problem and redesign the decision-making environment so that wiser decisions result, an intervention strategy known as choice architecture. For example, to address the problem of people undersaving for retirement, many employers now enroll employees in 401(k) programs automatically and allow them to opt out of the plan. Changing the default from requiring people to enroll to enrolling them automatically has been shown to dramatically improve savings rates.
These fruitful developments in the fields of negotiation and decision making offer a road map: identify a useful goal from the normative tool kit (such as making more rational decisions) and combine it with descriptive research that clarifies the limits on optimal behavior. This prescriptive perspective has the potential to transform the way we think about what's right, just, and moral, which will lead us to be better.
A NORTH STAR FOR ETHICS
Our journey seeks to identify what better decisions would look like and chart a path to lead us in that direction. Much of moral philosophy is built on arguments that stipulate what would constitute the most moral behavior in various ethical dilemmas. Through the use of these hypotheticals, philosophers stake out general rules that they believe people should follow when making decisions that have an ethical component.
The most commonly used dilemma to highlight different views of moral behavior is known as the "trolley problem." In the classic form of the problem, you're asked to imagine that you are watching a runaway trolley that is bounding down a track. If you fail to intervene, the trolley will kill five people. You have the power to save these people by hitting a switch that will turn the trolley onto a side track, where it will run over and kill one workman instead. Setting aside potential legal concerns, would it be moral for you to turn the trolley by hitting the switch?11
[Figure: The Trolley Problem. Illustration © 2019 Robert C. Shonk]
Most people say yes, since the death of five people is obviously worse than the death of one person.12 In this problem, the popular choice corresponds to utilitarian logic. Utilitarianism, a philosophy rooted in the work of scholars such as Jeremy Bentham, John Stuart Mill, Henry Sidgwick, Peter Singer, and Joshua Greene, argues that moral action should be based on what will maximize utility in the world. This translates into what will create the most value across all sentient beings. Of course, it is very difficult to assess which action will maximize utility across people. But for utilitarians, having this goal in mind provides clarity in lots of decisions, including the trolley problem.
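To make the utilitarian arithmetic concrete, here is a minimal sketch (my own illustration, not drawn from the philosophers above) that tallies the consequences of each trolley option and picks the one with the greater aggregate value. The option names, the one-unit-per-life scoring, and the numbers are illustrative assumptions only; a real utilitarian analysis would weigh far richer consequences.

```python
# Illustrative sketch of utilitarian aggregation for the trolley problem.
# The scoring (one unit of value per life) and the outcome numbers are
# simplifying assumptions for illustration, not a full moral analysis.

outcomes = {
    "do nothing": 5,       # the trolley continues and five people die
    "hit the switch": 1,   # the trolley is diverted and one workman dies
}

def aggregate_value(lives_lost: int) -> int:
    """Score an outcome by total welfare, here simply the negative of lives lost."""
    return -lives_lost

# The utilitarian prescription: choose the action that maximizes aggregate value.
best_action = max(outcomes, key=lambda action: aggregate_value(outcomes[action]))
print(best_action)  # -> hit the switch
```

The same logic, applied far beyond runaway trolleys, is what the list of utilitarian commitments below captures: count everyone's welfare, count it equally, and choose the action that creates the most total value.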
For now, we use utilitarianism as a clear touchstone to help us navigate new terrain. Interestingly, many of us already endorse many of the basic moral constructs of utilitarianism:
- Creating as much value as possible across all sentient beings
- Behaving efficiently in the pursuit of the good that we can create
- Making moral decisions independent of our own wealth or status in society
- Valuing the interests of all equally
Most of my advice will hold up to criticisms of utilitarianism and be relevant even to readers who reject certain aspects of utilitarianism.
For practical purposes, maximizing aggregate value creation across all sentient beings will be the North Star of et...