1
INTRODUCTION
Jean-François Bonnefon and Bastien Trémolière
In recent years, research on moral cognition has witnessed tremendous growth. To give only one example, research on morality increased eightfold in the pages of the journal Cognition during the last decade, arguably the most impressive topic shift in the history of the journal (Cohen Priva & Austerweil, 2015; Greene, 2015). This rapid growth is not without consequences for the psychology of reasoning. Indeed, new-wave moral psychology creates both a challenge and an opportunity for the psychology of reasoning. The challenge is to establish the relevance of reasoning research in the context of moral cognition, whose theories tend to make only token mentions of reasoning, or to ignore current developments in reasoning research altogether. The opportunity is to leverage the current impact of moral judgement research to reach a new and vast audience.
As it turns out, reasoning is often invoked in current research on morality, if only to underplay its role in the formation of moral judgement. Jonathan Haidt, for example, famously asserted that moral judgement is not rooted in reasoning, but first and foremost driven by non-reflective intuitions (Haidt, 2001, 2007, 2012). According to this view, reasoning is hardly involved when we reach a moral judgement or a moral decision – reasoning is rather deployed to rationalize our moral positions ex post, in order to reassure ourselves and others that we are in the right. Under this characterization, reasoning would not be the most interesting aspect of moral cognition; center stage goes instead to the emotions and intuitions that supposedly drive our decisions.
The dual-process approach to moral cognition (Greene, 2013; Greene, Sommerville, Nystrom, Darley, & Cohen, 2001) gives a larger role to reasoning, but it does so at the cost of a restricted definition of what counts as reasoning. In this model, intuition and reasoning compete to drive moral judgements. This approach would seem to put intuition and reasoning on an equal footing, but its definition of reasoning is narrower than what specialists of reasoning may have come to expect. Indeed, reasoning in the dual-process model of moral cognition amounts to consciously applying one or several decision rules – or ensuring that moral conclusions are consistent with one’s conscious commitment to one or several decision rules (Greene, 2013; Paxton & Greene, 2010).
In other words, the dual-process model of moral judgement restricts the use of the term ‘reasoning’ to the conscious manipulation of rules. This can be a source of confusion between the dual-process approach to reasoning and the dual-process approach to moral judgement. Indeed, the conscious manipulation of rules is typically included in the deliberative component of dual-process models of reasoning, also known as System 2. But dual-process models of reasoning also use the term (System 1) ‘reasoning’ for the fast, automatic processing of beliefs via affective or heuristic shortcuts (De Neys & Bonnefon, 2013; Evans, 2008; Sloman, 1996). In sum, dual-process models of reasoning and moral judgement share the view that the mind can process information either deliberately or intuitively – but they differ in what they call ‘reasoning’ under this view: only the deliberative processes for specialists of moral judgement, or the whole set of processes for specialists of reasoning.
This is not a deep theoretical chasm, only an inconsistent use of labels – but one that can create unnecessary confusion. Consider the case of ‘dyadic morality’ (Gray, Waytz, & Young, 2012). Dyadic morality theory postulates that the fundamental psychological template of moral transgressions is that of a wrongdoer harming a victim – that is, an agent inflicting suffering on a patient. According to the theory, people fill out the template (or force the situation into the template) when they encounter an action that seems wrong. This process requires people to construe relevant others as agents or patients, and to identify what harm has been inflicted, which can be easy (murder) or hard (masturbation).
Should we call this template-completion process ‘reasoning’? Schein, Goranson, and Gray (2015) forcefully respond (emphasis added): ‘As dyadic morality embraces the power of harm, some have assumed that it also embraces the reign of reason. Nothing could be further from the truth. The role of templates [. . .] progresses intuitively and automatically’. Although people may use considerations of harm in later moral reasoning, the authors tell us, people rapidly, automatically and effortlessly see and process harm in their initial moral judgements (Gray, Schein, & Ward, 2014). Ergo, they do not reason at this initial stage. This is an example in which the term ‘reasoning’ is meant to exclude the intuitive, automatic processes of System 1. Again, this is only a matter of labels – but this restricted use of the term ‘reasoning’ can give the impression that the psychology of reasoning has little to contribute to the study of dyadic morality.
We believe that the psychology of reasoning does have a lot to contribute to contemporary theories of morality, independently of the weights that these theories put on intuition or deliberation. And yet, we observe that even though morality and reasoning are increasingly studied by the same scientists tackling the same issues, the two fields do not take full advantage of each other. It is at this juncture that we offer this collective volume on Moral Inferences, which we hope will serve as a checkpoint between the psychology of reasoning and the psychology of moral judgement.
This book is organized in three parts, which consider in turn the input, the processes, and the output of moral reasoning. The first three chapters are concerned with the premises of moral inferences, that is, the basic ingredients of moral reasoning. Minimally, these premises involve agents, acts, and the outcomes of these acts. In this regard, Goodwin summarizes several strands of debate that all tackle the same central issue: is it necessary and/or sufficient for outcomes to be harmful, for reasoning about these outcomes to be moral? Furthermore, Goodwin argues that a positive response to this question may allow for a more optimistic view of the role of reasoning in morality. Next, Waldmann, Wiegmann, and Nagel argue that the premises of moral reasoning do not just include agents, acts and outcomes, but also the causal structure that links these acts and outcomes. They offer three illustrations of the importance of this causal structure – showing, for example, that different causal structures highlight the fate of different victims even when acts and outcomes are kept constant. Finally, Royzman and Hagan offer several striking examples of all the inferences that reasoners make when they engage with a moral dilemma and elaborate its contents, even before they attempt to issue a moral judgement. Researchers who neglect these preparatory inferences, Royzman and Hagan show, run the risk of misinterpreting the theoretical implications of their results.
The second part of the book comprises four chapters exploring the mental processes that support moral inferences. Two chapters offer a nuanced perspective on the extent to which people process reasons for or against a moral decision. Among other issues, Koralus and Alfano show that the framing effects that arise from how reasons are weighed when people are asked to choose versus reject options (Shafir, 1993; Shafir, Simonson, & Tversky, 1993) can be found in the moral domain. In parallel, Mercier, Castelain, Hamid, and Marín-Picado identify boundary conditions on the use of reasons in moral decisions. In particular, they observe that although people can change their mind on a moral issue when presented with a strong argument, group discussions do not typically or consistently converge when they tackle a moral dilemma, the way they do when they tackle a reasoning problem. Two other chapters discuss provocative evidence on individual differences in moral reasoning, and specifically on individual differences in the propensity to accept ‘utilitarian’ acts that promote the greater good by inflicting harm on some individuals; or to endorse instead the so-called ‘deontic’ view, according to which it is morally impermissible to harm one, even when doing so results in a greater good for many. De Neys and Bialek adapt the conflict detection protocols used in reasoning research (De Neys, 2012) to show that people who give the deontic response experience a cognitive conflict, which rules out the possibility that (a substantial share of) deontic responders simply follow a cognitive shortcut and never consider matters of greater good. Finally, Baron engages in a thoughtful review of the available literature on individual differences in utilitarianism, concluding that such individual differences may not reflect differences in online processing of the dilemma, but rather result from the lifelong, cumulated effect of cognitive style – in other words, people who tend to give utilitarian responses in the lab do not necessarily engage in more reflective processing in the lab, but they are more likely to have spent some time pondering similar dilemmas before coming to the lab.
At the end of his chapter, Baron notes that biases in moral judgements have special significance because they affect others to a degree that biases in non-moral reasoning do not – especially when people find ways to rationalize the pursuit of their own selfish interests. This leads to the question addressed in the third part of this book, that is, whether research on moral inferences can help to identify good moral reasoning. The three chapters in the last part of the book focus on the output of moral reasoning, and offer three perspectives on whether cognitive psychology can help label these outputs as rational or irrational. Rini and Bruni critically examine the following idea: if we can show that some patterns of moral reasoning resemble patterns of non-moral reasoning that we know to be bad, can we say we have identified bad moral reasoning? Their response is essentially pessimistic: we cannot identify bad moral reasoning this way, because there is no consensual evidence about what counts as a bad pattern of non-moral reasoning. Schwitzgebel and Ellis offer a different perspective. They argue that rationalization is bad: it occurs when reasoners favor a conclusion because of epistemically irrelevant factors, and engage in a biased search for, and assessment of, justifications for this conclusion. Furthermore, they argue that high levels of knowledge and introspection do not protect against rationalization, and may in fact increase one’s false sense of confidence in one’s preferred conclusions. Finally, Rai suggests that seemingly irrational patterns of moral reasoning become rational once we take into account one primary function of moral reasoning, that is, making inferences about an actor’s potential as a future social partner. In particular, this perspective allows one to understand when and why considerations of intentionality may be discarded from moral reasoning without loss of function.
In sum, the ten chapters in this book offer new and complementary perspectives on the premises, the processes, and the conclusions of moral inferences. They nicely illustrate how the concepts and the methods used in the psychology of reasoning can refine and extend our understanding of morality. This, we believe, is a timely contribution at this juncture where the psychology of morality is ready to reconnect with the psychology of reasoning, and to fully incorporate recent developments in our understanding of moral and non-moral inferences.
References
Cohen Priva, U., & Austerweil, J. L. (2015). Analyzing the history of Cognition using Topic Models. Cognition, 135, 4–9.
De Neys, W. (2012). Bias and conflict: A case for logical intuitions. Perspectives on Psychological Science, 7, 128–138.
De Neys, W., & Bonnefon, J. F. (2013). The ‘whys’ and ‘whens’ of individual differences in thinking biases. Trends in Cognitive Sciences, 17, 172–178.
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning. Annual Review of Psychology, 59, 255–278.
Gray, K., Schein, C., & Ward, A. F. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General, 143, 1600–1615.
Gray, K., Waytz, A., & Young, L. (2012). The moral dyad: A fundamental template unifying moral judgment. Psychological Inquiry, 23, 206–215.
Greene, J. D. (2013). Moral tribes: Emotion, reason, and the gap between us and them. London: Penguin Press.
Greene, J. D. (2015). The rise of moral cognition. Cognition, 135, 39–42.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316, 998–1002.
Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York: Pantheon.
Paxton, J. M., & Greene, J. D. (2010). Moral reasoning: Hints and allegations. Topics in Cognitive Science, 2, 511–527.
Schein, C., Goranson, A., & Gray, K. (2015). The uncensored truth about morality. The Psychologist, 28, 982–985.
Shafir, E. (1993). Choosing versus rejecting: Why some options are both better and worse than others. Memory and Cognition, 21, 546–556.
Shafir, E., Simonson, I., & Tversky, A. (1993). Reason-based choice. Cognition, 49, 11–36.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3–22.
PART I
Inputs
2
IS MORALITY UNIFIED, AND DOES THIS MATTER FOR MORAL REASONING?
Geoffrey P. Goodwin
Abstract
Several recent debates within the literature on moral judgment hinge on whether the moral domain is unified....