Public Policy in an Uncertain World

Analysis and Decisions
About this book

Public policy advocates routinely assert that "research has shown" a particular policy to be desirable. But how reliable is the analysis in the research they invoke? And how does that analysis affect the way policy is made, on issues ranging from vaccination to minimum wage to FDA drug approval? Charles Manski argues here that current policy is based on untrustworthy analysis. By failing to account for uncertainty in an unpredictable world, policy analysis misleads policy makers with expressions of certitude. Public Policy in an Uncertain World critiques the status quo and offers an innovation to improve how policy research is conducted and how policy makers use research.

Consumers of policy analysis, whether civil servants, journalists, or concerned citizens, need to understand research methodology well enough to properly assess reported findings. In the current model, policy researchers base their predictions on strong assumptions. But as Manski demonstrates, strong assumptions lead to less credible predictions than weaker ones. His alternative approach takes account of uncertainty and thereby moves policy analysis away from incredible certitude and toward honest portrayal of partial knowledge. Manski describes analysis of research on such topics as the effect of the death penalty on homicide, of unemployment insurance on job-seeking, and of preschooling on high school graduation. And he uses other real-world scenarios to illustrate the course he recommends, in which policy makers form reasonable decisions based on partial knowledge of outcomes, and journalists evaluate research claims more closely, with a skeptical eye toward expressions of certitude.

I
POLICY ANALYSIS
1
Policy Analysis with Incredible Certitude
TO BEGIN, I distinguish the logic and the credibility of policy analysis (Section 1.1) and cite arguments made for certitude (Section 1.2). I then develop a typology of practices that contribute to incredible certitude. I call these practices conventional certitudes (Section 1.3), dueling certitudes (Section 1.4), conflating science and advocacy (Section 1.5), wishful extrapolation (Section 1.6), illogical certitude (Section 1.7), and media overreach (Section 1.8). In each case, I provide illustrations.
1.1. The Logic and Credibility of Policy Analysis
Policy analysis, like all empirical research, combines assumptions and data to draw conclusions about a population of interest. The logic of empirical inference is summarized by the relationship:
assumptions + data → conclusions.
Data alone do not suffice to draw conclusions. Inference requires assumptions that relate the data to the population of interest. (One may ask what role theory plays in the logic of inference. Theory and assumptions are synonyms. I mainly use the latter term, reserving the former for broad systems of assumptions. Other synonyms for assumption are hypothesis, premise, and supposition.)
Holding fixed the available data, and presuming avoidance of errors in logic, stronger assumptions yield stronger conclusions. At the extreme, one may achieve certitude by posing sufficiently strong assumptions. The fundamental difficulty of empirical research is to decide what assumptions to maintain.
Given that strong conclusions are desirable, why not maintain strong assumptions? There is a tension between the strength of assumptions and their credibility. I have called this (Manski 2003, p. 1):
The Law of Decreasing Credibility: The credibility of inference decreases with the strength of the assumptions maintained.
This “law” implies that analysts face a dilemma as they decide what assumptions to maintain: Stronger assumptions yield conclusions that are more powerful but less credible.
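The dilemma can be made concrete with a toy numerical sketch in the spirit of partial identification. All numbers here are hypothetical, chosen only for illustration: we want the mean of a binary outcome in a population of ten, but four outcomes are unobserved. With no assumption about the missing values, the data yield only a bound; with the strong (and perhaps incredible) assumption that data are missing at random, they yield a point.

```python
# Hypothetical data: binary outcomes, some unobserved.
observed = [1, 0, 1, 1, 0, 1]   # the six outcomes we see
n_missing = 4                   # the four outcomes we do not see

n = len(observed) + n_missing
s = sum(observed)

# Weak assumption (nothing about the missing values): worst-case bound.
# The missing outcomes could all be 0, or all be 1.
lower = s / n                   # every missing outcome is 0
upper = (s + n_missing) / n     # every missing outcome is 1
print(f"credible but weak: mean lies in [{lower:.2f}, {upper:.2f}]")

# Strong assumption (outcomes are missing at random): point prediction.
point = s / len(observed)
print(f"strong but less credible: mean = {point:.2f}")
```

The same data support both statements; the point prediction is more powerful only because it rests on an assumption the bound does not require.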
I will use the word credibility throughout this book, but I will have to take it as a primitive concept that defies deep definition. The second edition of the Oxford English Dictionary (OED) defines credibility as “the quality of being credible.” The OED defines credible as “capable of being believed; believable.” It defines believable as “able to be believed; credible.” And so we come full circle.
Whatever credibility may be, it is a subjective concept. Each person assesses credibility on his or her own terms. When researchers largely agree on the credibility of certain assumptions or conclusions, they may refer to this agreement as “scientific consensus.” Persons sometimes push the envelope and refer to a scientific consensus as a “fact” or a “scientific truth.” This is overreach. Consensus does not imply truth. Premature scientific consensus sometimes inhibits researchers from exploring fruitful ideas.
Disagreements occur often. Indeed, they may persist without resolution. Persistent disagreements are particularly common when assumptions are nonrefutable—that is, when alternative assumptions are consistent with the available data. As a matter of logic alone, disregarding credibility, an analyst can pose a nonrefutable assumption and adhere to it forever in the absence of disproof. Indeed, he can displace the burden of proof, stating “I will maintain this assumption until it is proved wrong.” Analysts often do just this. An observer may question the credibility of a nonrefutable assumption, but not the logic of holding on to it.
To illustrate, American society has long debated the deterrent effect of the death penalty as a punishment for murder. Disagreement persists in part because empirical research based on available data has not been able to settle the question. With this background, persons find it tempting to pose their personal beliefs as a hypothesis, observe that this hypothesis cannot be rejected empirically, and conclude that society should act as if their personal belief is correct. Thus, a person who believes that there is no deterrent effect may state that, in the absence of credible evidence for deterrence, society should act as if there is no deterrence. Contrariwise, someone who believes that the death penalty does deter may state that, in the absence of credible evidence for no deterrence, society should act as if capital punishment does deter. I will discuss deterrence and the death penalty further in Chapter 2.
1.2. Incentives for Certitude
A researcher can illuminate the tension between the credibility and power of assumptions by posing alternative assumptions of varying credibility and determining the conclusions that follow in each case. In practice, policy analysis tends to sacrifice credibility in return for strong conclusions. Why so?
A proximate answer is that analysts respond to incentives. I have earlier put it this way (Manski 2007a, 7–8):
The scientific community rewards those who produce strong novel findings. The public, impatient for solutions to its pressing concerns, rewards those who offer simple analyses leading to unequivocal policy recommendations. These incentives make it tempting for researchers to maintain assumptions far stronger than they can persuasively defend, in order to draw strong conclusions.
The pressure to produce an answer, without qualifications, seems particularly intense in the environs of Washington, D.C. A perhaps apocryphal, but quite believable, story circulates about an economist’s attempt to describe his uncertainty about a forecast to President Lyndon B. Johnson. The economist presented his forecast as a likely range of values for the quantity under discussion. Johnson is said to have replied, ‘Ranges are for cattle. Give me a number.’
When a president as forceful as Johnson seeks a numerical prediction with no expression of uncertainty, it is understandable that his advisers feel compelled to comply.
Jerry Hausman, a longtime econometrics colleague, stated the incentive argument this way at a conference in 1988, when I presented in public my initial findings on policy analysis with credible assumptions: “You can’t give the client a bound. The client needs a point.” (A bound is synonymous with a range or an interval. A point is an exact prediction.)
Hausman’s comment reflects a perception that I have found to be common among economic consultants. They contend that policy makers are either psychologically unwilling or cognitively unable to cope with uncertainty. Hence, they argue that pragmatism dictates provision of point predictions, even though these predictions may not be credible.
This psychological-cognitive argument for certitude begins from the reasonable premise that policy makers, like other humans, have limited willingness and ability to embrace the unknown. However, I think it too strong to draw the general conclusion that “the client needs a point.” It may be that some persons think in purely deterministic terms. However, a considerable body of research measuring expectations shows that most make sensible probabilistic predictions when asked to do so; see Chapter 3 for further discussion and references. I see no reason to expect that policy makers are less capable than ordinary people.
Support for Certitude in Philosophy of Science
The view that analysts should offer point predictions is not confined to U.S. presidents and economic consultants. It has a long history in the philosophy of science.
Over fifty years ago, Milton Friedman expressed this perspective in an influential methodological essay. Friedman (1953) placed prediction as the central objective of science, writing (p. 5): “The ultimate goal of a positive science is the development of a ‘theory’ or ‘hypothesis’ that yields valid and meaningful (i.e. not truistic) predictions about phenomena not yet observed.” He went on to say (p. 10):
The choice among alternative hypotheses equally consistent with the available evidence must to some extent be arbitrary, though there is general agreement that relevant considerations are suggested by the criteria ‘simplicity’ and ‘fruitfulness,’ themselves notions that defy completely objective specification.
Thus, Friedman counseled scientists to choose one hypothesis (that is, make a strong assumption), even though this may require the use of “to some extent … arbitrary” criteria. He did not explain why scientists should choose a single hypothesis out of many. He did not entertain the idea that scientists might offer predictions under the range of plausible hypotheses that are consistent with the available evidence.
The idea that a scientist should choose one hypothesis among those consistent with the data is not peculiar to Friedman. Researchers wanting to justify adherence to a particular hypothesis sometimes refer to Ockham’s Razor, the medieval philosophical declaration that “plurality should not be posited without necessity.” The Encyclopaedia Britannica Online (2010) gives the usual modern interpretation of this cryptic statement, remarking that “the principle gives precedence to simplicity; of two competing theories, the simplest explanation of an entity is to be preferred.” The philosopher Richard Swinburne writes (1997, 1):
I seek … to show that—other things being equal—the simplest hypothesis proposed as an explanation of phenomena is more likely to be the true one than is any other available hypothesis, that its predictions are more likely to be true than those of any other available hypothesis, and that it is an ultimate a priori epistemic principle that simplicity is evidence for truth.
The choice criterion offered here is as imprecise as the one given by Friedman. What do Britannica and Swinburne mean by “simplicity”?
However one may operationalize the various philosophical dicta for choosing a single hypothesis, the relevance of philosophical thinking to policy analysis is not evident. In policy analysis, knowledge is instrumental to the objective of making good decisions. When philosophers discuss the logical foundations and human construction of knowledge, they do so without posing this or another explicit objective. Does use of criteria such as “simplicity” to choose one hypothesis among those consistent with the data promote good policy making? This is the relevant question for policy analysis. As far as I am aware, philosophers have not addressed it.
1.3. Conventional Certitudes
John Kenneth Galbraith popularized the term conventional wisdom, writing (1958, chap. 2): “It will be convenient to have a name for the ideas which are esteemed at any time for their acceptability, and it should be a term that emphasizes this predictability. I shall refer to these ideas henceforth as the conventional wisdom.” The entry in Wikipedia (2010) nicely put it this way:
Conventional wisdom (CW) is a term used to describe ideas or explanations that are generally accepted as true by the public or by experts in a field. The term implies that the ideas or explanations, though widely held, are unexamined and, hence, may be reevaluated upon further examination or as events unfold.… Conventional wisdom is not necessarily true.
I shall similarly use the term conventional certitude to describe predictions that are generally accepted as true, but that are not necessarily true.
CBO Scoring of Pending Legislation
In the United States today, conventional certitude is exemplified by Congressional Budget Office (CBO) scoring of pending federal legislation. I will use CBO scoring as an extended case study.
The CBO was established in the Congressional Budget Act of 1974. Section 402 states (Committee on the Budget, U.S. House of Representatives, 2008, 39–40):
The Director of the Congressional Budget Office shall, to the extent practicable, prepare for each bill or resolution of a public character reported by any committee of the House of Representatives or the Senate (except the Committee on Appropriations of each House), and submit to such committee—(1) an estimate of the costs which would be incurred in carrying out such bill or resolution in the fiscal year in which it is to become effective...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright
  5. Dedication
  6. Contents
  7. Preface
  8. Introduction
  9. I: Policy Analysis
  10. II: Policy Decisions
  11. Appendix A: Derivations for Criteria to Treat X-Pox
  12. Appendix B: The Minimax-Regret Allocation to a Status Quo Treatment and an Innovation
  13. Appendix C: Treatment Choice with Partial Knowledge of Response to Both Treatments
  14. References
  15. Index