Statistical Evidence
eBook - ePub

Statistical Evidence

A Likelihood Paradigm

  1. 191 pages
  2. English

About this book

Interpreting statistical data as evidence, Statistical Evidence: A Likelihood Paradigm focuses on the law of likelihood, fundamental to solving many of the problems associated with interpreting data in this way. Statistics has long neglected this principle, resulting in a seriously defective methodology. This book redresses the balance, explaining why science has clung to a defective methodology despite its well-known defects. After examining the strengths and weaknesses of the work of Neyman and Pearson and the Fisher paradigm, the author proposes an alternative paradigm which provides, in the law of likelihood, the explicit concept of evidence missing from the other paradigms. At the same time, this new paradigm retains the elements of objective measurement and control of the frequency of misleading results, features which made the old paradigms so important to science. The likelihood paradigm leads to statistical methods that have a compelling rationale and an elegant simplicity, no longer forcing the reader to choose between frequentist and Bayesian statistics.


Information

Author
Richard Royall
Subject
Mathematics, Probability & Statistics
Publisher
CRC Press
Year
2017
Print ISBN
9781032359311
eBook ISBN
9781351414555

CHAPTER 1

The first principle

1.1 Introduction

In this chapter we distinguish between the specific question whose answer we seek and other important statistical questions that are closely related to it. We find the answer to our question in the simplest possible case, where the proper interpretation of statistical evidence is transparent. And we begin to test that answer with respect to intuition, or face-validity; consistency with other aspects of reasoning in the face of uncertainty (specifically, with the way new evidence changes probabilities); and operational consequences. We also examine some of the common examples that have been cited as proof that the answer we advocate is wrong. We observe two general and profound implications of accepting the proposed answer. These suggest that a radical reconstruction of statistical methodology is needed. Finally, to define the concept of statistical evidence more precisely, we illustrate the distinction between degrees of uncertainty, measured by probabilities, and strength of evidence, which is measured by likelihood ratios.

1.2 The law of likelihood

Consider a physician’s diagnostic test for the presence or absence of some disease, D. Suppose that experience has shown the test to be a good one, rarely producing misleading results. Specifically, the performance of the test is described by the probabilities shown in Table 1.1. The first row shows that when D is actually present, the test detects it with probability 0.95, giving an erroneous negative result with probability 0.05. The second row shows that when D is absent, the test correctly produces a negative result with probability 0.98, leaving a false positive probability of only 0.02.
Now suppose that a patient, Mr Doe, is given the test. On learning that the result is positive, his physician might draw one of the following conclusions:
Table 1.1 A physician’s diagnostic test for the presence or absence of disease D

                          Test result
                      Positive    Negative
Disease D   Present     0.95        0.05
            Absent      0.02        0.98
  1. Mr Doe probably does not have D.
  2. Mr Doe should be treated for D.
  3. The test result is evidence that Mr Doe has D.
Which, if any, of these conclusions is appropriate? Can any of them be justified? It is easy to see that under the right circumstances all three might be simultaneously correct.
Consider conclusion 1. It can be restated in terms of the probability that Mr Doe has D, given the positive test, Pr(D|+); it says that Pr(D|+) < 1/2. Whether this is true or not depends in part on the result (+) and the characteristics of the test (Table 1.1). But it also depends on the prior (before the test) probability of the condition, Pr(D). Bayes’s theorem shows that
Pr(D|+) = Pr(+|D) Pr(D) / [Pr(+|D) Pr(D) + Pr(+|not-D) Pr(not-D)]
        = 0.95 Pr(D) / [0.95 Pr(D) + 0.02 (1 − Pr(D))].
If D is a rare disease, so that Pr(D) is very small, then it will be true that Pr(D|+) is small and conclusion 1 is correct (as, for example, if Pr(D) = 0.001, so that Pr(D|+) = 0.045). On the other hand, if D were more common – say, with a prior probability of Pr(D) = 0.20 – then Pr(D|+) would be 0.92, and conclusion 1 would be quite wrong. The validity of conclusion 1 depends critically on the prior probability.
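The two posterior probabilities quoted above follow directly from Bayes’s theorem and the error rates in Table 1.1; a minimal sketch of the calculation (the function name is illustrative, not from the text):

```python
# Posterior probability of disease D given a positive test, via Bayes's
# theorem. Sensitivity (0.95) and false-positive rate (0.02) are the
# entries of Table 1.1.

def posterior_given_positive(prior, sens=0.95, false_pos=0.02):
    """Pr(D | +) for a given prior probability Pr(D)."""
    return sens * prior / (sens * prior + false_pos * (1 - prior))

# Rare disease: the posterior stays small, so conclusion 1 holds.
print(round(posterior_given_positive(0.001), 3))  # 0.045
# More common disease: the same positive result makes D probable.
print(round(posterior_given_positive(0.20), 2))   # 0.92
```

The same positive result thus supports opposite answers to the question "does Mr Doe probably have D?", depending entirely on the prior.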
Even if conclusion 1 is correct – say, Pr(D|+) = 0.045 – conclusion 2 might also be correct, and the physician might appropriately decide to treat for D even though it is unlikely that D is present. This might be the case when the treatment is effective if D is present but harmless otherwise, and when failure to treat a patient who actually has D is disastrous. But conclusion 2 would be wrong under different assumptions about the risks associated with the treatment, about the consequences of failure to treat when D is actually present, etc. It is clear that to evaluate conclusion 2 we need, in addition to the information required to evaluate conclusion 1, to know what actions are available and what their consequences are, both in the presence of D and in its absence.
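The reasoning behind conclusion 2 can be made concrete as an expected-loss comparison. The loss values below are hypothetical, chosen only to match the scenario described (treatment cheap and harmless, failure to treat disastrous); they do not come from the text:

```python
# Expected-loss sketch of the treatment decision, with hypothetical losses:
# treating a healthy patient costs little, while failing to treat a patient
# who actually has D is disastrous.

def expected_loss(action, p_disease,
                  loss_treat_healthy=1.0, loss_miss_disease=100.0):
    """Expected loss of 'treat' or 'wait' at posterior probability p_disease."""
    if action == "treat":
        return loss_treat_healthy * (1 - p_disease)  # mild cost if D absent
    return loss_miss_disease * p_disease             # disaster if D present

p = 0.045  # posterior after a positive test when D is rare
# Even at this low posterior, treating minimizes expected loss:
print(expected_loss("treat", p) < expected_loss("wait", p))  # True
```

With different loss values (say, a risky treatment) the inequality can reverse, which is exactly why conclusion 2 requires information beyond that needed for conclusion 1.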
But how about conclusion 3? The rule we will consider implies that it is valid, independently of prior probabilities, and without reference to what actions might be available or their consequences: the positive test result is evidence that Mr Doe has the disease. Furthermore the rule provides an objective numerical measure of the strength of that evidence.
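The numerical measure the rule supplies is the ratio of the two probabilities of a positive result in Table 1.1; a small sketch of that computation:

```python
# Likelihood ratio for the positive test result: the probability of
# observing "+" when D is present, divided by the probability of
# observing "+" when D is absent (both entries from Table 1.1).

pr_pos_given_D = 0.95      # sensitivity
pr_pos_given_not_D = 0.02  # false-positive rate

likelihood_ratio = pr_pos_given_D / pr_pos_given_not_D
print(likelihood_ratio)  # 47.5
```

Unlike the posterior probability, this ratio involves no prior and no loss values: it depends only on the test’s characteristics and the observed result.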
We are concerned here with the interpretation of a certain kind of observation as evidence in relation to a certain kind of hypothesis. The observation is of the form X = x, where X is a random variable and x is one of the possible values of X. We begin with hypotheses which, like the two in the example of Mr...

Table of contents

  1. Cover Page
  2. Half title
  3. Title Page
  4. Copyright Page
  5. Contents
  6. Preface
  7. 1 The first principle
  8. 2 Neyman-Pearson theory
  9. 3 Fisherian theory
  10. 4 Paradigms for statistics
  11. 5 Resolving the paradoxes from the old paradigms
  12. 6 Looking at likelihoods
  13. 7 Nuisance parameters
  14. 8 Bayesian statistical inference
  15. Appendix: The paradox of the ravens
  16. References
  17. Index