Type II Error
A Type II Error occurs in hypothesis testing when the null hypothesis is not rejected even though it is false: the test fails to detect a real effect or difference. It is also known as a false negative, and its probability is denoted by β (beta).
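As a quick illustration of β, the sketch below (our own illustrative simulation, not drawn from any of the excerpts; all numbers are assumed) estimates the Type II error rate of a one-sided z-test by repeatedly sampling from a population where the null hypothesis is false and counting how often the test fails to reject:

```python
import random
from statistics import NormalDist, mean

random.seed(42)
alpha = 0.05
n, sigma, mu_true = 25, 1.0, 0.3          # H0: mu = 0 is false; the true mean is 0.3
z_crit = NormalDist().inv_cdf(1 - alpha)  # one-sided cutoff, about 1.645

trials = 2000
misses = 0
for _ in range(trials):
    sample = [random.gauss(mu_true, sigma) for _ in range(n)]
    z = mean(sample) / (sigma / n ** 0.5)  # z statistic computed under H0: mu = 0
    if z < z_crit:                         # failing to reject a false H0 is a Type II error
        misses += 1

beta_hat = misses / trials
print(round(beta_hat, 2))  # theory gives beta = Phi(1.645 - 1.5), roughly 0.56
```

The estimate hovers near the theoretical β for this setup; changing `mu_true` or `n` changes β, which is why β, unlike α, is not a single fixed number.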
Written by Perlego with AI-assistance
11 Key excerpts on "Type II Error"
- (Author)
- 2014 (Publication Date)
- Orange Apple (Publisher)
An example of this would be a test that shows a woman is not pregnant when, in reality, she is. Type II Error can be viewed as the error of excessive skepticism. In other words, a Type II Error means that a negative inference is actually false. The following table can be useful in understanding the concept.

| | Null Hypothesis (H0) is true | Alternative Hypothesis (H1) is true |
| --- | --- | --- |
| Fail to Reject Null Hypothesis | Right decision | Wrong decision: Type II Error (False Negative) |
| Reject Null Hypothesis | Wrong decision: Type I Error (False Positive) | Right decision |

Understanding Type I and Type II Errors. When an observer makes a Type I error in evaluating a sample against its parent population, he or she is mistakenly thinking that a statistical difference exists when in truth there is no statistical difference (or, to put it another way, the null hypothesis should not be rejected but was mistakenly rejected). For example, imagine that a pregnancy test has produced a positive result (indicating that the woman taking the test is pregnant); if the woman is actually not pregnant, then we say the test produced a false positive (assuming the null hypothesis, H0, was that she is not pregnant). A Type II Error, or a false negative, is the error of failing to reject a null hypothesis when the alternative hypothesis is the true state of nature. For example, a Type II Error occurs if a pregnancy test reports negative when the woman is, in fact, pregnant.

From the Bayesian point of view, a Type I error is one that looks at information that should not substantially change one's prior estimate of probability, but does. A Type II error is one that looks at information which should change one's estimate, but does not. (Though the null hypothesis is not quite the same thing as one's prior estimate, it is, rather, one's pro forma prior estimate.)
- eBook - PDF
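The 2×2 table above can be sketched as a tiny lookup. This is only an illustration of the four outcomes (not code from any of the excerpts):

```python
# The decision and the true state of nature jointly determine
# which cell of the 2x2 table we land in.
def classify(reject_h0: bool, h0_true: bool) -> str:
    if reject_h0 and h0_true:
        return "Type I error (false positive)"
    if not reject_h0 and not h0_true:
        return "Type II error (false negative)"
    return "correct decision"

# Pregnancy-test framing with H0: "she is not pregnant".
print(classify(reject_h0=False, h0_true=False))  # prints: Type II error (false negative)
```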
Statistics
Unlocking the Power of Data
- Robin H. Lock, Patti Frazer Lock, Kari Lock Morgan, Eric F. Lock, Dennis F. Lock (Authors)
- 2016 (Publication Date)
- Wiley (Publisher)
When we make a formal decision to “reject H0,” we generally are accepting some risk that H0 might actually be true. For example, we may have been unlucky and stumbled upon one of those “1 in a 1000” samples that are very rare to see when H0 holds but still are not impossible. This is an example of what we call a Type I error: rejecting a true H0. The other possible error to make in a statistical test is to fail to reject H0 when it is false and the alternative Ha is actually true. We call this a Type II Error: failing to reject a false H0. See Table 4.12. In medical terms we often think of a Type I error as a “false positive” – a test that indicates a patient has an illness when actually none is present, and a Type II Error as a “false negative” – a test that fails to detect an actual illness.

Example 4.23 Describe the consequences of making Type I and Type II Errors in each case. (a) In the light at night experiment where we test H0: L = D vs Ha: L > D. (b) In Example 4.20 where we have a mystery animal named X and test H0: X is an elephant vs Ha: X is not an elephant.

Solution (a) A Type I error is to reject a true H0. In the light at night study, a Type I error is to conclude that light at night increases weight gain when actually there is no effect. A Type II Error is to fail to reject a false H0. In this case, a Type II Error means the test based on our sample data does not convince us that light increases weight gain when it actually does. (b) If we see evidence (perhaps that X walks on two legs) that is so rare we conclude that X is not an elephant, and it turns out that X is an elephant (perhaps trained in a circus), we have made a Type I error. For a Type II Error, we might find evidence (perhaps having four legs) that is not unusual for an elephant, so we do not reject H0 and then discover that X is actually a giraffe.
- eBook - ePub
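A short simulation (our own sketch, with assumed numbers) shows the first point above: when H0 really is true, a test run at α = 0.05 rejects, and thus commits a Type I error, in roughly 5% of samples:

```python
import random
from statistics import NormalDist, mean

random.seed(1)
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided cutoff, about 1.96

n, sigma = 30, 1.0
trials = 4000
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]  # H0 (mu = 0) really is true
    z = abs(mean(sample)) / (sigma / n ** 0.5)
    if z >= z_crit:            # rejecting a true H0 is a Type I error
        rejections += 1

print(round(rejections / trials, 3))  # hovers near alpha = 0.05
```

Those roughly 5% of rejections are exactly the unlucky “1 in a 1000”-style samples the excerpt describes, just at the 1-in-20 level.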
- Donncha Hanna, Martin Dempster (Authors)
- 2012 (Publication Date)
- For Dummies (Publisher)
p value from a test is less than 0.05 (5 per cent). Therefore, the maximum acceptable chance of making a Type I error is 0.05 (5 per cent).

Considering the Type II Error

A Type II Error occurs when you fail to reject the null hypothesis (because the p value for the test is 0.05 or more), but the null hypothesis is false in reality. In this case you have concluded that there was no statistically significant effect when in fact one does exist but you failed to find it! You fail to reject the null hypothesis because the p value associated with your statistical test indicates that results from your sample would be found at least 5 per cent of the time if the null hypothesis was true. There are two possible reasons for this:
- The null hypothesis is true.
- The null hypothesis is not true, but your sample did not allow you to detect this.

If the null hypothesis is true, then you have made the correct conclusion. If the null hypothesis is not true, then you have made a Type II Error. But why might your sample not allow you to detect a false null hypothesis? There are many reasons for this. For example, if your null hypothesis is wrong, but only slightly wrong, then this will be difficult to detect and you need a large sample size to help you find this small difference (see Chapter 11 for a discussion of sample size and rejecting the null hypothesis). You won’t know whether your null hypothesis is actually true or not, so, as with the Type I error, you won’t know when you have made a Type II Error. Again, you just have to work with the knowledge that every time you conduct a statistical test there is a chance that you will make a Type II Error, and we can work out what this chance is.
- Lise DeShea, Larry E. Toothaker (Authors)
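The point about slightly-wrong nulls and sample size can be made quantitative. Assuming a one-sided z-test with known σ = 1, a small true effect of 0.2, and α = 0.05 (all illustrative numbers, not from the excerpt), β falls sharply as n grows:

```python
from statistics import NormalDist

Z = NormalDist()
alpha, sigma, delta = 0.05, 1.0, 0.2   # delta: a small, "slightly wrong" departure from H0
z_crit = Z.inv_cdf(1 - alpha)          # one-sided critical value

betas = {}
for n in (25, 100, 400):
    # P(fail to reject | H0 false) for a one-sided z-test with known sigma
    betas[n] = Z.cdf(z_crit - delta * n ** 0.5 / sigma)
    print(n, round(betas[n], 2))       # beta shrinks as n grows
```

With n = 25 the test misses this small effect most of the time; by n = 400 a Type II error is rare.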
- 2015 (Publication Date)
- Chapman and Hall/CRC (Publisher)
This decision could have been a Type I error, which occurs when we reject the null hypothesis when actually the null hypothesis is true in the population. This question says “significantly,” which means a null hypothesis was rejected. The only kind of error possible when we reject a null hypothesis is a Type I error.

Table 9.1 Two Possible Realities, Two Possible Decisions

| Decision on a Given Hypothesis Test | H0 is True | H0 is False |
| --- | --- | --- |
| Reject H0 | Type I error | Correct decision |
| Retain H0 | Correct decision | Type II Error |

Probability of a Type I Error

It may be alarming to you to think that we could be making a mistake and drawing erroneous conclusions whenever we perform a hypothesis test. It is possible because we depend on probability to test hypotheses. The next section will begin our discussion of the probabilities of errors and correct decisions, including the ways that researchers try to limit the chances of errors and at the same time try to increase the chances of correct decisions. It may surprise you to learn that we already have talked about the probability of committing a Type I error in a hypothesis test. Let’s take a look at an earlier figure, reproduced here as Figure 9.1. Figure 9.1 shows a standard normal distribution reflecting a reality in which the null hypothesis is true. To refresh your memory: we used this figure in the rat shipment example, where the null hypothesis said the population mean maze completion time was less than or equal to 33 seconds. We believed that we were sampling from a population of rats that would take longer than 33 seconds on average to complete the maze, so we had written an alternative hypothesis that said H1: µ > 33.
- eBook - ePub
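A minimal sketch of a one-tailed test like the rat-shipment example above, using Python's `statistics.NormalDist`. The sample mean, σ, and n below are our own assumed values, since the excerpt does not give the data:

```python
from statistics import NormalDist

# Assumed values for illustration; the excerpt does not provide the actual data.
mu0, sigma, n = 33.0, 5.0, 25   # H0: mu <= 33 (maze time in seconds), known sigma
xbar = 35.2                     # hypothetical observed sample mean
z = (xbar - mu0) / (sigma / n ** 0.5)
p_value = 1 - NormalDist().cdf(z)   # upper-tail p-value for H1: mu > 33

alpha = 0.05
print(round(z, 2), round(p_value, 3), "reject H0" if p_value < alpha else "retain H0")
```

If H0 is rejected here and the rats really are ordinary, that rejection is a Type I error; if H0 is retained and the rats really are slow, that retention is a Type II error.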
- Aviva Petrie, Paul Watson (Authors)
- 2013 (Publication Date)
- Wiley-Blackwell (Publisher)
Appendix E will guide you through the process.
6.4 Type I and Type II Errors
6.4.1 Making the wrong decision in a hypothesis test
You must recognize that the final decision whether or not to reject the null hypothesis may be incorrect. As a frame of reference, we discuss the common situation in which we are interested in comparing two population means using independent samples selected from these populations (see Section 7.4.1). The null hypothesis is that the two population means are equal or, equivalently, that the difference between these two population means is zero. Consider the example introduced in Section 6.3.1; we have the plasma magnesium levels for two groups of cattle, one kept indoors and the other put out on spring grass for the past week. A lower level of plasma magnesium in the outdoor cattle would suggest a risk of grass staggers. Our null hypothesis is that the mean values of plasma magnesium do not differ in the two populations from which we have taken our samples.
- We may find that the result of the test is significant. In this case, we reject the null hypothesis at the stated level of significance, and infer that the two population means differ. If this inference is incorrect, and in reality the two means are equal, then we have rejected the null hypothesis when we should not have rejected it (i.e. when it is true). We are making a Type I error (Table 6.1).
- Alternatively, we may find that the result of the test is not significant. Then, we do not reject the null hypothesis at the stated level of significance, so we cannot infer that the two population means are different. If this is incorrect, and in reality the two means differ, then we have not rejected the null hypothesis when we should have rejected it (i.e. when it is false). We are making a Type II Error (Table 6.1).
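The two possibilities above can be seen in a worked sketch. The plasma magnesium values below are invented for illustration (the book's data are not in the excerpt), and a large-sample normal approximation stands in for the exact two-sample test:

```python
from statistics import NormalDist, mean, stdev

# Invented plasma magnesium values (mmol/l) for illustration only.
indoor  = [2.2, 2.5, 2.3, 2.6, 2.4, 2.5, 2.3, 2.4, 2.6, 2.2]
outdoor = [2.0, 1.9, 2.2, 2.1, 1.8, 2.0, 2.1, 1.9, 2.0, 2.2]

# Normal approximation to the two-sample test of H0: equal population means.
se = (stdev(indoor) ** 2 / len(indoor) + stdev(outdoor) ** 2 / len(outdoor)) ** 0.5
z = (mean(indoor) - mean(outdoor)) / se
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))

# A significant result risks a Type I error; a non-significant one risks a Type II error.
print(round(z, 2), round(p_two_sided, 4))
```

With real data of this size one would use a t-test (as the book does); the z approximation is only to keep the sketch dependency-free.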
- eBook - ePub
Introductory Statistics
A Conceptual Approach Using R
- William B. Ware, John M. Ferron, Barbara M. Miller (Authors)
- 2013 (Publication Date)
- Routledge (Publisher)
n). When you set the level of significance to α, the probability of making a Type I error is whatever you set as α; you have total control over the probability of making a Type I error. If making a Type I error 5% of the time is unacceptable to you, then you can set α at 0.01, knowing that you will make Type I errors about 1% of the time. Note that in making these statements we are assuming you will conduct your tests in an appropriate manner. If you compute things incorrectly, or use a test that assumes a random sample when you don't have a random sample, the actual Type I error rate may not be equal to α.

Based on the conditions established for testing H0 in this example, we derived a decision rule as depicted in Figure 11.1. If we were to observe a sample mean between 94.12 and 105.88, we would not reject H0. However, if the sample mean we observed was either ≤94.12 or ≥105.88, we would reject H0. If we were to reject H0, we would have to consider the possibility that we might be making a Type I error.

However, what if we found a sample mean somewhere between 94.12 and 105.88? Although we would not reject H0, that does not necessarily mean that H0 is true; we might be making a Type II Error. Determining the probability of a Type II Error (β) is not quite as simple as determining the probability of a Type I error. To determine the actual value of β, we need to consider several things simultaneously. To take you through the logic of the process, we have constructed Figure 11.3. There are four panels in Figure 11.3. The first panel (a) is actually a repeat of Figure 11.1; it shows the distribution of the sample mean under the assumption of the null hypothesis, along with the two-tailed decision rule for rejecting H0. We will reject H0 if the observed sample mean is ≤94.12 or ≥105.88. When thinking about a Type II Error, it is important to understand that two things must happen. First, H0 must be false, and second, the decision must be not to reject H0.
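The panel logic can also be computed directly. Assuming the decision rule above comes from μ0 = 100 with a standard error of 3 (so that 100 ± 1.96 × 3 gives 94.12 and 105.88; the excerpt does not state these values) and picking an illustrative true mean μ1 = 106, β is simply the probability that the sample mean lands between the two cutoffs:

```python
from statistics import NormalDist

se = 3.0            # assumed standard error of the mean: 100 +/- 1.96*3 = 94.12, 105.88
lo, hi = 94.12, 105.88
mu1 = 106.0         # illustrative true mean, under which H0 (mu = 100) is false

dist = NormalDist(mu1, se)
beta = dist.cdf(hi) - dist.cdf(lo)   # P(retain H0 | true mean is mu1)
power = 1 - beta
print(round(beta, 3), round(power, 3))  # about 0.484 and 0.516
```

Moving `mu1` further from 100 shrinks β, which is the geometric point the four panels of Figure 11.3 are making.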
Let's deal first with H0 being false. In panel (b), we have removed the distribution under H0, but we still show the decision rule derived under H0.
- Bruce M. King, Patrick J. Rosopa, Edward W. Minium (Authors)
- 2018 (Publication Date)
- Wiley (Publisher)
What risk should be accepted for the two taken together? In general, one should translate the error into concrete terms and then decide what level of risk is tolerable. In the same manner, it is useful to translate the abstract conception of a Type II Error into practical consequences. Consider Dr. Brown’s problem again. A Type II Error would occur if she had retained the null hypothesis that the mean of the population of sixth-grade test scores was 85 when in fact they were really above standard or below standard. How important would it be to avoid this error? Once that judgment is made, it can be taken into account in designing the study. You will learn how to do that in Section 14.11. For general use, α = .05 and α = .01 make quite good sense. They tend to give reasonable assurance that the null hypothesis will not be rejected unless it really should be. At the same time, they are not so stringent as to raise unnecessarily the likelihood of retaining false hypotheses. Whatever the level of significance adopted, the decision should be made in advance of collecting the data.

14.4 The Power of a Test

In Dr. Brown’s problem (Chapter 12), she would be making a correct decision (i.e., rejecting a false null hypothesis) if she claimed that the mean mathematics achievement score of sixth graders in her school district was superior to the national norm, and in fact it really was superior. Because the probability of retaining a false null hypothesis is β, the probability of correctly rejecting a false null hypothesis is (1 − β):

(1 − β) = Pr(rejecting H0 when H0 is false)

The value of (1 − β) is called the power of the test: the probability of rejecting a false null hypothesis. Among several ways of conducting a test, the most powerful one is the one offering the greatest probability of rejecting H0 when it should be rejected.
- Lawrence Kupper, Brian Neelon, Sean M. O'Brien (Authors)
- 2010 (Publication Date)
- Chapman and Hall/CRC (Publisher)
H1 when, in fact, H0 is true; the probability of a Type I error is denoted as α = pr(test rejects H0 | H0 true). A “Type II” error occurs when the decision is made not to reject H0 when, in fact, H0 is false and H1 is true; the probability of a Type II Error is denoted as β = pr(test does not reject H0 | H0 false).

5.1.1.5 Power

The power of a statistical test is the probability of rejecting H0 when, in fact, H0 is false and H1 is true; in particular, POWER = pr(test rejects H0 | H0 false) = (1 − β). The Type I error rate α is controllable and is typically assigned a value satisfying the inequality 0 < α ≤ 0.10. For a given value of α, the Type II Error rate β, and hence the power (1 − β), will generally vary as a function of the values of population parameters allowable under a composite alternative hypothesis H1. In general, for a specified value of α, the power of any reasonable statistical testing procedure should increase as the sample size increases. Power is typically used as a very important criterion for choosing among several statistical testing procedures in any given situation.

5.1.1.6 Test Statistics and Rejection Regions

A statistical test of H0 versus H1 is typically carried out by using a test statistic. A test statistic is a random variable with the following properties: (i) its distribution, assuming the null hypothesis H0 is true, is known either exactly or to a close approximation (i.e., for large sample sizes); (ii) its numerical value can be computed using the information in a sample; and (iii) its computed numerical value leads to a decision either to reject, or not to reject, H0 in favor of H1. More specifically, for a given statistical test and associated test statistic, the set of all possible numerical values of the test statistic under H0 is divided into two disjoint subsets (or “regions”), the rejection region R and the non-rejection region R̄.
- eBook - ePub
Statistical Misconceptions
Classic Edition
- Schuyler Huck, Schuyler W. Huck (Authors)
- 2015 (Publication Date)
- Routledge (Publisher)
Chapter 8 and click on the link called “The Relationship Between Alpha and Beta Errors.” Then, follow the detailed instructions (prepared by this book’s author) on how to use the Java applet. By doing this assignment, you will demonstrate in a convincing fashion that the risks of Type I and Type II Errors can be decreased at the same time. This assignment’s Java applet will also help you understand that Type II Error risk is determined by several features of a study beyond the selected level of significance.

* Ouyang, R. (n.d.). Basic concepts of quantitative research: Inferential statistics. Retrieved March 23, 2007, from http://ksumail.kennesaw.edu/~rouyang/ED-research/i-statis.htm
† Asraf, R. M., & Brewer, J. K. (2004). Conducting tests of hypotheses: The need for an adequate sample size. Australian Educational Researcher, 31(1), 79–94.
* Appendix B contains references for all quoted material presented in this section.
† A Type I error is made if a true null hypothesis is rejected.
* This definition comes from the Merriam-Webster Online Dictionary. Retrieved March 25, 2007, from http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=nominal
† A computer simulation I conducted with μ1 = μ2, n1 = 25, and n2
- eBook - ePub
- Schuyler W. Huck (Author)
- 2008 (Publication Date)
- Routledge (Publisher)
There are two main reasons why the selected level of significance can be misleading as to the chance of a Type I error. One of these concerns underlying assumptions. The other deals with the number of tests being conducted.

If one or more of the assumptions underlying a statistical test are violated, the actual probability of a Type I error can be substantially higher or lower than the nominal level of significance. For example, if a t-test comparing the means from two unequally sized samples is conducted with α = .05, and if the assumption of equal population variances does not hold true, the actual probability of a Type I error can be greater than .35. That's seven times higher than the nominal level of significance! The Type I error rate can also be far smaller than .05.† A statistical test is said to be robust if it functions as it should, even if its underlying assumptions are violated. However, certain statistical tests are robust only in specific situations (e.g., when sample sizes are large and equal), while other statistical tests are never robust if their assumptions are violated.*

Even if a statistical test's underlying assumptions are valid, it still is possible for the level of significance to understate Type I error risk. This will happen if the test is applied more than once. Within the full set of tests being conducted, the probability of a Type I error occurring in one or more of the tests will exceed α, even if each test is conducted with a level of significance set equal to α. The phrase “inflated Type I error rate” describes this situation.

A coin-flipping analogy may help to illustrate why the chances of a Type I error get elevated over α when multiple tests are conducted. Let's consider flipping a fair coin, and let's further consider that it's bad to end up with the coin landing on its tails side. If we flip the coin just once, the probability of a bad result is .50. But what if we flip the coin twice? Now, the probability of getting tails (on the first flip, on the second flip, or on both flips) is .75. If the coin is flipped 10 times, the probability of getting at least 1 tail increases to .999.
- eBook - PDF
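The coin logic above is exactly the familywise Type I error formula: with k independent tests each run at level α, P(at least one Type I error) = 1 − (1 − α)^k. A short check:

```python
alpha = 0.05
for k in (1, 2, 5, 10):
    familywise = 1 - (1 - alpha) ** k   # P(at least one Type I error across k tests)
    print(k, round(familywise, 3))

# The coin analogy: at least one tail in 10 fair flips.
print(round(1 - 0.5 ** 10, 3))  # prints: 0.999
```

Ten tests at α = 0.05 already carry about a 40% chance of at least one false rejection, which is why multiple-comparison corrections exist.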
- Frederick Gravetter, Larry Wallnau (Authors)
- 2016 (Publication Date)
- Cengage Learning EMEA (Publisher)
Make a decision. If the obtained z-score is in the critical region, reject H0 because it is very unlikely that these data would be obtained if H0 were true. In this case, conclude that the treatment has changed the population mean. If the z-score is not in the critical region, fail to reject H0 because the data are not significantly different from the null hypothesis. In this case, the data do not provide sufficient evidence to indicate that the treatment has had an effect.

4. Whatever decision is reached in a hypothesis test, there is always a risk of making the incorrect decision. There are two types of errors that can be committed. A Type I error is defined as rejecting a true H0. This is a serious error because it results in falsely reporting a treatment effect. The risk of a Type I error is determined by the alpha level and therefore is under the experimenter's control. A Type II Error is defined as the failure to reject a false H0. In this case, the experiment fails to detect an effect that actually occurred. The probability of a Type II Error cannot be specified as a single value and depends in part on the size of the treatment effect. It is identified by the symbol β (beta).

5. When a researcher expects that a treatment will change scores in a particular direction (increase or decrease), it is possible to do a directional, or one-tailed, test. The first step in this procedure is to incorporate the directional prediction into the hypotheses. For example, if the prediction is that a treatment will increase scores, the null hypothesis says that there is no increase and the alternative hypothesis states that there is an increase. To locate the critical region, you must determine what kind of data would refute the null hypothesis by demonstrating that the treatment worked as predicted. These outcomes will be located entirely in one tail of the distribution, so the entire critical region (5% or 1% depending on α) will be in one tail.
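For reference, the one-tailed and two-tailed critical z-values implied by the α levels mentioned above can be computed directly; this is a generic sketch, not code from the excerpt:

```python
from statistics import NormalDist

Z = NormalDist()
for a in (0.05, 0.01):
    one_tail = Z.inv_cdf(1 - a)        # all of alpha in a single tail
    two_tail = Z.inv_cdf(1 - a / 2)    # alpha split across both tails
    print(a, round(one_tail, 3), round(two_tail, 3))
```

Because the one-tailed cutoff is less extreme (about 1.645 vs 1.96 at α = 0.05), a directional test has more power against effects in the predicted direction, at the cost of never detecting effects in the other direction.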
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), with each extract adding context and meaning to key research topics.