Mathematics

Type I Error

Type I Error, also known as a false positive, occurs when a statistical test incorrectly rejects a true null hypothesis. In other words, it is the mistake of concluding that there is a significant effect or relationship when there isn't one in reality. This error is a fundamental concept in hypothesis testing and has implications in decision-making and research.
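To make the definition concrete, here is a minimal simulation sketch in Python (NumPy and SciPy assumed available; the 0.05 level and the sample size are arbitrary choices). It repeatedly tests a null hypothesis that is true by construction and counts how often the test rejects it; in the long run the rejection rate sits near the significance level α.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05               # significance level (chosen for illustration)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # H0 is true by construction: the population mean really is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:
        false_positives += 1   # a Type I error: rejecting a true H0

print(f"observed Type I error rate: {false_positives / n_experiments:.3f}")  # ~0.05
```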

Written by Perlego with AI-assistance

9 Key excerpts on "Type I Error"

  • Book cover image for: Statistical Hypothesis Testing and its Applications
    If the result of the test does not correspond with the actual state of nature, then an error has occurred; if the result of the test corresponds with the actual state of nature, then a correct decision has been made. There are two kinds of error, classified as Type I Error and Type II error, depending upon which hypothesis has incorrectly been identified as the true state of nature.
    Type I Error, also known as an error of the first kind, an α error, or a false positive: the error of rejecting a null hypothesis when it is actually true. Plainly speaking, it occurs when we are observing a difference when in truth there is none, thus indicating a test of poor specificity. An example of this would be if a test shows that a woman is pregnant when in reality she is not, or telling a patient he is sick when in fact he is not. A Type I Error can be viewed as the error of excessive credulity. In other words, a Type I Error means that a positive inference is actually false.
    Type II error, also known as an error of the second kind, a β error, or a false negative: the error of failing to reject a null hypothesis when in fact we should have rejected it. In other words, this is the error of failing to observe a difference when in truth there is one, thus indicating a test of poor sensitivity. An example of this would be if a test shows that a woman is not pregnant when in reality she is. A Type II error can be viewed as the error of excessive skepticism. In other words, a Type II error means that a negative inference is actually false. A table as follows can be useful in understanding the concept.
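The diagnostic-test example maps onto the specificity/sensitivity vocabulary above. The following Python sketch is illustrative only: the prevalence, sensitivity, and specificity figures are invented, not taken from the excerpt. It simulates a screening test and tallies false positives (poor specificity, the Type I analogue) and false negatives (poor sensitivity, the Type II analogue).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
prevalence = 0.10    # assumed fraction of truly positive cases
sensitivity = 0.90   # assumed P(test positive | condition present)
specificity = 0.95   # assumed P(test negative | condition absent)

condition = rng.random(n) < prevalence
test_positive = np.where(
    condition,
    rng.random(n) < sensitivity,        # true positives occur at the sensitivity rate
    rng.random(n) < (1 - specificity),  # false positives occur at 1 - specificity
)

false_positive_rate = test_positive[~condition].mean()    # Type I analogue
false_negative_rate = (~test_positive[condition]).mean()  # Type II analogue
print(f"false positive rate ~ {false_positive_rate:.3f}")  # ~0.05
print(f"false negative rate ~ {false_negative_rate:.3f}")  # ~0.10
```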
  • Book cover image for: Introductory Statistics for the Health Sciences
    This decision could have been a Type I Error, which occurs when we reject the null hypothesis when actually the null hypothesis is true in the population. This question says “significantly,” which means a null hypothesis was rejected. The only kind of error possible when we reject a null hypothesis is a Type I Error.
    Table 9.1 Two Possible Realities, Two Possible Decisions
                 True State of the World
                 H0 is True          H0 is False
    Reject H0    Type I Error        Correct decision
    Retain H0    Correct decision    Type II error
    It may be alarming to you to think that we could be making a mistake and drawing erroneous conclusions whenever we perform a hypothesis test. It is possible because we depend on probability to test hypotheses. The next section will begin our discussion of the probabilities of errors and correct decisions, including the ways that researchers try to limit the chances of errors and at the same time try to increase the chances of correct decisions.
    Probability of a Type I Error
    It may surprise you to learn that we already have talked about the probability of committing a Type I Error in a hypothesis test. Let’s take a look at an earlier figure, reproduced here as Figure 9.1. Figure 9.1 shows a standard normal distribution reflecting a reality in which the null hypothesis is true. To refresh your memory: we used this figure in the rat shipment example, where the null hypothesis said the population mean maze completion time was less than or equal to 33 seconds. We believed that we were sampling from a population of rats that would take longer than 33 seconds on average to complete the maze, so we had written an alternative hypothesis that said H1: µ > 33.
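The rat-maze setup translates directly into a one-sided test. Below is a minimal Python sketch of that test; the sample times are invented, since the excerpt gives only the hypotheses (H0: µ ≤ 33 vs. H1: µ > 33), not the data.

```python
from scipy import stats

# Hypothetical maze-completion times in seconds (the excerpt states only the
# hypotheses, so these numbers are made up for illustration).
times = [35.1, 36.4, 33.8, 37.0, 34.9, 36.2, 35.5, 34.0, 36.8, 35.3]

# H0: mu <= 33   vs.   H1: mu > 33   (one-sided, as in the excerpt)
result = stats.ttest_1samp(times, popmean=33, alternative="greater")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Reject H0 -- but if H0 were actually true, this would be a Type I error.")
```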
  • Book cover image for: Experimental Design and Data Analysis for Biologists
    What about the two errors?
    • A Type I Error is when we mistakenly reject a correct H0 (e.g. when we conclude from our sample and a t test that the population parameter is not equal to zero when in fact the population parameter does equal zero) and is denoted α. A Type I Error can only occur when H0 is true.
    • A Type II error is when we mistakenly accept an incorrect H0 (e.g. when we conclude from our sample and a t test that the population parameter equals zero when in fact the population parameter is different from zero). Type II error rates are denoted by β and can only occur when the H0 is false.
    [Figure 3.2: Statistical decisions and errors when testing null hypotheses; a two-way layout of the population situation (effect / no effect) against the statistical conclusion (reject H0 / retain H0), with Type I Error (effect detected; none exists) and Type II error (effect not detected) in the off-diagonal cells.]
    Both errors are the result of chance. Our random sample(s) may provide misleading information about the population(s), especially if the sample sizes are small. For example, two populations may have the same mean value but our sample from one population may, by chance, contain all large values and our sample from the other population may, by chance, contain all small values, resulting in a statistically significant difference between means. Such a Type I Error is possible even if H0 (µ1 = µ2) is true, it’s just unlikely. Keep in mind the frequency interpretation of P values also applies to the interpretation of error rates. The Type I and Type II error probabilities do not necessarily apply to our specific statistical test but represent the long-run probability of errors if we repeatedly sampled from the same population(s) and did the test many times.
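The long-run frequency interpretation is easy to check by simulation. This Python sketch (the sample size and seed are arbitrary choices) draws many pairs of samples from the same normal population, runs a two-sample t test on each pair, and counts how often the test falsely declares a difference; the rate hovers near the 5% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, n_per_group = 10_000, 10   # deliberately small samples, as the excerpt warns
rejections = 0

for _ in range(n_trials):
    # Both groups come from the same population, so H0 (mu1 = mu2) is true.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        rejections += 1   # a Type I error: an "effect" detected where none exists

print(f"long-run Type I error rate: {rejections / n_trials:.3f}")  # ~0.05
```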
  • Book cover image for: Experimental Design and Statistical Analysis for Pharmacology and the Biomedical Sciences
    Because we are relying on probability values, this raises the problem that we cannot be absolutely certain which of these possibilities is true! Consequently, whenever we draw conclusions from the results of statistical tests performed on experimental data, we are always at risk of making one of two errors and drawing the wrong conclusion; these are known as Type 1 and Type 2 errors. The Type 1 error is where we conclude that there is a real effect in our data when really there is not one! I argued above that if the resulting probability in support of the Null Hypothesis is very small (i.e. p < 0.05), then we generally reject the Null Hypothesis and are persuaded to accept the Alternate Hypothesis. This also means that where there is no effect on the population, there is a probability of 5% of making a Type 1 error. This value is often referred to as the α value. This means that if we repeat an experiment where we assume there is no effect on the population a number of times, then in 5% of those experiments (i.e. 1 in 20), we would expect to obtain a statistical result large enough to persuade us that there was a real effect even though there was not; those 5% of occasions would be associated with a Type 1 error. In contrast, a Type 2 error is where we conclude that there is not a real effect on our population when really there is one! Such an error may occur where we obtain a small test statistic, perhaps because there is a lot of experimental noise in our data in relationship to the effect size, such that the true effect of our experimental change becomes masked. It has been argued that the maximum acceptable probability of a Type 2 error is 20% (i.e. 1 in 5); this is known as the β value.
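The β side of this convention can also be estimated by simulation. The Python sketch below (effect size, noise level, and sample size are all assumed values, not from the excerpt) estimates the Type 2 error rate of a two-sample t test when a real effect of a given size is present, for comparison against the 20% maximum discussed above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_trials, n_per_group = 5_000, 20
effect_size = 0.8     # assumed true difference in means, in SD units
misses = 0

for _ in range(n_trials):
    control = rng.normal(0, 1, n_per_group)
    treated = rng.normal(effect_size, 1, n_per_group)  # a real effect exists
    _, p = stats.ttest_ind(control, treated)
    if p >= 0.05:
        misses += 1   # a Type 2 error: the real effect went undetected

beta = misses / n_trials
print(f"estimated beta: {beta:.3f}, power: {1 - beta:.3f}")
```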
  • Book cover image for: Statistical Misconceptions
    • Schuyler Huck, Schuyler W. Huck(Authors)
    • 2015(Publication Date)
    • Routledge
      (Publisher)
    There are two main reasons why the selected level of significance can be misleading as to the chance of a Type I Error. One of these concerns underlying assumptions. The other deals with the number of tests being conducted.
    If one or more of the assumptions underlying a statistical test are violated, the actual probability of a Type I Error can be substantially higher or lower than the nominal level of significance. For example, if a t-test comparing the means from two unequally sized samples is conducted with α = .05, and if the assumption of equal population variances does not hold true, the actual probability of a Type I Error can be greater than .35. That's seven times higher than the nominal level of significance! The Type I Error rate can also be far smaller than .05.
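The unequal-variances claim is easy to reproduce. The Python sketch below (the variance ratio and sample sizes are assumed values, chosen to make the effect visible) compares the pooled-variance t test with Welch's test when the smaller sample comes from the higher-variance population; the pooled test's actual Type I error rate climbs far above the nominal .05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_trials = 10_000
pooled_rejections = welch_rejections = 0

for _ in range(n_trials):
    # H0 is true: both population means are 0, but variances and sizes differ.
    small_noisy = rng.normal(0, 10, 5)    # small sample, large variance
    large_quiet = rng.normal(0, 1, 50)    # large sample, small variance
    _, p_pooled = stats.ttest_ind(small_noisy, large_quiet, equal_var=True)
    _, p_welch = stats.ttest_ind(small_noisy, large_quiet, equal_var=False)
    pooled_rejections += p_pooled < 0.05
    welch_rejections += p_welch < 0.05

print(f"pooled t test Type I rate: {pooled_rejections / n_trials:.3f}")  # far above .05
print(f"Welch t test Type I rate:  {welch_rejections / n_trials:.3f}")   # close to .05
```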
    A statistical test is said to be robust if it functions as it should, even if its underlying assumptions are violated. However, certain statistical tests are robust only in specific situations (e.g., when sample sizes are large and equal), while other statistical tests are never robust if their assumptions are violated.*
    Even if a statistical test's underlying assumptions are valid, it still is possible for the level of significance to understate Type I Error risk. This will happen if the test is applied more than once. Within the full set of tests being conducted, the probability of a Type I Error occurring in one or more of the tests will exceed α, even if each test is conducted with a level of significance set equal to α.
    The phrase "inflated Type I Error rate" describes this situation. A coin-flipping analogy may help to illustrate why the chances of a Type I Error get elevated over α in the situation where multiple tests are conducted. Let's consider flipping a fair coin, and let's further consider that it's bad to end up with the coin landing on its tails side. If we flip the coin just once, the probability of a bad result is .50. But what if we flip the coin twice? Now, the probability of getting tails (on the first flip, on the second flip, or on both flips) is .75. If the first coin is flipped 10 times, the probability of getting at least 1 tail increases to .999.
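Both the coin arithmetic and the inflated error rate follow from the same formula: with k independent tests each run at level α, the chance of at least one Type I error is 1 − (1 − α)^k. A quick Python check (the k values are chosen arbitrarily):

```python
alpha = 0.05

# Familywise Type I error rate for k independent tests, each at level alpha.
for k in (1, 2, 5, 10, 20):
    familywise = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests: P(at least one Type I error) = {familywise:.3f}")

# The coin analogy: P(at least one tail) = 1 - 0.5**flips
print(f"2 flips: {1 - 0.5**2:.2f}, 10 flips: {1 - 0.5**10:.3f}")  # .75 and ~.999
```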
  • Book cover image for: Statistics for Veterinary and Animal Science
    • Aviva Petrie, Paul Watson(Authors)
    • 2013(Publication Date)
    • Wiley-Blackwell
      (Publisher)
    (see Table 6.1).
    Table 6.1 Errors in hypothesis testing
                Reject H0           Do not reject H0
    H0 true     Type I Error        Correct decision
    H0 false    Correct decision    Type II error

    6.4.2 Probability of making a wrong decision

    It is crucial that you understand what each of these two errors represents, as both play a role in determining the optimal size of an experiment (see Section 13.3) – a critical design consideration.
    • The probability of making a Type I Error is the probability of incorrectly rejecting the null hypothesis; it is assessed by the P-value obtained from the test. The null hypothesis is rejected if the P-value is less than the significance level, often denoted by α (alpha) and commonly taken as 0.05. Thus the significance level is the maximum chance of making a Type I Error. If the P-value is equal to or greater than α, then we do not reject the null hypothesis, and no Type I Error can occur. Therefore, by choosing the significance level of the test to be α at the design stage of the study, we limit the probability of a Type I Error to at most α.
    • The probability of making a Type II error is usually designated by β (beta). It is the probability of not rejecting the null hypothesis when the null hypothesis is false. We should decide on a value of β that we regard as acceptable at the design stage of the experiment. β is affected by a number of factors, one of which is the sample size; the greater the sample size, the smaller β.
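The sample-size effect on β is straightforward to demonstrate. The sketch below uses statsmodels' power calculator for a two-sample t test (assuming statsmodels is installed; the effect size of 0.5 is an assumed value) and shows β shrinking as the group size grows.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5   # assumed standardized difference between the two means

# beta = 1 - power: the larger the sample, the smaller the Type II error rate.
for n in (10, 20, 50, 100, 200):
    power = analysis.power(effect_size=effect_size, nobs1=n, alpha=0.05)
    print(f"n = {n:3d} per group: beta = {1 - power:.3f}")
```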
  • Book cover image for: Applied Statistics for Business and Economics
    • Robert M. Leekley(Author)
    • 2010(Publication Date)
    • CRC Press
      (Publisher)
    And the rules of evidence are intended to assure a low probability that you will reject the null hypothesis when it is true. This sort of error—rejecting the null hypothesis when it is true—is called a Type I Error, and its probability is α.
    Figure 8.1 Decision making with incomplete information
                                  The Truth
    Your decision                 Innocent              Guilty
    H0: Innocent                  Correct (1 – α)       Type II error (β)
    Ha: Guilty                    Type I Error (α)      Correct (1 – β)
    We want α to be small. Still, we need to recognize that the harder we make it to convict an innocent person, the harder we make it to convict a guilty person too. This sort of error—failing to reject the null hypothesis when it is false—is called a Type II error, and its probability is β. And, for a given amount of information, the smaller we make α, the larger we make β. Notice that we either reject H0 or we fail to reject H0. If we fail to reject H0, it could be that this is because H0 is true. It could also be because there was simply not enough evidence. You may not believe that the defendant is actually innocent. But if there is not enough evidence to establish his guilt beyond a reasonable doubt, you fail to convict.
    8.2 A Two-Tailed Test for the Population Proportion
    8.2.1 The Null and Alternative Hypotheses
    Suppose someone claims that 12% of all college students are left-handed. This is a claim about a population parameter, πLH, which may or may not be true. We can test it in much the same way we tested the claim of innocence in the trial. A claim that we are “testing” must be the null hypothesis. The null hypothesis is H0: πLH = 0.12. The alternative hypothesis is what we will believe if we succeed in rejecting the null hypothesis. A general, “two-tailed” alternative would be simply that the null hypothesis is wrong. That is, Ha: πLH ≠ 0.12. The first step in testing hypotheses is always to write down your null and alternative hypotheses.
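That two-tailed proportion test can be run directly. A minimal Python sketch follows; the sample of 200 students with 31 left-handers is invented for illustration, and only the hypotheses (H0: π = 0.12 vs. Ha: π ≠ 0.12) come from the excerpt.

```python
from scipy import stats

# Hypothetical data: 31 left-handers observed in a sample of 200 students.
k, n = 31, 200

# H0: pi = 0.12   vs.   Ha: pi != 0.12   (two-tailed, as in the excerpt)
result = stats.binomtest(k, n, p=0.12, alternative="two-sided")
print(f"sample proportion = {k / n:.3f}, p-value = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Reject H0 (risking a Type I error if pi really is 0.12).")
else:
    print("Fail to reject H0 (risking a Type II error if pi differs from 0.12).")
```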
  • Book cover image for: Introductory Statistics: A Conceptual Approach Using R
    • William B. Ware, John M. Ferron, Barbara M. Miller(Authors)
    • 2013(Publication Date)
    • Routledge
      (Publisher)
    Considering our three analyses independently, we would tentatively conclude that Asian students seem to come to school better prepared than the general population in both reading and math, but appear to be similar to the general population with regard to general knowledge. Though these decisions are based on our analyses, they are also based on sample data; we do not really know the “truth” in the population. Thus, we need to consider the possibility that our decisions might be wrong, noting particular kinds of errors we might make.

    POSSIBLE DECISION ERRORS IN HYPOTHESIS TESTING

    The unfortunate reality of the situation is that no matter what decision you make about the tenability of H0, your decision may be wrong. On the one hand, if you reject H0 when testing at the .05 level, it may be that the null hypothesis is actually true. That is, you may have obtained one of those “bad” random samples (i.e., non-representative) that you will get 5% of the time. Recall that random samples are simply random samples; we hope they are representative, but we know that any one random sample is not necessarily representative. If this is the case, you will have made a Type I Error. On the other hand, you may find that while your data provide insufficient evidence to reject H0, it may be the case that H1 is actually true and H0 is not true. For one reason or another, however, you have been unable to reject H0. In this case, you will have made a Type II error. The possibilities are depicted in Table 11.1.
    Table 11.1 Possible outcomes when testing a null hypothesis against an alternative hypothesis: Type I and Type II errors
                                      The “truth” in the population
    Your decision                     H0 true                 H0 false
    Reject H0                         Type I Error (α)        Good decision ☺
    Do not reject H0                  Good decision ☺         Type II error (β)
    The probability of making a Type I Error is set at α, which is also called the level of significance. The probability of making a Type II error is β. In discussing each of these types of errors and their interrelationship, we will return to the first example in this chapter, the aspiring astronaut example. As you may recall, the process of hypothesis testing consists of a number of steps: specifying H0, setting the significance level, and examining the distribution of the sample statistic under H0, which is based on the sample size (n
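Those steps (specify H0, set α, examine the sample statistic under H0, then decide) can be traced in a few lines. A hedged Python sketch follows; the sample of scores and the general-population mean of 50 are invented, since the excerpt gives no numbers.

```python
import numpy as np
from scipy import stats

# Step 1: specify the hypotheses. H0: mu = 50 vs. H1: mu != 50
# (50 is an assumed general-population mean, purely for illustration).
mu_0 = 50

# Step 2: set the significance level.
alpha = 0.05

# Hypothetical sample of scores.
rng = np.random.default_rng(5)
scores = rng.normal(53, 10, 40)

# Step 3: examine the sample statistic under H0 via a one-sample t test.
t_stat, p_value = stats.ttest_1samp(scores, popmean=mu_0)

# Step 4: decide, remembering which error is now the risk.
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject H0 (a Type I error is the risk).")
else:
    print(f"p = {p_value:.4f}: retain H0 (a Type II error is the risk).")
```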
  • Book cover image for: Statistical Reasoning in the Behavioral Sciences
    • Bruce M. King, Patrick J. Rosopa, Edward W. Minium(Authors)
    • 2018(Publication Date)
    • Wiley
      (Publisher)
    In the meantime, we urge you to include such measures in your reports. After all, the most important product of a statistical test is not whether one rejects the null hypothesis, which we all know is probably false anyway, but the magnitude of the difference between µhyp and µtrue. Grissom and Kim (2012) provide in-depth coverage of effect size measures for various statistical procedures, including those involving quantitative and qualitative outcomes.
    14.3 Errors in Hypothesis Testing
    There are, so to speak, two “states of nature”: either the null hypothesis, H0, is true or it is false. Similarly, there are two possible decisions: we can reject the null hypothesis or we can retain it. Taken in combination, there are four possibilities. They are diagrammed in Table 14.1. If the null hypothesis is true and we retain it, or if it is false and we reject it, we have made a correct decision. Otherwise, we have made an error. Notice that there are two kinds of errors. They are called Type I Error and Type II error. If we reject H0 and in fact it is true, we have made a Type I Error: the rejection of a true null hypothesis. Consider the picture of the sampling distribution shown in Figure 14.1 (for simplicity’s sake, we consider the sampling distribution of X̄ rather than t). It illustrates a situation that might arise if we were testing the hypothesis H0: µX = 150 and using a two-tailed test at the 5% significance level. Suppose our obtained sample mean is 146, which leads us to reject H0. The logic in rejecting H0 is that if the null hypothesis is true, a sample mean this deviant would occur less than 5% of the time. Therefore, it seems more reasonable for us to believe that this sample mean came from a population with a mean different from that specified in H0.
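The rejection logic can be made numeric. The Python sketch below assumes a population standard deviation of 14 and a sample size of 49 (both invented; the excerpt states only the hypothesized mean of 150, the observed mean of 146, and the 5% two-tailed level) and checks whether 146 falls in the rejection region.

```python
from scipy import stats

mu_0, x_bar = 150, 146    # hypothesized mean and observed sample mean (from the excerpt)
sigma, n = 14, 49         # assumed population SD and sample size (not in the excerpt)
alpha = 0.05

se = sigma / n ** 0.5                     # standard error of the mean = 2.0
z = (x_bar - mu_0) / se                   # z = -2.0
z_crit = stats.norm.ppf(1 - alpha / 2)    # two-tailed critical value ~ 1.96
p_value = 2 * stats.norm.cdf(-abs(z))     # two-tailed p-value ~ 0.046

print(f"z = {z:.2f}, critical = ±{z_crit:.2f}, p = {p_value:.3f}")
if abs(z) > z_crit:
    print("Reject H0; if mu really is 150, this is a Type I error (probability alpha).")
```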
Index pages curate the most relevant extracts from our library of academic textbooks. Each page has been created using an in-house natural language model (NLM) to add context and meaning to key research topics.