Mathematics

Binomial Distribution

The binomial distribution is a probability distribution that describes the number of successes in a fixed number of independent trials, each with the same probability of success. It is characterized by two parameters: the number of trials and the probability of success on each trial. The distribution is widely used in statistics and probability theory to model various real-world phenomena.

Written by Perlego with AI-assistance
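
As a quick, hedged illustration of the definition above, the sketch below evaluates the binomial probability mass function for assumed example parameter values (n = 10 trials, p = 0.3); these numbers are illustrative only and not taken from any excerpt.

```python
# Minimal sketch of the binomial probability mass function.
# The parameter values n = 10 and p = 0.3 are assumed for illustration.
from math import comb

def binomial_pmf(x: int, n: int, p: float) -> float:
    """P(X = x) = C(n, x) * p**x * (1 - p)**(n - x)."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

if __name__ == "__main__":
    n, p = 10, 0.3
    for x in range(n + 1):
        print(x, round(binomial_pmf(x, n, p), 4))
```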

8 Key excerpts on "Binomial Distribution"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.
  • Statistics for Business

    ...10 Discrete Probability Distribution: Binomial and Poisson Distributions 10.1 Introduction We can define a probability distribution as the relative frequency distribution that should theoretically occur for observations from a given population. In business and other contexts, it can be helpful to proceed from (1) a basic understanding of how a natural process seems to operate in generating events to (2) identifying the probability that a given event may occur. By using a probability distribution as a model that represents the possible events and their respective likelihoods of occurrence, we can make more effective decisions and preparations in dealing with the events that the process is generating. 10.2 Binomial Distribution The Binomial Distribution is one of the most widely used discrete distributions. It deals with consecutive trials, each of which has two possible outcomes, and relies on what is known as the ‘Bernoulli process’. 10.2.1 Characteristics of a Bernoulli Process (1) There are two or more consecutive trials. (2) In each trial, there are just two possible outcomes (success or failure). (3) The trials are independent. (4) The probability of success is constant across all trials. 10.2.2 Definition of Binomial Distribution The Binomial Distribution is defined as $P(X = x) = \binom{n}{x} p^x q^{n-x}$, x = 0, 1, 2, …, n, where n is the number of trials, x is the number of successes, p is the probability of success, and q is the probability of failure (q = 1 − p). The same can be expressed in tabular form, listing each value of x against its probability; from the table, it is clear that the probabilities for x = 0, 1, 2, …, n are the successive terms of the binomial expansion of $(p + q)^n = 1^n = 1$ (since p + q = 1). The two constants, p and n, are known as the parameters of the distribution. NOTE: It is otherwise called a ‘Bernoulli distribution’ or ‘finite discrete distribution’...
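
To make the definition in this excerpt concrete, here is a small sketch (not from the book) checking that the probabilities $\binom{n}{x} p^x q^{n-x}$ are the terms of $(p + q)^n$ and therefore sum to 1, using assumed values n = 5 and p = 0.4.

```python
# Sketch: binomial probabilities as the terms of the expansion (p + q)**n.
# n = 5 and p = 0.4 are assumed example values, not taken from the excerpt.
from math import comb, isclose

n, p = 5, 0.4
q = 1 - p
terms = [comb(n, x) * p ** x * q ** (n - x) for x in range(n + 1)]
print(terms)
assert isclose(sum(terms), (p + q) ** n)  # both sides equal 1
```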

  • Statistics for the Behavioural Sciences

    An Introduction to Frequentist and Bayesian Approaches

    • Riccardo Russo (Author)
    • 2020 (Publication Date)
    • Routledge (Publisher)

    ...5  Probability distributions and the Binomial Distribution 5.1 Introduction The main aim of this chapter is to provide a description of the characteristics of the Binomial Distribution and its use in testing hypotheses. This is a particularly useful distribution. Consider a situation where a series of independent events occurs (e.g., a fair coin is tossed four times), and the outcome of each event can be either a success (e.g., “Head”) or a failure (e.g., “Tail”). We can then count the number of successes that can be obtained when a coin is tossed four times (i.e., 0, 1, 2, 3, and 4), and calculate the probability of obtaining each one of these five outcomes. The Binomial Distribution describes the probability of each of the outcomes of the above discrete random variable, i.e., “number of successes obtained when a coin is tossed four times”. This brief description of the Binomial Distribution may seem quite obscure at this stage. The introduction of a series of concepts will provide enough background for a more comprehensive presentation of this important probability distribution. We will also describe how the Binomial Distribution can be used in testing a hypothesis. For example, if you want to test whether a coin is biased you could simply toss the coin 20 times, record the number of “Heads” (e.g., 16), and compare this value with the Binomial Distribution of the discrete random variable “number of Heads obtained when a fair coin is tossed 20 times”. If the observed value is somehow at odds with the values given by the Binomial Distribution of the above variable, then there is good evidence that the coin is biased...
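
A rough sketch of the coin-bias check described in this excerpt: it computes the probability of observing at least 16 heads in 20 tosses of a fair coin. The one-tailed comparison is an assumption made for illustration, not the book's exact procedure.

```python
# Sketch: how surprising are 16 heads in 20 tosses of a fair coin?
# The one-tailed tail probability P(X >= 16) is used here for illustration.
from math import comb

n, p, observed = 20, 0.5, 16
tail = sum(comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(observed, n + 1))
print(f"P(X >= {observed}) = {tail:.4f}")  # roughly 0.006, i.e. rare for a fair coin
```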

  • Statistics

    The Essentials for Research

    ...9 The Binomial Distribution and its Normal Approximation In this chapter we shall apply the concepts of sampling and probability to a variety of problems involving binomial populations. These ideas are fundamental to statistical inference and they provide a continuously recurring theme for the remainder of the text. 9.1 Theoretical Sampling Distributions Based on Binomial Populations A binomial population is composed of two mutually exclusive classes of discrete events where each of the events within a class has the same probability of occurrence. If a coin is tossed 500 times, one class of events (or outcomes) is heads, the other tails. These two classes exhaust the possibilities and they are mutually exclusive. If the probability of heads is the same on every toss, and if the probability of tails is the same on every toss, then we have a binomial population. All of the Republicans and Democrats in the United States constitute a binomial population; so does the infinite number of possible die tosses where the outcome of each toss is classified as either a “6” or a “no 6.” Notice that a binomial population can be composed of a finite or an infinite number of events, and that the probability of obtaining an event in one of the classes is not necessarily the same as the probability of obtaining an event in the other class. Imagine a binomial population consisting of an infinite number of tosses of an unbiased coin. Suppose we select samples of four tosses (n = 4) from such a population, and we continue to select such samples until we have drawn an indefinitely large number. Each sample of four tosses will contain either 0 tails, 1 tail, 2 tails, 3 tails, or 4 tails; there are, then, five possible outcomes. In the last chapter we showed how the relative frequency, or probability, of each of these outcomes can be determined from the expansion of $(p + q)^4$...
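
The five outcome probabilities mentioned in this excerpt (0 to 4 tails in samples of four tosses of an unbiased coin) can be sketched directly from the expansion of $(p + q)^4$; the code below assumes p = q = 0.5.

```python
# Sketch: probabilities of 0-4 tails in a sample of four tosses of a fair coin,
# i.e. the terms of (p + q)**4 with p = q = 0.5.
from math import comb

p = q = 0.5
probs = {tails: comb(4, tails) * p ** tails * q ** (4 - tails) for tails in range(5)}
print(probs)  # {0: 0.0625, 1: 0.25, 2: 0.375, 3: 0.25, 4: 0.0625}
```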

  • Introductory Probability and Statistics

    Applications for Forestry and Natural Sciences (Revised Edition)

    • Robert Kozak, Antal Kozak, Christina Staudhammer, Susan Watts (Authors)
    • 2019 (Publication Date)

    ...As in binomial experiments, the probability of failure is q = 1 – p. The probability function of the geometric distribution is defined by only one parameter, p, as $P(X = x) = pq^{x-1}$, x = 1, 2, 3, …. Example 5.12. A bag contains several thousand white pine seeds, 10% of which are empty. What is the probability of finding the first empty seed the fifth time we cut a seed open? An extension of the geometric distribution occurs when trials are repeated until a fixed number of successes, k, occurs. Such a random experiment is called a negative binomial experiment and its probabilities are described by the negative Binomial Distribution. Like a geometric experiment, a negative binomial experiment possesses all the properties of a binomial experiment except that the number of trials is not fixed. The negative binomial random variable, X, represents the number of independent trials required to produce k successes, where p is the probability of success and q = 1 – p is the probability of failure. In developing a general equation, consider that to obtain the k-th success on the x-th trial, the k-th success must be preceded by k – 1 successes and x – k failures. The k – 1 successes and x – k failures can be arranged in $\binom{x-1}{k-1}$ ways. The probability $p^k q^{x-k}$ is multiplied by this number of possible arrangements to obtain the probability of the k-th success occurring on the x-th trial. The general form of the negative binomial probability distribution function is therefore $P(X = x) = \binom{x-1}{k-1} p^k q^{x-k}$, x = k, k + 1, k + 2, …. The parameters p and k define the negative Binomial Distribution. Note that, when k = 1, the above equation reduces to the geometric distribution because the number of arrangements of one success preceded by x – 1 failures is simply 1. In other words, the geometric distribution can be said to be a special case (where k = 1) of the negative Binomial Distribution. Example 5.13. Consider the white pine seeds described in Example 5.12...
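
A short sketch of the geometric and negative binomial probabilities described in this excerpt, using the white-pine-seed figure from Example 5.12 (p = 0.1 empty seeds); the function names below are illustrative, not the book's.

```python
# Sketch of the geometric and negative binomial probability functions.
# p = 0.1 comes from Example 5.12; the function names are illustrative.
from math import comb

def geometric_pmf(x: int, p: float) -> float:
    """First success on trial x: p * q**(x - 1)."""
    return p * (1 - p) ** (x - 1)

def negative_binomial_pmf(x: int, k: int, p: float) -> float:
    """k-th success on trial x: C(x-1, k-1) * p**k * q**(x - k)."""
    return comb(x - 1, k - 1) * p ** k * (1 - p) ** (x - k)

p = 0.1
print(geometric_pmf(5, p))             # first empty seed on the 5th cut, ~0.0656
print(negative_binomial_pmf(5, 1, p))  # k = 1 reduces to the geometric case
```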

  • Statistical Misconceptions
    • Schuyler W. Huck (Author)
    • 2015 (Publication Date)
    • Routledge (Publisher)

    ...Indeed, the absolute difference between the numbers of heads and tails tends to become larger as the number of tosses increases. This surprising fact can be convincingly demonstrated using computer simulation. Why This Misconception Is Dangerous Probability distributions are important. They are centrally connected, for example, to confidence intervals that are built around sample statistics. Moreover, the p-value involved in hypothesis testing comes about by comparing a test statistic to an appropriate probability distribution. Because probability distributions are used so frequently in statistics, you are better off if you are able to visualize them and know how their shapes are influenced by various factors. One such factor is N, the number of observations. The Binomial Distribution is probably the easiest probability distribution to understand, especially when the probability, p, of the outcome being focused on is equal to .50 (as is the case if we count the number of heads that turn up when a fair coin is flipped several times). However, to understand how the Binomial Distribution changes as a function of N, you need to realize that the probability of observing a result that exactly matches Np is different from the probability of observing a result that approximates Np. If you don’t understand this difference, you are likely to be confused by illustrations or tables of the Binomial Distribution, prepared for various values of N, because the likelihood of a result turning out exactly equal to Np goes down (not up) as N increases. Undoing the Misconception If you flip a fair coin an even number of times, the probability of getting as many heads as tails is shown in Table 5.1.1. Note that as the number of coin flips, N, increases, the probability of observing “equality” decreases. If you flip that same fair coin an even number of times, the probability of getting a result that approximates the expected value (Np) goes up as N increases...
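
The excerpt's point about N can be sketched numerically: the probability of exactly N/2 heads falls as N grows, while the probability of a result near Np rises. The "within 5% of N" window below is an assumed reading of "approximately Np", chosen only for illustration.

```python
# Sketch: exact equality of heads and tails becomes rarer as N grows,
# while results close to Np become more likely. The +/- 5% window is assumed.
from math import comb

for n in (20, 100, 1000):
    p_exact = comb(n, n // 2) * 0.5 ** n                     # exactly N/2 heads
    lo, hi = int(0.45 * n), int(0.55 * n)                    # within 5% of N
    p_approx = sum(comb(n, x) * 0.5 ** n for x in range(lo, hi + 1))
    print(n, round(p_exact, 4), round(p_approx, 4))
```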

  • Statistical Methods for Geography

    ...The $\binom{n}{x}$ part of the equation represents the number of such outcomes in the sample space; it is the number of possible rearrangements of x successes and n − x failures. You should recognize that for given values of n and p, we can generate a histogram by using this formula to generate the expected frequencies associated with different values of x. This histogram, unlike those in the previous chapter, is not based upon observed data. This is instead a theoretical histogram. It is also known as the binomial probability distribution, and it reveals how likely particular outcomes are. For example, suppose that the probability that a surveyed resident is a newcomer to the neighborhood is p = 0.2. Then the probability that our survey of four residents will result in a given number of newcomers is $P(X = x) = \binom{4}{x}(0.2)^x(0.8)^{4-x}$ (3.4). The probabilities may be thought of as relative frequencies. If we took repeated surveys of four residents, 40.96% of the surveys would yield no newcomers, 40.96% would reveal one newcomer, 15.36% would reveal two newcomers, 2.56% would yield three newcomers, and 0.16% would result in four newcomers. Note that the probabilities or relative frequencies sum to one. The Binomial Distribution depicted in Figure 3.1 portrays these results graphically. If we multiplied the vertical scale by n, the histogram would represent the absolute frequencies expected in each category. [Figure 3.1: Binomial Distribution with n = 4, p = 0.2] While the actual number of newcomers is determined from the survey, we can also define an expected value, or theoretical mean. In our present example, we would expect, on average, two-tenths of the four people to be newcomers. The expected value is therefore given simply as np, and in this example is equal to (4)(0.2) = 0.8. If this ‘experiment’ were repeated a large number of times, sometimes we would observe no newcomers, sometimes we would observe one newcomer, etc. The average result of a large number of such experiments would be 0.8 newcomers...
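
The relative frequencies quoted in this excerpt (40.96%, 40.96%, 15.36%, 2.56%, 0.16%) follow directly from equation (3.4) with n = 4 and p = 0.2; a minimal sketch:

```python
# Sketch reproducing the newcomer probabilities for n = 4 residents, p = 0.2.
from math import comb

n, p = 4, 0.2
for x in range(n + 1):
    print(x, comb(n, x) * p ** x * (1 - p) ** (n - x))
print("expected value np =", n * p)  # 0.8 newcomers per survey of four residents
```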

  • Statistics in Psychology

    An Historical Perspective

    ...De Moivre made extensive use of the method in his own work, and it was his Approximatio, first printed and circulated to some friends in 1733, that links the binomial to what we now call the normal distribution. The Approximatio is included in the second (1738) and third (1756) editions of the Doctrine. [Fig. 6.1: The Binomial Distribution for N = 7] It should be mentioned that a Scottish mathematician, James Gregory (1638–1675), working at the time (1664–1668) in Italy, derived the binomial expansion and produced important work on the mathematics of infinite series, discovered quite independently of Newton. The Poisson Distribution Before the structure of the normal distribution is examined, the work of Siméon-Denis Poisson (1781–1840) on a useful special case of the binomial will be described. The Ecole Polytechnique was founded in Paris in 1794. It was the model for many later technical schools, and its methods inspired the production of many student texts in mathematics and engineering which are the forerunners of present-day textbooks. Among the brilliant mathematicians of the Ecole during the earlier years of the 19th century was Poisson. His name is a familiar label in equations and constants in calculus, mechanics, and electricity. He was passionately devoted to mathematics and to teaching, and published over 400 works. Among these was Recherche sur la Probabilité des Jugements in 1837. This contains the Poisson Distribution, sometimes called Poisson’s law of large numbers. It was noted earlier that as n in $(P + Q)^n$ increases, the Binomial Distribution tends to the normal distribution. Poisson considered the case where as n increases toward infinity, P decreases toward zero, and nP remains constant...

  • Understanding Credit Derivatives and Related Instruments
    • Antulio N. Bomfim (Author)
    • 2015 (Publication Date)
    • Academic Press (Publisher)

    ...For instance: X = 1 if the company defaults and X = 0 if it survives. If the probability of default is denoted as ω, the probability function of X can be written as $p(x) = \omega^x (1 - \omega)^{1-x}$, x = 0, 1 (B.13), and one can say that X has a Bernoulli distribution. The expected value and variance of X are: $E[X] = \sum_{x=0}^{1} x\,\omega^x (1-\omega)^{1-x} = (1)\omega + (0)(1-\omega) = \omega$ and $V[X] = \sum_{x=0}^{1} (x-\omega)^2 \omega^x (1-\omega)^{1-x} = \omega(1-\omega)$. B.6 The Binomial Distribution Consider a sequence of n independent Bernoulli trials. For instance, given n corporate borrowers, each trial may involve either the default or survival of an individual borrower, where defaults among the n borrowers are mutually independent. Let the default probability for each borrower be denoted as ω. Let Y be the random variable that represents the number of defaults among the n borrowers over a given period of time. The probability function of Y is $b(y; n, \omega) = \frac{n!}{y!(n-y)!}\,\omega^y (1-\omega)^{n-y}$ (B.14), where y denotes the possible values of Y (y = 0, 1, 2, …, n), and Y is said to be binomially distributed. The distribution function of the Binomial Distribution is $B(y; n, \omega) \equiv \mathrm{Prob}[Y \le y] = \sum_{s=0}^{y} b(s; n, \omega)$, y = 0, 1, …, n (B.15), which, continuing with our example, is the probability that at most y companies will default over a given time horizon. In Part IV, we use the results just derived to examine expected default-related losses in an equally-weighted homogeneous portfolio where defaults among the issuers represented in the portfolio are mutually independent. With default independence, the question of how many issuers are likely to default or survive reduces to a sequence of independent Bernoulli trials, in which case the Binomial Distribution applies...
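
A hedged sketch of equations (B.14)–(B.15) from this excerpt: the probability of exactly y and of at most y defaults among n independent borrowers with common default probability ω. The portfolio size n = 100 and ω = 0.02 below are assumed example values, not figures from the text.

```python
# Sketch of the binomial default count from the excerpt (eqs. B.14-B.15).
# n = 100 borrowers and omega = 0.02 are assumed example values.
from math import comb

def b(y: int, n: int, omega: float) -> float:
    """Probability of exactly y defaults among n independent borrowers."""
    return comb(n, y) * omega ** y * (1 - omega) ** (n - y)

def B(y: int, n: int, omega: float) -> float:
    """Probability of at most y defaults (cumulative distribution function)."""
    return sum(b(s, n, omega) for s in range(y + 1))

n, omega = 100, 0.02
print(B(5, n, omega))  # probability that no more than 5 of the 100 borrowers default
```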