Mean and Variance of Poisson Distributions
The mean of a Poisson distribution represents the average number of events that occur in a given interval, while the variance measures the spread or dispersion of the distribution. For a Poisson distribution, the mean and variance are both equal to the parameter λ, which represents the average rate of occurrence for the events being modeled.
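For reference, this statement can be written out explicitly. The following display is standard material (not drawn from any single excerpt below), with X ~ Poisson(λ):

```latex
% Poisson PMF and its first two moments, X ~ Poisson(lambda)
P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \ldots
\qquad
E[X] = \lambda,
\qquad
\operatorname{Var}(X) = E[X(X-1)] + E[X] - E[X]^2 = \lambda^2 + \lambda - \lambda^2 = \lambda.
```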
Written by Perlego with AI assistance
12 Key excerpts on "Mean and Variance of Poisson Distributions"
- Bernard Rosner (Author)
- 2015 (Publication Date)
- Cengage Learning EMEA (Publisher)
This relationship can be stated as follows:

Equation 4.9: For a Poisson distribution with parameter μ, the mean and variance are both equal to μ, i.e., E(X) = Var(X) = μ.

This fact is useful because, if we have a data set from a discrete distribution where the mean and variance are about the same, we can preliminarily identify it as a Poisson distribution and use various tests to confirm this hypothesis.

Example 4.39 (Infectious Disease): The number of deaths attributable to polio during the years 1968–1977 is given in Table 4.10 [4, 5]. Comment on the applicability of the Poisson distribution to this data set.

Solution: The sample mean and variance of the annual number of deaths caused by polio during the period 1968–1977 are 18.0 and 23.1, respectively. The Poisson distribution will probably fit well here because the variance is approximately the same as the mean.

Table 4.10: Number of deaths attributable to polio during the years 1968–1977

| Year | 1968 | 1969 | 1970 | 1971 | 1972 | 1973 | 1974 | 1975 | 1976 | 1977 |
|---|---|---|---|---|---|---|---|---|---|---|
| Deaths | 15 | 10 | 19 | 23 | 15 | 17 | 23 | 17 | 26 | 15 |

Suppose we are studying a rare phenomenon and want to apply the Poisson distribution. A question that often arises is how to estimate the parameter μ of the Poisson distribution in this context. Because the expected value of the Poisson distribution is μ, it can be estimated by the observed mean number of events over time t (e.g., 1 year), if such data are available.
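The mean–variance check in Example 4.39 is easy to reproduce. A minimal Python sketch (not from the excerpt; the data are those of Table 4.10):

```python
# A minimal sketch of the mean-variance check in Example 4.39,
# using the polio death counts from Table 4.10.
deaths = [15, 10, 19, 23, 15, 17, 23, 17, 26, 15]

n = len(deaths)
mean = sum(deaths) / n                                     # 18.0
variance = sum((x - mean) ** 2 for x in deaths) / (n - 1)  # ~23.1 (sample variance)

# For Poisson data we expect variance/mean close to 1.
print(mean, variance, variance / mean)
```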
Classical and Quantum Information Theory
An Introduction for the Telecom Scientist
- Emmanuel Desurvire (Author)
- 2009 (Publication Date)
- Cambridge University Press (Publisher)
We will also see that the exponential PDF is commonly found in human society, for instance, concerning the distribution of alphabetic characters in Western languages. Let us consider next another PDF of interest, which is the Poisson distribution. This PDF is used to predict the number of occurrences of a discrete event over a fixed time interval. If N is the expected number of occurrences over that time interval, the probability that the count is exactly n is

p(n) = e^(−N) N^n / n!.  (2.8)

As a key property, the Poisson PDF variance is equal to the PDF mean, i.e., σ² = N.

[Figure 2.3: Plots of the Poisson distribution (p(n) versus n) for mean values N = 1, 3, 5, 7, 9; open symbols correspond to the case N = 1.]

Figure 2.3 shows plots of the Poisson PDF for various values of the mean N. It is seen from the figure that as N increases, the distribution widens (but slowly, according to σ = √N). Also, the line joining the discrete points progressively takes the shape of a symmetric bell curve centered about the mean, as illustrated for N = 9 (the mean coinciding with the PDF peak only for N → ∞). This property will be discussed further on, when considering continuous distributions. There exist numerous examples in the physical world of the Poisson distribution, referred to as Poisson processes. In atomic physics, for instance, the Poisson PDF defines the count of nuclei decaying by radioactivity over a period t. Given the decay rate λ (number of decays per second), the mean count is N = λt, and the Poisson PDF p(n) gives the probability of counting n decays over that period. In laser physics, the Poisson PDF corresponds to the count of photons emitted by a coherent light source, or laser.
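A short sketch can confirm the excerpt's key property, σ² = N, directly from Eq. (2.8). Plain Python, no external libraries; truncating the sum at n = 40 is an assumption that works because the tail is negligible for N = 9:

```python
# A plain-Python sketch of Eq. (2.8): p(n) = e^(-N) * N^n / n!,
# checking that the distribution's variance equals its mean (sigma^2 = N).
from math import exp, factorial

def poisson_pmf(n: int, N: float) -> float:
    """Probability of counting exactly n events when the expected count is N."""
    return exp(-N) * N ** n / factorial(n)

N = 9.0
pmf = [poisson_pmf(n, N) for n in range(40)]  # truncated support; tail negligible

mean = sum(n * p for n, p in enumerate(pmf))
variance = sum((n - mean) ** 2 * p for n, p in enumerate(pmf))
print(mean, variance)  # both ~9.0
```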
- Alan Anderson (Author)
- 2023 (Publication Date)
- For Dummies (Publisher)
= 1 column. The probability is 0.1839. If you don't care for using formulas or a table, try a specialized calculator or Excel; the Excel function is POISSON.DIST. The moments of the Poisson distribution represent the average value of the distribution and its dispersion. As with the binomial distribution, these moments may be computed with simplified formulas.
Poisson distribution: Calculating the expected value

As with the binomial distribution (discussed earlier in this chapter), you can use simple formulas to compute the moments of the Poisson distribution. The expected value of the Poisson distribution is E(X) = λ. For example, say that on average three new companies are listed on the New York Stock Exchange (NYSE) each year. The number of new companies listed during a given year is independent of all other years. The number of new listings per year, therefore, follows the Poisson distribution with λ = 3. As a result, the expected number of new listings next year is λ = 3.
Poisson distribution: Computing variance and standard deviation

Compute the variance of the Poisson distribution as σ² = λ; the standard deviation (σ) equals √λ. Based on the NYSE listing example in the previous section, the variance equals 3 and the standard deviation equals √3 ≈ 1.732.
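A hypothetical Python check of the NYSE example; the pmf function below reproduces what Excel's POISSON.DIST(x, mean, FALSE) returns:

```python
# A sketch of the NYSE example: lambda = 3 new listings per year.
from math import exp, factorial, sqrt

lam = 3.0
mean = lam            # E(X) = lambda = 3
variance = lam        # sigma^2 = lambda = 3
std_dev = sqrt(lam)   # sigma = sqrt(lambda) ~= 1.732

def pmf(x: int, mean: float) -> float:
    """P(X = x); what Excel's POISSON.DIST(x, mean, FALSE) computes."""
    return exp(-mean) * mean ** x / factorial(x)

print(mean, variance, std_dev, pmf(2, lam))  # P(X = 2) ~= 0.224
```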
Graphing the Poisson distribution

As with the binomial distribution, the Poisson distribution can be illustrated with a histogram. In Figures 8-4 through 8-6, the results are shown for three values of λ: 2 (Figure 8-4), 5 (Figure 8-5), and 7 (Figure 8-6).
- Kevin Cahill (Author)
- 2019 (Publication Date)
- Cambridge University Press (Publisher)
As N → ∞ and p → 0 with pN = ⟨n⟩ fixed, the variance (15.55) of the binomial distribution tends to the limit

V_P = lim_{N→∞, p→0} V_B = lim_{N→∞, p→0} p(1 − p)N = ⟨n⟩.  (15.69)

Thus the mean and the variance of a Poisson distribution are equal,

V_P = ⟨(n − ⟨n⟩)²⟩ = ⟨n⟩ = μ,  (15.70)

as one may show directly (Exercise 15.11).

Example 15.10 (Accuracy of Poisson's distribution): If p = 0.0001 and N = 10,000, then ⟨n⟩ = 1 and Poisson's approximation to the probability that n = 2 is 1/(2e). The exact binomial probability (15.63) and Poisson's estimate are P_b(2, 0.0001, 10,000) = 0.18395 and P_P(2, 1) = 0.18394.

Example 15.11 (Coherent states): The coherent state |α⟩ introduced in Equation (3.146),

|α⟩ = e^(−|α|²/2) e^(α a†) |0⟩ = e^(−|α|²/2) Σ_{n=0}^{∞} (αⁿ/√(n!)) |n⟩,  (15.71)

is an eigenstate a|α⟩ = α|α⟩ of the annihilation operator a with eigenvalue α. The probability P(n) of finding n quanta in the state |α⟩ is the square of the absolute value of the inner product ⟨n|α⟩,

P(n) = |⟨n|α⟩|² = (|α|^(2n)/n!) e^(−|α|²),  (15.72)

which is a Poisson distribution P(n) = P_P(n, |α|²) with mean and variance μ = ⟨n⟩ = V(α) = |α|².

Example 15.12 (Radiation and cancer): If a cell becomes cancerous only after being hit N times by ionizing radiation, then the probability of cancer P(⟨n⟩)_N rises with the dose, or mean number ⟨n⟩ of hits per cell, as

P(⟨n⟩)_N = Σ_{n=N}^{∞} (⟨n⟩ⁿ/n!) e^(−⟨n⟩),  (15.73)

or P(⟨n⟩)_N ≈ ⟨n⟩^N/N! for ⟨n⟩ ≪ 1. As illustrated in Fig. 15.2, although the incidence of cancer P(⟨n⟩)_N rises linearly (solid) with the dose ⟨n⟩ of radiation if a single hit, N = 1, can cause a cell to become cancerous, it rises more slowly if the threshold for cancer is N = 2 (dot dash), 3 (dashes), or 4 (dots). Most mutations are harmless. The mean number N of harmful mutations that occur before a cell becomes cancerous is about 4, but N varies with the affected organ from 1 to 10 (Martincorena et al., 2017).
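Example 15.10 can be verified in a few lines of Python (a sketch, using the p and N stated in the example):

```python
# A sketch verifying Example 15.10: binomial P(n=2 | N=10,000, p=0.0001)
# against its Poisson limit with mean <n> = N*p = 1.
from math import comb, exp, factorial

N, p, n = 10_000, 0.0001, 2
mean = N * p  # <n> = 1

binomial = comb(N, n) * p ** n * (1 - p) ** (N - n)  # ~0.18395
poisson = exp(-mean) * mean ** n / factorial(n)      # ~0.18394, i.e. 1/(2e)
print(binomial, poisson)
```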
- Rajan Chattamvelli, Ramalingam Shanmugam (Authors)
- 2022 (Publication Date)
- Springer (Publisher)
CHAPTER 6: Poisson Distribution

After finishing the chapter, readers will be able to:
- Understand the Poisson distribution and its properties.
- Discuss the Poisson distribution's relationships with other distributions.
- Comprehend Poisson processes.
- Apply the Poisson distribution in practical situations.

6.1 INTRODUCTION

The Poisson distribution was invented by the French mathematician Abraham de Moivre (1718) for λ = 1/2 [80]. S. D. Poisson (1781–1840) obtained the general PMF in 1837 as the limiting form of a binomial distribution, discussed below (page 108). It has a single parameter, usually denoted by a Greek letter such as λ or μ, and occasionally by an uppercase English letter (H is sometimes used in genetics and bioinformatics), that denotes the average number of occurrences of an event of interest in a specified time interval (called the sampling period in biology, biomedical engineering, and signal processing). It need not be an integer, but must be positive. The random variable denotes the counts (arrivals) of discrete occurrences in a fixed time interval (in temporal processes). The parameter is called the intensity parameter in fields like geography, geology, ecology, mining engineering, and microscopy; the average vehicle flow in transportation engineering; the rate parameter in economics and finance; and the defect density in semiconductor electronics, fiber optics, and several manufacturing systems where events occur along a spatially continuous frame of reference (number of defects in a finished product). However, the occurrence rate for temporal processes is defined as intensity rate = (number of occurrences)/(unit of exposure). This is called the accident rate in highway engineering, the mortality rate in medical sciences and vital statistics, and by other names in different fields (e.g., microbe density in clinical pathology). We denote the distribution by POIS(λ). It can be used to model temporal, spatial (along length or area), or spatio-temporal rare events that are open-ended.
Statistics
Principles and Methods
- Richard A. Johnson, Gouri K. Bhattacharyya (Authors)
- 2019 (Publication Date)
- Wiley (Publisher)
3. MEAN (EXPECTED VALUE) AND STANDARD DEVIATION OF A PROBABILITY DISTRIBUTION

We now introduce a numerical measure for the center of a probability distribution and another for its spread. In Chapter 2, we discussed the concepts of mean, as a measure of the center of a data set, and standard deviation, as a measure of spread. Because probability distributions are theoretical models in which the probabilities can be viewed as long-run relative frequencies, the sample measures of center and spread have their population counterparts. To motivate their definitions, we first refer to the calculation of the mean of a data set. Suppose a die is tossed 20 times and the following data obtained:

4, 3, 4, 2, 5, 1, 6, 6, 5, 2, 2, 6, 5, 4, 6, 2, 1, 6, 2, 4

The mean of these observations, called the sample mean, is calculated as

x̄ = (sum of the observations)/(sample size) = 76/20 = 3.8

Alternatively, we can first count the frequency of each point and use the relative frequencies to calculate the mean as

x̄ = 1(2/20) + 2(5/20) + 3(1/20) + 4(4/20) + 5(3/20) + 6(5/20) = 3.8

This second calculation illustrates the formula

Sample mean x̄ = Σ (value × relative frequency)

Rather than stopping with 20 tosses, if we imagine a very large number of tosses of a die, the relative frequencies will approach the probabilities, each of which is 1/6 for a fair die. The mean of the (infinite) collection of tosses of a fair die should then be calculated as

1(1/6) + 2(1/6) + ··· + 6(1/6) = Σ (value × probability) = 3.5

Motivated by this example and the stability of long-run relative frequency, it is then natural to define the mean of a random variable X, or of its probability distribution, as Σ (value × probability), that is, Σᵢ xᵢ f(xᵢ), where the xᵢ denote the distinct values of X.
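The die-toss calculation above translates directly into code. A minimal sketch using the excerpt's own data:

```python
# A sketch of the die-toss example: sample mean from the 20 observed tosses,
# and the long-run mean E(X) = sum(value * probability) for a fair die.
tosses = [4, 3, 4, 2, 5, 1, 6, 6, 5, 2, 2, 6, 5, 4, 6, 2, 1, 6, 2, 4]

sample_mean = sum(tosses) / len(tosses)           # 76/20 = 3.8
expected = sum(x * (1 / 6) for x in range(1, 7))  # 3.5
print(sample_mean, expected)
```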
- A. John Bailer, Walter W. Piegorsch (Authors)
- 2020 (Publication Date)
- Routledge (Publisher)
An important characteristic of the Poisson distribution is that its variance equals its mean: Var[X] = σ²_X = μ_X. We use this feature to test whether unbounded count data appear to exhibit Poisson variability, that is, whether or not their variability is of the same order as their mean; see Section 6.5.3.

When the observed variability among unbounded count data is so large that it exceeds the mean value, the Poisson distribution is contraindicated. Indeed, the mean–variance equality required under Poisson sampling may be too restrictive in some environmental settings, a problem seen also with the binomial distribution for proportion data. In similar form to the binomial, we can consider overdispersed distributions for count data that exhibit extra-Poisson variability. We need not look far: we saw previously that the negative binomial p.m.f. in (1.11) gives a variance that is quadratic in its mean for unbounded counts: when Y ~ NB(μ, δ), Var[Y] = μ + δμ². Thus the negative binomial variance is always larger than its mean when the dispersion parameter, δ, is positive. (Technically, δ is not a true dispersion parameter, since it cannot be written as a percentage increase in variability over the simpler Poisson variance. The parameter does quantify the departure from the Poisson model, however, and for simplicity we will continue to refer to it as a dispersion parameter.)

At δ = 0, the negative binomial variance returns to mean–variance equality. In fact, the Poisson distribution is a limiting form: as δ → 0, the negative binomial c.d.f. approaches the Poisson c.d.f. The negative binomial p.m.f. may also be constructed as an extension of the Poisson p.m.f., based on a hierarchical model formulation. Specifically, if we take X|μ ~ Poisson(μ) and also take μ as random with some continuous distribution over 0 < μ < ∞, that is, a mixture distribution, the resulting marginal distribution for X will be overdispersed. A specific choice for f_μ(μ) that brings about the negative binomial distribution (1.11) is μ ~ Gamma(r, [1 − π]/π), for some π between 0 and 1; we will introduce the Gamma distribution in Section 1.3.2.
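A small sketch of the two variance functions just described (the function names are illustrative, not from the excerpt):

```python
# Poisson has Var[X] = mu; the negative binomial has Var[Y] = mu + delta*mu^2.
def poisson_variance(mu: float) -> float:
    return mu

def negbin_variance(mu: float, delta: float) -> float:
    # delta = 0 recovers mean-variance equality, i.e. the Poisson limit.
    return mu + delta * mu ** 2

mu = 4.0
print(poisson_variance(mu))      # 4.0
print(negbin_variance(mu, 0.0))  # 4.0  (Poisson limit)
print(negbin_variance(mu, 0.5))  # 12.0 (overdispersed)
```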
- N. T. Kottegoda, R. Rosso (Authors)
- 2009 (Publication Date)
- Wiley-Blackwell (Publisher)
Using this result and considering the time T between occurrences as the random variable, we find the cdf of the variable T is F_T(t) = P(T ≤ t) = 1 − e^(−λt). This means that the waiting time between successive events of a Poisson process has an exponential distribution. We note that, as in the Poisson, the distribution is applicable to other variables, such as length and space, in addition to time. Also, it follows that the exponential in continuous time corresponds to the geometric distribution in discrete time. The pdf of the exponential distribution is written as follows, by differentiating the expression just given with respect to t and replacing T by a general variable X:

f_X(x) = λe^(−λx), for x ≥ 0, λ > 0; = 0 otherwise.  (4.2.3a)

For the same conditions, the cdf is

F_X(x) = 1 − e^(−λx).  (4.2.3b)

This is also referred to as the negative exponential because of the negative term in the exponent.

4.2.2.1 Mean, variance, and moment-generating function

The mean of the exponential distribution is obtained as follows, integrating by parts and using l'Hôpital's rule:

E[X] = ∫₀^∞ x λe^(−λx) dx = [−xe^(−λx)]₀^∞ + ∫₀^∞ e^(−λx) dx = 1/λ.  (4.2.4a)

For the Poisson process, λ is the rate at which events occur, whereas 1/λ, as just shown, is the average time between events. In relation to reliability analysis, it is often referred to as the mean lifetime or time to failure. Proceeding further,

E[X²] = [−x²e^(−λx)]₀^∞ + (2/λ) ∫₀^∞ x λe^(−λx) dx = 2/λ².

Hence, the variance is

Var[X] = E[X²] − {E[X]}² = 1/λ².  (4.2.4b)

It is interesting to note that the coefficient of variation is

V_X = √(Var[X]) / E[X] = 1.  (4.2.4c)

The moment-generating function (as seen in Example 3.17) is

M_X(t) = E[e^(tX)] = ∫₀^∞ e^(tx) λe^(−λx) dx = λ/(λ − t), for t < λ.  (4.2.4d)

From the foregoing one can also show that the coefficients of skewness and kurtosis are 2 and 9, respectively (see Example 3.15).
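The exponential waiting-time results can be checked by simulation. A minimal Python sketch, assuming λ = 2 for illustration:

```python
# Exponential waiting times with rate lambda have mean 1/lambda and
# variance 1/lambda^2; checked here by simulation with lambda = 2.
import random

lam = 2.0
waits = [random.expovariate(lam) for _ in range(100_000)]

m = sum(waits) / len(waits)
v = sum((w - m) ** 2 for w in waits) / (len(waits) - 1)
print(m, v)  # ~0.5 and ~0.25, i.e. 1/lam and 1/lam**2
```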
- Norman L. Johnson, Samuel Kotz, Adrienne W. Kemp (Authors)
- 2005 (Publication Date)
- Wiley-Interscience (Publisher)
They carried out a simulation study for these and other tests, applied all the tests to four well-known data sets, and provided a useful bibliography of tests for the Poisson distribution.

4.8 CHARACTERIZATIONS

There has been much work concerning characterizations of the Poisson distribution. Work prior to 1974 is summarized in Kotz (1974). Raikov (1938) showed that, if X₁ and X₂ are independent rv's and X₁ + X₂ has a Poisson distribution, then X₁ and X₂ must each have Poisson distributions. (A similar property also holds for the sum of any number of independent Poisson rv's.) Kosambi (1949) and Patil (1962a) established that, if X has a power series distribution, then a necessary and sufficient condition for X to have a Poisson distribution is μ_[2] = μ², where μ_[2] is the second factorial moment. This implies that μ₂ = μ characterizes the Poisson distribution among PSDs; see also Patil and Ratnaparkhi (1977). Gokhale (1980) gave a strict proof of this important property of the Poisson distribution and proved furthermore that, within the class of PSDs, the condition μ₂ = aμ + bμ², given a + bμ > 0, holds iff the distribution is Poisson or binomial or negative binomial. Gupta (1977b) proved that the equality of the mean and variance characterizes the Poisson distribution within the wider class of modified PSDs, provided that the series function f(θ) satisfies f(0) = 1. Moran (1952) discovered a fundamental property of the Poisson distribution. If X₁ and X₂ are independent nonnegative integer-valued rv's such that the conditional distribution of X₁ given the total X₁ + X₂ is binomial with a common parameter p for all given values of X₁ + X₂, and if there exists at least one integer i such that Pr[X₁ = i] > 0 and Pr[X₂ = i] > 0, then X₁ and X₂ are both Poisson rv's.
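The Kosambi–Patil condition μ_[2] = μ² (equivalently μ₂ = μ) is easy to verify numerically for a Poisson PMF. A sketch, truncating the support at n = 60:

```python
# For a Poisson PMF the second factorial moment E[X(X-1)] equals mu^2,
# which is equivalent to variance = mean.
from math import exp, factorial

mu = 3.0
pmf = [exp(-mu) * mu ** n / factorial(n) for n in range(60)]  # truncated support

mean = sum(n * p for n, p in enumerate(pmf))             # ~3.0
fact2 = sum(n * (n - 1) * p for n, p in enumerate(pmf))  # ~9.0 = mu^2
variance = fact2 + mean - mean ** 2                      # ~3.0 = mu
print(mean, fact2, variance)
```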
The Theoretical Biologist's Toolbox
Quantitative Methods for Ecology and Evolutionary Biology
- Marc Mangel (Author)
- 2006 (Publication Date)
- Cambridge University Press (Publisher)
The negative binomial distribution, 2: a Poisson process

We recognize this as another gamma density, with changed parameters: we started with a gamma density for the encounter rate, collected data of k events in the interval 0 to t, and update its two parameters (adding t to the one and k to the other) while keeping the same distribution for the encounter rate. In the Bayesian literature, we say that the gamma density is the conjugate prior for the Poisson process (see Connections).

The normal (Gaussian) distribution: the standard for error distributions

We now turn to the normal or Gaussian distribution, which most readers will have encountered previously, both in other sources and in our discussion of the physical process of diffusion in Chapter 2. For that reason, I will not belabor matters and repeat much of what you already know, but will quickly move on to what I hope are new matters. However, some introduction is required. The density function for a random variable X that is normally distributed with mean μ and variance σ² is

f(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²))  (3.70)

Note that I could have taken the square root of the variance, but chose to leave it within the square root. A particularly common and useful version is the normal distribution with mean 0 and variance 1; we denote this by N(0, 1) and write X ~ N(0, 1) to indicate that the random variable X is normally distributed with mean 0 and variance 1. In that case, the probability density function becomes f(x) = (1/√(2π)) exp(−x²/2). Indeed, it is easy to see that if a random variable Y is normally distributed with mean μ and variance σ², then the transformed variable X = (Y − μ)/σ will be N(0, 1); we can make a normal random variable Y with specified mean and variance from X ~ N(0, 1) by setting Y = μ + σX.

Exercise 3.14 (E): Demonstrate the validity of the previous sentence.

We already know that f(x) given by Eq. (3.70) will approach a Dirac delta function centered at μ as σ → 0.
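The conjugacy statement can be sketched in code. The Gamma(shape, rate) labels below are an illustrative assumption (the excerpt's own parameter symbols are garbled in extraction); the point is only that observing k events in time t keeps the posterior in the gamma family with shifted parameters:

```python
# A sketch of gamma-Poisson conjugacy under an assumed shape/rate labeling.
def gamma_poisson_update(shape: float, rate: float, k: int, t: float):
    """Posterior Gamma parameters after observing k events in time t."""
    return shape + k, rate + t

prior = (2.0, 1.0)
posterior = gamma_poisson_update(*prior, k=5, t=3.0)
print(posterior)  # (7.0, 4.0): same family, shifted parameters
```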
Engineering Mathematics Exam Prep
Problems and Solutions
- A. Saha, D. Dutta, S. Kar, P. Majumder, A. Paul, S. Musa, PhD (Authors)
- 2023 (Publication Date)
- Mercury Learning and Information (Publisher)
So the mode is 17.

153. 54. Given that the mean of the Poisson distribution is λ = 5. We know that in the Poisson distribution, mean = variance, so here E(X) = V(X) = λ = 5. Now V(X) = E(X²) − [E(X)]², so 5 = E(X²) − 5², giving E(X²) = 5² + 5 = 30. Therefore E[(X + 2)²] = E(X² + 4X + 4) = E(X²) + 4E(X) + 4 = 30 + 4×5 + 4 = 54.

154. (b) ∫_{−∞}^{∞} f(x) dx = 1 ⇒ ∫₀¹ (a + bx) dx = 1 ⇒ [ax + bx²/2]₀¹ = 1 ⇒ a + b/2 = 1, which is satisfied for a = 0.5 and b = 1.

155. 3/4. Here the sample space is S = {HH, HT, TH, TT}. Let A = the event that at least one heads occurs. Then A = {HH, HT, TH}. Hence the required probability is P(A) = n(A)/n(S) = 3/4.

156. 0.5. Case I: the first drawn ball is red and the second drawn ball is also red. When the first drawn ball of red color is not replaced, the number of balls left in the jar is nine, of which four are red. So P(first ball red and second ball red) = (5/10) × (4/9). Case II: the first drawn ball is black and the second drawn ball is red. When the first drawn ball of black color is not replaced, the number of balls left in the jar is nine, of which five are red. So P(first ball black and second ball red) = (5/10) × (5/9). Hence the required probability is (5/10)(4/9) + (5/10)(5/9) = 45/90 = 0.5.

157. 3.5. Let X be the random variable denoting the outcome of the die. The probability distribution of X assigns P(X = x) = 1/6 for each x = 1, 2, …, 6. So the mean is E(X) = (1 + 2 + 3 + 4 + 5 + 6) × (1/6) = 21/6 = 3.5.

158. (b) In the exponential distribution, the p.d.f. is f(x) = λe^(−λx), x > 0, where the only parameter is λ. In the Gaussian distribution, the p.d.f. is f(x) = (1/(σ√(2π))) exp(−½((x − μ)/σ)²), −∞ < x < ∞, where the parameters are μ and σ.

159. 0.5. Since the coin is fair, the outcome of the next toss will be independent of the previous toss.
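Solution 153 can be double-checked numerically. A sketch, truncating the Poisson support at n = 80:

```python
# A numerical double-check of solution 153: for X ~ Poisson(5),
# E[(X + 2)^2] = E(X^2) + 4E(X) + 4 = 30 + 20 + 4 = 54.
from math import exp, factorial

lam = 5.0
pmf = [exp(-lam) * lam ** n / factorial(n) for n in range(80)]  # truncated support

value = sum((n + 2) ** 2 * p for n, p in enumerate(pmf))
print(value)  # ~54.0
```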
Stochastic Modeling and Mathematical Statistics
A Text for Statisticians and Quantitative Scientists
- Francisco J. Samaniego (Author)
- 2014 (Publication Date)
- Chapman and Hall/CRC (Publisher)
For example, at time t = 1, the expected number of occurrences of the event of interest is λ; the expected number of occurrences of this event by time t = 2 is 2λ. The Poisson process occurs in a wide variety of applications. No doubt many applications are a result of the applicability of the Law of Rare Events, which gives specific conditions under which the Poisson distribution is a good approximation to the probability that a particular event will occur a certain number of times in a particular interval of time. Note that since for any s < t we may write X(t) = X(s) + (X(t) − X(s)), we see that X(t) may be represented as the sum of the two independent variables U and V, where U ~ P(λs) and V ~ P(λ(t − s)). Now, suppose that {X(t), t ∈ (0, ∞)} is a Poisson process with rate parameter λ. Define Y_k to be the waiting time until the kth occurrence of the event. Clearly, the event {Y_k ≤ t} will occur if and only if {X(t) ≥ k}. From this, it follows that

F_{Y_k}(y) = P(Y_k ≤ y) = 1 − P(X(y) < k) = 1 − Σ_{x=0}^{k−1} (λy)^x e^(−λy) / x!.  (3.23)

The density of Y_k may thus be obtained, for any y > 0, as

f_{Y_k}(y) = F′_{Y_k}(y)
= λe^(−λy) − Σ_{x=1}^{k−1} [x(λy)^(x−1) λ / x!] e^(−λy) + Σ_{x=1}^{k−1} [(λy)^x λ / x!] e^(−λy)
= λe^(−λy) − Σ_{x=1}^{k−1} [(λy)^(x−1) λ / (x−1)!] e^(−λy) + Σ_{x=1}^{k−1} [(λy)^x λ / x!] e^(−λy)
= λe^(−λy) − Σ_{x=0}^{k−2} [(λy)^x λ / x!] e^(−λy) + Σ_{x=1}^{k−1} [(λy)^x λ / x!] e^(−λy)  (by replacing x − 1 by x)
= λe^(−λy) − λe^(−λy) + (λy)^(k−1) λ e^(−λy) / (k−1)!  (since the summands in the two sums cancel each other out, with the exception of the leading term of the first sum and the last term of the second sum)
= [λ^k / Γ(k)] y^(k−1) e^(−λy), for y > 0.  (3.24)

The function in (3.24) is recognizable as the Γ(k, 1/λ) density.
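Equation (3.23) can be checked against a direct simulation of the waiting time Y_k as a sum of k exponential inter-arrival times. A sketch with illustrative values λ = 2, k = 3, y = 1.5:

```python
# Checking Eq. (3.23): P(Y_k <= y) = P(X(y) >= k) for a Poisson process.
import random
from math import exp, factorial

lam, k, y = 2.0, 3, 1.5

# Closed form: 1 - sum_{x<k} (lam*y)^x e^(-lam*y) / x!
closed_form = 1 - sum((lam * y) ** x * exp(-lam * y) / factorial(x) for x in range(k))

# Simulation: Y_k is the sum of k exponential inter-arrival times.
trials = 100_000
hits = sum(
    sum(random.expovariate(lam) for _ in range(k)) <= y for _ in range(trials)
)
print(closed_form, hits / trials)  # both ~0.577
```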
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.