Mathematics
Probability Generating Function
A probability generating function is a mathematical tool used to describe the probability distribution of a discrete random variable taking nonnegative integer values. It is defined as a power series whose coefficients are the probabilities of the individual outcomes, so probabilities and moments can be recovered from its derivatives. It is widely used in probability and statistics, for example to analyze the number of successes in a fixed number of trials.
Written by Perlego with AI-assistance
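To make the definition concrete, here is a minimal Python sketch (ours, not drawn from any of the excerpts below) using the sympy library: it builds the pgf of a fair six-sided die and reads the outcome probabilities back off as power-series coefficients. The die example and all variable names are our own choices.

```python
import sympy as sp

t = sp.symbols('t')

# pgf of a fair six-sided die: G(t) = E[t^X] = (t + t^2 + ... + t^6) / 6
G = sum(t**k for k in range(1, 7)) / 6

# The coefficient of t^k in the expansion is P(X = k).
for k in range(1, 7):
    p_k = G.expand().coeff(t, k)
    print(f"P(X = {k}) = {p_k}")   # each prints 1/6

# G(1) = 1, as required of any pgf.
assert G.subs(t, 1) == 1
```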
5 Key excerpts on "Probability Generating Function"
- eBook - PDF
Introduction to Probability
Multivariate Models and Applications
- Narayanaswamy Balakrishnan, Markos V. Koutras, Konstadinos G. Politis (Authors)
- 2021 (Publication Date)
- Wiley (Publisher)
Consequently, this is especially useful in cases wherein the probability distribution cannot be obtained in explicit form directly.

Definition 7.5.1 Let $X$ be a random variable taking nonnegative integer values. Then the function $G_X(t) = E(t^X)$ is called the probability generating function (abbreviated as p.g.f.) of the variable $X$.

It is apparent from the definition that $G_X(1) = 1$. As in the case of m.g.f.s, when there is no possible confusion, we will simply write $G(t)$ rather than $G_X(t)$. Both m.g.f.s and p.g.f.s are defined in terms of an expectation; note, however, that in the former case existence of that expectation, at least over a finite interval including zero, was part of the definition. In contrast, for a variable $X$ taking values over the set $\{0, 1, 2, \ldots\}$ and $f(x) = P(X = x)$, with

$$G_X(t) = \sum_{x=0}^{\infty} t^x f(x), \qquad (7.8)$$

since

$$\sum_{x=0}^{\infty} |t^x f(x)| = \sum_{x=0}^{\infty} |t^x| f(x) \le \sum_{x=0}^{\infty} f(x) = 1 \quad \text{for } t \in [-1, 1],$$

we see that the series is always convergent (in fact, absolutely convergent) over the interval $[-1, 1]$.

M.g.f.s produce moments through differentiation; on the other hand, differentiating the p.g.f. of a discrete variable $X$ yields the probability function of $X$ itself, as stated in the following proposition.

Proposition 7.5.1 Let $X$ be a variable taking values on the nonnegative integers and $G(t)$ be the p.g.f. of $X$. Then:

(i) the probability function of $X$ is given by

$$P(X = r) = \frac{1}{r!} \left[ \frac{d^r}{dt^r} G(t) \right]_{t=0}, \qquad r = 0, 1, 2, \ldots$$

(here, by convention, the derivative of order zero is the function itself);

(ii) the factorial moments of $X$,

$$\mu_{(r)} = E[(X)_r] = E[X(X-1)(X-2) \cdots (X-r+1)], \qquad r = 1, 2, \ldots,$$

are given by

$$\mu_{(r)} = \left[ \frac{d^r}{dt^r} G(t) \right]_{t=1}.$$

Proof: The result for $r = 0$ in Part (i) is trivial. Differentiating both sides of (7.8), we obtain

$$G'(t) = \sum_{x=0}^{\infty} (t^x)' f(x) = \sum_{x=1}^{\infty} x t^{x-1} f(x).$$
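Proposition 7.5.1 is easy to check symbolically. The following sketch is our own illustration, assuming the standard Poisson pgf $G(t) = e^{\lambda(t-1)}$ (not derived in this excerpt) and the sympy library: differentiating at $t = 0$ recovers the Poisson probabilities, and differentiating at $t = 1$ recovers the factorial moments.

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)

# pgf of a Poisson(lambda) variable: G(t) = exp(lambda * (t - 1))
G = sp.exp(lam * (t - 1))

# Part (i): P(X = r) = (1/r!) * [d^r G / dt^r] at t = 0
for r in range(4):
    p_r = sp.diff(G, t, r).subs(t, 0) / sp.factorial(r)
    print(f"P(X = {r}) =", sp.simplify(p_r))   # lambda**r * exp(-lambda) / r!

# Part (ii): the r-th factorial moment is [d^r G / dt^r] at t = 1
for r in range(1, 4):
    fm = sp.diff(G, t, r).subs(t, 1)
    print(f"factorial moment of order {r}:", sp.simplify(fm))   # lambda**r
```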
- eBook - PDF
- Parimal Mukhopadhyay (Author)
- 2011 (Publication Date)
- WSPC (Publisher)
Chapter 7 Generating Functions

7.1 Introduction

We have seen that moments, factorial moments, and cumulants are important properties of probability distributions. We consider different generating functions of these quantities in this chapter. Section 7.2 addresses the probability generating function of a discrete random variable. The moment generating function is considered in Section 7.3, the factorial moment generating function in Section 7.4, the cumulant generating function in Section 7.5, and characteristic functions in Section 7.6. Standard discrete and continuous distributions will be considered in the next two chapters.

7.2 Probability Generating Function

Let $X$ be an integer-valued discrete random variable with $P(X = k) = p_k$, $k = 0, 1, 2, \ldots$, and $\sum_{k=0}^{\infty} p_k = 1$.

Definition 7.2.1: The function defined by

$$P_X(t) = E(t^X) = \sum_{k=0}^{\infty} p_k t^k, \qquad |t| < 1 \qquad (7.2.1)$$

is called the probability generating function (pgf) of $X$. Since $P_X(1) = 1$, the series (7.2.1) is uniformly and absolutely convergent in $|t| \le 1$ and $P_X(t)$ is a continuous function of $t$. Every pgf determines a unique probability distribution, that is, a unique set of probabilities $\{p_k\}$. The coefficient of $t^k$ in the expansion of $P_X(t)$ gives $P(X = k)$. The pgf is used for discrete variables only.

Example 7.2.1: $X$ is uniformly distributed over $0, 1, \ldots, n$ with $P(X = k) = \frac{1}{n+1}$, $k = 0, 1, \ldots, n$. Then

$$P_X(t) = \frac{1}{n+1} \sum_{k=0}^{n} t^k = \frac{1 - t^{n+1}}{(n+1)(1-t)}, \qquad |t| < 1.$$

Example 7.2.2: $X$ has a binomial distribution, $P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$, $0$ …
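As a sanity check on Example 7.2.1 (a sketch of ours using sympy, with the arbitrary choice $n = 4$), one can confirm that the finite geometric sum equals the stated closed form and that each coefficient of $t^k$ recovers $P(X = k) = \frac{1}{n+1}$:

```python
import sympy as sp

t = sp.symbols('t')
n = 4  # arbitrary small example

# pgf of the uniform distribution on {0, 1, ..., n}
P_sum = sum(t**k for k in range(n + 1)) / (n + 1)
P_closed = (1 - t**(n + 1)) / ((n + 1) * (1 - t))

# The two forms agree as functions of t.
assert sp.simplify(P_sum - P_closed) == 0

# Each coefficient of t^k recovers P(X = k) = 1/(n + 1).
for k in range(n + 1):
    assert sp.expand(P_sum).coeff(t, k) == sp.Rational(1, n + 1)
```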
- eBook - PDF
A Certain Uncertainty
Nature's Random Ways
- Mark P. Silverman (Author)
- 2014 (Publication Date)
- Cambridge University Press (Publisher)
Taking derivatives is almost always more easily done than doing summations or integrals. Besides the ease afforded in calculating moments, there are other advantages to working with an mgf. For one thing, the mgf of a probability distribution is unique, because a distribution is uniquely characterized by all its moments. Thus, if you do not know initially how some random variable is distributed – which is frequently the case in statistical physics – but you can by some means establish that its mgf takes the same form as the mgf of a known probability distribution, then you can be certain that the unknown distribution is identical to the recognized one. A second advantage is that generating functions provide an efficient means of determining the statistics of linear superpositions, such as sums and differences, of independent random variables. Such superpositions of random variables occur frequently in physics, since they may represent the outcome of a sequence of measurements or the difference of a signal and noise. An occasional drawback to the use of a moment-generating function is that not every distribution has one. In those instances – or generally, as an alternative method – one can work with the characteristic function (cf), which is equivalent to a Fourier transform of the probability density function (pdf) for a continuous distribution, and the probability generating function (pgf) for a discrete distribution. The mgf of a random variable $X$, symbolized by $g_X(t)$, where $t$ is a dummy variable eventually to be set equal to 0, is defined as the expectation of $e^{Xt}$. Thus, the mgf of a discrete or continuous random variable is calculated, respectively, from the relations …
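Silverman's point about linear superpositions can be made concrete with pgfs: for independent $X$ and $Y$, $E[t^{X+Y}] = E[t^X]\,E[t^Y]$, so the pgf of a sum is the product of the individual pgfs. The sketch below is our own illustration (not from the book), assuming sympy and the sum of two fair dice as the example.

```python
import sympy as sp

t = sp.symbols('t')

# pgf of one fair six-sided die
G_die = sum(t**k for k in range(1, 7)) / 6

# For independent variables, the pgf of the sum is the product of pgfs:
# E[t^(X+Y)] = E[t^X] * E[t^Y]
G_sum = sp.expand(G_die * G_die)

# Coefficient of t^7 gives P(X + Y = 7) = 6/36
print(G_sum.coeff(t, 7))   # 1/6
```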
- Charles Therrien, Murali Tummala (Authors)
- 2018 (Publication Date)
- CRC Press (Publisher)
Again, two interpretations are equally valid; the PGF can be thought of as either the expectation of $z^K$ or the $z$-transform of the PMF. The name probability generating function comes from the fact that if $f_K[k] = 0$ for $k < 0$, then

$$G_K(z) = f_K[0] + z f_K[1] + z^2 f_K[2] + z^3 f_K[3] + \cdots$$

From this expansion it is easy to show that

$$\frac{1}{k!} \left. \frac{d^k G_K(z)}{dz^k} \right|_{z=0} = f_K[k] = \Pr[K = k] \qquad (4.22)$$

Our interest in the PGF, however, is more in generating moments than it is in generating probabilities. For this, it is not necessary to require that $f_K[k] = 0$ for $k < 0$. Rather, we can deal with the full two-sided transform defined in (4.21). The method for generating moments can be seen clearly by using the first form of the definition in (4.21), i.e.,

$$G_K(z) = E\{z^K\}$$

The derivative of this expression is

$$\frac{dG_K(z)}{dz} = E\{K z^{K-1}\}$$

If this is evaluated at $z = 1$, the factor $z^{K-1}$ goes away and leaves the formula

$$E\{K\} = \left. \frac{dG_K(z)}{dz} \right|_{z=1} \qquad (4.23)$$

This is the mean of the discrete random variable. To generate higher-order moments, we repeat the process. For example,

$$\left. \frac{d^2 G_K(z)}{dz^2} \right|_{z=1} = E\{K(K-1) z^{K-2}\}\Big|_{z=1} = E\{K^2\} - E\{K\}$$

While this result is not as "clean" as the corresponding result for the MGF, we can use the last two equations to express the second moment as

$$E\{K^2\} = \left[ \frac{d^2 G_K(z)}{dz^2} + \frac{dG_K(z)}{dz} \right]_{z=1} \qquad (4.24)$$

Table 4.3 summarizes the results for computing the first four moments of a discrete random variable using the PGF.
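Equations (4.23) and (4.24) translate directly into symbolic code. The sketch below is our own, assuming sympy and a geometric distribution $P(K = k) = (1-q)q^k$ on $\{0, 1, 2, \ldots\}$ as the test case (neither appears in the excerpt).

```python
import sympy as sp

z, q = sp.symbols('z q', positive=True)

# PGF of a geometric distribution on {0, 1, 2, ...}: P(K = k) = (1 - q) * q^k
G = (1 - q) / (1 - q * z)

# (4.23): the mean is the first derivative evaluated at z = 1
mean = sp.diff(G, z).subs(z, 1)
print(sp.simplify(mean))            # q / (1 - q)

# (4.24): E{K^2} = G''(1) + G'(1)
second_moment = (sp.diff(G, z, 2) + sp.diff(G, z)).subs(z, 1)
print(sp.simplify(second_moment))   # q*(q + 1) / (1 - q)**2

# Variance as a consistency check: q / (1 - q)**2
print(sp.simplify(second_moment - mean**2))
```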
- eBook - PDF
- Kushwaha, K.S. (Authors)
- 2021 (Publication Date)
- NEW INDIA PUBLISHING AGENCY (NIPA) (Publisher)
Generating Functions, Law of Large Numbers and Central Limit Theorems (Chapter 8)

Section-A: Write True / False

Q.1 The moment generating function (m.g.f.) of a random variable $X$ (about the origin) having the probability function $f(x)$ is defined as $M_X(t) = E(e^{tX}) = \int_{-\infty}^{\infty} e^{tx} f(x)\,dx$ for a continuous probability distribution.

Q.2 The m.g.f. for a discrete probability function $f(x)$ is given as $M_X(t) = E(e^{tX}) = \sum_x e^{tx} f(x)$.

Q.3 In $M_X(t)$, $t$ is a real parameter and it is assumed that the function $E(e^{tX})$ is absolutely convergent for some positive number $h$ such that $-h < t < h$.

Q.4 We write $M_X(t)$ as $M_X(t) = \sum_r \mu_r' \frac{t^r}{r!}$, where $\mu_r' = E(X^r)$ is the $r$th moment about the origin.

Q.5 In the m.g.f. $M_X(t)$, the coefficient of $\frac{t^r}{r!}$ gives the $r$th moment of $X$ about the origin.

Q.6 $M_X(t)$ generates the moments of the random variable $X$; hence it is known as the moment generating function.

Q.7 From the m.g.f. we can obtain the $r$th moment about the origin (say $\mu_r'$) from the equation $\mu_r' = \left[ \frac{d^r}{dt^r} M_X(t) \right]_{t=0}$.

Q.8 In general, the m.g.f. of a r.v. $X$ about a point $x = a$ is defined as $M_X(t)$ (about $x = a$) $= E(e^{t(X-a)})$.

Q.9 A random variable $X$ may have no moments although its m.g.f. exists.

Q.10 For a discrete r.v. with probability function …, no moment exists but $M_X(t)$ exists for $t \le 0$.

Q.11 A random variable $X$ can have an m.g.f., but this m.g.f. does not generate moments.

Q.12 A discrete r.v. $X$ with probability function $P(X = 2^x) = $ …, $x = 0, 1, 2, \ldots$ has an m.g.f., but it does not generate moments.

Q.13 A r.v. $X$ can have all or some moments, but the m.g.f. does not exist except perhaps at one point.
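Q.5 and Q.7 can be verified mechanically: differentiating an m.g.f. $r$ times at $t = 0$ yields $E(X^r)$. Here is a minimal sketch of ours, assuming sympy and the standard normal m.g.f. $M(t) = e^{t^2/2}$ as the example (not taken from the question set).

```python
import sympy as sp

t = sp.symbols('t')

# m.g.f. of a standard normal variable: M(t) = exp(t^2 / 2)
M = sp.exp(t**2 / 2)

# Q.7: the r-th moment about the origin is the r-th derivative at t = 0
for r in range(1, 5):
    mu_r = sp.diff(M, t, r).subs(t, 0)
    print(f"E(X^{r}) = {mu_r}")   # 0, 1, 0, 3
```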
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.