Cumulative Distribution Function
A Cumulative Distribution Function (CDF) gives the probability that a random variable X takes a value less than or equal to a given number x, written F(x) = P(X ≤ x). It provides a complete description of the probability distribution of a random variable, and it is a fundamental concept in probability theory and statistics used to analyze and interpret data.
Written by Perlego with AI-assistance
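To make the definition above concrete, here is a minimal sketch (added for this page, not drawn from any excerpt below; it assumes Python with NumPy and SciPy installed) that compares an empirical CDF built from random draws with the exact CDF of the same distribution.

```python
# Minimal sketch: the CDF F(x) = P(X <= x), evaluated two ways.
# Assumes NumPy and SciPy are available; not taken from any of the excerpts below.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=10_000)  # draws from a standard normal

def empirical_cdf(data, x):
    """Fraction of observations less than or equal to x."""
    return np.mean(data <= x)

for x in (-1.0, 0.0, 1.96):
    print(f"x = {x:5.2f}  empirical F(x) = {empirical_cdf(samples, x):.3f}  "
          f"exact F(x) = {norm.cdf(x):.3f}")
```

With this many draws the empirical and exact values usually agree to about two decimal places.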
Key excerpts on "Cumulative Distribution Function"
- eBook - ePub
Probability and Random Processes
With Applications to Signal Processing and Communications
- Scott Miller, Donald Childers(Authors)
- 2004(Publication Date)
- Academic Press(Publisher)
See Chapter 12, Simulation Techniques, for more details on how computer-generated random numbers work. For now, consider the limiting case in which the number of possible values becomes infinite, so that the random variable can truly fall anywhere in the interval [0, 1). One curious result of passing to the limit is that now

Pr(X = x) = 0 for every point x in [0, 1).   (3.2)

That is, each point has zero probability of occurring. Yet, something has to occur! This problem is common to continuous random variables, and it is clear that the probability mass function is not a suitable description for such a random variable. The next sections develop two alternative descriptions for continuous random variables, which will be used extensively throughout the rest of the text.

3.1 The Cumulative Distribution Function
Since a continuous random variable will typically have a zero probability of taking on a specific value, we avoid talking about such probabilities. Instead, events of the form {X ≤ x} can be considered.

DEFINITION 3.1: The Cumulative Distribution Function (CDF) of a random variable, X, is

F_X(x) = Pr(X ≤ x).   (3.3)

From this definition, several properties of the CDF can be inferred. First, since the CDF is a probability, it must take on values between 0 and 1. Since random variables are real-valued, it is easy to conclude that F_X(−∞) = 0 and F_X(∞) = 1. That is, a real number cannot be less than −∞ and must be less than ∞. Next, if we consider two fixed values, x_1 and x_2, such that x_1 < x_2, then the event {X ≤ x_1} is a subset of {X ≤ x_2}. Hence, F_X(x_1) ≤ F_X(x_2). This implies that the CDF is a monotonic nondecreasing function. Also, we can break the event {X ≤ x_2} into the union of two mutually exclusive events, {X ≤ x_2} = {X ≤ x_1} ∪ {x_1 < X ≤ x_2}. Hence, F_X(x_2) = F_X(x_1) + Pr(x_1 < X ≤ x_2) or, equivalently, Pr(x_1 < X ≤ x_2) = F_X(x_2) − F_X(x_1).
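The properties derived in this excerpt can be checked numerically. The following sketch is an added illustration rather than the authors' code; it assumes SciPy's standard normal as a stand-in for X and verifies that the CDF lies in [0, 1], is nondecreasing, and satisfies Pr(x1 < X ≤ x2) = F_X(x2) − F_X(x1).

```python
# Numerical check of the CDF properties from Definition 3.1, using a standard
# normal X as a stand-in (illustrative choice, not from the excerpt).
import numpy as np
from scipy.stats import norm

xs = np.linspace(-6, 6, 1001)
F = norm.cdf(xs)

assert np.all((F >= 0) & (F <= 1))          # a CDF is a probability
assert np.all(np.diff(F) >= 0)              # monotonic nondecreasing
print("F(-6) ~", F[0], " F(6) ~", F[-1])    # approaches 0 and 1 at the extremes

# Pr(x1 < X <= x2) = F(x2) - F(x1), checked against simulation.
x1, x2 = -0.5, 1.2
exact = norm.cdf(x2) - norm.cdf(x1)
rng = np.random.default_rng(1)
draws = rng.standard_normal(200_000)
estimate = np.mean((draws > x1) & (draws <= x2))
print(f"exact {exact:.4f}  simulated {estimate:.4f}")
```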
- Douglas C. Montgomery, George C. Runger(Authors)
- 2020(Publication Date)
- Wiley(Publisher)
The Cumulative Distribution Function is defined for all real numbers.

EXAMPLE 4.3 Electric Current. For the copper current measurement in Example 4.1, the Cumulative Distribution Function of the random variable X consists of three expressions. If x < 4.9, f(x) = 0; therefore, F(x) = 0 for x < 4.9. Next, F(x) = ∫ from 4.9 to x of f(u) du = 5x − 24.5 for 4.9 ≤ x < 5.1. Finally, F(x) = ∫ from 4.9 to x of f(u) du = 1 for 5.1 ≤ x. Therefore,

F(x) = 0 for x < 4.9;  F(x) = 5x − 24.5 for 4.9 ≤ x < 5.1;  F(x) = 1 for x ≥ 5.1.

The plot of F(x) is shown in Figure 4.6 (Cumulative Distribution Function for Example 4.3). Notice that in the definition of F(x), any < can be changed to ≤ and vice versa. That is, in Example 4.3, F(x) can be defined as either 5x − 24.5 or 0 at the end-point x = 4.9, and F(x) can be defined as either 5x − 24.5 or 1 at the end-point x = 5.1. In other words, F(x) is a continuous function. For a discrete random variable, F(x) is not a continuous function. Sometimes a continuous random variable is defined as one that has a continuous Cumulative Distribution Function.

The probability density function of a continuous random variable can be determined from the Cumulative Distribution Function by differentiating. The fundamental theorem of calculus states that d/dx ∫ from −∞ to x of f(u) du = f(x). Probability Density Function from the Cumulative Distribution Function: given F(x), f(x) = dF(x)/dx as long as the derivative exists.

EXAMPLE 4.4 Reaction Time. The time until a chemical reaction is complete (in milliseconds) is approximated by the Cumulative Distribution Function F(x) = 0 for x < 0 and F(x) = 1 − e^(−0.01x) for 0 ≤ x. Determine the probability density function of X.
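The two worked examples translate directly into code. The sketch below (an added illustration assuming Python with NumPy; the closed forms come straight from the excerpt, and the density 0.01·e^(−0.01x) is simply the derivative of F in Example 4.4) evaluates the piecewise CDF of Example 4.3 and checks the differentiation step of Example 4.4 numerically.

```python
# Example 4.3: piecewise CDF of the copper current X, and
# Example 4.4: pdf recovered from F(x) = 1 - exp(-0.01 x) by differentiation.
import numpy as np

def F_current(x):
    """CDF of Example 4.3: 0 below 4.9, 5x - 24.5 on [4.9, 5.1), 1 at and above 5.1."""
    return np.clip(5.0 * np.asarray(x, dtype=float) - 24.5, 0.0, 1.0)

def F_reaction(x):
    """CDF of Example 4.4 (x in milliseconds)."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, 0.0, 1.0 - np.exp(-0.01 * x))

def f_reaction(x):
    """pdf obtained as dF/dx: 0.01 * exp(-0.01 x) for x >= 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, 0.0, 0.01 * np.exp(-0.01 * x))

print(F_current([4.8, 5.0, 5.2]))   # expect 0.0, 0.5, 1.0

# Sanity check: a central-difference derivative of F_reaction matches f_reaction.
x = np.linspace(1, 500, 5)
h = 1e-5
print(np.allclose((F_reaction(x + h) - F_reaction(x - h)) / (2 * h), f_reaction(x)))
```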
- eBook - PDF
- Matthew A. Carlton, Jay L. Devore(Authors)
- 2020(Publication Date)
- Wiley(Publisher)
5.2 The Cumulative Distribution Function and Percentiles

DEFINITION. The Cumulative Distribution Function (cdf) F(x) for a continuous rv X is defined for every number x by F(x) = P(X ≤ x) = ∫ from −∞ to x of f(y) dy. For each x, F(x) is the area under the density curve to the left of x. This is illustrated in Figure 5.5 (graphs of a PDF and its associated CDF), where F(x) increases smoothly as x increases.

EXAMPLE 5.6. Let X, the thickness of a membrane, have a uniform distribution on [A, B]. A graph of the density function is shown in Figure 5.6 (the PDF for a uniform distribution, equal to 1/(B − A) on [A, B]). For x < A, F(x) = 0, since there is no area under the graph of the density function to the left of such an x. For x ≥ B, F(x) = 1, since all the area is accumulated to the left of such an x. Finally, for A ≤ x < B, F(x) = ∫ from −∞ to x of f(y) dy = ∫ from A to x of 1/(B − A) dy = (x − A)/(B − A). The entire cdf is

F(x) = 0 for x < A;  F(x) = (x − A)/(B − A) for A ≤ x < B;  F(x) = 1 for x ≥ B.

The graph of this cdf appears in Figure 5.7 (the CDF for a uniform distribution).

Using F(x) to Compute Probabilities. The importance of the cdf here, just as for discrete rvs, is that once F(x) is available either from a formula or a table, probabilities of various intervals can be easily computed.

PROPOSITION 5.1. Let X be a continuous rv with pdf f(x) and cdf F(x). Then for any number a, P(X > a) = 1 − F(a), and for any two numbers a and b with a < b, P(a ≤ X ≤ b) = F(b) − F(a). Figure 5.8 illustrates the second part of this proposition: the desired probability is the shaded area under the density curve between a and b, and it equals the difference between the two shaded cumulative areas.
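As a companion to Example 5.6 and Proposition 5.1, the sketch below (an added illustration assuming SciPy; A = 2 and B = 5 are arbitrary choices, not values from the book) evaluates the uniform cdf in closed form and uses it to compute interval probabilities.

```python
# The uniform [A, B] cdf of Example 5.6 and the interval rule of Proposition 5.1.
# A = 2, B = 5 are arbitrary illustrative values, not from the excerpt.
import numpy as np
from scipy.stats import uniform

A, B = 2.0, 5.0
X = uniform(loc=A, scale=B - A)          # SciPy parameterizes by loc and scale

def F(x):
    """Closed-form cdf from the excerpt: 0, then (x - A)/(B - A), then 1."""
    return np.clip((np.asarray(x, dtype=float) - A) / (B - A), 0.0, 1.0)

xs = np.linspace(0, 7, 15)
print(np.allclose(F(xs), X.cdf(xs)))     # the two agree everywhere

# Proposition 5.1: P(a <= X <= b) = F(b) - F(a), and P(X > a) = 1 - F(a).
a, b = 2.5, 4.0
print(F(b) - F(a))                       # 0.5
print(1.0 - F(a))                        # ~0.8333
```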
- eBook - PDF
Probability and Stochastic Processes
A Friendly Introduction for Electrical and Computer Engineers
- Roy D. Yates, David J. Goodman(Authors)
- 2013(Publication Date)
- Wiley(Publisher)
3.4 Cumulative Distribution Function (CDF)

Keep in mind that at the discontinuities x = 0, x = 1, and x = 2, the values of F_X(x) are the upper values: F_X(0) = 1/4, F_X(1) = 3/4, and F_X(2) = 1. Math texts call this the right-hand limit of F_X(x). Consider any finite random variable X with all elements of S_X between x_min and x_max. For this random variable, the numerical specification of the CDF begins with F_X(x) = 0 for x < x_min, and ends with F_X(x) = 1 for x ≥ x_max. Like the statement "P_X(x) = 0 otherwise," the description of the CDF is incomplete without these two statements. The next example displays the CDF of an infinite discrete random variable.

Example 3.22. In Example 3.9, let the probability that a circuit is rejected equal p = 1/4. The PMF of Y, the number of tests up to and including the first reject, is the geometric (1/4) random variable with PMF P_Y(y) = (1/4)(3/4)^(y−1) for y = 1, 2, …, and 0 otherwise. (3.29) What is the CDF of Y?

Random variable Y has nonzero probabilities for all positive integers. For any integer n ≥ 1, the CDF is F_Y(n) = Σ from j=1 to n of P_Y(j) = Σ from j=1 to n of (1/4)(3/4)^(j−1). (3.30) Equation (3.30) is a geometric series. Familiarity with the geometric series is essential for calculating probabilities involving geometric random variables. Appendix B summarizes the most important facts. In particular, Math Fact B.4 implies (1 − x) Σ from j=1 to n of x^(j−1) = 1 − x^n. Substituting x = 3/4, we obtain F_Y(n) = 1 − (3/4)^n. (3.31) The complete expression for the CDF of Y must show F_Y(y) for all integer and noninteger values of y. For an integer-valued random variable Y, we can do this in a simple way using the floor function ⌊y⌋, which is the largest integer less than or equal to y.
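Example 3.22 can be reproduced in a few lines. The sketch below (an added illustration assuming NumPy and SciPy, not the authors' code) sums the geometric PMF, confirms it matches the closed form F_Y(n) = 1 − (3/4)^n, and uses the floor function for a non-integer argument.

```python
# CDF of the geometric(1/4) random variable Y from Example 3.22:
# partial sums of the PMF versus the closed form 1 - (3/4)**n.
import numpy as np
from scipy.stats import geom

p = 0.25
n = np.arange(1, 11)
pmf = p * (1 - p) ** (n - 1)                    # P_Y(y) = (1/4)(3/4)**(y-1)
cdf_by_sum = np.cumsum(pmf)                     # F_Y(n) = sum_{j=1}^{n} P_Y(j)
cdf_closed = 1 - (1 - p) ** n                   # F_Y(n) = 1 - (3/4)**n
print(np.allclose(cdf_by_sum, cdf_closed))      # True
print(np.allclose(cdf_closed, geom(p).cdf(n)))  # matches SciPy's geometric CDF

# For non-integer y, the CDF is constant between integers: F_Y(y) = 1 - (3/4)**floor(y).
y = 3.7
print(1 - (1 - p) ** np.floor(y))               # F_Y(3.7) = F_Y(3)
```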
- eBook - PDF
Computational Statistics in the Earth Sciences
With Applications in MATLAB
- Alan D. Chave(Author)
- 2017(Publication Date)
- Cambridge University Press(Publisher)
2 Statistical Concepts. 2.1 Overview. This chapter builds on the probability concepts of Chapter 1 to construct the theoretical foundation for computational data analysis. The key ideas that are introduced include the probability density function (pdf) for discrete, continuous, and mixed distributions; the Cumulative Distribution Function (cdf), which is the sum or integral of the pdf; the quantile function, which is the inverse of the cdf; and the characteristic function, which serves as an alternate pathway for computing the pdf and cdf. A discussion of the bivariate distribution extends the prior univariate descriptions to two variables, with the full multivariate case deferred until Chapters 10 and 11 (although there will be some slight cheating in Chapter 3), and to independence of random variables (rvs). The formalism that enables transformation from one or more variables to another set (e.g., Cartesian to circular coordinates) is then described, and the distribution of the largest and smallest of a set of random variables is derived as an introduction to the order statistics that are covered in Chapter 4. A number of theoretical entities for location (e.g., the expected value), dispersion (e.g., the variance), shape (e.g., the skewness), direction (e.g., the mean direction), and the covariance between two rvs are described as a counterpart to the sample entities that are presented in Chapter 4. The concept of conditional probability from Chapter 1 is extended to the expected value and variance, leading to the laws of total expectation, variance, and covariance. Finally, the chapter closes by extending the concept of inequality to stochastic variables, leading to convergence relations for rvs that pervade the remainder of the book.
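One relationship mentioned in this overview, that the quantile function is the inverse of the cdf, is easy to see in code. The book works in MATLAB; the short sketch below is an added Python illustration assuming SciPy's norm distribution object.

```python
# The quantile function (SciPy's ppf) is the inverse of the cdf:
# ppf(cdf(x)) returns x, and cdf(ppf(q)) returns q.
import numpy as np
from scipy.stats import norm

x = np.linspace(-3, 3, 7)
q = np.linspace(0.05, 0.95, 7)
print(np.allclose(norm.ppf(norm.cdf(x)), x))   # True: the quantile undoes the cdf
print(np.allclose(norm.cdf(norm.ppf(q)), q))   # True: the cdf undoes the quantile
print(norm.ppf(0.975))                         # ~1.96, the familiar 97.5th percentile
```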
- No longer available
- (Author)
- 2014(Publication Date)
- Learning Press(Publisher)
Chapter 2: Normal Distribution and Cumulative Distribution Function

Normal Distribution

[Figures: the probability density function (the red line is the standard normal distribution) and the Cumulative Distribution Function (colors match the density plot).]

Parameters: μ ∈ R (mean, location); σ² > 0 (variance, squared scale). Support: x ∈ R. Mean: μ. Median: μ. Mode: μ. Variance: σ². Skewness: 0. Excess kurtosis: 0.

In probability theory, the normal (or Gaussian) distribution is a continuous probability distribution that is often used as a first approximation to describe real-valued random variables that tend to cluster around a single mean value. The graph of the associated probability density function is "bell"-shaped and is known as the Gaussian function or bell curve,

f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²)),

where parameter μ is the mean (location of the peak) and σ² is the variance (the measure of the width of the distribution). The distribution with μ = 0 and σ² = 1 is called the standard normal. The normal distribution is considered the most "basic" continuous probability distribution due to its role in the central limit theorem, and is one of the first continuous distributions taught in elementary statistics classes. Specifically, by the central limit theorem, under certain conditions the sum of a number of random variables with finite means and variances approaches a normal distribution as the number of variables increases. For this reason, the normal distribution is commonly encountered in practice, and is used throughout statistics, natural sciences, and social sciences as a simple model for complex phenomena.
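The bell-curve formula and the central limit theorem described above can both be illustrated briefly. The sketch below is an added example (assuming NumPy and SciPy; the choice of 30 uniform summands is arbitrary) that evaluates the Gaussian pdf and compares the empirical CDF of standardized sums of uniforms with the standard normal CDF Φ.

```python
# The Gaussian pdf and a quick central-limit-theorem check: standardized sums
# of uniform random variables have a cdf close to the standard normal cdf.
import numpy as np
from scipy.stats import norm

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """f(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

print(np.isclose(gaussian_pdf(0.0), norm.pdf(0.0)))    # matches SciPy's normal pdf

# CLT illustration: sum n uniforms on [0, 1], standardize, compare with Phi.
rng = np.random.default_rng(42)
n, reps = 30, 100_000
sums = rng.uniform(size=(reps, n)).sum(axis=1)
z = (sums - n * 0.5) / np.sqrt(n / 12.0)               # uniform mean 1/2, variance 1/12
for x in (-1.0, 0.0, 1.0):
    print(f"P(Z <= {x:+.0f}): empirical {np.mean(z <= x):.3f}  Phi {norm.cdf(x):.3f}")
```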
- eBook - PDF
Random Vibrations
Analysis of Structural and Mechanical Systems
- Loren D. Lutes, Shahram Sarkani(Authors)
- 2004(Publication Date)
- Butterworth-Heinemann(Publisher)
Example 2.3 represents one very simple situation involving two random variables. In that particular case, Y is a function of X, so if one knows the value of X, then one knows exactly the value of Y. If one thinks of the (X, Y) plane, for this example, then all the possible outcomes lie on a simple (piecewise linear) curve. In other problems, there may be a less direct connection between the random variables of interest. For example, the set of possible values and/or the probability of any particular value for one random variable Y may depend on the value of another random variable X. As with a single random variable, the probabilities of two or more random variables can always be described by using a Cumulative Distribution Function. For two random variables X and Y, this can be written as

F_XY(u, v) ≡ P(X ≤ u, Y ≤ v),   (2.17)

in which the comma within the parentheses on the right-hand side of the expression represents the intersection operation. That is, the probability denoted is for the joint event that X ≤ u and Y ≤ v. The function F_XY(u, v) is defined on the two-dimensional space of all possible (X, Y) values, and it is called the joint Cumulative Distribution Function. When we generalize to more than two or three random variables, it will often be more convenient to use a vector notation. In particular, we will use an arrow over a symbol to indicate that the quantity involved is a vector (which may also be viewed as a matrix with only one column). Thus, we will write X⃗ = [X_1 X_2 ⋯ X_n]^T and u⃗ = [u_1 u_2 ⋯ u_n]^T, and use the notation

F_X⃗(u⃗) ≡ F_{X_1 X_2 ⋯ X_n}(u_1, u_2, …, u_n) ≡ P(∩ from j=1 to n of {X_j ≤ u_j})   (2.18)

for the general joint Cumulative Distribution Function of n random components.
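Definition (2.17) suggests a direct Monte Carlo estimate of a joint CDF. The sketch below is an added illustration, not the authors' code: it assumes NumPy/SciPy and, for the comparison line only, the extra assumption that X and Y are independent, in which case F_XY(u, v) factors into F_X(u)·F_Y(v).

```python
# Joint CDF F_XY(u, v) = P(X <= u, Y <= v), estimated by simulation and, for
# independent X and Y (an assumption added here, not stated in the excerpt),
# compared with the product of the marginal CDFs F_X(u) * F_Y(v).
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(7)
n = 200_000
X = rng.standard_normal(n)                     # X ~ N(0, 1)
Y = rng.exponential(scale=2.0, size=n)         # Y ~ Exponential(mean 2), independent of X

def joint_cdf_estimate(u, v):
    """Empirical P(X <= u, Y <= v): fraction of draws in the lower-left quadrant."""
    return np.mean((X <= u) & (Y <= v))

u, v = 0.5, 1.0
empirical = joint_cdf_estimate(u, v)
factored = norm.cdf(u) * expon(scale=2.0).cdf(v)   # valid because X and Y are independent
print(f"empirical {empirical:.4f}   F_X(u) * F_Y(v) {factored:.4f}")
```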
- eBook - PDF
An Introduction to the Advanced Theory and Practice of Nonparametric Econometrics
A Replicable Approach Using R
- Jeffrey S. Racine(Author)
- 2019(Publication Date)
- Cambridge University Press(Publisher)
Part I: Probability Functions, Probability Density Functions, and Their Cumulative Counterparts. Chapter 1: Discrete Probability and Cumulative Probability Functions.

While being shown a house to buy, Garp and his wife Helen witness a single-engine plane, presumably suffering catastrophic mechanical failure, plowing right into the side of the house. Garp takes this as a good sign – "The odds of another plane hitting this house are astronomical!" – and agrees right then and there to buy the house. (John Irving, The World According to Garp)

1.1 Overview. The first random variable typically encountered by students of basic statistics is known as a discrete random variable, after which they proceed to study continuous random variables. Discrete random variables do not always receive as much attention as continuous random variables receive, but in a nonparametric framework, the importance of their study should not be understated. Whether the discrete random variable is the number of times a single-engine plane crashes into a home or whether option "a", "b", or "c" was selected by a respondent on a questionnaire, it plays a fundamental role in statistical analysis. A discrete random variable is one that can take on a countable number of values. They come in many different flavours and go by a variety of names, including nominal (unordered) and ordinal (ordered) categorical variables. Examples would include the number of heads in three tosses of a coin, where the random variable takes on the values {0, 1, 2, 3}; an individual's employment status being classified as either "employed" or "unemployed" (i.e., an unordered categorical variable); or a response to a survey question recorded as one of "a", "b", or "c", where "a" indicates "most preferred" and "c" "least preferred" (i.e., an ordered categorical variable).
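The coin-toss example in this excerpt gives a small, complete discrete case. The sketch below (an added Python illustration assuming NumPy and SciPy) builds the cumulative probability function of the number of heads in three fair tosses as a running sum of the PMF and checks it against the Binomial(3, 1/2) CDF.

```python
# X = number of heads in three fair tosses, a discrete random variable on {0, 1, 2, 3}.
# Its cumulative probability function is the running sum of the PMF.
import numpy as np
from scipy.stats import binom

k = np.arange(4)
pmf = np.array([1, 3, 3, 1]) / 8.0                  # P(X = k) for a fair coin
cdf = np.cumsum(pmf)                                # F(k) = P(X <= k)
print(dict(zip(k.tolist(), cdf.tolist())))          # {0: 0.125, 1: 0.5, 2: 0.875, 3: 1.0}
print(np.allclose(cdf, binom(n=3, p=0.5).cdf(k)))   # matches Binomial(3, 1/2)
```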
- John A. Gubner(Author)
- 2006(Publication Date)
- Cambridge University Press(Publisher)
A random variable whose cdf is continuous but whose derivative is the zero function is said to be singular. Since both singular random variables and continuous random variables have continuous cdfs, in advanced texts, continuous random variables are sometimes called absolutely continuous.

Note 5. If X is a random variable defined on Ω, then μ(B) := P({ω ∈ Ω : X(ω) ∈ B}) satisfies the axioms of a probability measure on the Borel subsets of ℝ (Problem 4 in Chapter 2). (Also recall Note 1 and Problems 49 and 50 in Chapter 1 and Note 1 in Chapter 2.) Taking B = (−∞, x] shows that the cdf of X is F(x) = μ((−∞, x]). Thus, μ determines F. The converse is also true in the sense that if F is a right-continuous, nondecreasing function satisfying F(x) → 1 as x → ∞ and F(x) → 0 as x → −∞, then there is a unique probability measure μ on the Borel sets of ℝ such that μ((−∞, x]) = F(x) for all x ∈ ℝ. A complete proof of this fact is beyond the scope of this book, but here is a sketch of the main ideas. Given such a function F, for a < b, put μ((a, b]) := F(b) − F(a). For more general Borel sets B, we proceed as follows. Suppose we have a collection of intervals (a_i, b_i] such that B ⊂ ∪ from i=1 to ∞ of (a_i, b_i]. Such a collection is called a covering of intervals. Note that we always have the covering B ⊂ (−∞, ∞). We then define

μ(B) := inf Σ from i=1 to ∞ of [F(b_i) − F(a_i)],

where the infimum is over all coverings of B by intervals (a_i, b_i] (if b_i = ∞, it is understood that (a_i, b_i] means (a_i, ∞)). Uniqueness is a consequence of the fact that if two probability measures agree on intervals, then they agree on all the Borel sets. This fact follows from the π–λ theorem [3].

Problems 5.1: Continuous random variables. 1. Find the Cumulative Distribution Function F(x) of an exponential random variable X with parameter λ.
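Problem 5.1.1 above has the standard textbook answer F(x) = 1 − e^(−λx) for x ≥ 0. The sketch below (an added illustration assuming SciPy; λ = 1.5 is an arbitrary choice) recovers this by numerically integrating the exponential density and comparing with the closed form.

```python
# Sketch for Problem 5.1.1: for X ~ Exponential(lambda), integrating the density
# f(t) = lam * exp(-lam * t) from 0 to x gives the cdf F(x) = 1 - exp(-lam * x), x >= 0.
import numpy as np
from scipy.integrate import quad

lam = 1.5                                    # arbitrary illustrative rate parameter

def pdf(t):
    return lam * np.exp(-lam * t)

def cdf_closed(x):
    return 1.0 - np.exp(-lam * x) if x >= 0 else 0.0

for x in (0.2, 1.0, 3.0):
    integral, _ = quad(pdf, 0.0, x)          # F(x) = integral of f from 0 to x
    print(f"x = {x}:  integral {integral:.6f}   closed form {cdf_closed(x):.6f}")
```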
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.









