Mathematics
Combining Random Variables
Combining random variables involves finding the probability distribution of a new random variable created by combining two or more existing random variables. This can be done through operations such as addition, subtraction, multiplication, or division. The resulting distribution is determined by the specific method of combination and the properties of the original random variables.
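For instance, the distribution of a new variable built from two independent discrete random variables can be tabulated exhaustively. The sketch below is an illustration, not taken from any of the excerpts; the helper name `combine` is our own:

```python
from itertools import product
from fractions import Fraction

# Distribution of one fair six-sided die: value -> probability.
die = {v: Fraction(1, 6) for v in range(1, 7)}

def combine(dist_x, dist_y, op):
    """Distribution of op(X, Y) for independent X and Y."""
    out = {}
    for (x, px), (y, py) in product(dist_x.items(), dist_y.items()):
        z = op(x, y)
        out[z] = out.get(z, 0) + px * py
    return out

# Sum of two dice: P(Z = 7) = 6/36 = 1/6, the most likely total.
total = combine(die, die, lambda x, y: x + y)
print(total[7])             # 1/6
print(sum(total.values()))  # 1 (probabilities sum to one)
```

For a sum, this brute-force tabulation is exactly the discrete convolution of the two distributions, the operation the Miller excerpt below refers to; swapping the `op` argument handles differences, products, maxima, and so on.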
Written by Perlego with AI-assistance
6 Key excerpts on "Combining Random Variables"
- eBook - PDF
Probability Models in Operations Research
- C. Richard Cassady, Joel A. Nachlas(Authors)
- 2008(Publication Date)
- CRC Press(Publisher)
3 Analysis of Multiple Random Variables

In Chapter 2, we considered issues related to a single random variable defined on a random experiment. In some cases, we are interested in more than one random variable. For example, a manufactured product may have more than one measurable quality characteristic. In this chapter, we consider situations in which there are two random variables of interest. However, all of the concepts explored in this chapter can be extended to more than two random variables.

3.1 Two Random Variables

As with a single random variable, the study of two random variables may begin with the definition of a cumulative distribution function. Consider two random variables, X and Y, defined on a single random experiment. In other words, each outcome of the random experiment has an associated value of X and Y. If F_{X,Y} is a function such that

F_{X,Y}(x, y) = Pr(X ≤ x, Y ≤ y)

for all real x and all real y, then F_{X,Y} is referred to as the joint cumulative distribution function (joint CDF) of X and Y. The function F_{X,Y} is nondecreasing in x, nondecreasing in y, right-continuous with respect to x, and right-continuous with respect to y.

We can use the joint CDF to determine the individual (marginal) CDFs for the two random variables. Consider two random variables, X and Y, defined on a single random experiment. If F_{X,Y} denotes the joint CDF of X and Y, F_X denotes the marginal cumulative distribution function of X, and F_Y denotes the marginal cumulative distribution function of Y, then

F_X(x) = lim_{y→∞} F_{X,Y}(x, y)  and  F_Y(y) = lim_{x→∞} F_{X,Y}(x, y)

3.1.1 Two Discrete Random Variables

As with the study of single random variables, the analysis of multiple random variables is simplified by classifying the random variables as discrete.
- V. S. Pugachev(Author)
- 2014(Publication Date)
- Pergamon(Publisher)
In Section 1.2.1 an intuitive definition of a random variable was given based on experimentally observable facts, and it was shown that with every random variable may be connected some events: its occurrences in different sets. For studying random variables it is necessary that the probabilities be determined for some set of such events, i.e. that this set of events belongs to the field of events δ connected with a trial. Furthermore, it is expedient to require that this set of events be itself a field of events (a subfield of the field δ). Thus we come to the following definition of a random variable.

A random variable is a variable which assumes, as a result of a trial, one and only one of a set of possible values, and with which is connected some field of events representing its occurrences in given sets, contained in the main field of events δ.

2.1.2 Scalar and vector random variables
Random variables may be both scalar and vector. In correspondence with the general definition of a vector, we shall call a vector random variable, or a random vector, any ordered set of scalar random variables. Thus, for instance, an n-dimensional random vector X is a set of n scalar random variables X_1, …, X_n. These random variables X_1, …, X_n are called the components of the random vector X.

In the general case the components of a random vector may be complex random variables (assuming complex numerical values as a result of a trial). But we may always get rid of complex variables by replacing every complex variable by a pair of real variables, namely by its real and imaginary parts. Thus an n-dimensional vector with complex components may always be considered a 2n-dimensional vector with real components. However, this is not always profitable. In many problems it is more convenient to consider complex random variables. Later on, for brevity, we shall call a vector with complex components a complex vector and a vector with real components a real vector. Instead of a random vector we may evidently consider a random point
- John A. Gubner(Author)
- 2006(Publication Date)
- Cambridge University Press(Publisher)
7 Bivariate random variables

The main focus of this chapter is the study of pairs of continuous random variables that are not independent. In particular, conditional probability and conditional expectation, along with the corresponding laws of total probability and substitution, are studied. These tools are used to compute probabilities involving the output of systems with two (and sometimes three or more) random inputs.

7.1 Joint and marginal probabilities

Consider the following functions of two random variables X and Y: X + Y, XY, max(X, Y), and min(X, Y). For example, in a telephone channel the signal X is corrupted by additive noise Y. In a wireless channel, the signal X is corrupted by fading (multiplicative noise). If X and Y are the traffic rates at two different routers of an Internet service provider, it is desirable to have these rates less than the router capacity, say u; i.e., we want max(X, Y) ≤ u. If X and Y are sensor voltages, we may want to trigger an alarm if at least one of the sensor voltages falls below a threshold v; e.g., if min(X, Y) ≤ v.

We now show that the cdfs of these four functions of X and Y can be expressed in the form P((X, Y) ∈ A) for various sets A ⊂ IR². We then argue that such probabilities can be computed in terms of the joint cumulative distribution function to be defined later in the section. Before proceeding, you should re-work Problem 6 in Chapter 1.

Example 7.1 (signal in additive noise). A random signal X is transmitted over a channel subject to additive noise Y. The received signal is Z = X + Y. Express the cdf of Z in the form P((X, Y) ∈ A_z) for some set A_z.

Solution. Write F_Z(z) = P(Z ≤ z) = P(X + Y ≤ z) = P((X, Y) ∈ A_z), where A_z := {(x, y) : x + y ≤ z}. Since x + y ≤ z if and only if y ≤ −x + z, it is easy to see that A_z is the shaded region in Figure 7.1.
- eBook - PDF
The Probability Lifesaver
All the Tools You Need to Understand Chance
- Steven J. Miller(Author)
- 2017(Publication Date)
- Princeton University Press(Publisher)
The purpose of this chapter and the next few chapters is to concentrate on some similarities among all the different random variables. In particular, there are some tools and techniques which can be fruitfully applied to understand all of them. If you can master these methods, you can analyze almost any random variable. We’ll concentrate on five items: expectation and moments (which lead to concepts such as the mean, standard deviation, and variance) in this chapter, convolutions (which allow us to combine independent random variables) and changing variables (which allow us to pass from knowledge of one random variable to another) in Chapter 10, and differentiating identities (which often facilitate finding the mean, variance, and other moments) in later chapters.

The later chapters on specific random variables all follow the same pattern: we’ll choose a probability density function and then study the associated random variable. The calculations are similar from chapter to chapter; the biggest change is the difficulty in doing the integrals or sums. This ranges from very easy (in the case of the uniform distribution) to impossible (in the case of normal distributions). Sadly, the latter situation is more common. It’s very rare to be able to evaluate integrals in a nice, closed form expression, and sums are typically worse! In practice we must resort to numerical approximations or series expansions. While we can get whatever accuracy we need in general, this is a major problem; we’ll talk more about this at great length later.

Before delving into these special distributions, we’re going to invest some time in learning some general tools to study continuous probability distributions. Some of these we’ve already seen, others we’ll see in much greater detail later. Our first tool is that of expected values and moments. We’ll study continuous and discrete random variables at
- eBook - ePub
Biometry for Forestry and Environmental Data
With Examples in R
- Lauri Mehtätalo, Juha Lappi(Authors)
- 2020(Publication Date)
- Chapman and Hall/CRC(Publisher)
2 Random Variables

2.1 Introduction to random variables

Random variables are variables that can exhibit different values depending on the outcome of a random process. The value assigned to a random variable due to a specific outcome of an experiment is called a realization. The random variable itself cannot be observed, but the realized value can be. However, a realization of a random variable does not contain all the information about the properties of the random variable. Furthermore, several realizations include more information than just a single one. A variable that is not random is fixed.

Terms such as measurement and observation are often used synonymously with realization. However, ‘observation’ can also mean the unit or the subject from which we make measurements, and we can measure several variables for each unit. Measurements of a given property of a unit are called variables. For example, trees of a certain area may be the units selected for measurements of diameter, height, species, volume and biomass. Some quantities cannot be measured for practical or theoretical reasons, and they are called latent variables.

Whenever the distinction between the random variable and the realized/observed value is explicitly shown, a common practice is to denote the random variable by an uppercase letter and the realized value by a lowercase letter. For example, X = x and Y = 2 means that the values x and 2 were assigned to random variables X and Y, respectively, by some random processes.

2.1.1 Sources of randomness

An intuitive reason for the randomness may be sampling: the unit was selected from a population of units randomly, and therefore any characteristic that one observes from that particular unit is affected by the unit selected. Another way of thinking is to assume that there is a certain, usually unknown random process, which generated the value for the unit of interest.
- eBook - PDF
- Joseph W. Goodman(Author)
- 2015(Publication Date)
- Wiley(Publisher)
2 RANDOM VARIABLES

Since this book deals primarily with statistical problems in optics, it is essential that we start with a clear understanding of the mathematical methods used to analyze random or statistical phenomena. We shall assume at the start that the reader has been exposed previously to at least some of the basic elements of probability theory. The purpose of this chapter is to provide a review of the most important material, establish notation, and present a few specific results that will be useful in later applications of the theory to optics. The emphasis is not on mathematical rigor but rather on physical plausibility. For more rigorous treatment of the theory of probability, the reader may consult various texts on statistics (see, e.g., Refs. [162] and [53]). In addition, there are many excellent engineering-oriented books that discuss the theory of random variables and random processes (see, e.g., [148], [159] and [195]).

2.1 DEFINITIONS OF PROBABILITY AND RANDOM VARIABLES

By a random experiment, we mean an experiment with an outcome that cannot be predicted in advance. Let the collection of possible outcomes be represented by the set of events {A}. For example, if the experiment consists of tossing two coins side by side, the possible “elementary events” are HH, HT, TH, and TT, where H indicates “heads” and T denotes “tails.” However, the set {A} contains more than four elements, since events such as “at least one head occurs in two tosses” (HH or HT or TH) are included. If A1 and A2 are any two events, then the set {A} must also include A1 and A2, A1 or A2, not A1, and not A2. In this way, the complete set {A} is derived from the underlying elementary events.
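The event set Goodman describes can be made concrete for the two-coin experiment. The sketch below is our own illustration (not from the book): it enumerates every subset of the four elementary outcomes, giving the 2⁴ = 16 events, and checks that a compound event and its complement both belong to the set:

```python
from itertools import combinations

# Elementary outcomes for two coins tossed side by side.
outcomes = {"HH", "HT", "TH", "TT"}

def all_events(omega):
    """All subsets of the outcome set omega, each frozen for set membership tests."""
    items = sorted(omega)
    return [frozenset(c)
            for r in range(len(items) + 1)
            for c in combinations(items, r)]

events = all_events(outcomes)
print(len(events))  # 16

# "At least one head occurs in two tosses" = {HH, HT, TH} is an event ...
at_least_one_head = frozenset({"HH", "HT", "TH"})
assert at_least_one_head in events

# ... and so is its complement "no heads" = {TT}, as closure requires.
assert frozenset(outcomes - at_least_one_head) in events
```

Enumerating the full power set is only feasible for small experiments, but it makes visible why {A} contains more than the four elementary events.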
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.