Mathematics

Transforming Random Variables

Transforming a random variable means applying a function to it, producing a new random variable whose probability distribution must be derived from that of the original. This allows the transformed variable to be analyzed in terms of its new distribution and properties. Common transformations include linear, logarithmic, and exponential transformations, each of which can provide valuable insight into the behavior of the random variable.

Written by Perlego with AI-assistance

8 Key excerpts on "Transforming Random Variables"

  • Book cover image for: Probability and Random Processes for Electrical and Computer Engineers
    • Charles Therrien, Murali Tummala(Authors)
    • 2018(Publication Date)
    • CRC Press
      (Publisher)
    3 Random Variables and Transformations

    Situations involving probability do not always deal strictly with events. Frequently there are real-valued measurements or observations associated with a random experiment. Such measurements or observations are represented by random variables. This chapter develops the necessary mathematical tools for the analysis of experiments involving random variables. It begins with discrete random variables, i.e., those random variables that take on only a discrete (but possibly countably infinite) set of possible values. Some common types of discrete random variables are described that are useful in practical applications. Moving from the discrete to the continuous, the chapter discusses random variables that can take on an uncountably infinite set of possible values and some common types of these random variables. The chapter also develops methods to deal with problems where one random variable is described in terms of another. This is the subject of "transformations." The chapter concludes with two important practical applications. The first involves the detection of a random signal in noise. This problem, which is fundamental to every radar, sonar, and communication system, can be developed using just the information in this chapter. The second application involves the classification of objects from a number of color measurements. Although the problem may seem unrelated to the detection problem, some of the underlying principles are identical.

    3.1 Discrete Random Variables

    Formally, a random variable is defined as a function X(·) that assigns a real number to each elementary event in the sample space. In other words, it is a mapping from the sample space to the real line (see Fig. 3.1).

    [Figure 3.1: Illustration of a random variable: a mapping X(s) = x from an outcome s in the sample space S to a point x on the real number line.]

    A random variable therefore takes on a given numerical value with some specified probability. A simple example is useful to make this abstract idea clearer.
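To make the mapping idea in Fig. 3.1 concrete, here is a toy sketch (ours, not the book's): a random variable X(s) that assigns to each two-coin outcome the number of heads, together with the probabilities it induces.

```python
# A toy illustration of a random variable as a mapping X(s) from
# sample-space outcomes s to real numbers (here: the count of heads).
import itertools
from fractions import Fraction

sample_space = list(itertools.product("HT", repeat=2))  # {HH, HT, TH, TT}

def X(s) -> int:
    """Map an elementary event s to a real number: the number of heads."""
    return s.count("H")

# P(X = x) under equally likely elementary events
for x in range(3):
    p = Fraction(sum(1 for s in sample_space if X(s) == x), len(sample_space))
    print(x, p)  # 0 -> 1/4, 1 -> 1/2, 2 -> 1/4
```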
  • Book cover image for: An Introduction to Probability and Statistical Inference
    • George G. Roussas(Author)
    • 2003(Publication Date)
    • Academic Press
      (Publisher)
    Chapter 6

    Transformation of Random Variables

    This chapter is devoted to transforming a given set of r.v.’s to another set of r.v.’s. The practical need for such transformations will become apparent by means of concrete examples to be cited and/or discussed. The chapter consists of five sections. In the first section, a single r.v. is transformed into another single r.v. In the following section, the number of available r.v.’s is at least two, and they are to be transformed into another set of r.v.’s of the same or smaller number. Two specific applications produce two new distributions, the t-distribution and the F-distribution, which are of great applicability in statistics. A brief account of specific kinds of transformations is given in the subsequent two sections, and the chapter is concluded with a section on order statistics.

    6.1. Transforming a Single Random Variable

    EXAMPLE 1
    Suppose that the r.v.’s X and Y represent the temperature in a certain locality measured in degrees Celsius and Fahrenheit, respectively. Then it is known that X and Y are related as follows: Y = (9/5)X + 32.
    This simple example illustrates the need for transforming a r.v. X into another r.v. Y, if Celsius degrees are to be transformed into Fahrenheit degrees.
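Example 1 is a linear (location-scale) transformation Y = aX + b, for which the density transforms as f_Y(y) = f_X((y − b)/a)/|a|. Below is a minimal numerical sketch (ours, not the book's), assuming SciPy and a hypothetical Celsius distribution X ~ N(20, 5²):

```python
# Sketch: density of Y = (9/5)X + 32 via the linear change-of-variables rule.
import numpy as np
from scipy import stats

a, b = 9 / 5, 32.0
X = stats.norm(loc=20.0, scale=5.0)  # hypothetical Celsius temperatures

# Change-of-variables density for Y = a*X + b (Fahrenheit)
y = np.linspace(0.0, 140.0, 701)
f_Y = X.pdf((y - b) / a) / abs(a)

# Monte Carlo check: transformed samples should follow f_Y
samples = a * X.rvs(size=100_000, random_state=1) + b
hist, edges = np.histogram(samples, bins=50, range=(0.0, 140.0), density=True)

print((f_Y * (y[1] - y[0])).sum())  # ~1.0: f_Y integrates to one
```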
    EXAMPLE 2
    As another example, let the r.v. X denote the velocity of a molecule of mass m. Then it is known that the kinetic energy of the molecule is a r.v. Y related to X in the following manner: Y = (1/2)mX².
    Thus, determining the distribution of the kinetic energy of the molecule involves transforming the r.v. X as indicated above.
    The formulation of the general problem is as follows: Let X be a r.v. of the continuous type with p.d.f. f_X, and let h be a real-valued function defined on ℜ. Define the r.v. Y by Y = h(X) and determine its p.d.f. f_Y.
    Under suitable regularity conditions, this problem can be resolved in two ways. One is to determine first the d.f. F_Y and then obtain f_Y by differentiation, and the other is to obtain f_Y directly.
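Both routes can be checked numerically. The following sketch is ours, not the book's, assuming NumPy/SciPy and the hypothetical choices X ~ exponential(1) and h(x) = x², which is monotone on the support x > 0:

```python
import numpy as np
from scipy import stats

h = lambda x: x**2            # the transformation Y = h(X)
h_inv = lambda y: np.sqrt(y)  # inverse of h on the support x > 0

y = np.linspace(0.2, 4.0, 400)

# Route 1: find the d.f. first, F_Y(y) = P(X**2 <= y) = F_X(sqrt(y)),
# then obtain f_Y by (numerical) differentiation.
F_Y = stats.expon.cdf(h_inv(y))
f_Y_route1 = np.gradient(F_Y, y)

# Route 2: obtain f_Y directly by change of variables,
# f_Y(y) = f_X(h_inv(y)) * |d h_inv/dy|, with d h_inv/dy = 1/(2 sqrt(y)).
f_Y_route2 = stats.expon.pdf(h_inv(y)) / (2.0 * np.sqrt(y))

# Small value: the two routes agree up to finite-difference error.
print(np.max(np.abs(f_Y_route1 - f_Y_route2)))
```

Route 1 works even when h is not monotone; route 2 is the change-of-variables shortcut available when it is.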
  • Book cover image for: Random Phenomena
    Fundamentals of Probability and Statistics for Engineers

    • Babatunde A. Ogunnaike(Author)
    • 2009(Publication Date)
    • CRC Press
      (Publisher)
    Chapter 6: Random Variable Transformations

    6.1 Introduction and Problem Definition
    6.2 Single Variable Transformations
        6.2.1 Discrete Case (A Practical Application)
        6.2.2 Continuous Case
        6.2.3 General Continuous Case
        6.2.4 Random Variable Sums (The Cumulative Distribution Function Approach; The Characteristic Function Approach)
    6.3 Bivariate Transformations
    6.4 General Multivariate Transformations
        6.4.1 Square Transformations
        6.4.2 Non-Square Transformations
        6.4.3 Non-Monotone Transformations
    6.5 Summary and Conclusions
    REVIEW QUESTIONS; EXERCISES; APPLICATION PROBLEMS

    "From a god to a bull! a heavy descension! it was Jove’s case. From a prince to a prentice! a low transformation! that shall be mine; for in every thing the purpose must weigh with the folly. Follow me, Ned."
    William Shakespeare (1564–1616), King Henry the Fourth

    Many problems of practical interest involve a random variable Y that is defined as a function of another random variable X, say according to Y = φ(X), so that the characteristics of the one arise directly from those of the other via the indicated transformation.
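For the discrete case listed in the chapter outline above, the pmf of Y = φ(X) is obtained by summing P(X = x) over the preimage {x : φ(x) = y}. A small sketch (ours; the pmf and φ are hypothetical choices):

```python
# Discrete transformation: accumulate probability mass over preimages of φ.
from collections import defaultdict

pmf_X = {-2: 0.2, -1: 0.1, 0: 0.3, 1: 0.1, 2: 0.3}  # hypothetical pmf of X
phi = lambda x: x**2                                 # a non-monotone φ

pmf_Y = defaultdict(float)
for x, p in pmf_X.items():
    pmf_Y[phi(x)] += p  # P(Y = y) = sum of P(X = x) over {x : phi(x) = y}

print(dict(pmf_Y))  # {4: 0.5, 1: 0.2, 0: 0.3}
```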
  • Book cover image for: Introduction to Probability
    5 Transforms and transformations

    The theme of this chapter is functions. The first section introduces the moment generating function, which is an example of a transform of a random variable. It is a tool for working with random variables that sometimes can be more convenient than probability mass functions or density functions. The second section of the chapter shows how to derive the probability distribution of a function, that is, a transformation, of a random variable.

    5.1 Moment generating function

    Up to now we have described distributions of random variables with probability mass functions, probability density functions, and cumulative distribution functions. The moment generating function (m.g.f.) offers an alternative way to characterize the distribution. Furthermore, as the name suggests, it can also be used to compute moments of a random variable.

    Definition 5.1. The moment generating function of a random variable X is defined by M(t) = E(e^{tX}). It is a function of the real variable t.

    The moment generating function is analogous to the Fourier and Laplace transforms commonly used in engineering and applied mathematics. As with other notation, we write M_X(t) if we wish to distinguish the random variable X. We begin with two examples, first a discrete and then a continuous random variable.

    Example 5.2. Let X be a discrete random variable with probability mass function P(X = −1) = 1/3, P(X = 4) = 1/6, and P(X = 9) = 1/2. Find the moment generating function of X. The calculation is an application of formula (3.24) for the expectation of g(X) with the function g(x) = e^{tx}. The function g contains a parameter t that can vary:

    M_X(t) = E[e^{tX}] = Σ_k e^{tk} P(X = k) = (1/3)e^{−t} + (1/6)e^{4t} + (1/2)e^{9t}.

    Example 5.3. Let X be a continuous random variable with probability density function f(x) = e^x/(e − 1) for 0 < x < 1, and f(x) = 0 otherwise.
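As a quick companion to Example 5.2, one can verify symbolically that derivatives of M_X at t = 0 reproduce the moments of X. This check is ours, not the book's, and assumes SymPy:

```python
# Verify M'(0) = E[X] and M''(0) = E[X^2] for the pmf of Example 5.2.
import sympy as sp

t = sp.symbols('t')
pmf = {-1: sp.Rational(1, 3), 4: sp.Rational(1, 6), 9: sp.Rational(1, 2)}

# M_X(t) = E[e^{tX}] = sum over k of e^{tk} P(X = k)
M = sum(p * sp.exp(t * k) for k, p in pmf.items())

EX = sum(p * k for k, p in pmf.items())        # E[X] directly from the pmf
EX2 = sum(p * k**2 for k, p in pmf.items())    # E[X^2] directly from the pmf

print(sp.simplify(sp.diff(M, t).subs(t, 0) - EX))     # 0: M'(0) = E[X]
print(sp.simplify(sp.diff(M, t, 2).subs(t, 0) - EX2)) # 0: M''(0) = E[X^2]
```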
  • Book cover image for: Advanced Probability Theory for Biomedical Engineers
    • John D. Enderle, David C. Farden, Daniel J. Krause(Authors)
    • 2022(Publication Date)
    • Springer
      (Publisher)
    CHAPTER 6: Transformations of Random Variables

    Functions of random variables occur frequently in many applications of probability theory. For example, a full wave rectifier circuit produces an output that is the absolute value of the input. The input/output characteristics of many physical devices can be represented by a nonlinear memoryless transformation of the input. The primary subjects of this chapter are methods for determining the probability distribution of a function of a random variable. We first evaluate the probability distribution of a function of one random variable using the CDF and then the PDF. Next, the probability distribution for a single random variable is determined from a function of two random variables using the CDF. Then, the joint probability distribution is found from a function of two random variables using the joint PDF and the CDF.

    6.1 UNIVARIATE CDF TECHNIQUE

    This section introduces a method of computing the probability distribution of a function of a random variable using the CDF. We will refer to this method as the CDF technique. The CDF technique is applicable for all functions z = g(x) and for all types of continuous, discrete, and mixed random variables. Of course, we require that the function z : S → R*, with z(ζ) = g(x(ζ)), is a random variable on the probability space (S, ℱ, P); consequently, we require z to be a measurable function on the measurable space (S, ℱ) and P(z(ζ) ∈ {−∞, +∞}) = 0. The ease of use of the CDF technique depends critically on the functional form of g(x). To make the CDF technique easier to understand, we start the discussion of computing the probability distribution of z = g(x) with the simplest case, a continuous monotonic function g.
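The full-wave rectifier mentioned above, Z = |X|, is a classic target for the CDF technique: F_Z(z) = P(−z ≤ X ≤ z) = F_X(z) − F_X(−z) for z ≥ 0, hence f_Z(z) = f_X(z) + f_X(−z). A minimal sketch (ours, not the book's; X ~ N(0,1) is an assumed input, and SciPy is assumed):

```python
# CDF technique for the full-wave rectifier Z = |X| with X standard normal.
import numpy as np
from scipy import stats

z = np.linspace(0.0, 4.0, 401)
F_Z = stats.norm.cdf(z) - stats.norm.cdf(-z)  # CDF technique
f_Z = stats.norm.pdf(z) + stats.norm.pdf(-z)  # derivative: folded-normal density

# Sanity check: numerical derivative of F_Z should match f_Z up to grid error.
print(np.max(np.abs(np.gradient(F_Z, z) - f_Z)))
```

Because |x| is not monotone, the preimage of (−∞, z] is an interval symmetric about zero, which is exactly the situation the CDF technique handles directly.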
  • Book cover image for: Elements of Stochastic Dynamics
    • Guo-Qiang Cai, Wei-Qiu Zhu(Authors)
    • 2016(Publication Date)
    • WSPC
      (Publisher)
    We have the following definition: A random variable X(ω), ω ∈ Ω, is a function defined on a sample space Ω such that for every real number x there exists a probability that X(ω) ≤ x, denoted by Prob[ω : X(ω) ≤ x].
    For simplicity, the argument ω in X(ω) of a random variable is usually omitted, and the probability can be written as Prob[X ≤ x].
    There are two types of random variables: discrete and continuous. A discrete random variable takes only a countable number, finite or infinite, of distinct values. On the other hand, the sample space of a continuous random variable is an uncountable continuous space.
    A random variable can be either a single-valued quantity or an n-dimensional vector described by n values. Unless specified otherwise, we assume implicitly that a random variable is a single-valued quantity in this book.

    2.2 Probability Distribution

    Since a random variable is uncertain, we can describe it in terms of a probability measure. For a discrete random variable, the simplest and most direct way is to specify the probability that it takes each possible discrete value, written as P_X(x).
    In the notation P_X(x), X is the random variable, and x is the state variable, i.e., the possible value of X. The convention of using a capital letter to denote a random quantity and a lower-case letter to represent its corresponding state variable will be followed throughout the book.
    Another probability measure used to describe a random variable is known as the probability distribution function, denoted by F_X(x).
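The two measures just named, the probability P_X(x) of each value and the distribution function F_X(x) = Prob[X ≤ x], are easy to illustrate in code. A toy sketch (ours, not the book's) for a fair six-sided die:

```python
# pmf P_X(x) and distribution function F_X(x) for a fair die.
pmf = {x: 1 / 6 for x in range(1, 7)}  # P_X(x), x = 1, ..., 6

def F(x: float) -> float:
    """Distribution function F_X(x) = sum of P_X(k) over all k <= x."""
    return sum(p for k, p in pmf.items() if k <= x)

print(F(3))    # 0.5
print(F(6.5))  # 1.0
```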
  • Book cover image for: Statistical Optics
    • Joseph W. Goodman(Author)
    • 2015(Publication Date)
    • Wiley
      (Publisher)
    2 RANDOM VARIABLES

    Since this book deals primarily with statistical problems in optics, it is essential that we start with a clear understanding of the mathematical methods used to analyze random or statistical phenomena. We shall assume at the start that the reader has been exposed previously to at least some of the basic elements of probability theory. The purpose of this chapter is to provide a review of the most important material, establish notation, and present a few specific results that will be useful in later applications of the theory to optics. The emphasis is not on mathematical rigor but rather on physical plausibility. For more rigorous treatment of the theory of probability, the reader may consult various texts on statistics (see, e.g., Refs. [162] and [53]). In addition, there are many excellent engineering-oriented books that discuss the theory of random variables and random processes (see, e.g., [148], [159] and [195]).

    2.1 DEFINITIONS OF PROBABILITY AND RANDOM VARIABLES

    By a random experiment, we mean an experiment with an outcome that cannot be predicted in advance. Let the collection of possible outcomes be represented by the set of events {A}. For example, if the experiment consists of tossing two coins side by side, the possible "elementary events" are HH, HT, TH, and TT, where H indicates "heads" and T denotes "tails." However, the set {A} contains more than four elements, since events such as "at least one head occurs in two tosses" (HH or HT or TH) are included. If A₁ and A₂ are any two events, then the set {A} must also include A₁ and A₂, A₁ or A₂, not A₁, and not A₂. In this way, the complete set {A} is derived from the underlying elementary events.
  • Book cover image for: Stochastic Dynamics, Filtering and Optimization
    The above result is equally applicable to the case of discrete random variables. It is noteworthy that the transformations may be usefully employed in the simulation of random variables with a specified probability distribution (see Section 2.5, Chapter 2: Monte Carlo simulation of random variables).

    1.11.1 Transformation involving a scalar function of vector random variables

    If X = (X₁, X₂, ..., Xₙ) is a vector random variable, the CDF of the scalar random variable Y = g(X) is given by F_Y(y) = P(g(x) ≤ y), where the probability covers all the points x := (x₁, x₂, ..., xₙ) in Rⁿ such that g(x) ≤ y. Hence, knowing, for example, the pdf of X, we obtain the CDF of Y as

    F_Y(y) = ∫_{g(x)≤y} f_X(x) dx,    (1.153a)

    and the pdf of Y as

    f_Y(y) = dF_Y(y)/dy = (d/dy) ∫_{g(x)≤y} f_X(x) dx.    (1.153b)

    Example 1.22. If X and Y are independent random variables with pdfs given by f_X(x) = αe^{−αx}U(x) and f_Y(y) = βe^{−βy}U(y) respectively, let us find the pdf of Z = X + Y. Directly from Eq. (1.153b) with g(x) = x + y, we have

    f_Z(z) = (d/dz) ∫_{x+y≤z} f_XY(x, y) dx dy = (d/dz) ∫₀^∞ ∫₀^{z−y} f_XY(x, y) dx dy.    (1.154)

    By the Leibniz rule,

    (∂/∂z) ∫_{a(z)}^{b(z)} f(x, y) dx = (∂b/∂z) f(b(z), y) − (∂a/∂z) f(a(z), y) + ∫_{a(z)}^{b(z)} (∂/∂z) f(x, y) dx,

    we simplify Eq. (1.154) to

    f_Z(z) = ∫₀^∞ f_XY(z − y, y) dy.    (1.155a)

    By writing f_Z(z) = (d/dz) ∫_{x+y≤z} f_XY(x, y) dx dy = (d/dz) ∫₀^∞ ∫₀^{z−x} f_XY(x, y) dy dx, one may alternatively obtain

    f_Z(z) = ∫₀^∞ f_XY(x, z − x) dx.    (1.155b)

    [Fig. 1.8: Convolution integral in Eq. (1.155).]

    Clearly, the two integrals in the above equations are convolution integrals, and we can obtain the solution from either of them.
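Eq. (1.155a) can be verified numerically. The sketch below is ours, not the book's (α and β are illustrative values, and SciPy is assumed); for independent exponentials with α ≠ β, the convolution has the closed form f_Z(z) = αβ(e^{−αz} − e^{−βz})/(β − α):

```python
# Numerical check of the convolution integral f_Z(z) = int f_X(z - y) f_Y(y) dy.
import numpy as np
from scipy import integrate

alpha, beta = 1.0, 2.5

f_X = lambda x: alpha * np.exp(-alpha * x) * (x >= 0)
f_Y = lambda y: beta * np.exp(-beta * y) * (y >= 0)

def f_Z(z: float) -> float:
    """Convolution integral from Eq. (1.155a), evaluated by quadrature."""
    val, _ = integrate.quad(lambda y: f_X(z - y) * f_Y(y), 0.0, z)
    return val

z = 1.3
closed_form = alpha * beta * (np.exp(-alpha * z) - np.exp(-beta * z)) / (beta - alpha)
print(f_Z(z), closed_form)  # the two should agree to quadrature accuracy
```

For α = β the closed form above degenerates (the sum is then gamma-distributed), but the quadrature route works unchanged.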
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.