Linear Transformations of Matrices
Linear transformations of matrices refer to the process of applying a linear function to a matrix, resulting in a new matrix. Such a transformation preserves the linear structure of the underlying space: it respects vector addition and scalar multiplication, and it maps the zero vector to the zero vector. It is a fundamental concept in linear algebra and has applications in various fields, including computer graphics and engineering.
Written by Perlego with AI-assistance
10 Key excerpts on "Linear Transformations of Matrices"
- James M. Van Verth, Lars M. Bishop(Authors)
- 2015(Publication Date)
- A K Peters/CRC Press(Publisher)
3 Linear Transformations and Matrices

3.1 Introduction

In Chapter 2 we discussed vectors and points and some simple operations we can apply to them. Now we'll begin to expand our discussion to cover specific functions that we can apply to vectors and points, functions known as transformations. In this chapter, we'll discuss a class of transformations that we can apply to vectors called linear transformations. These encompass nearly all of the common operations we might want to perform on vectors and points, so understanding what they are and how to apply them is important. We'll define these functions and show how they are distinguished from other, more general transformations. Properties of linear transformations allow us to use a structure called a matrix as a compact representation for transforming vectors. A matrix is a simple two-dimensional (2D) array of values, but within it lies all the power of a linear transformation. Through simple operations we can use the matrix to apply linear transformations to vectors. We can also combine two transformation matrices to create a new one that has the same effect as the first two. Using matrices effectively lies at the heart of the pipeline for manipulating virtual objects and rendering them on the screen. Matrices have other applications as well. Examining the structure of a matrix can tell us something about the transformation it represents, for example, whether it can be reversed, what that reverse transformation might be, or whether it distorts the data that it is given. Matrices can also be used to solve systems of linear equations, which is useful for certain algorithms in graphics and physical simulation. For all of these reasons, matrices are primary data structures in graphics application programmer interfaces (APIs).

3.2 Linear Transformations

Linear transformations are a very useful and important concept in linear algebra. - eBook - PDF
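The two key facts in this excerpt — a matrix applies a linear transformation to a vector, and two transformation matrices combine into one with the same combined effect — can be sketched in plain Python (an illustration of the idea, not code from the book; no graphics API is assumed):

```python
def mat_vec(M, v):
    """Apply matrix M (a list of rows) to vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mat_mul(A, B):
    """Compose transformations: (A B) applied to v equals A applied to (B v)."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

scale = [[2, 0], [0, 2]]          # uniform scale by 2
shear = [[1, 1], [0, 1]]          # shear along x
combined = mat_mul(shear, scale)  # one matrix with the effect of both

v = [1, 3]
# applying the combined matrix once equals applying scale then shear
assert mat_vec(combined, v) == mat_vec(shear, mat_vec(scale, v))
```

Combining matrices up front is exactly why the rendering pipeline can transform thousands of vertices with a single matrix multiply each.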
- Tom M. Apostol(Author)
- 2019(Publication Date)
- Wiley(Publisher)
16 LINEAR TRANSFORMATIONS AND MATRICES

16.1 Linear transformations

One of the ultimate goals of analysis is a comprehensive study of functions whose domains and ranges are subsets of linear spaces. Such functions are called transformations, mappings, or operators. This chapter treats the simplest examples, called linear transformations, which occur in all branches of mathematics. Properties of more general transformations are often obtained by approximating them by linear transformations. First, we introduce some notation and terminology concerning arbitrary functions. Let V and W be two sets. The symbol T : V → W will be used to indicate that T is a function whose domain is V and whose values are in W. For each x in V, the element T(x) in W is called the image of x under T, and we say that T maps x onto T(x). If A is any subset of V, the set of all images T(x) for x in A is called the image of A under T and is denoted by T(A). The image of the domain V, T(V), is the range of T. Now, we assume that V and W are linear spaces having the same set of scalars, and we define a linear transformation as follows.

Definition. If V and W are linear spaces, a function T : V → W is called a linear transformation of V into W if it has the following two properties:
(a) T(x + y) = T(x) + T(y) for all x and y in V,
(b) T(cx) = cT(x) for all x in V and all scalars c.

These properties are verbalized by saying that T preserves addition and multiplication by scalars. The two properties can be combined into one formula which states that T(ax + by) = aT(x) + bT(y) for all x, y in V and all scalars a and b. By induction, we also have the more general relation
$$T\left(\sum_{i=1}^{n} a_i x_i\right) = \sum_{i=1}^{n} a_i T(x_i)$$
for any n elements x_1, ..., x_n in V and any n scalars a_1, ..., a_n. The reader can easily verify that the following examples are linear transformations. - eBook - PDF
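Apostol's two defining properties are easy to check numerically. The sketch below (my own illustration, not from the text) verifies them for a sample map T : R² → R², T(x, y) = (x + y, 2x):

```python
def T(v):
    """A sample linear map: T(x, y) = (x + y, 2x)."""
    x, y = v
    return (x + y, 2 * x)

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

u, v, c = (1.0, 2.0), (3.0, -1.0), 4.0
# (a) T preserves addition
assert T(add(u, v)) == add(T(u), T(v))
# (b) T preserves multiplication by scalars
assert T(scale(c, u)) == scale(c, T(u))
# combined form: T(a u + b v) = a T(u) + b T(v)
a, b = 2.0, -3.0
assert T(add(scale(a, u), scale(b, v))) == add(scale(a, T(u)), scale(b, T(v)))
```

A finite number of checks cannot prove linearity in general, but a single failure (try T(x, y) = (x², y)) disproves it.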
Linear Algebra
A First Course with Applications
- Larry E. Knop(Author)
- 2008(Publication Date)
- Chapman and Hall/CRC(Publisher)
Mathematics is about relationships, and if we have only two ideas then all we can ask is how the two are related. With many concepts to draw upon, however, we can ask how each relates to each of the others, and how each pair in combination relates to any of the remaining concepts, and so on. The more mathematics we have, the more mathematics we can do, and we now have a wealth of ideas to ponder. What we will do next is to look deeper into the nature of linear transformations. Life is change; the past transforms into the present and the present transforms into the future. Populations change, economies change, even the weather changes. In linear algebra, change is modeled by linear transformations. Thus far we have only looked at linear transformations in terms of vector space isomorphisms. Now we will explore what linear transformations look like in general, what they do to subspaces, and how linear transformations relate to systems of equations. We begin, as usual, by looking at an example.

Example 1: Recall that the standard basis for R³ is the set B_S = {e₁, e₂, e₃}, where
$$e_1 = \begin{pmatrix}1\\0\\0\end{pmatrix},\quad e_2 = \begin{pmatrix}0\\1\\0\end{pmatrix},\quad e_3 = \begin{pmatrix}0\\0\\1\end{pmatrix}.$$
Define a linear transformation T : R³ → R² by specifying that
$$T(e_1) = \begin{pmatrix}1\\2\end{pmatrix},\quad T(e_2) = \begin{pmatrix}3\\4\end{pmatrix},\quad T(e_3) = \begin{pmatrix}5\\6\end{pmatrix}.$$
A general equation for T is
$$T\begin{pmatrix}x\\y\\z\end{pmatrix} = T\left(x\begin{pmatrix}1\\0\\0\end{pmatrix} + y\begin{pmatrix}0\\1\\0\end{pmatrix} + z\begin{pmatrix}0\\0\\1\end{pmatrix}\right) = xT(e_1) + yT(e_2) + zT(e_3) = x\begin{pmatrix}1\\2\end{pmatrix} + y\begin{pmatrix}3\\4\end{pmatrix} + z\begin{pmatrix}5\\6\end{pmatrix},$$
so
$$T\begin{pmatrix}x\\y\\z\end{pmatrix} = \begin{pmatrix}1x + 3y + 5z\\2x + 4y + 6z\end{pmatrix}.$$
The last equation is very interesting. The matrix on the right side of our formula for T looks very much like the matrices we encountered when solving systems of equations. In systems of equations the coefficient matrix was critically important, so much so that we did not even bother to write the variables but only kept track of their positions. - eBook - PDF
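Knop's Example 1 can be computed directly: T is completely determined by its values on the standard basis, and the result agrees with the coefficient formula. A quick sketch in plain Python (my own illustration, not code from the book):

```python
# images of the standard basis vectors e1, e2, e3, as in Knop's Example 1
T_e1, T_e2, T_e3 = (1, 2), (3, 4), (5, 6)

def T(v):
    """T(x, y, z) = x*T(e1) + y*T(e2) + z*T(e3), by linearity."""
    x, y, z = v
    return tuple(x * a + y * b + z * c for a, b, c in zip(T_e1, T_e2, T_e3))

assert T((1, 0, 0)) == (1, 2)          # recovers T(e1)
# agrees with the closed form (x + 3y + 5z, 2x + 4y + 6z)
x, y, z = 2, -1, 3
assert T((x, y, z)) == (x + 3 * y + 5 * z, 2 * x + 4 * y + 6 * z)
```

The three image tuples, read as columns, are exactly the coefficient matrix [[1, 3, 5], [2, 4, 6]] of the closed-form equation.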
- Bruce Cooperstein(Author)
- 2015(Publication Date)
- Chapman and Hall/CRC(Publisher)
The following are algorithms you should be able to perform: solve a linear system of equations with coefficients in a field F; given a finite spanning sequence for a subspace of a vector space, find a basis for the subspace and compute the dimension of the subspace; and compute the coordinate vector of a vector v in a finite-dimensional vector space V with respect to a basis B of V. The notion of a matrix is probably familiar to the reader from elementary linear algebra; however, for completeness we introduce this concept as well as some of the related concepts and terminology we will use in later sections.

Definition 2.10 Let F be a field. A matrix over F is defined to be a rectangular array whose entries are elements of F. The sequences of numbers which go across the matrix are called rows and the sequences of numbers that are vertical are called the columns of the matrix. If there are m rows and n columns, then it is said to be an m by n matrix and we write this as m × n. The numbers which occur in the matrix are called its entries. The one which is found at the intersection of the ith row and the jth column is called the ijth entry, often written as the (i, j)-entry. Of particular importance is the n × n matrix whose (i, j)-entry is 0 if i ≠ j and 1 if i = j. This is the n × n identity matrix. It is denoted by I_n.

Definition 2.11 Assume A is an m × n matrix with (i, j)-entry a_ij. The transpose of A, denoted by A^tr, is the n × m matrix whose (k, l)-entry is a_lk.

Example 2.6 Let $A = \begin{pmatrix}1 & 2 & 3\\4 & 5 & 6\end{pmatrix}$. Then $A^{tr} = \begin{pmatrix}1 & 4\\2 & 5\\3 & 6\end{pmatrix}$.

Let T : V → W be a linear transformation from an n-dimensional vector space V to an m-dimensional vector space W, B_V = (v₁, v₂, ..., vₙ) be a basis for V, and B_W = (w₁, w₂, ..., wₘ) be a basis for W. - eBook - PDF
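Definition 2.11 and Example 2.6 translate directly into a few lines of plain Python (an illustrative sketch, not the book's code): the (k, l)-entry of the transpose is the (l, k)-entry of the original.

```python
def transpose(A):
    """Return A^tr: the (k, l)-entry of the result is the (l, k)-entry of A."""
    return [[A[l][k] for l in range(len(A))] for k in range(len(A[0]))]

def identity(n):
    """I_n: (i, j)-entry is 1 if i == j, else 0 (Definition 2.10)."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]                        # the 2 x 3 matrix of Example 2.6
assert transpose(A) == [[1, 4], [2, 5], [3, 6]]
assert transpose(transpose(A)) == A    # transposing twice gives A back
```

Note that a 2 × 3 matrix transposes to a 3 × 2 one, matching the m × n to n × m shape change in the definition.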
Sets, Groups, and Mappings
An Introduction to Abstract Mathematics
- Andrew D. Hwang(Author)
- 2019(Publication Date)
- American Mathematical Society(Publisher)
Chapter 11 Linear Transformations

Groups and other algebraic phenomena arise in geometry. This chapter introduces a simple but rich class of mappings of the Cartesian space Rⁿ, particularly mappings of the plane R². We begin with dimension-independent generalities.

11.1. The Cartesian Vector Space

In mathematics, "vectors" are objects that can be added to each other (satisfying the axioms of an Abelian group) and that can be multiplied by numerical "scalars" (satisfying axioms analogous to the associative and distributive laws; see Remark 11.2 below). The elements of Rⁿ are ordered n-tuples of real numbers. We denote the general element of Rⁿ by x = (x¹, x², ..., xⁿ). The superscripts are indices, not exponents. Componentwise addition is a binary operation on Rⁿ, and (Rⁿ, +) is an Abelian group with identity element 0 = (0, ..., 0) and with −x = (−x¹, ..., −xⁿ) the additive inverse of x = (x¹, ..., xⁿ). Scalar multiplication on Rⁿ is the mapping ⋅ : R × Rⁿ → Rⁿ defined by c ⋅ x = c ⋅ (x¹, ..., xⁿ) = (cx¹, ..., cxⁿ).

Definition 11.1. The data (Rⁿ, +, ⋅) comprise the n-dimensional Cartesian vector space. If x = (x¹, ..., xⁿ) is a vector in Rⁿ, the number xʲ is the jth component of x. The special elements e₁ = (1, 0, ..., 0), e₂ = (0, 1, 0, ..., 0), ..., eₙ = (0, ..., 0, 1) are collectively called the standard basis of Rⁿ.

Remark 11.2. If x, x₁, and x₂ are vectors in Rⁿ and a and b are real scalars, the following properties hold, as you should check:
• (Associativity) (ab) ⋅ x = a ⋅ (b ⋅ x).
• (Left-distributivity) (a + b) ⋅ x = (a ⋅ x) + (b ⋅ x).
• (Right-distributivity) a ⋅ (x₁ + x₂) = (a ⋅ x₁) + (a ⋅ x₂).
• (Normalization) 1 ⋅ x = x.

Loosely, "the usual rules of algebra hold" when working with vectors. In practice, the dot denoting scalar multiplication is often omitted. - eBook - PDF
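The operations of the Cartesian vector space can be written out componentwise in a few lines (an illustrative sketch of Definition 11.1, not from the text; the helper names are my own):

```python
def add(x, y):
    """Componentwise addition on R^n."""
    return tuple(a + b for a, b in zip(x, y))

def smul(c, x):
    """Scalar multiplication: c . (x1, ..., xn) = (c x1, ..., c xn)."""
    return tuple(c * a for a in x)

def e(j, n):
    """The j-th standard basis vector of R^n (1-indexed)."""
    return tuple(1 if k == j - 1 else 0 for k in range(n))

x = (3, -1, 2)
# every vector is the sum of its components times the standard basis vectors
recombined = add(add(smul(3, e(1, 3)), smul(-1, e(2, 3))), smul(2, e(3, 3)))
assert recombined == x
# the additive inverse -x = (-1) . x satisfies x + (-x) = 0
assert add(x, smul(-1, x)) == (0, 0, 0)
```

The recombination check is exactly the decomposition x = x¹e₁ + ⋯ + xⁿeₙ that makes the standard basis useful.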
Quadratic Forms and Matrices
An Introductory Approach
- N. A. Yefimov(Author)
- 2014(Publication Date)
- Academic Press(Publisher)
CHAPTER III. Linear Transformations and Matrices

§ 12. Linear Transformations of the Plane

58. Let α be a plane and O a point of α. Every point M of α determines a vector x = OM, the radius vector of M relative to O. We shall suppose all vectors to be applied at O. This means that we shall regard a vector in α as the radius vector (relative to O) of a definite point in α.

59. If we are given a rule which associates with a point M of α a point M′ of α, we say that there is defined in α a point transformation. The point M′ is called the image of the point M. We shall assume that the image of O is O itself.

60. In addition to point transformations we shall consider vector transformations. We say that a vector transformation of the plane α is defined if we are given a rule which associates with a vector x = OM a vector x′ = OM′ of α. The vector x′ is called the image of the vector x, and we write x′ = Ax.

61. A transformation x′ = Ax is said to be linear if the following two conditions are satisfied:
(1) A(λx) = λAx, for every vector x of α and every number λ;
(2) A(x + y) = Ax + Ay, for every pair of vectors x and y of α.

We shall now clarify the meaning of these conditions. Before we do so we wish to note that all references to points and vectors pertain to points and vectors of the plane α. Consider condition (1). We know that the vectors x and λx are collinear and λx is obtained from x by stretching the latter by a factor λ. Since A(λx) = λ(Ax), ON′ is obtained from OM′ by the same stretching which takes OM into ON (Fig. 7). As for the second condition, put x = OM, y = ON, x + y = OP (Fig. 8). Let M′, N′, P′ be the images of the points M, N, P under the given transformation. Then Ax = OM′, Ay = ON′, A(x + y) = OP′ and, in view of condition (2), OP′ = A(x + y) = Ax + Ay = OM′ + ON′. Hence the second condition states that every parallelogram OMNP is transformed into a quadrangle OM′N′P′ which is again a parallelogram.
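Yefimov's two conditions can be tested numerically for a concrete plane transformation. The sketch below (my own illustration, not from the text) uses a sample shear of the plane and checks both the stretching condition and the parallelogram condition:

```python
def A(x):
    """A sample linear transformation of the plane: a shear along the x-axis."""
    return (x[0] + 2 * x[1], x[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def smul(lam, u):
    return (lam * u[0], lam * u[1])

x, y, lam = (1, 2), (3, 1), 5
# condition (1): A(lam x) = lam (A x) -- collinear vectors map to collinear vectors
assert A(smul(lam, x)) == smul(lam, A(x))
# condition (2): A(x + y) = A x + A y -- the parallelogram OMNP maps to a parallelogram
assert A(add(x, y)) == add(A(x), A(y))
```

Condition (2) is precisely the statement that the image OP′ of the diagonal OP is the diagonal of the image parallelogram OM′N′P′.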
- eBook - PDF
- John M. Erdman(Author)
- 2018(Publication Date)
- American Mathematical Society(Publisher)
Chapter 21 Linearity

21.1. Linear transformations

Linear transformations are central to our study of calculus. Functions are differentiable, for example, if they are smooth enough to admit decent approximation by (translates of) linear transformations. Thus, before tackling differentiation (in Chapter 25), we familiarize ourselves with some elementary facts about linearity.

21.1.1. Definition. A function T : V → W between vector spaces is linear if
(23) T(x + y) = Tx + Ty for all x, y ∈ V and
(24) T(αx) = αTx for all x ∈ V and α ∈ R.

A linear function is most commonly called a linear transformation, sometimes a linear mapping. If the domain and codomain of a linear transformation are the same vector space, then it is often called a linear operator, and occasionally a vector space endomorphism. The family of all linear transformations from V into W is denoted by L(V, W). Two oddities of notation concerning linear transformations deserve comment. First, the value of T at x is usually written Tx rather than T(x). Naturally, the parentheses are used whenever their omission would create ambiguity. For example, in (23) above Tx + y is not an acceptable substitute for T(x + y). Second, the symbol for composition of two linear transformations is ordinarily omitted. If S ∈ L(U, V) and T ∈ L(V, W), then the composite of T and S is denoted by TS (rather than by T ∘ S). This will cause no confusion since we will define no other "multiplication" of linear maps. As a consequence of this convention, if T is a linear operator, then T ∘ T is written as T², T ∘ T ∘ T as T³, and so on. One may think of condition (23) in the definition of linearity in the following fashion. Let T × T be the mapping from V × V into W × W defined by (T × T)(x, y) = (Tx, Ty). Then condition (23) holds if and only if the diagram formed by the maps + : V × V → V, + : W × W → W, T : V → W, and T × T : V × V → W × W commutes, that is, T ∘ + = + ∘ (T × T). - eBook - PDF
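Erdman's notational conventions — TS for the composite and T² for T composed with itself — have a direct functional analogue. A small sketch (illustrative only; the maps are my own examples):

```python
def compose(T, S):
    """The composite TS: first apply S, then T."""
    return lambda x: T(S(x))

def power(T, n):
    """T^n = T composed with itself n times."""
    f = lambda x: x            # the identity operator
    for _ in range(n):
        f = compose(T, f)
    return f

T = lambda v: (2 * v[0], 2 * v[1])   # a linear operator on R^2: scaling by 2
S = lambda v: (v[0] + v[1], v[1])    # another linear map: a shear

TS = compose(T, S)
assert TS((1, 3)) == (8, 6)          # T(S(1, 3)) = T(4, 3) = (8, 6)
assert power(T, 3)((1, 1)) == (8, 8) # T^3 scales by 2^3
```

When T and S are represented by matrices, compose corresponds to the matrix product and power to the matrix power, which is why omitting the composition symbol causes no confusion.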
- Howard Anton, Chris Rorres, Anton Kaul(Authors)
- 2019(Publication Date)
- Wiley(Publisher)
This leads us to the following two questions.

Question 1. Are there algebraic properties of a transformation T : Rⁿ → Rᵐ that can be used to determine whether T is a matrix transformation?

Question 2. If we discover that a transformation T : Rⁿ → Rᵐ is a matrix transformation, how can we find a matrix A for which T(x) = Ax?

The following theorem and its proof will provide the answers.

Theorem 1.8.2 T : Rⁿ → Rᵐ is a matrix transformation if and only if the following relationships hold for all vectors u and v in Rⁿ and for every scalar k:
(i) T(u + v) = T(u) + T(v) [Additivity property]
(ii) T(ku) = kT(u) [Homogeneity property]

Proof If T is a matrix transformation, then properties (i) and (ii) follow respectively from parts (c) and (b) of Theorem 1.8.1. Conversely, assume that properties (i) and (ii) hold. We must show that there exists an m × n matrix A such that T(x) = Ax for every vector x in Rⁿ. Recall that the derivation of Formula (10) used only the additivity and homogeneity properties of T. Since we are assuming that T has those properties, it must be true that
(12) T(k₁u₁ + k₂u₂ + ⋯ + kᵣuᵣ) = k₁T(u₁) + k₂T(u₂) + ⋯ + kᵣT(uᵣ)
for all scalars k₁, k₂, ..., kᵣ and all vectors u₁, u₂, ..., uᵣ in Rⁿ. Let A be the matrix
(13) A = [T(e₁) ∣ T(e₂) ∣ ⋯ ∣ T(eₙ)]
where e₁, e₂, ..., eₙ are the standard basis vectors for Rⁿ. It follows from Theorem 1.3.1 that Ax is a linear combination of the columns of A in which the successive coefficients are the entries x₁, x₂, ..., xₙ of x. That is,
Ax = x₁T(e₁) + x₂T(e₂) + ⋯ + xₙT(eₙ)
Using Formula (12) we can rewrite this as
Ax = T(x₁e₁ + x₂e₂ + ⋯ + xₙeₙ) = T(x)
which completes the proof. The two properties listed in Theorem 1.8.2 are called linearity conditions, and a transformation that satisfies these conditions is called a linear transformation.
- Ernest Davis(Author)
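The construction in the proof — build the matrix whose columns are the images of the standard basis vectors — can be sketched directly (my own illustration, with a sample transformation; not the book's code):

```python
def standard_matrix(T, n):
    """Build the m x n matrix whose columns are T(e_1), ..., T(e_n)."""
    cols = [T(tuple(1 if k == j else 0 for k in range(n))) for j in range(n)]
    m = len(cols[0])
    # store by rows, the conventional layout
    return [[cols[j][i] for j in range(n)] for i in range(m)]

def mat_vec(A, x):
    return tuple(sum(row[j] * x[j] for j in range(len(x))) for row in A)

# a sample linear transformation T : R^3 -> R^2
T = lambda v: (v[0] + 2 * v[1], 3 * v[2] - v[0])

A = standard_matrix(T, 3)
assert A == [[1, 2, 0], [-1, 0, 3]]
x = (4, 5, 6)
assert mat_vec(A, x) == T(x)   # Ax = T(x), as the theorem guarantees
```

If T were not linear (say T(v) = (v[0]², 0)), the final assertion would fail for some x, which is the content of the "only if" direction.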
- 2012(Publication Date)
- A K Peters/CRC Press(Publisher)
6.4. Geometric Transformations

The most general form of scale transformation combines it with a rotation, reflection, and/or translation. In homogeneous coordinates, this corresponds to multiplication by a matrix of the form
$$\begin{pmatrix} c \cdot R & \vec{v} \\ \vec{0}^{\,T} & 1 \end{pmatrix},$$
where c ≠ 0 and R is an orthonormal matrix. Conversely, a matrix M corresponds to a general scale transformation if the following conditions hold. Let A be the upper left-hand corner of such a matrix (that is, all but the last row and column). Then
• the last row of M has the form ⟨0, ..., 0, 1⟩, and
• AᵀA is a diagonal matrix with the same value (c²) all along the main diagonal, and 0 elsewhere.

The transformation is not a reflection if Det(A) > 0. It is a reflection if Det(A) < 0. The invariants of a general scale transformation are the angles between arrows and the ratios of distances.

6.4.5 Affine Transformations

For the final class of transformations, it is easiest to go in the opposite direction, from matrices to geometry. Let M be any matrix of the form
$$M = \begin{pmatrix} A & \vec{v} \\ \vec{0}^{\,T} & 1 \end{pmatrix}.$$
Then multiplication by M transforms one vector of homogeneous coordinates (i.e., the vector with final component 1) to another. What is the geometric significance of this operation? To answer this question, it is easiest to consider the case where v = 0, so that the origin remains fixed. In this case, the transformation can also be viewed as matrix multiplication of the natural coordinates of a point by the matrix A; that is, Coords(Γ(p), C) = A · Coords(p, C). In two-space, let ⇒x and ⇒y be the x and y coordinate arrows. Let p be a point with coordinates ⟨a, b⟩. Then we have Coords(Γ(⇒x), C) = A(1, 0)ᵀ = A[:, 1] and Coords(Γ(⇒y), C) = A(0, 1)ᵀ = A[:, 2], so Coords(Γ(p), C) = A(a, b)ᵀ = aA[:, 1] + bA[:, 2] = a · Coords(Γ(⇒x), C) + b · Coords(Γ(⇒y), C). [Figure 6.15. Affine transformation.]
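Davis's converse test — last row ⟨0, ..., 0, 1⟩ and AᵀA a multiple of the identity — is mechanical to check. A sketch (the function name and tolerance are my own; this illustrates the conditions, it is not the book's code):

```python
import math

def is_general_scale(M, tol=1e-9):
    """Does M (homogeneous coordinates) represent a general scale transformation?"""
    n = len(M) - 1
    # the last row of M must have the form <0, ..., 0, 1>
    if any(abs(M[n][j] - (1 if j == n else 0)) > tol for j in range(n + 1)):
        return False
    # A is the upper left-hand corner: all but the last row and column
    A = [row[:n] for row in M[:n]]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
    c2 = AtA[0][0]
    if c2 <= tol:                      # c must be nonzero
        return False
    # A^T A must equal c^2 I: c^2 on the diagonal, 0 elsewhere
    return all(abs(AtA[i][j] - (c2 if i == j else 0)) <= tol
               for i in range(n) for j in range(n))

c, th = 2.0, math.pi / 6               # scale by 2, rotate by 30 degrees, translate
M = [[c * math.cos(th), -c * math.sin(th), 1.0],
     [c * math.sin(th),  c * math.cos(th), 2.0],
     [0.0, 0.0, 1.0]]
assert is_general_scale(M)

shear = [[1.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
assert not is_general_scale(shear)     # shears distort angles
```

Checking Det(A) afterwards distinguishes the reflection case (Det(A) < 0) from the orientation-preserving case (Det(A) > 0).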
- Howard Anton(Author)
- 2020(Publication Date)
- Wiley(Publisher)
CHAPTER 8: LINEAR TRANSFORMATIONS

8.1 General Linear Transformations

1. (a) $T\left(2\begin{pmatrix}1&0\\0&1\end{pmatrix}\right) = \begin{pmatrix}2&0\\0&2\end{pmatrix}^{2} = \begin{pmatrix}4&0\\0&4\end{pmatrix}$ does not equal $2T\begin{pmatrix}1&0\\0&1\end{pmatrix} = 2\begin{pmatrix}1&0\\0&1\end{pmatrix}^{2} = 2\begin{pmatrix}1&0\\0&1\end{pmatrix} = \begin{pmatrix}2&0\\0&2\end{pmatrix}$, so T does not satisfy the homogeneity property. Consequently, T is not a linear transformation.

(b) Let A and B be any 2 × 2 matrices and let k be any real number. Writing $A = \begin{pmatrix}a&b\\c&d\end{pmatrix}$, we have T(kA) = tr(kA) = ka + kd = k(a + d) = k tr(A) = kT(A) and T(A + B) = tr(A + B) = tr(A) + tr(B) = T(A) + T(B), since each diagonal entry of A + B is the sum of the corresponding diagonal entries of A and B; therefore T is a linear transformation. The kernel of T consists of all matrices A such that tr(A) = a + d = 0, i.e., d = −a. We conclude that the kernel of T consists of all matrices of the form $\begin{pmatrix}a&b\\c&-a\end{pmatrix}$.

(c) Let A and B be any 2 × 2 matrices and let k be any real number. We have T(kA) = kA + (kA)ᵀ = kA + kAᵀ = k(A + Aᵀ) = kT(A) and T(A + B) = (A + B) + (A + B)ᵀ = A + B + Aᵀ + Bᵀ = (A + Aᵀ) + (B + Bᵀ) = T(A) + T(B), therefore T is a linear transformation. The kernel of T consists of all matrices A such that $A + A^{T} = \begin{pmatrix}2a & b+c\\ b+c & 2d\end{pmatrix} = \begin{pmatrix}0&0\\0&0\end{pmatrix}$, therefore a = d = 0 and c = −b. We conclude that the kernel of T consists of all matrices of the form $\begin{pmatrix}0&b\\-b&0\end{pmatrix}$.

3. For v ≠ 0, T((−1)v) = ‖−v‖ = ‖v‖ = T(v) ≠ (−1)T(v), so the mapping is not a linear transformation.

5. Let A₁ and A₂ be any 2 × 2 matrices and let k be any real number. We have T(kA₁) = (kA₁)B = k(A₁B) = kT(A₁) and T(A₁ + A₂) = (A₁ + A₂)B = A₁B + A₂B = T(A₁) + T(A₂), therefore T is a linear transformation. The kernel of T consists of all 2 × 2 matrices whose rows are orthogonal to all columns of B.

7. Let p(x) = a₀ + a₁x + a₂x² and q(x) = b₀ + b₁x + b₂x².
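Part (b) of the solutions above — the trace is linear, with kernel the matrices satisfying d = −a — checks out numerically. A small sketch in plain Python (my own illustration, not from the solutions manual):

```python
def tr(A):
    """Trace of a 2 x 2 matrix: sum of the diagonal entries."""
    return A[0][0] + A[1][1]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def msmul(k, A):
    return [[k * A[i][j] for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, -1], [0, 2]]
assert tr(msmul(7, A)) == 7 * tr(A)      # homogeneity
assert tr(madd(A, B)) == tr(A) + tr(B)   # additivity

K = [[3, 9], [-2, -3]]                   # d = -a, so K lies in the kernel
assert tr(K) == 0
```

The same two checks fail for T(A) = A² of part (a), which is exactly why that map is not linear.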
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.