Mathematics

Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are concepts in linear algebra. Eigenvalues are scalars that describe how a linear transformation stretches or compresses a vector, while eigenvectors are the corresponding non-zero vectors whose direction is left unchanged (or exactly reversed) by the transformation. They are used to analyze and understand the behavior of linear transformations and systems of linear equations.

Written by Perlego with AI-assistance

11 Key excerpts on "Eigenvalues and Eigenvectors"

  • An Introduction to Matrix Decomposition
    Chapter 6: Eigenvalue, Eigenvector & Eigenspace. In mathematics, eigenvalue, eigenvector and eigenspace are related concepts in the field of linear algebra. The prefix eigen- is adopted from the German word eigen, meaning innate, idiosyncratic, own. Linear algebra studies linear transformations, which are represented by matrices acting on vectors. Eigenvalues, eigenvectors and eigenspaces are properties of a matrix. They are computed by a method described below, give important information about the matrix, and can be used in matrix factorization. They have applications in areas of applied mathematics as diverse as economics and quantum mechanics. In general, a matrix acts on a vector by changing both its magnitude and its direction. However, a matrix may act on certain vectors by changing only their magnitude and leaving their direction unchanged (or possibly reversing it). These vectors are the eigenvectors of the matrix. A matrix acts on an eigenvector by multiplying its magnitude by a factor, which is positive if its direction is unchanged and negative if its direction is reversed. This factor is the eigenvalue associated with that eigenvector. An eigenspace is the set of all eigenvectors that have the same eigenvalue, together with the zero vector. These concepts are formally defined in the language of matrices and linear transformations. Formally, if A is a linear transformation, a non-null vector x is an eigenvector of A if there is a scalar λ such that Ax = λx. The scalar λ is said to be an eigenvalue of A corresponding to the eigenvector x. Overview. In linear algebra, there are two kinds of objects: scalars, which are just numbers; and vectors, which can be thought of as arrows, and which have both magnitude and direction (though more precisely, a vector is a member of a vector space).
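The direction-reversal case described above (a negative eigenvalue) can be illustrated with a minimal NumPy sketch; the reflection matrix and vector here are illustrative choices, not taken from the excerpt:

```python
import numpy as np

# A reflection across the line y = x leaves vectors on that line fixed
# (eigenvalue 1) and reverses vectors perpendicular to it (eigenvalue -1).
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])

v = np.array([1.0, -1.0])   # perpendicular to the line y = x
Rv = R @ v                  # equals -v: direction reversed, eigenvalue -1
```

The scalar factor relating `Rv` to `v` is exactly the eigenvalue in the sense defined above.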
  • Elementary Linear Algebra with Supplemental Applications
    • Howard Anton, Chris Rorres (Authors)
    • 2014 (Publication Date)
    • Wiley (Publisher)
    Chapter 5: Eigenvalues and Eigenvectors. Chapter contents: 5.1 Eigenvalues and Eigenvectors; 5.2 Diagonalization; 5.3 Complex Vector Spaces; 5.4 Differential Equations. Introduction. In this chapter we will focus on classes of scalars and vectors known as “eigenvalues” and “eigenvectors,” terms derived from the German word eigen, meaning “own,” “peculiar to,” “characteristic,” or “individual.” The underlying idea first appeared in the study of rotational motion but was later used to classify various kinds of surfaces and to describe solutions of certain differential equations. In the early 1900s it was applied to matrices and matrix transformations, and today it has applications in such diverse fields as computer graphics, mechanical vibrations, heat flow, population dynamics, quantum mechanics, and economics, to name just a few. 5.1 Eigenvalues and Eigenvectors. In this section we will define the notions of “eigenvalue” and “eigenvector” and discuss some of their basic properties. Definition of Eigenvalue and Eigenvector. We begin with the main definition in this section. Definition 1: If A is an n × n matrix, then a nonzero vector x in R^n is called an eigenvector of A (or of the matrix operator T_A) if Ax is a scalar multiple of x; that is, Ax = λx for some scalar λ. The scalar λ is called an eigenvalue of A (or of T_A), and x is said to be an eigenvector corresponding to λ. The requirement that an eigenvector be nonzero is imposed to avoid the unimportant case A0 = λ0, which holds for every A and λ. In general, the image of a vector x under multiplication by a square matrix A differs from x in both magnitude and direction. However, in the special case where x is an eigenvector of A, multiplication by A leaves the direction unchanged. For example, in R^2 or R^3 multiplication by A maps each eigenvector x of A (if any) along the same line through the origin as x.
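The defining condition Ax = λx can be checked numerically; the matrix and candidate eigenvector below are illustrative, not from the text:

```python
import numpy as np

# Verify Ax = lam * x for a candidate eigenvector (values are assumptions
# for illustration, not from the textbook excerpt).
A = np.array([[3.0, 0.0],
              [8.0, -1.0]])
x = np.array([1.0, 2.0])    # candidate eigenvector
lam = 3.0                   # candidate eigenvalue

Ax = A @ x                  # lies on the same line through the origin as x
```

Here `A @ x` equals `3 * x`, so x is an eigenvector of A with eigenvalue 3.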
  • Matrix Analysis for Statistics
    • James R. Schott (Author)
    • 2016 (Publication Date)
    • Wiley (Publisher)
    Chapter 3: Eigenvalues and Eigenvectors. 3.1 Introduction. Eigenvalues and eigenvectors are special implicitly defined functions of the elements of a square matrix. In many applications involving the analysis of a square matrix, the key information from the analysis is provided by some or all of these eigenvalues and eigenvectors, which is a consequence of some of the properties of eigenvalues and eigenvectors that we will develop in this chapter. However, before we get to these properties, we must first understand how eigenvalues and eigenvectors are defined and how they are calculated. 3.2 Eigenvalues, Eigenvectors, and Eigenspaces. If A is an m × m matrix, then any scalar λ satisfying the equation Ax = λx, (3.1) for some m × 1 vector x ≠ 0, is called an eigenvalue of A. The vector x is called an eigenvector of A corresponding to the eigenvalue λ, and (3.1) is called the eigenvalue-eigenvector equation of A. Eigenvalues and eigenvectors are also sometimes referred to as latent roots and vectors or characteristic roots and vectors. Equation (3.1) can be equivalently expressed as (A − λI_m)x = 0. (3.2) Note that if |A − λI_m| ≠ 0, then (A − λI_m)^{-1} would exist, and so premultiplication of (3.2) by this inverse would lead to a contradiction of the already stated assumption that x ≠ 0. Thus, any eigenvalue λ must satisfy the determinantal equation |A − λI_m| = 0, which is known as the characteristic equation of A. Applying the definition of a determinant, we readily observe that the characteristic equation is an mth degree polynomial in λ; that is, scalars α_0, α_1, . . . , α_{m−1} exist such that the characteristic equation above can be expressed equivalently as (−λ)^m + α_{m−1}(−λ)^{m−1} + · · · + α_1(−λ) + α_0 = 0.
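The characteristic-equation route described above can be sketched in NumPy; the 2 × 2 matrix is an assumed example, and `np.poly` returns the coefficients of the characteristic polynomial of a square matrix:

```python
import numpy as np

# For this 2x2 example, |A - lam*I| = lam^2 - 4*lam + 3 = 0,
# whose roots are the eigenvalues 3 and 1. (Matrix is illustrative.)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)       # characteristic polynomial coefficients: [1, -4, 3]
eigenvalues = np.roots(coeffs)   # roots of the characteristic equation
```

Solving the characteristic polynomial is how eigenvalues are defined, though production code would call `np.linalg.eigvals` directly for numerical stability.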
  • Linear Algebra and Matrix Analysis for Statistics
    The importance of eigenvalues and eigenvectors lies in the fact that almost all matrix results associated with square matrices can be derived from representations of square matrices in terms of their eigenvalues and eigenvectors. Eigenvalues and their associated eigenvectors form a fundamentally important set of scalars and vectors that uniquely identify matrices. They hold special importance in numerous applications in fields as diverse as mathematical physics, sociology, economics and statistics. Among the class of square matrices, real symmetric matrices hold a very special place in many applications. Fortunately, the eigen-analysis of real symmetric matrices yields many astonishing simplifications, to the point that matrix analysis of real symmetric matrices can be done with the same ease and elegance as scalar analysis. We will see such properties later. 11.1 The Eigenvalue Equation. Equation (11.1) is sometimes referred to as the eigenvalue equation. Definition 11.1 says that λ is an eigenvalue of an n × n matrix A if and only if the homogeneous system (A − λI)x = 0 has a non-trivial solution x ≠ 0. This means that any λ for which A − λI is singular is an eigenvalue of A. Equivalently, λ is an eigenvalue of an n × n matrix A if and only if N(A − λI) has a nonzero member. In terms of rank and nullity, we can say the following: λ is an eigenvalue of A if and only if ν(A − λI) > 0, or if and only if ρ(A − λI) < n. For some simple matrices the eigenvalues and eigenvectors can be obtained quite easily. Finding eigenvalues and eigenvectors of a triangular matrix is easy. Example 11.3 Eigenvalues of a triangular matrix. Let U be an upper-triangular matrix, with entries u_11, u_12, . . . , u_1n in its first row, 0, u_22, . . . , u_2n in its second row, and so on down to 0, 0, . . . , u_nn in its last row. The matrix U − λI is also upper-triangular with u_ii − λ as its i-th diagonal element.
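The triangular-matrix fact in Example 11.3 is easy to confirm numerically; the entries of U below are illustrative:

```python
import numpy as np

# For an upper-triangular U, det(U - lam*I) is the product of (u_ii - lam),
# so the eigenvalues are exactly the diagonal entries. (Values are assumed.)
U = np.array([[4.0, 2.0, 7.0],
              [0.0, 5.0, 1.0],
              [0.0, 0.0, 6.0]])

eigs = np.linalg.eigvals(U)   # should be the diagonal: 4, 5, 6
```

No characteristic-polynomial work is needed for triangular matrices: the diagonal can simply be read off.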
  • Elementary Linear Algebra, International Metric Edition
    Chapter 7: Eigenvalues and Eigenvectors. 7.1 Eigenvalues and Eigenvectors; 7.2 Diagonalization; 7.3 Symmetric Matrices and Orthogonal Diagonalization; 7.4 Applications of Eigenvalues and Eigenvectors. Applications treated in the chapter include diffusion, relative maxima and minima, genetics, population of rabbits, and architecture. 7.1 Eigenvalues and Eigenvectors. Verify eigenvalues and corresponding eigenvectors. Find eigenvalues and corresponding eigenspaces. Use the characteristic equation to find eigenvalues and eigenvectors, and find the eigenvalues and eigenvectors of a triangular matrix. Find the eigenvalues and eigenvectors of a linear transformation. The Eigenvalue Problem. This section presents one of the most important problems in linear algebra, the eigenvalue problem. Its central question is: “when A is an n × n matrix, do nonzero vectors x in R^n exist such that Ax is a scalar multiple of x?” The scalar, denoted by the Greek letter lambda (λ), is called an eigenvalue of the matrix A, and the nonzero vector x is called an eigenvector of A corresponding to λ. The origins of the terms eigenvalue and eigenvector are from the German word Eigenwert, meaning “proper value.” So, you have Ax = λx, where λ is the eigenvalue and x is the eigenvector. Eigenvalues and eigenvectors have many important applications, many of which are discussed throughout this chapter. For now, you will consider a geometric interpretation of the problem in R^2. If λ is an eigenvalue of a matrix A and x is an eigenvector of A corresponding to λ, then multiplication of x by the matrix A produces a vector λx that is parallel to x, as shown below.
  • Methods of Matrix Algebra
    Chapter III: Eigenvalues and Eigenvectors. We are now ready to begin the study of the possible behavior of a matrix. The concept that we shall take as central for this purpose is the eigenvalues and their associated eigenvectors and chains of generalized eigenvectors. In this chapter, we will develop this concept itself, both analytically and by appeal to physical intuition. The rigorous development we will defer to later chapters, particularly Chapter VIII. 1. Basic Concept. We consider the equation y = Ax (1) and say that A is an operation that maps the space of possible x onto the space of possible y. If A is an n × n matrix, then we can consider the linear vector space S of n-dimensional vectors, and say that A maps S onto or into itself. Now many things can happen. If we consider all x in S, then the situation can be quite complicated. However, among all of S, there may be, and in fact will be, certain vectors that behave in a particularly simple way. These vectors are simply stretched or compressed, or their phase is changed. That is, the effect of A on them simply multiplies them by some scalar. If x_i is such a vector, then Ax_i = λ_i x_i (2) where λ_i is a scalar quantity. A vector x_i that satisfies Eq. (2) is an eigenvector of A, and the corresponding scalar λ_i is an eigenvalue. Other terms are also used. Some authors call the vectors proper vectors or characteristic vectors, and the scalars proper values or characteristic values. The terms eigenvectors and eigenvalues seem to be winning out, however, in spite of their confused etymology (eigenvalue is a semitranslation of the German word “Eigenwerte”), and we shall use them here. We can easily illustrate what is happening here from strain theory. Suppose we put a block of material under a pure compression along the y-axis. Then it expands along the x-axis, as shown in Fig. 1.
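The strain picture above, a pure stretch along one axis and compression along the other, corresponds to a diagonal matrix whose eigenvectors are the axis directions; the scale factors below are illustrative:

```python
import numpy as np

# A pure stretch/compression along the coordinate axes: the axis directions
# are eigenvectors, and the scale factors are the eigenvalues. (Factors 2
# and 0.5 are assumed for illustration.)
A = np.diag([2.0, 0.5])

ex = np.array([1.0, 0.0])
ey = np.array([0.0, 1.0])

stretched = A @ ex    # 2 * ex: expanded along the x-axis
squeezed = A @ ey     # 0.5 * ey: compressed along the y-axis
```

Any other vector changes direction under A; only the axis directions are merely rescaled, which is exactly the eigenvector condition of Eq. (2).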
  • Mathematica for Physicists and Engineers
    • K. B. Vijaya Kumar, Antony P. Monteiro (Authors)
    • 2023 (Publication Date)
    • Wiley-VCH (Publisher)
    Chapter 9: Eigenvalues and Eigenvectors of a Matrix. 9.1 Introduction. If A is an n × n matrix and X is a column matrix with n elements, then AX is also a column matrix with n rows, and if AX = λX, then X is called an eigenvector of A with eigenvalue λ. This leads to a set of linear homogeneous equations which will have a nontrivial solution if and only if the determinant det(A − λI) = 0, which is known as the characteristic equation of matrix A. Roots of this equation are the eigenvalues. The eigenvalues may all be distinct or some of them may be repeated. When an eigenvalue is repeated, say, k (k ≤ n) times, it is said to be a k-fold eigenvalue, which means the eigenvalue occurs k times. Once an eigenvalue is determined by solving the corresponding characteristic equation, one can, in principle, determine the corresponding eigenvectors. The process of determining the eigenvalues and eigenvectors of a matrix is known as matrix diagonalization. As an illustration, let us find the eigenvalues and eigenvectors of the matrix by working out a few examples. The sum of the eigenvalues gives the trace of the matrix, and the determinant corresponds to the product of the eigenvalues. 9.2 Eigenvalues and Eigenvectors. Mathematica has built-in commands to compute the eigenvalues and eigenvectors and to find the characteristic polynomial. Eigenvalues[m] computes the eigenvalues of the matrix m. Eigenvectors[m] computes the eigenvectors of the matrix m. Eigensystem[m] computes the eigenvalues and the corresponding eigenvectors of the matrix m. Example 9.1 (i) Determine the eigenvalues and eigenvectors of the matrix A with rows (1, −1, −1), (−1, 1, −1), (−1, −1, 1). (ii) Verify that the sum of the eigenvalues is equal to the trace of the matrix, i.e.
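A NumPy analogue of the Mathematica `Eigensystem[m]` computation for the matrix of Example 9.1, also checking the trace and determinant identities stated above (using `np.linalg.eig` is this sketch's choice, not the book's):

```python
import numpy as np

# The matrix from Example 9.1.
A = np.array([[ 1.0, -1.0, -1.0],
              [-1.0,  1.0, -1.0],
              [-1.0, -1.0,  1.0]])

vals, vecs = np.linalg.eig(A)   # eigenvalues 2, 2, -1 (in some order)

# Sum of eigenvalues = trace; product of eigenvalues = determinant.
trace_check = np.isclose(vals.sum(), np.trace(A))
det_check = np.isclose(vals.prod(), np.linalg.det(A))
```

Each column of `vecs` is an eigenvector corresponding to the eigenvalue in the same position of `vals`, matching the pairing returned by `Eigensystem`.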
  • Linear Algebra: Gateway to Mathematics
    Chapter 8: Eigenvalues and Eigenvectors. This final chapter brings together all the topics we have covered in the course. Our goal is to analyze the characteristic features of any given linear transformation. Of course, we will be dealing with vectors and vector spaces. We will be using the concept of dimension in our quest for a basis that is especially compatible with the linear transformation. In certain cases we will even be able to choose an orthogonal basis. Matrices will provide concrete representations of the linear transformations. The Gauss-Jordan reduction algorithm and the determinant function will be our basic tools in carrying out numerous computational details. 8.1 Definitions. Consider the moderately unpleasant-looking linear transformation T : R^2 → R^2 defined by T([x, y]) = [23x − 12y, 40x − 21y], i.e., multiplication by the matrix with rows (23, −12) and (40, −21). This formula provides little in the way of geometric insight as to how T transforms vectors in the plane. Notice, however, that T([3, 5]) = [9, 15] = 3[3, 5] and T([1, 2]) = [−1, −2] = −1[1, 2]. Hence, relative to the basis B = {[3, 5], [1, 2]} for R^2, the matrix of T is particularly simple. According to Theorem 6.12, the columns of this matrix are [T([3, 5])]_B = [9, 15]_B = [3, 0] and [T([1, 2])]_B = [−1, −2]_B = [0, −1]. Transforming a vector in R^2 by multiplication by this matrix D, with rows (3, 0) and (0, −1), is comparatively easy to describe. Looking at this transformation geometrically, we see that the vector gets stretched by a factor of 3 in the x-direction and reflected through the x-axis. This insight helps us picture what the original transformation T does. It can be described as a three-step process. First warp the plane linearly so that [3, 5] and [1, 2] coincide with the original locations of [1, 0] and [0, 1], respectively. Then perform the stretching and reflecting as described above. Finally, let the plane snap back to its unwarped configuration.
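The change of basis worked out above can be reproduced directly; with P assembled from the eigenvectors [3, 5] and [1, 2] as columns, conjugating A gives the diagonal matrix with entries 3 and −1:

```python
import numpy as np

# The matrix of the transformation T from the excerpt.
A = np.array([[23.0, -12.0],
              [40.0, -21.0]])

# Columns of P are the eigenvectors [3, 5] and [1, 2].
P = np.array([[3.0, 1.0],
              [5.0, 2.0]])

D = np.linalg.inv(P) @ A @ P   # diagonal: diag(3, -1)
```

This is the "warp, stretch/reflect, snap back" decomposition in matrix form: A = P D P^{-1}.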
  • Linear Algebra: A First Course with Applications
    The collection of eigenvectors corresponding to an eigenvalue need not be just multiples of some vector, and that in turn raises the question: just what can we say about the collection of eigenvectors of an eigenvalue? Definition 3: Let A be an n × n matrix, let T_A : R^n → R^n be the linear transformation defined by T_A(X) = AX, and let λ be an eigenvalue of A. The eigenspace of A (equivalently, the eigenspace of T_A) associated with λ is the set of all eigenvectors associated with λ together with the 0 vector.
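One way to compute the eigenspace of Definition 3 numerically is as the null space of A − λI; this SVD-based sketch uses an assumed diagonal example in which λ = 2 has a two-dimensional eigenspace:

```python
import numpy as np

# Eigenspace of lam = the null space of (A - lam*I), here found via the SVD:
# rows of Vt whose singular values vanish span the null space.
# (The matrix is an assumed example, not from the excerpt.)
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
lam = 2.0

_, s, vt = np.linalg.svd(A - lam * np.eye(3))
num_zero = int(np.sum(s < 1e-10))
basis = vt[len(s) - num_zero:]   # rows form a basis of the eigenspace
```

Here `basis` has two rows, showing that the eigenvectors for λ = 2 fill a plane rather than a single line, which is exactly the point the excerpt is making.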
  • Linear Algebra: Examples and Applications
    • Alain M. Robert (Author)
    • 2005 (Publication Date)
    • WSPC (Publisher)
    It may be possible to describe the evolution of this vector by a matrix multiplication: v′ = Av. If the population changes, one may be especially concerned by the variation (or preservation) of the shape of the pyramid of ages. In the transition from one generation to the next one, this shape is preserved precisely when the vector representing the population is simply multiplied by a scalar: v′ = λv. In other words, assuming that we know the matrix A, can we find a stable pyramid of ages? This problem leads to the theory which is explained below. 6.2 Definitions and Examples. Let us call operator any linear map from a vector space E into itself. 6.2.1 Definitions. Definition. An eigenvector of an operator T is an element v ∈ E such that v ≠ 0 and Tv is proportional to v. We then write Tv = λv with a scalar λ. A nonzero vector v is an eigenvector of T when Tv = λv, Tv − λv = 0, (T − λI)v = 0, v ∈ ker(T − λI). Definition. An eigenvalue of an operator T is a scalar λ such that ker(T − λI) ≠ {0}. The nonzero elements of ker(T − λI) are the eigenvectors of T corresponding to the eigenvalue λ. The eigenvalues are the special values λ such that ker(T − λI) ≠ {0}. When E is a finite-dimensional space, these are the values λ such that the rank of T − λI is not maximal. Definition. If λ is an eigenvalue of an operator T in a vector space E, V_λ = {v ∈ E : Tv = λv} = ker(T − λI) is the eigenspace of T relative to the eigenvalue λ. Its dimension m_λ = dim V_λ = dim ker(T − λI) is the geometric multiplicity of the eigenvalue λ. By definition, λ is an eigenvalue of T ⇔ V_λ = ker(T − λI) ≠ {0} ⇔ m_λ ≥ 1, and the geometric multiplicity of λ is the maximal number of linearly independent eigenvectors that can be found for this eigenvalue.
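The geometric multiplicity m_λ = dim ker(T − λI) can be computed for a matrix as n − rank(A − λI); the matrix below is an assumed example in which λ = 1 has geometric multiplicity 2:

```python
import numpy as np

# Geometric multiplicity of lam = dim ker(A - lam*I) = n - rank(A - lam*I).
# (The matrix is illustrative: lam = 1 has a two-dimensional eigenspace.)
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
lam = 1.0

n = A.shape[0]
geom_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))
```

This matches the rank formulation in the excerpt: λ is an eigenvalue exactly when the rank of A − λI drops below n, i.e., when `geom_mult` ≥ 1.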
  • Elementary Linear Algebra
    • Howard Anton, Anton Kaul (Authors)
    • 2020 (Publication Date)
    • Wiley (Publisher)
    d. If  is a complex eigenvalue of a real matrix  with a cor- responding complex eigenvector v, then  is a complex eigenvalue of  and v is a complex eigenvector of  corre- sponding to . e. Every eigenvalue of a complex symmetric matrix is real. f . If a 2 × 2 real matrix  has complex eigenvalues and x 0 is a vector in  2 , then the vectors x 0 ,  x 0 ,  2 x 0 , . . . ,  n x 0 , . . . lie on an ellipse. 5.4 Differential Equations Many laws of physics, chemistry, biology, engineering, and economics are described in terms of “differential equations”—that is, equations involving functions and their deriva- tives. In this section we will illustrate one way in which matrix diagonalization can be used to solve systems of differential equations. Calculus is a prerequisite for this section. 324 CHAPTER 5 Eigenvalues and Eigenvectors Terminology Recall from calculus that a differential equation is an equation involving unknown func- tions and their derivatives. The order of a differential equation is the order of the highest derivative it contains. The simplest differential equations are the first-order equations of the form y ′ = ay (1) where y = (x) is an unknown differentiable function to be determined, y ′ = dy /dx is its derivative, and a is a constant. As with most differential equations, this equation has infinitely many solutions; they are the functions of the form y = ce ax (2) where c is an arbitrary constant. That every function of this form is a solution of (1) follows from the computation y ′ = cae ax = ay and that these are the only solutions is shown in the exercises. Accordingly, we call (2) the general solution of (1). As an example, the general solution of the differential equation y ′ = 5y is y = ce 5 x (3) Often, a physical problem that leads to a differential equation imposes some conditions that enable us to isolate one particular solution from the general solution.
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM); each adds context and meaning to key research topics.