Technology & Engineering

Eigenvalue

An eigenvalue is a fundamental concept in linear algebra, used in a wide range of technological and engineering applications. It is a scalar that characterizes the behavior of a linear transformation or a matrix. Eigenvalues are crucial for understanding stability, oscillations, and equilibrium points in systems, making them essential for analyzing and solving engineering problems.

Written by Perlego with AI-assistance

9 Key excerpts on "Eigenvalue"

  • An Introduction to Matrix Decomposition
    Chapter 6: Eigenvalue, Eigenvector & Eigenspace. In mathematics, eigenvalue, eigenvector and eigenspace are related concepts in the field of linear algebra. The prefix eigen- is adopted from the German word eigen, meaning innate, idiosyncratic, one's own. Linear algebra studies linear transformations, which are represented by matrices acting on vectors. Eigenvalues, eigenvectors and eigenspaces are properties of a matrix. They are computed by a method described below, give important information about the matrix, and can be used in matrix factorization. They have applications in areas of applied mathematics as diverse as economics and quantum mechanics. In general, a matrix acts on a vector by changing both its magnitude and its direction. However, a matrix may act on certain vectors by changing only their magnitude, leaving their direction unchanged (or possibly reversing it). These vectors are the eigenvectors of the matrix. A matrix acts on an eigenvector by multiplying its magnitude by a factor, which is positive if its direction is unchanged and negative if its direction is reversed. This factor is the eigenvalue associated with that eigenvector. An eigenspace is the set of all eigenvectors that have the same eigenvalue, together with the zero vector. These concepts are formally defined in the language of matrices and linear transformations. Formally, if A is a linear transformation, a non-null vector x is an eigenvector of A if there is a scalar λ such that Ax = λx. The scalar λ is said to be an eigenvalue of A corresponding to the eigenvector x. Overview: In linear algebra, there are two kinds of objects: scalars, which are just numbers; and vectors, which can be thought of as arrows, and which have both magnitude and direction (though more precisely a vector is a member of a vector space).
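The defining relation Ax = λx from the excerpt above can be checked numerically. A minimal sketch using NumPy (the 2×2 matrix here is a made-up example, not from the source):

```python
import numpy as np

# A simple symmetric matrix whose eigenpairs are easy to verify by hand:
# eigenvalues 3 and 1, with eigenvectors along (1, 1) and (1, -1).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the defining property A x = lambda x for each eigenpair.
for i in range(len(eigenvalues)):
    lam = eigenvalues[i]
    x = eigenvectors[:, i]          # i-th eigenvector is the i-th column
    assert np.allclose(A @ x, lam * x)
```

Each column of the returned eigenvector matrix pairs with the eigenvalue at the same index, which is why the check iterates over columns.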
  • Elementary Linear Algebra with Supplemental Applications
    • Howard Anton, Chris Rorres (Authors)
    • 2014 (Publication Date)
    • Wiley (Publisher)
    Chapter 5: Eigenvalues and Eigenvectors. CHAPTER CONTENTS: 5.1 Eigenvalues and Eigenvectors; 5.2 Diagonalization; 5.3 Complex Vector Spaces; 5.4 Differential Equations. INTRODUCTION In this chapter we will focus on classes of scalars and vectors known as "eigenvalues" and "eigenvectors," terms derived from the German word eigen, meaning "own," "peculiar to," "characteristic," or "individual." The underlying idea first appeared in the study of rotational motion but was later used to classify various kinds of surfaces and to describe solutions of certain differential equations. In the early 1900s it was applied to matrices and matrix transformations, and today it has applications in such diverse fields as computer graphics, mechanical vibrations, heat flow, population dynamics, quantum mechanics, and economics, to name just a few. 5.1 Eigenvalues and Eigenvectors In this section we will define the notions of "eigenvalue" and "eigenvector" and discuss some of their basic properties. Definition of Eigenvalue and Eigenvector We begin with the main definition in this section. DEFINITION 1 If A is an n × n matrix, then a nonzero vector x in R^n is called an eigenvector of A (or of the matrix operator T_A) if Ax is a scalar multiple of x; that is, Ax = λx for some scalar λ. The scalar λ is called an eigenvalue of A (or of T_A), and x is said to be an eigenvector corresponding to λ. The requirement that an eigenvector be nonzero is imposed to avoid the unimportant case A0 = λ0, which holds for every A and λ. In general, the image of a vector x under multiplication by a square matrix A differs from x in both magnitude and direction. However, in the special case where x is an eigenvector of A, multiplication by A leaves the direction unchanged. For example, in R^2 or R^3 multiplication by A maps each eigenvector x of A (if any) along the same line through the origin as x.
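The geometric point in the excerpt — multiplication by A keeps an eigenvector on its own line through the origin, while a generic vector is rotated off its line — can be illustrated with the 2-D scalar cross product (the matrix below is a hypothetical example, not from the source):

```python
import numpy as np

# A hypothetical matrix used only for illustration.
A = np.array([[3.0, 0.0],
              [8.0, -1.0]])

# x = (1, 2) is an eigenvector of A: A x = (3, 6) = 3 x.
x = np.array([1.0, 2.0])
y = A @ x
# In R^2, x1*y2 - x2*y1 vanishes exactly when y lies on the
# same line through the origin as x.
cross_eig = x[0] * y[1] - x[1] * y[0]   # zero: direction unchanged

# A generic (non-eigen) vector is carried off its line.
w = np.array([1.0, 0.0])
z = A @ w                                # (3, 8)
cross_gen = w[0] * z[1] - w[1] * z[0]   # nonzero: direction changed
```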
  • Fundamentals of Matrix Computations
    Chapter 5: Eigenvalues and Eigenvectors I. Eigenvalues and eigenvectors turn up in stability theory, theory of vibrations, quantum mechanics, statistical analysis, and many other fields. It is therefore important to have efficient, reliable methods for computing these objects. The main business of this chapter is to develop such algorithms, culminating in the powerful and elegant QR algorithm. (The QR algorithm should not be confused with the QR decomposition, which we studied extensively in Chapter 3. As we shall see, the QR algorithm is an iterative procedure that performs QR decompositions repeatedly.) Before we embark on the development of algorithms, we take the time to illustrate (in Section 5.1) how eigenvalues and eigenvectors arise in the analysis of systems of differential equations. The material is placed here entirely for motivational purposes. It is intended to convince you, the student, that eigenvalues are important. Section 5.1 is not, strictly speaking, a prerequisite for the rest of the chapter. Section 5.1 also provides an opportunity to introduce MATLAB's eig command. When you use eig to compute the eigenvalues and eigenvectors of a matrix, you are using the QR algorithm. 5.1 SYSTEMS OF DIFFERENTIAL EQUATIONS Many applications of eigenvalues and eigenvectors arise from the study of systems of differential equations. Fig. 5.1: Solve for the time-varying loop currents. Example 5.1.1 The electrical circuit in Figure 5.1 is the same as the one that was featured in Example 1.2.8, except that two inductors and a switch have been added. Whereas resistors resist current, inductors resist changes in current. If we are studying constant, unvarying currents, as in Example 1.2.8, we can ignore the inductors, since their effect is felt only when the currents are changing. However, if the currents are varying in time, we must take the inductances into account. Once the switch in the circuit is closed, current will begin to flow.
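The excerpt describes the QR algorithm as repeated QR decompositions. A bare, unshifted sketch of that core loop (the 2×2 matrix is a made-up example; practical implementations add shifts and a Hessenberg reduction first, as the book goes on to develop):

```python
import numpy as np

# Plain (unshifted) QR iteration: repeatedly factor A_k = Q R,
# then form A_{k+1} = R Q. For this symmetric matrix with
# eigenvalues 3 and 1, the iterates converge to a diagonal matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

Ak = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q          # similar to A_k, so eigenvalues are preserved

# The diagonal of Ak now approximates the eigenvalues (3 and 1).
print(np.diag(Ak))
```

Each step is a similarity transform (R Q = Qᵀ A_k Q), which is why the eigenvalues are preserved while the off-diagonal entries decay.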
  • Matrices in Engineering Problems
    • Marvin Tobias (Author)
    • 2022 (Publication Date)
    • Springer (Publisher)
    Chapter 6: Matrix Eigenvalue Analysis. 6.1 INTRODUCTION Matrix analysis is particularly interesting because of the insight that it brings to so many areas of engineering. With the advent of the modern computer, much of the numerical labor is at least transferred into the fascinating realm of programming. Perhaps the single most interesting matrix analysis is that which will now be discussed. It has fundamental bearing on the solution of many differential equations governing vibration problems, and on the analysis of electrical networks. The eigenvalue problem is basically concerned with the transformation of vectors and matrices in a most advantageous fashion. 6.2 THE EIGENVALUE PROBLEM The beginning is simple enough. Concerning the transform Ax = y (6.1), where A is a general, real, square matrix, we ask whether or not an (input) vector x can be found such that the (output) vector y is proportional to x. That is: Ax = λx (6.2). The constant λ is the (scalar) factor of proportionality. We can bring λx to the left side of (6.2): [A − λI]x = A(λ)x = 0 (6.3). In (6.3), the notation [A − λI] is used rather than the more familiar (A − λI) in order to emphasize that the quantity within the brackets is a square matrix. A is n×n, x is n×1, I is the n×n unit matrix; so the zero on the right side is an n×1 null column. The matrix [A − λI] is often referred to as A(λ), the "Lambda Matrix" or "Characteristic Matrix." When A is not symmetric, the "companion" equation (6.4) must also be considered: z[A − λI] = 0 (6.4), where now z is 1×n (a row vector) and the 0 is a null 1×n row. As will be seen, these two equations are "bound together," and will be solved together. From Chapter 4, the homogeneous sets (6.3) and (6.4) have nontrivial solutions iff the matrix [A − λI] is singular. Furthermore, in this treatment of the problem, we will require that the rank of the matrix [A − λI] be n − 1. This condition is met by most engineering problems of interest.
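The excerpt's companion equation z[A − λI] = 0 defines left (row) eigenvectors. One way to sketch the connection numerically: left eigenvectors of A are right eigenvectors of Aᵀ, and both problems share the same eigenvalues. The 2×2 matrix below is a made-up nonsymmetric example, not from the source:

```python
import numpy as np

# A nonsymmetric example; eigenvalues are 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Right eigenvectors: A x = lambda x
lam_r, X = np.linalg.eig(A)
# Left eigenvectors z A = lambda z are right eigenvectors of A transpose.
lam_l, Z = np.linalg.eig(A.T)

# Both problems yield the same eigenvalues.
assert np.allclose(sorted(lam_r), sorted(lam_l))

# Check the row-vector relation z [A - lambda I] = 0 for one pair.
z = Z[:, 0]
assert np.allclose(z @ (A - lam_l[0] * np.eye(2)), 0.0)
```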
  • Matrix Theory
    • David W. Lewis (Author)
    • 1991 (Publication Date)
    • WSPC (Publisher)
    Chapter 4: Eigenvalues and Eigenvectors. We introduce the fundamental and important notions of eigenvalues and eigenvectors of a square matrix. We give the definitions, method of determination, and basic properties of eigenvalues and eigenvectors. We consider matrix polynomials, the Cayley-Hamilton theorem, and minimal polynomials of matrices. We then consider diagonalizable matrices, i.e. matrices similar to diagonal matrices, and determine exactly which matrices are diagonalizable. Finally we give a few examples of situations where the above notions are encountered. 4.1 Eigenvalues and eigenvectors Let M_n(F) be the set of all n×n matrices with entries in F, where F = R or C. We will regard M_n(R) as a subset of M_n(C). For the consideration of eigenvalues it is convenient to think of a real-valued matrix as an element of M_n(C). 4.1.1 Definition The complex number λ is called an eigenvalue of the n×n matrix A provided that there exists a non-zero column vector v in C^n such that Av = λv. Eigenvalues are also sometimes known as characteristic values, latent values or proper values. The name comes from the German word eigenwert. 4.1.2 Definition The non-zero vector v in (4.1.1) is called an eigenvector corresponding to the eigenvalue λ. 4.1.3 Remark It is important that the vector v in (4.1.1) is non-zero. If we allowed v in (4.1.1) to be the zero vector then every complex number would be an eigenvalue of A! Observe also that if v is an eigenvector for A corresponding to the eigenvalue λ, then so is αv for every non-zero α ∈ C. 4.2 Determination of Eigenvalues The equation Av = λv in (4.1.1) can be rewritten in the form (A − λI)v = 0. This means that the non-zero vector v lies in the kernel of the linear map defined by the matrix A − λI. 4.2.1 Definition The characteristic polynomial of the n×n matrix A is the polynomial in one variable λ given by det(A − λI). We write p_A(λ) for this polynomial. It is a polynomial of degree n.
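The characteristic polynomial det(A − λI) defined in the excerpt can be sketched numerically: NumPy's `np.poly` returns the monic characteristic-polynomial coefficients of a square matrix, and its roots should match the eigenvalues. The 2×2 matrix is a made-up example:

```python
import numpy as np

# A triangular example: eigenvalues are the diagonal entries 2 and 3,
# so the characteristic polynomial is lambda^2 - 5 lambda + 6.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# Monic coefficients of det(A - lambda I): [1, -trace, det] for 2x2.
coeffs = np.poly(A)            # approximately [1, -5, 6]
roots = np.roots(coeffs)       # roots of the characteristic polynomial
eigs = np.linalg.eig(A)[0]

# The roots of p_A(lambda) are exactly the eigenvalues.
assert np.allclose(sorted(roots.real), sorted(eigs.real))
```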
  • Methods of Matrix Algebra
    CHAPTER III: Eigenvalues and Eigenvectors. We are now ready to begin the study of the possible behavior of a matrix. The concept that we shall take as central for this purpose is the eigenvalues and their associated eigenvectors and chains of generalized eigenvectors. In this chapter, we will develop this concept itself, both analytically and by appeal to physical intuition. The rigorous development we will defer to later chapters, particularly Chapter VIII. 1. BASIC CONCEPT We consider the equation y = Ax (1) and say that A is an operation that maps the space of possible x onto the space of possible y. If A is an n × n matrix, then we can consider the linear vector space S of n-dimensional vectors, and say that A maps S onto or into itself. Now many things can happen. If we consider all x in S, then the situation can be quite complicated. However, among all of S, there may be, and in fact will be, certain vectors that behave in a particularly simple way. These vectors are simply stretched or compressed, or their phase is changed. That is, the effect of A on them simply multiplies them by some scalar. If x_i is such a vector, then Ax_i = λ_i x_i (2), where λ_i is a scalar quantity. A vector x_i that satisfies Eq. (2) is an eigenvector of A, and the corresponding scalar λ_i is an eigenvalue. Other terms are also used. Some authors call the vectors proper vectors or characteristic vectors and the scalars proper values or characteristic values. The terms eigenvectors and eigenvalues seem to be winning out, however, in spite of their confused etymology (eigenvalue is a semi-translation of the German word "Eigenwerte"), and we shall use them here. We can easily illustrate what is happening here from strain theory. Suppose we put a block of material under a pure compression along the y-axis. Then it expands along the x-axis as shown in Fig. 1.
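The strain picture in the excerpt — a pure stretch along one axis and compression along another — corresponds to a diagonal matrix, whose eigenvectors are the coordinate axes and whose eigenvalues are the stretch factors. A minimal sketch (the factors 1.5 and 0.5 are made up for illustration):

```python
import numpy as np

# Pure strain: expand along x by 1.5, compress along y by 0.5.
A = np.array([[1.5, 0.0],
              [0.0, 0.5]])

# The coordinate axes are the eigenvectors; the stretch factors
# are the eigenvalues.
lam, V = np.linalg.eig(A)
assert np.allclose(A @ np.array([1.0, 0.0]), [1.5, 0.0])
assert np.allclose(A @ np.array([0.0, 1.0]), [0.0, 0.5])
```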
  • A Course in Ordinary Differential Equations
    Except in special cases, we need to use some technological tool to find these solutions. If we are using a computer algebra system, then we might as well just use it to solve the system and skip the method of this section. This is especially true if the graph of the solution is the only item of interest. On the other hand, if one is interested in the functional form of the solution, frequently the form of the solution given by a computer algebra system is large, cumbersome, and hard to study. Even simplification routines may not help much. In this case, the method of this section is valuable, even if we are using a machine to perform the individual steps. From a philosophical point of view, it is important that we know how to solve a problem, even though we may let a machine do the work. Our discussion of eigenvalues and eigenvectors was motivated as a way to solve the constant-coefficient, homogeneous version of our general system of linear equations (5.1), written in matrix-vector form as x′ = Ax. As a brief recap, we guessed solutions of the form x = cve^{λt} for some constant c ≠ 0. Then the solution must satisfy the differential equation, so that cλve^{λt} = cAve^{λt}. We can divide by ce^{λt} (because it is never zero) and then rewrite the equation as Av = λv (5.74). Now we are in the realm of an eigenvalue problem. In particular, the only way to have solutions other than v = 0 is to have the matrix A − λI be singular, which means that det(A − λI) = 0. Evaluating this determinant yields an nth-degree polynomial in λ which has n, possibly repeating and possibly complex (non-real), eigenvalues λ_1, . . . , λ_n. Each of these eigenvalues is substituted for λ in Equation (5.74), which in turn is solved for its corresponding eigenvector v_i.
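The recipe in the excerpt — expand the initial condition in the eigenvector basis and let each mode evolve as v e^{λt} — can be sketched numerically. The matrix, initial condition, and time below are made-up illustration values; the closed-form answer they are checked against was worked out by hand from the eigenpairs:

```python
import numpy as np

# x' = A x with a diagonalizable A; eigenvalues are -1 and -2,
# with eigenvectors (1, -1) and (1, -2).
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t = 0.7

lam, V = np.linalg.eig(A)
# Expand x0 in the eigenvector basis: solve V c = x0.
c = np.linalg.solve(V, x0)
# Each mode evolves independently: x(t) = sum_i c_i v_i exp(lambda_i t).
x_eig = V @ (c * np.exp(lam * t))

# Hand-derived closed form for this A and x0:
# x(t) = 2 e^{-t} (1, -1) - e^{-2t} (1, -2)
x_closed = np.array([2 * np.exp(-t) - np.exp(-2 * t),
                     -2 * np.exp(-t) + 2 * np.exp(-2 * t)])
assert np.allclose(x_eig, x_closed)
```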
  • Numerical Methods for Engineers
    Chapter 4: Eigenvalue Equations. 4.1 Introduction Equations of the type

        [A]{x} = λ{x}   (4.1)

    often occur in practice, for example in the analysis of structural stability or the natural frequencies of vibrating systems. We have to find a vector {x} which, when multiplied by [A], yields a scalar multiple of itself. This multiple λ is called an "eigenvalue" or "characteristic value" of [A], and we shall see there are n of these for a matrix of order n. Physically they might represent frequencies of oscillation. There are also n vectors {x}, one associated with each of the eigenvalues λ. These are called "eigenvectors" or "characteristic vectors". Physically they might represent the mode shapes of oscillation. A specific example of equation (4.1) might be

        | 16  -24   18 | {x1}     {x1}
        |  3   -2    0 | {x2} = λ {x2}   (4.2)
        | -9   18  -17 | {x3}     {x3}

    which can be rewritten

        | 16-λ   -24     18   | {x1}   {0}
        |  3    -2-λ      0   | {x2} = {0}   (4.3)
        | -9     18    -17-λ  | {x3}   {0}

    A nontrivial solution of this set of linear simultaneous equations is only possible if the determinant of the coefficients is zero:

        | 16-λ   -24     18   |
        |  3    -2-λ      0   | = 0   (4.4)
        | -9     18    -17-λ  |

    Expanding the determinant gives

        λ³ + 3λ² − 36λ + 32 = 0   (4.5)

    which is called the "characteristic polynomial". Clearly one way of solving eigenvalue equations would be to reduce them to an nth-degree characteristic polynomial and use the methods of the previous chapter to find its roots. This is sometimes done, perhaps as part of a total solution process, but on its own is not usually the best means of solving eigenvalue equations. In the case of equation (4.5) the characteristic polynomial has simple factors

        (λ − 4)(λ − 1)(λ + 8) = 0   (4.6)

    and so the eigenvalues of our matrix are 4, 1 and −8. Note that for arbitrary matrices [A] the characteristic polynomial is likely to yield complex as well as real roots.
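The worked example above is easy to verify numerically: the matrix from equation (4.2) should have characteristic polynomial λ³ + 3λ² − 36λ + 32 and eigenvalues 4, 1 and −8. A quick NumPy check (a sketch, not the book's own solution method):

```python
import numpy as np

# The 3x3 matrix from equation (4.2); the text factors its
# characteristic polynomial as (lambda - 4)(lambda - 1)(lambda + 8).
A = np.array([[16.0, -24.0,  18.0],
              [ 3.0,  -2.0,   0.0],
              [-9.0,  18.0, -17.0]])

# Monic characteristic polynomial coefficients:
# should match lambda^3 + 3 lambda^2 - 36 lambda + 32.
coeffs = np.poly(A)

# Eigenvalues: should be 4, 1 and -8 (up to floating-point error).
lam = np.linalg.eig(A)[0]
print(sorted(lam.real))
```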
  • Advanced Engineering Mathematics with Modeling Applications
    • S. Graham Kelly (Author)
    • 2008 (Publication Date)
    • CRC Press (Publisher)
    The eigenvalues are the roots of this polynomial, called the characteristic polynomial. Since the eigenvalues are the roots of a polynomial of order n, an n × n matrix has n eigenvalues. All eigenvalues may not be distinct; some may be repeated. If A is a real matrix, then all coefficients of its characteristic polynomial are real. This implies that if an eigenvalue is complex, that is, it has a nonzero imaginary part, then its complex conjugate is also an eigenvalue. The eigenvectors are the nontrivial solutions of Equation 5.15 corresponding to each eigenvalue. It has been shown in Chapter 2 that the adjoint of a real n × n matrix A with respect to the standard inner product on R^n is the transpose of the matrix, A* = A^T. A matrix is self-adjoint with respect to the standard inner product if it is symmetric. Thus, from Theorems 5.3 and 5.4, all eigenvalues of a symmetric matrix are real, and the corresponding eigenvectors are mutually orthogonal with respect to the standard inner product on R^n. Example 5.1 The stress tensor σ defines the state of stress at a point in a continuum:

        σ = | σ11  σ12  σ13 |
            | σ21  σ22  σ23 |   (a)
            | σ31  σ32  σ33 |

    The stress tensor represents the stress vectors acting on three mutually perpendicular planes passing through the point. The stress vector acting on a plane can be resolved into a component normal to the plane (the normal stress) and a component that lies in the plane (the shearing stress), as illustrated in Figure 5.1. The diagonal elements are the normal stresses, the components of stress normal to the planes whose normals are in the x-, y-, and z-directions. The off-diagonal elements are the shear stresses; σij is the stress acting in the j-direction on the plane whose normal is in the i-direction.
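The symmetric-matrix facts in the excerpt — real eigenvalues and mutually orthogonal eigenvectors — are exactly what a symmetric eigensolver exploits. A minimal sketch with a hypothetical symmetric stress tensor (the numerical values are made up for illustration):

```python
import numpy as np

# A hypothetical symmetric stress tensor (values made up for illustration).
S = np.array([[50.0, 30.0,  0.0],
              [30.0, -20.0, 0.0],
              [ 0.0,  0.0, 10.0]])

# eigh is specialised to symmetric matrices: the eigenvalues come back
# real, and the eigenvector columns are orthonormal.
principal, directions = np.linalg.eigh(S)

# The eigenvector matrix is orthogonal: V^T V = I.
assert np.allclose(directions.T @ directions, np.eye(3))
# Each column satisfies S v = lambda v.
for i in range(3):
    assert np.allclose(S @ directions[:, i], principal[i] * directions[:, i])
```

In continuum mechanics the eigenvalues of the stress tensor are the principal stresses and the eigenvectors the principal directions, which is why the symmetry result matters here.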
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.