Diagonalising Matrix
Diagonalising a matrix involves finding a similarity transformation that converts it into a diagonal matrix whose entries are its eigenvalues. This process is important across mathematics and physics because it simplifies calculations, such as computing matrix powers, and reveals key properties of the original matrix.
Written by Perlego with AI-assistance
7 Key excerpts on "Diagonalising Matrix"
- eBook - PDF
- Martin Anthony, Michele Harvey (Authors)
- 2012 (Publication Date)
- Cambridge University Press (Publisher)
9 Applications of diagonalisation

We will now look at some applications of diagonalisation. We apply diagonalisation to find powers of diagonalisable matrices. We also solve systems of simultaneous linear difference equations. In particular, we look at the important topic of Markov chains. We also look at systems of differential equations. (Do not worry if you are unfamiliar with difference or differential equations. The key ideas you'll need are discussed.) We will see that the diagonalisation process makes the solution of linear systems of difference and differential equations possible by essentially changing basis to one in which the problem is readily solvable, namely a basis of $\mathbb{R}^n$ consisting of eigenvectors of the matrix describing the system.

9.1 Powers of matrices

For a positive integer $n$, the $n$th power of a matrix $A$ is simply $A^n = \underbrace{AAA\cdots A}_{n \text{ times}}$.

Example 9.1 Consider the matrix $A = \begin{pmatrix} 7 & -15 \\ 2 & -4 \end{pmatrix}$ (which we met in Example 8.3). We have
$$A^2 = AA = \begin{pmatrix} 7 & -15 \\ 2 & -4 \end{pmatrix} \begin{pmatrix} 7 & -15 \\ 2 & -4 \end{pmatrix} = \begin{pmatrix} 19 & -45 \\ 6 & -14 \end{pmatrix},$$
$$A^3 = A \cdot A^2 = \begin{pmatrix} 7 & -15 \\ 2 & -4 \end{pmatrix} \begin{pmatrix} 19 & -45 \\ 6 & -14 \end{pmatrix} = \begin{pmatrix} 43 & -105 \\ 14 & -34 \end{pmatrix}.$$
It is often useful, as we shall see in this chapter, to determine $A^n$ for a general integer $n$. As you can see from Example 9.1, we could calculate $A^n$ by performing $n - 1$ matrix multiplications. But it would be more satisfying (and easier) to have a 'formula' for the $n$th power, a matrix expression involving $n$ into which one could substitute any desired value of $n$. Diagonalisation helps here. If we can write $P^{-1}AP = D$, then $A = PDP^{-1}$ and so
$$A^n = \underbrace{(PDP^{-1})(PDP^{-1})\cdots(PDP^{-1})}_{n \text{ times}} = PD(P^{-1}P)D(P^{-1}P)\cdots(P^{-1}P)DP^{-1} = P\underbrace{DDD\cdots D}_{n \text{ times}}P^{-1} = PD^nP^{-1}.$$
The product $PD^nP^{-1}$ is easy to compute since $D^n$ is simply the diagonal matrix with entries equal to the $n$th power of those of $D$.
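The excerpt's formula $A^n = PD^nP^{-1}$ is straightforward to check numerically. Below is a minimal NumPy sketch (not from the textbook; the function name is our own) applied to the matrix of Example 9.1.

```python
# Illustrates A^n = P D^n P^{-1} for the matrix of Example 9.1.
import numpy as np

A = np.array([[7.0, -15.0],
              [2.0,  -4.0]])

# Diagonalise: columns of P are eigenvectors, D holds the eigenvalues.
eigenvalues, P = np.linalg.eig(A)        # eigenvalues of A are 1 and 2
P_inv = np.linalg.inv(P)

def matrix_power_via_diagonalisation(n: int) -> np.ndarray:
    """Compute A^n as P D^n P^{-1}; D^n just powers the diagonal entries."""
    D_n = np.diag(eigenvalues ** n)
    return P @ D_n @ P_inv

# Check against Example 9.1 and against direct repeated multiplication.
print(matrix_power_via_diagonalisation(2))             # [[19, -45], [6, -14]]
print(matrix_power_via_diagonalisation(3))             # [[43, -105], [14, -34]]
print(np.allclose(matrix_power_via_diagonalisation(5),
                  np.linalg.matrix_power(A, 5)))       # True
```

For large $n$ this costs one diagonalisation plus two matrix products rather than $n - 1$ multiplications, which is precisely the excerpt's point.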
- eBook - PDF
- C Y Hsiung, G Y Mao (Authors)
- 1998 (Publication Date)
- World Scientific (Publisher)
CHAPTER 5 MATRICES SIMILAR TO DIAGONAL MATRICES

In the previous chapter we demonstrated that any symmetric matrix can be congruent to a diagonal matrix, i.e., for any symmetric matrix $A$ there is an invertible matrix $P$ such that $P'AP$ is a diagonal matrix, the background of which is a quadratic form. In this chapter we shall consider matrices that are similar to diagonal matrices, where the matrices under discussion are square matrices but are not necessarily symmetric matrices. This type of problem arises from the treatment of linear transformations that will be discussed later. Normally, the content of this chapter should come after linear transformations. However, as the content of this chapter deals mainly with calculations, not new concepts, we put it ahead of linear transformations for the sake of continuity with the previous chapters. First we shall give the necessary and sufficient conditions for a matrix to be similar to a diagonal matrix, and then discuss how to reduce real symmetric matrices and orthogonal matrices, two kinds of very important matrices, to diagonal matrices. In the process of discussion we need the concepts of eigenvalues, eigenvectors, the λ-matrix, etc., as well as their basic properties. Besides, the concept of the minimal polynomial is also necessary. These concepts are themselves very important and useful. In this chapter we shall discuss the following four problems in detail:
1. Concepts of eigenvalues and eigenvectors, and their basic properties.
2. The necessary and sufficient condition for a matrix to be similar to a diagonal matrix, and the method of reducing matrices to diagonal matrices.
3. Methods for reducing real symmetric matrices and orthogonal matrices to diagonal matrices.
4. Minimal polynomials of matrices.
The chapter consists of five sections. In the first four we deal with the first three problems, and in the last section we discuss the fourth problem.
- eBook - PDF
- Howard Anton, Anton Kaul (Authors)
- 2020 (Publication Date)
- Wiley (Publisher)
As we will see, this problem is closely related to that of finding an orthonormal basis for $\mathbb{R}^n$ that consists of eigenvectors of $A$. Problems of this type are important because many of the matrices that arise in applications are symmetric.

The Orthogonal Diagonalization Problem

In Section 5.2 we defined two square matrices, $A$ and $B$, to be similar if there is an invertible matrix $P$ such that $P^{-1}AP = B$. In this section we will be concerned with the special case in which it is possible to find an orthogonal matrix $P$ for which this relationship holds. We begin with the following definition.

Definition 1 If $A$ and $B$ are square matrices, then we say that $B$ is orthogonally similar to $A$ if there is an orthogonal matrix $P$ such that $B = P^{T}AP$.

Note that if $B$ is orthogonally similar to $A$, then it is also true that $A$ is orthogonally similar to $B$, since we can express $A$ as $A = Q^{T}BQ$, where $Q = P^{T}$. This being the case we will say that $A$ and $B$ are orthogonally similar matrices if either is orthogonally similar to the other. If $A$ is orthogonally similar to some diagonal matrix, say $D = P^{T}AP$, then we say $A$ is orthogonally diagonalizable and $P$ orthogonally diagonalizes $A$.

Our first goal in this section is to determine what conditions a matrix must satisfy to be orthogonally diagonalizable. As an initial step, observe that there is no hope of orthogonally diagonalizing a matrix that is not symmetric. To see why this is so, suppose that
$$P^{T}AP = D \tag{1}$$
where $P$ is an orthogonal matrix and $D$ is a diagonal matrix. Multiplying the left side of (1) by $P$, the right side by $P^{T}$, and then using the fact that $PP^{T} = P^{T}P = I$, we can rewrite this equation as
$$A = PDP^{T} \tag{2}$$
Now transposing both sides of this equation and using the fact that a diagonal matrix is the same as its transpose, we obtain
$$A^{T} = (PDP^{T})^{T} = (P^{T})^{T}D^{T}P^{T} = PDP^{T} = A,$$
so $A$ must be symmetric if it is orthogonally diagonalizable.
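As a numerical companion to this argument, here is a short NumPy sketch (ours, not the textbook's; the example matrix is an assumption for illustration) that orthogonally diagonalizes a small symmetric matrix and confirms the relationships above.

```python
# Orthogonal diagonalization of a symmetric matrix, as in Definition 1.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric by construction

# eigh is designed for symmetric (Hermitian) matrices and returns
# orthonormal eigenvectors as the columns of P.
eigenvalues, P = np.linalg.eigh(A)

D = P.T @ A @ P                          # P^T A P should be diagonal
print(np.round(D, 10))                   # diag(1, 3)
print(np.allclose(P.T @ P, np.eye(2)))   # True: P is orthogonal

# The excerpt's necessity argument: A = P D P^T equals its own transpose,
# so a non-symmetric matrix could never satisfy this relationship.
print(np.allclose(A, P @ D @ P.T))       # True
```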
- eBook - ePub
- P.M. Cohn (Author)
- 2017 (Publication Date)
- CRC Press (Publisher)
The set of all eigenvalues of $A$ is also called the spectrum of $A$. Equation (8.8) shows that a matrix similar to a diagonal matrix has $n$ linearly independent eigenvectors; conversely, the existence of $n$ linearly independent eigenvectors entails the equations (8.8) and hence (8.7). This result may be summed up as follows:

Theorem 8.2 A square matrix $A$ is similar to a diagonal matrix if and only if there is a basis for the whole space, formed by eigenvectors for $A$, and the matrix with linearly independent eigenvectors as columns provides the matrix of transformation, while the eigenvalues are the entries of the diagonal matrix.

It is important to note that the characteristic polynomial is unchanged by similarity transformation:

Theorem 8.3 Similar matrices have the same characteristic polynomial, hence the same trace and determinant, and the same eigenvalues.

Proof If $A' = P^{-1}AP$, then $P^{-1}(xI - A)P = xI - P^{-1}AP = xI - A'$, hence $|xI - A'| = |P|^{-1}|xI - A||P| = |xI - A|$. The rest is clear.

As an example let us find the eigenvalues of
$$A = \begin{pmatrix} 4 & -2 \\ 3 & -1 \end{pmatrix}.$$
The characteristic equation is $x^2 - 3x + 2 = 0$; its roots are $x = 1, 2$. For each root we can find an eigenvector. For $x = 1$:
$$(A - I)u = \begin{pmatrix} 3 & -2 \\ 3 & -2 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = 0.$$
A solution is $u = (2, 3)^{T}$. And for $x = 2$:
$$(A - 2I)u = \begin{pmatrix} 2 & -2 \\ 3 & -3 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = 0.$$
A solution is $u = (1, 1)^{T}$. The two columns of eigenvectors found are linearly independent, so we can use the matrix with the eigenvectors as columns to transform $A$ to diagonal form: $P = \begin{pmatrix} 2 & 1 \\ 3 & 1 \end{pmatrix}$ has inverse $P^{-1} = \begin{pmatrix} -1 & 1 \\ 3 & -2 \end{pmatrix}$, and so we have
$$\begin{pmatrix} -1 & 1 \\ 3 & -2 \end{pmatrix}\begin{pmatrix} 4 & -2 \\ 3 & -1 \end{pmatrix}\begin{pmatrix} 2 & 1 \\ 3 & 1 \end{pmatrix} = \begin{pmatrix} -1 & 1 \\ 3 & -2 \end{pmatrix}\begin{pmatrix} 2 & 2 \\ 3 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.$$
The linear independence of the eigenvectors in this example was no accident, but is a general property, expressed in the following result:

Theorem 8.4 Let $A$ be a square matrix. Then any set of eigenvectors belonging to distinct eigenvalues of $A$ is linearly independent.
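The worked example above is easy to reproduce numerically. Here is a minimal NumPy sketch (not from the book); note that NumPy normalizes eigenvectors to unit length, so its $P$ differs from Cohn's by column scaling, but $P^{-1}AP$ is the same diagonal matrix.

```python
# Reproducing Cohn's example: diagonalize A = [[4, -2], [3, -1]].
import numpy as np

A = np.array([[4.0, -2.0],
              [3.0, -1.0]])

# Columns of P are (unit-length) eigenvectors of A.
eigenvalues, P = np.linalg.eig(A)        # eigenvalues 1 and 2 (order may vary)

D = np.linalg.inv(P) @ A @ P             # P^{-1} A P
print(np.round(eigenvalues, 10))         # roots of x^2 - 3x + 2 = 0
print(np.round(D, 10))                   # the diagonal matrix diag(1, 2)
```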
- eBook - PDF
- Bruce Solomon (Author)
- 2014 (Publication Date)
- Chapman and Hall/CRC (Publisher)
Operators in these classes are called diagonalizable.

Definition 6.1. An operator is diagonal if it is represented by a diagonal matrix. It is diagonalizable if it is similar to a diagonal operator. Most diagonalizable operators are not actually diagonal.

Example 6.2. The matrix
$$A = \begin{pmatrix} 2 & 3 \\ 5 & 0 \end{pmatrix}$$
is certainly not diagonal, but it is diagonalizable, for it is similar to
$$D = \begin{pmatrix} 5 & 0 \\ 0 & -3 \end{pmatrix}.$$
Indeed, $A = BDB^{-1}$, with
$$B = \begin{pmatrix} 1 & -3 \\ 1 & 5 \end{pmatrix}, \qquad B^{-1} = \frac{1}{8}\begin{pmatrix} 5 & 3 \\ -1 & 1 \end{pmatrix}.$$
Geometrically, the eigenspaces of $D$ are just the $x$- and $y$-axes, so $D$ magnifies along the $x$-axis by a factor of 5, while along the $y$-axis, it multiplies by $-3$. Because $A$ is similar to $D$ it also multiplies by 5 and $-3$ in independent directions (those of the corresponding eigenvectors).

This example illustrates the geometric nature of diagonalizability, but how did we find the invertible matrix $B$ that realized the precise algebraic similarity to a diagonal matrix? More generally, we face three obvious questions:
• How can one tell whether an operator is diagonalizable?
• If it is diagonalizable, to what diagonal operator is it similar?
• Finally, how can we find an invertible matrix $B$ that will realize the similarity?
A single theorem settles all three questions:

Theorem 6.3 (Main Diagonalizability Test). A linear operator $T$ on $\mathbb{R}^n$ is diagonalizable if and only if it has $n$ independent eigenvectors $v_1, v_2, \ldots, v_n$. If so, it is similar to the diagonal matrix
$$D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.$$
Each $\lambda_i$ is the eigenvalue of the eigenvector $v_i$, and we have the similarity $T = BDB^{-1}$ (or $D = B^{-1}TB$), where $B$ is the matrix whose $j$th column is $v_j$:
$$B = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix}.$$
Proof. The proof is surprisingly short, and once one sees how it works, the result is almost obvious. The key is to consider the effect of $T$ on the matrix $B$.
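Theorem 6.3 translates directly into a small computational test. The sketch below (NumPy; the helper name is our own, not the book's) checks for $n$ independent eigenvectors and, when they exist, returns the $B$ and $D$ of the theorem, using the matrix of Example 6.2.

```python
# The Main Diagonalizability Test, numerically: T is diagonalizable
# iff its eigenvector matrix has full rank.
import numpy as np

def diagonalise_if_possible(T: np.ndarray):
    """Return (B, D) with T = B D B^{-1} if T has n independent
    eigenvectors, else None."""
    n = T.shape[0]
    eigenvalues, B = np.linalg.eig(T)    # columns of B are eigenvectors
    if np.linalg.matrix_rank(B) < n:     # fewer than n independent eigenvectors
        return None
    return B, np.diag(eigenvalues)

A = np.array([[2.0, 3.0],
              [5.0, 0.0]])
B, D = diagonalise_if_possible(A)
print(np.round(D, 10))                   # eigenvalues 5 and -3, as in Example 6.2
print(np.allclose(A, B @ D @ np.linalg.inv(B)))   # True: A = B D B^{-1}

# A defective matrix fails the test: a 2x2 Jordan block has only one
# independent eigenvector.
print(diagonalise_if_possible(np.array([[1.0, 1.0],
                                        [0.0, 1.0]])))   # None
```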
- Mary L. Boas (Author)
- 2011 (Publication Date)
- Wiley (Publisher)
Simultaneous Diagonalization

Can we diagonalize two (or more) matrices using the same similarity transformation? Sometimes we can, namely if, and only if, they commute. Let's see why this is true. Recall that the diagonalizing matrix $C$ has columns which are mutually orthogonal unit eigenvectors of the matrix being diagonalized. Suppose we can find the same set of eigenvectors for two matrices $F$ and $G$; then the same $C$ will diagonalize both. So the problem amounts to showing how to find a common set of eigenvectors for $F$ and $G$ if they commute.

Example 7. Let's start by diagonalizing $F$. Suppose $r$ (a column matrix) is the eigenvector corresponding to the eigenvalue $\lambda$, that is, $Fr = \lambda r$. Multiply this on the left by $G$ and use $GF = FG$ (the matrices commute) to get
$$GFr = \lambda Gr, \quad \text{or} \quad F(Gr) = \lambda(Gr). \tag{11.38}$$
This says that $Gr$ is an eigenvector of $F$ corresponding to the eigenvalue $\lambda$. If $\lambda$ is not degenerate (that is, if there is just one eigenvector corresponding to $\lambda$), then $Gr$ must be the same vector as $r$ (except maybe for length), that is, $Gr$ is a multiple of $r$, say $Gr = \mu r$. This is the eigenvector equation for $G$; it says that $r$ is an eigenvector of $G$. If all eigenvalues of $F$ are non-degenerate, then $F$ and $G$ have the same set of eigenvectors, and so can be diagonalized by the same matrix $C$.

Example 8. Now suppose that there are two (or more) linearly independent eigenvectors corresponding to the eigenvalue $\lambda$ of $F$. Then every vector in the degenerate eigenspace corresponding to $\lambda$ is an eigenvector of matrix $F$ (see the discussion of degeneracy above). Next consider matrix $G$. Corresponding to all non-degenerate $F$ eigenvalues we already have the same set of eigenvectors for $G$ as for $F$. So we just have to find the eigenvectors of $G$ in the degenerate eigenspace of $F$. Since all vectors in this subspace are eigenvectors of $F$, we are free to choose ones which are eigenvectors of $G$.
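The non-degenerate case of this argument can be checked directly. Below is a NumPy sketch; the commuting matrices $F$ and $G$ are our own illustrative choices, not from the textbook.

```python
# Simultaneous diagonalization of two commuting symmetric matrices.
import numpy as np

F = np.array([[2.0, 1.0],
              [1.0, 2.0]])
G = np.array([[3.0, 1.0],
              [1.0, 3.0]])

print(np.allclose(F @ G, G @ F))         # True: F and G commute

# F has non-degenerate eigenvalues (1 and 3), so its orthonormal
# eigenvectors must also be eigenvectors of G.
_, C = np.linalg.eigh(F)                 # columns: common unit eigenvectors

print(np.round(C.T @ F @ C, 10))         # diag(1, 3)
print(np.round(C.T @ G @ C, 10))         # diag(2, 4): the same C works for G
```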
- Howard Anton, Chris Rorres (Authors)
- 2014 (Publication Date)
- Wiley (Publisher)
Remark In many numerical algorithms the initial matrix is first converted to upper Hessenberg form to reduce the amount of computation in subsequent parts of the algorithm. Many computer packages have built-in commands for finding Schur and Hessenberg decompositions.

Concept Review
• Orthogonally similar matrices
• Orthogonally diagonalizable matrix
• Spectral decomposition (or eigenvalue decomposition)
• Schur decomposition
• Subdiagonal
• Upper Hessenberg form
• Upper Hessenberg decomposition

Skills
• Be able to recognize an orthogonally diagonalizable matrix.
• Know that eigenvalues of symmetric matrices are real numbers.
• Know that for a symmetric matrix, eigenvectors from different eigenspaces are orthogonal.
• Be able to orthogonally diagonalize a symmetric matrix.
• Be able to find the spectral decomposition of a symmetric matrix.
• Know the statement of Schur's Theorem.
• Know the statement of Hessenberg's Theorem.

True-False Exercises In parts (a)–(g) determine whether the statement is true or false, and justify your answer.
(a) If $A$ is a square matrix, then $AA^{T}$ and $A^{T}A$ are orthogonally diagonalizable.
(b) If $v_1$ and $v_2$ are eigenvectors from distinct eigenspaces of a symmetric matrix, then $\|v_1 + v_2\|^2 = \|v_1\|^2 + \|v_2\|^2$.
(c) Every orthogonal matrix is orthogonally diagonalizable.
(d) If $A$ is both invertible and orthogonally diagonalizable, then $A^{-1}$ is orthogonally diagonalizable.
(e) Every eigenvalue of an orthogonal matrix has absolute value 1.
(f) If $A$ is an $n \times n$ orthogonally diagonalizable matrix, then there exists an orthonormal basis for $\mathbb{R}^n$ consisting of eigenvectors of $A$.
(g) If $A$ is orthogonally diagonalizable, then $A$ has real eigenvalues.

7.3 Quadratic Forms In this section we will use matrix methods to study real-valued functions of several variables in which each term is either the square of a variable or the product of two variables.
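Several of these review items lend themselves to quick numerical checks. As one illustration, the following NumPy sketch (ours; the test matrix is an assumption for illustration) verifies item (b): eigenvectors from distinct eigenspaces of a symmetric matrix are orthogonal, so the Pythagorean identity holds for their sum.

```python
# Checking True-False item (b) on a symmetric matrix with distinct eigenvalues.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])           # symmetric; eigenvalues 1, 3, 5

eigenvalues, V = np.linalg.eigh(A)
v1, v2 = V[:, 0], V[:, 1]                 # eigenvectors for distinct eigenvalues

print(np.isclose(v1 @ v2, 0.0))           # True: they are orthogonal
lhs = np.linalg.norm(v1 + v2) ** 2
rhs = np.linalg.norm(v1) ** 2 + np.linalg.norm(v2) ** 2
print(np.isclose(lhs, rhs))               # True: ||v1 + v2||^2 = ||v1||^2 + ||v2||^2
```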
Index pages curate the most relevant extracts from our library of academic textbooks. Each has been created using an in-house natural language model (NLM) to add context and meaning to key research topics.