
Cholesky Decomposition

Cholesky decomposition is a method for factorizing a symmetric positive-definite matrix into the product of a lower triangular matrix and its transpose. The decomposition is often used in numerical simulation and optimization because it simplifies solving linear systems and computing determinants, and it is particularly efficient for systems of linear equations whose coefficient matrices are symmetric and positive definite.
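For a quick, concrete illustration, here is a minimal sketch (assuming NumPy and SciPy are available; the matrix `A` and right-hand side `b` are made-up examples) that factors a symmetric positive-definite matrix and then solves $Ax = b$ by two triangular substitutions:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical symmetric positive-definite system A x = b
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([2.0, 1.0, 3.0])

c, low = cho_factor(A)          # Cholesky factorization A = L L^T
x = cho_solve((c, low), b)      # forward + back substitution
print(np.allclose(A @ x, b))    # True
```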

Written by Perlego with AI assistance

7 Key excerpts on "Cholesky Decomposition"

  • Numerical Methods in Computational Mechanics
    • Jamshid Ghaboussi, Xiping Steven Wu (Authors)
    • 2016 (Publication Date)
    • CRC Press (Publisher)
    $\bar{L} = L\, D^{1/2}$  (3.47)

    The basic equation for the algorithm can be derived from Equation 3.45 by considering the relationship between the elements of the matrices on the left-hand and right-hand sides of the equation, as illustrated in Figure 3.6. From matrix multiplication, by equating the term in the matrix on the left-hand side of the matrix equation with that on the right-hand side, we have

    $a_{ij} = \sum_{k=1}^{i-1} l_{ik} l_{jk} + l_{ij} l_{ii}$  (3.48)

    Thus, the elements of the lower triangular matrix can be computed from the following relation:

    $l_{ij} = \frac{1}{l_{ii}} \left( a_{ij} - \sum_{k=1}^{i-1} l_{ik} l_{jk} \right)$  (3.49)

    If we let $i = j$ in the above equation, then we have the following:

    $l_{ii} = \left( a_{ii} - \sum_{k=1}^{i-1} l_{ik}^{2} \right)^{1/2}$  (3.50)

    This indicates that the Cholesky decomposition process will break down if there exists a zero eigenvalue in the coefficient matrix. In order to use Cholesky decomposition, the stiffness matrix must be symmetric and positive definite. In linear finite element analysis, this condition is usually satisfied. However, the satisfaction of this condition usually cannot be guaranteed in nonlinear finite element analysis, especially with structural stability problems such as buckling, snap-through, and bifurcation.

    [Figure 3.6: Cholesky decomposition of a symmetric matrix.]

    3.6 ORTHOGONAL DECOMPOSITION

    In orthogonal decomposition, we basically premultiply the coefficient matrix with a series of orthogonal matrices in order to reduce it to an upper triangular matrix. Through this procedure, matrix $A$ is decomposed into the product of an orthogonal matrix $Q$ and an upper triangular matrix $U_q$:

    $A = Q U_q$  (3.51)

    The inverse of an orthogonal matrix $Q$ is equal to its transpose.
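A minimal NumPy sketch of the scalar algorithm that relations (3.49) and (3.50) describe, written here with the conventional indexing in which a column entry is divided by the diagonal element $l_{jj}$ (the excerpt's own index convention is transposed relative to this). The function name and the pivot check are mine; the pivot check mirrors the remark that the process breaks down on a zero eigenvalue:

```python
import numpy as np

def cholesky_rowwise(a):
    """Compute L row by row so that a = L @ L.T."""
    n = a.shape[0]
    l = np.zeros((n, n))
    for i in range(n):
        # Off-diagonal entries of row i
        for j in range(i):
            l[i, j] = (a[i, j] - np.dot(l[i, :j], l[j, :j])) / l[j, j]
        # Diagonal entry; a non-positive pivot means A is not positive definite
        s = a[i, i] - np.dot(l[i, :i], l[i, :i])
        if s <= 0.0:
            raise ValueError("pivot <= 0: matrix is not positive definite")
        l[i, i] = np.sqrt(s)
    return l

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = cholesky_rowwise(A)
print(np.allclose(L @ L.T, A))  # True
```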
  • Scientific Computing with Multicore and Accelerators
    • Jakub Kurzak, David A. Bader, Jack Dongarra (Authors)
    • 2010 (Publication Date)
    • CRC Press (Publisher)
    2.1 Introduction

    It is clear that the impact of multicore processors and accelerators will be ubiquitous. There are obvious advantages, however, to looking at linear algebra in general and dense linear algebra in particular. This type of software is critically important to computational science across an enormous spectrum of disciplines and applications. Yet more importantly, dense linear algebra has strategic advantages as a research vehicle, because the methods and algorithms that underlie it have been so thoroughly studied and are so well understood [5, 6, 10, 17]. This chapter dissects highly optimized Cell B.E. implementations of two classic dense linear algebra computations, the Cholesky factorization and the QR factorization.

    [Figure 2.1: Tile operations in the Cholesky factorization (SPOTRF, STRSM, SSYRK, SGEMM). The sequence is left-to-right and top-down; hatching indicates input data, shades of gray indicate in/out data.]

    2.2 Cholesky Factorization

    The Cholesky factorization (or Cholesky decomposition) is mainly used for the numerical solution of linear equations $Ax = b$, where $A$ is symmetric and positive definite. Such systems arise often in physics applications, where $A$ is positive definite due to the nature of the modeled physical phenomenon. This happens frequently in numerical solutions of partial differential equations. The Cholesky factorization of an $n \times n$ real symmetric positive-definite matrix $A$ has the form $A = LL^T$, where $L$ is an $n \times n$ real lower triangular matrix with positive diagonal elements. The algorithm can be expressed using either the top-looking, the left-looking, or the right-looking version. The first one follows a depth-first exploration of the task graph and the last one follows a breadth-first exploration of the task graph.
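Below is a serial NumPy/SciPy sketch of the right-looking blocked algorithm that Figure 2.1 tiles: factor the diagonal block (POTRF), update the panel below it (TRSM), then update the trailing submatrix (SYRK/GEMM). The function name and the block size `nb` are my own choices; a real Cell B.E. or multicore implementation dispatches these tile operations to cores in parallel rather than running them in sequence:

```python
import numpy as np
from scipy.linalg import solve_triangular

def blocked_cholesky(A, nb=2):
    """Right-looking blocked Cholesky: returns L with A = L @ L.T."""
    A = np.array(A, dtype=float, copy=True)
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # POTRF: Cholesky-factor the diagonal tile
        A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])
        if e < n:
            # TRSM: A[e:, k:e] <- A[e:, k:e] @ inv(L_kk).T
            A[e:, k:e] = solve_triangular(
                A[k:e, k:e], A[e:, k:e].T, lower=True).T
            # SYRK/GEMM: update the trailing submatrix
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T
    return np.tril(A)

# Quick check on a random symmetric positive-definite matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
S = M @ M.T + 6 * np.eye(6)
L = blocked_cholesky(S, nb=2)
print(np.allclose(L @ L.T, S))  # True
```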
  • Signal Processing Algorithms for Communication and Radar Systems
    Then we have $a_{11} = l_{11}^{2} \ge 0$. Thus, if a symmetric $A$ has a negative $a_{11}$ element, it cannot have a Cholesky decomposition. However, every symmetric positive-definite matrix $A$ has a Cholesky decomposition. The matrix $A$ is positive-definite if $(x, Ax) = x^T A x > 0$ and non-negative-definite if $(x, Ax) = x^T A x \ge 0$ for any $x \ne 0$. For any $n \times n$ matrix $B$, let $A = B^T B$. Then $A$ is symmetric and non-negative-definite, as shown by $(x, Ax) = (x, B^T B x) = (Bx, Bx) = (B^T B x, x) = (Ax, x) \ge 0$. If $\mathrm{rank}(B) = n$, then $A$ is positive-definite. This follows since $x \ne 0$ implies $y = Bx \ne 0$, and $x^T A x = x^T B^T B x = (Bx, Bx) > 0$.

    Theorem 10.1 (Cholesky Decomposition Theorem). If $A$ is symmetric and positive-definite, then there is a unique lower triangular matrix $L$ with positive diagonal elements such that $A = LL^T$.

    Cholesky Decomposition Algorithm. Consider the decomposition of a symmetric positive-definite matrix $A = [a_{ij}]_{i,j=1,\ldots,n} = LL^T$, $L = [l_{ij}]_{i,j=1,\ldots,n}$.
    1. Initialize with $l_{11} = \sqrt{a_{11}}$. For $i = 2, 3, \ldots, n$, do:
    2. $l_{ij} = \left( a_{ij} - \sum_{k=1}^{j-1} l_{ik} l_{jk} \right) / l_{jj}$, $j = 1, 2, \ldots, i-1$.
    3. $l_{ii} = \left( a_{ii} - \sum_{k=1}^{i-1} l_{ik}^{2} \right)^{1/2}$.
    4. $l_{i-1,j} = 0$, $j = i, i+1, \ldots, n$.

    In the above expressions, the sum is taken to be zero if the upper limit in the sum is less than the lower limit. The inverse of a lower triangular matrix $L$ is also lower triangular and is given simply by $L^{-1} = [l^{*}_{ij}]_{i,j=1,\ldots,n}$, with $L^{-1} L = I$, so that $l^{*}_{ii} l_{ii} = 1$, $i = 1, \ldots, n$, and $\sum_{k=1}^{i} l^{*}_{ik} l_{kj} = 0$, $i = 2, 3, \ldots, n$; $j = i-1, \ldots, 1$.

    Inversion Algorithm for L.
    1. Initialize with $l^{*}_{11} = 1/l_{11}$. For $i = 2, 3, \ldots, n$:
    2. $l^{*}_{ii} = 1/l_{ii}$.
    3. $l^{*}_{ij} = \left( -\sum_{k=j+1}^{i} l^{*}_{ik} l_{kj} \right) / l_{jj}$, $j = i-1, i-2, \ldots, 1$.
    4. $l^{*}_{(i-1)j} = 0$, $j = i, i+1, \ldots, n$.

    In detection, estimation, and signal processing, Cholesky decomposition can be used to perform a whitening operation on colored observations.
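As a sketch of the whitening application mentioned at the end (assuming NumPy and SciPy; the covariance matrix `C` is a made-up example): if the observations have covariance $C = LL^T$, applying $L^{-1}$ to them yields samples whose covariance is close to the identity.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(0)
# Hypothetical covariance of the colored observations
C = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = cholesky(C, lower=True)                               # C = L @ L.T

x = rng.multivariate_normal([0.0, 0.0], C, size=100_000)  # colored samples
w = solve_triangular(L, x.T, lower=True).T                # apply L^{-1}

print(np.cov(w, rowvar=False).round(2))                   # approximately the identity
```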
  • Factorization Methods for Discrete Sequential Estimation
    • Gerald J. Bierman (Author)
    • 1977 (Publication Date)
    • Academic Press (Publisher)
    We have chosen the unique square root corresponding to positive diagonal elements of $L$. A perusal of the Cholesky algorithm reveals that it involves $n$ scalar square roots, which appear on the diagonals and divide the columns. This suggests factoring $L$ and writing $P = \bar{L} D \bar{L}^T$, with $\bar{L}$ unit lower triangular and $D$ diagonal. Such a representation is free of square roots and has application in matrix inversion and parameter estimation (cf. Chapter V).

    Corollary. If $P > 0$, then $P = \bar{L} D \bar{L}^T$, where $\bar{L}$ is lower triangular with unit diagonal elements and $D = \mathrm{Diag}(d_1, \ldots, d_n)$. The elements of $\bar{L}$ and $D$ are given by the following algorithm (square-root-free Cholesky decomposition): for $j = 1, \ldots, n-1$, recursively cycle through the following ordered set of equations:

    $d_j = P(j, j), \quad \bar{L}(j, j) = 1$  (3.15)

    $\bar{L}(k, j) = P(k, j) / d_j, \quad k = j+1, \ldots, n$  (3.16)

    $P(i, k) := P(i, k) - \bar{L}(i, j)\, P(k, j), \quad k = j+1, \ldots, n; \; i = k, \ldots, n$  (3.17)

    and then $d_n = P(n, n)$. The result can be obtained by mimicking the Cholesky theorem derivation, or more directly, by identifying $d_j = L(j, j)^2$ and $\bar{L}(i, j) = L(i, j) / L(j, j)$ and manipulating result (3.13). Upper triangular factorization algorithms can be obtained by mimicking the developments of this section; the corresponding upper triangular Cholesky factorizations are given in Appendix III.A. Upper and lower triangular factorizations require the same amount of computation, so deciding which factorization to use depends on the demands of the application. Our investigations and algorithms will involve factors of the error covariance matrix, with $P = SS^T$, and factors of the information matrix $P^{-1}$, written as $R^T R$. The computation of $R$ will often put it in an upper triangular form, and since $R = S^{-1}$ this motivates us to write $S$ in upper triangular form (because matrix inversion preserves triangularity).
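A compact NumPy sketch of this square-root-free recursion (the function name is mine; it cycles through the ordered equations (3.15) through (3.17) on a working copy of $P$, and no square roots appear anywhere):

```python
import numpy as np

def ldl_factor(P):
    """Square-root-free Cholesky: P = Lbar @ diag(d) @ Lbar.T."""
    P = np.array(P, dtype=float, copy=True)
    n = P.shape[0]
    Lbar = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1):
        d[j] = P[j, j]                        # (3.15)
        Lbar[j+1:, j] = P[j+1:, j] / d[j]     # (3.16)
        for k in range(j + 1, n):             # (3.17): update lower triangle
            P[k:, k] -= Lbar[k:, j] * P[k, j]
    d[-1] = P[-1, -1]
    return Lbar, d

P = np.array([[4.0, 2.0], [2.0, 3.0]])
Lbar, d = ldl_factor(P)
print(np.allclose(Lbar @ np.diag(d) @ Lbar.T, P))  # True
```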
  • Essential Mathematical Methods for the Physical Sciences
    We cannot set the diagonal elements of $L$ equal to unity in this case, because we require the same number of independent elements in $L$ as in $A$.¹⁸ The reason that the decomposition can only be applied to positive semi-definite matrices can be seen by considering the Hermitian form (or quadratic form in the real case) $x^{\dagger} A x = x^{\dagger} L L^{\dagger} x = (L^{\dagger} x)^{\dagger} (L^{\dagger} x)$. Denoting the column matrix $L^{\dagger} x$ by $y$, we see that the last term on the RHS is $y^{\dagger} y$, which must be greater than or equal to zero. Thus, we require $x^{\dagger} A x \ge 0$ for any arbitrary column matrix $x$. As mentioned above, the requirement that a matrix be positive semi-definite is equivalent to demanding that all the eigenvalues of $A$ are positive or zero. If one of the eigenvalues of $A$ is zero, then, as will be shown in equation (1.104), $|A| = 0$ and $A$ is singular. Thus, if $A$ is a non-singular matrix, it must be positive definite (rather than just positive semi-definite) for a Cholesky decomposition (1.71) to be possible. In fact, in this case, the inability to find a matrix $L$ that satisfies (1.71) implies that $A$ cannot be positive definite. The Cholesky decomposition can be used in a way analogous to that in which the LU decomposition was employed earlier, but we will not explore this aspect further. Some practice decompositions are included in the problems at the end of this chapter. [Footnote 18: In the special case where $A$ is real, the decomposition becomes $A = LL^T$.]

    Cramer's rule. A further alternative method of solution is to use Cramer's rule, which also provides some insight into the nature of the solutions in the various cases.
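This failure mode doubles as a practical test for positive definiteness. A small sketch, assuming NumPy: attempt the factorization and report whether it succeeds.

```python
import numpy as np

def is_positive_definite(a):
    """A (numerically) symmetric matrix is positive definite
    iff its Cholesky factorization succeeds."""
    try:
        np.linalg.cholesky(a)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False: has eigenvalue -1
```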
  • Introduction to Numerical Programming: A Practical Guide for Scientists and Engineers Using Python and C/C++
    • Titus A. Beu (Author)
    • 2014 (Publication Date)
    • CRC Press (Publisher)
    Equivalently, a positive-definite matrix has only positive eigenvalues. It can be shown that a symmetric positive-definite matrix can always be factorized under the particular form

    $A = L \cdot L^T$,  (7.60)

    where $L$ is a lower triangular matrix and $L^T$ is its transpose (upper triangular). This decomposition, known as Cholesky factorization, is an important particular case of the LU factorization and leads to a significant reduction of the computational cost. As a matter of fact, in the case of linear systems with positive-definite matrices, the Cholesky factorization can be shown to perform roughly 2 times faster than the general methods based on Gaussian elimination or LU factorization. From the explicit form of decomposition (7.60),

    $A = \begin{bmatrix} l_{11} & & & & 0 \\ \vdots & \ddots & & & \\ l_{i1} & \cdots & l_{ii} & & \\ \vdots & & & \ddots & \\ l_{n1} & \cdots & \cdots & \cdots & l_{nn} \end{bmatrix} \cdot \begin{bmatrix} l_{11} & \cdots & l_{j1} & \cdots & l_{n1} \\ & \ddots & \vdots & & \vdots \\ & & l_{jj} & & \vdots \\ & 0 & & \ddots & \vdots \\ & & & & l_{nn} \end{bmatrix}$,  (7.61)

    we can establish the following relations between the elements of matrices $A$ and $L$:

    $a_{ij} = a_{ji} = \sum_{k=1}^{j} l_{ik} l_{jk}, \quad i \ge j$.  (7.62)

    Expressing from the last term of the sum the entries on column $j$ of matrix $L$, and employing for reasons of storage efficiency just the lower triangle of matrix $A$, we get:

    $l_{jj} = \left[ a_{jj} - \sum_{k=1}^{j-1} l_{jk}^{2} \right]^{1/2}, \quad j = 1, 2, \ldots, n$,
    $l_{ij} = \frac{1}{l_{jj}} \left[ a_{ij} - \sum_{k=1}^{j-1} l_{ik} l_{jk} \right], \quad i = j+1, \ldots, n$.  (7.63)

    Analyzing the algorithmic consistency of these relations, we find that if matrix $L$ is filled column-wise, calculating on each column $j$ first the diagonal element $l_{jj}$ and then the elements located underneath, $l_{ij}$ with $i = j+1, \ldots, n$, all the implied quantities are available as they become necessary.
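Relations (7.63) translate almost line for line into code. A minimal Python sketch (the function name is mine; only the lower triangle of `a` is read, as the text suggests, and each column is filled diagonal-first):

```python
import numpy as np

def cholesky_columnwise(a):
    """Cholesky factorization per Eq. (7.63): a = L @ L.T."""
    n = a.shape[0]
    l = np.zeros((n, n))
    for j in range(n):
        s = a[j, j] - np.dot(l[j, :j], l[j, :j])
        if s <= 0.0:
            raise ValueError("matrix is not positive definite")
        l[j, j] = np.sqrt(s)                      # diagonal element first
        for i in range(j + 1, n):                 # then the column below it
            l[i, j] = (a[i, j] - np.dot(l[i, :j], l[j, :j])) / l[j, j]
    return l

A = np.array([[4.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
L = cholesky_columnwise(A)
print(np.allclose(L @ L.T, A))  # True
```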
  • Computation of Generalized Matrix Inverses and Applications
    …(x) is a square-root-free Cholesky decomposition in the following form:

    $A(x) = L(x)\, D(x)\, L(x)^{*}$,

    where $L(x)$ is lower triangular and $D(x)$ a diagonal matrix,

    $L(x) = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ l_{21}(x) & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1}(x) & l_{n2}(x) & \cdots & 1 \end{bmatrix}$,  (1.3.2)

    $D(x) = \begin{bmatrix} d_1(x) & 0 & \cdots & 0 \\ 0 & d_2(x) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n(x) \end{bmatrix}$,  (1.3.3)

    wherein $l_{ij}$, $d_j$, $1 \le j < i \le n$, are rational functions. Thereby, $L^{*}(x)$ denotes the conjugate transpose of the matrix $L(x)$.
    Replacing the $LL^{*}$ decomposition with the $LDL^{*}$ factorization in order to avoid calculating square-root elements is a well-known method of numerical analysis, described by Golub and Van Loan [19]. Computation of square-root entries in the Cholesky decomposition can be avoided by using the $LDL^{*}$ factorization. This technique is of essential importance for polynomial and rational elements of the matrices $L$ and $D$. If, for instance, we have $A(x) = \sum_{i=0}^{q} A_i x^{i}$ symbolic, where the $A_i$, $i = 0, \ldots, q$, are constant $n \times n$ matrices, computing square roots can be very difficult. This is the motivation for modifying algorithms toward a direct Cholesky factorization of the polynomial matrix $A(x)$. It is evident that this form is appropriate for manipulation with polynomial matrices. Accordingly, the full-rank decomposition can be used for the symbolic generalized inverse calculation using a number of techniques introduced in works such as [53, 65
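A small symbolic sketch of this idea, assuming SymPy (the polynomial matrix $A(x)$ below is a made-up example): the $LDL^T$ factorization keeps every entry a rational function of $x$, whereas a plain Cholesky factor of the same matrix would contain square roots.

```python
import sympy as sp

x = sp.symbols('x')
# Hypothetical symmetric polynomial matrix A(x)
A = sp.Matrix([[x**2 + 2, x],
               [x,        2]])

# Square-root-free factorization: A = L * D * L.T
L, D = A.LDLdecomposition(hermitian=False)

print(L)  # unit lower triangular; entries are rational functions of x
print(D)  # diagonal; entries are rational functions of x, no square roots
assert sp.simplify(L * D * L.T - A) == sp.zeros(2, 2)
```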