Gaussian Elimination
Gaussian elimination is a method for solving systems of linear equations by transforming the augmented matrix into row-echelon form through a sequence of elementary row operations. Variables are eliminated one at a time until the system is triangular, at which point the unknowns can be recovered by back substitution. It is a fundamental technique in linear algebra and numerical analysis.
Written by Perlego with AI-assistance
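To make the idea concrete before turning to the excerpts, here is a minimal Python sketch of naive Gaussian elimination with back substitution. It is an illustration only (not drawn from any excerpt below) and assumes every pivot encountered is nonzero; several excerpts below discuss the pivoting that practical implementations add.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by naive Gaussian elimination (no pivoting).

    Assumes every pivot encountered is nonzero; real solvers add
    row interchanges (pivoting) to guarantee stability.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier
            A[i, k:] -= m * A[k, k:]       # update row i of A
            b[i] -= m * b[k]               # and the right-hand side
    # Back substitution on the resulting upper triangular system.
    x = np.empty(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gaussian_elimination(A, b))  # -> [ 2.  3. -1.]
```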
11 Key excerpts on "Gaussian Elimination"
- Sudipto Banerjee, Anindya Roy (Authors)
- 2014 (Publication Date)
- Chapman and Hall/CRC (Publisher)
… + a_{2n}x_n = b_2, …, a_{n1}x_1 + a_{n2}x_2 + … + a_{nn}x_n = b_n. (2.3) Thus, we have Ax = b, where A is now an n × n coefficient matrix, x is n × 1 and b is also n × 1. Gaussian Elimination is a sequential procedure of transforming a linear system of n linear algebraic equations in n unknowns into another, simpler system having the same solution set. Gaussian Elimination proceeds by successively eliminating unknowns and eventually arriving at a system that is easily solvable. The elimination process relies on three simple operations by which to transform the system in (2.3) into a triangular system; these are known as elementary row operations. Definition 2.2 An elementary row operation on a matrix is any one of the following: (i) Type-I: interchange two rows of the matrix, (ii) Type-II: multiply a row by a nonzero scalar, and (iii) Type-III: replace a row by the sum of that row and a scalar multiple of another row. We will use the notation E_{ik} for interchanging the i-th and k-th rows, E_i(α) for multiplying the i-th row by α, and E_{ik}(β), with i ≠ k, for replacing the i-th row with the sum of the i-th row and β times the k-th row. We now demonstrate how these operations can be used to solve linear equations. The key observation is that the solution vector x of a linear system Ax = b, as in (2.1), will remain unaltered as long as we apply the same elementary operations on both A and b, i.e., we apply the elementary operations to both sides of the equation. Note that this will be automatically ensured by writing the linear system as the augmented matrix [A : b] and performing the elementary row operations on the augmented matrix. We now illustrate Gaussian Elimination with the following example.
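The three operation types map directly onto array manipulations. The following sketch (ours, not the book's) shows E_ik, E_i(α), and E_ik(β) acting on an augmented matrix [A : b]:

```python
import numpy as np

def swap(M, i, k):                 # Type-I, E_ik: interchange rows i and k
    M[[i, k]] = M[[k, i]]

def scale(M, i, alpha):            # Type-II, E_i(alpha): multiply row i by alpha != 0
    M[i] *= alpha

def add_multiple(M, i, k, beta):   # Type-III, E_ik(beta): row i += beta * row k
    M[i] += beta * M[k]

# Augmented matrix [A : b] for  x + y = 3,  2x + 4y = 10
M = np.array([[1.0, 1.0, 3.0],
              [2.0, 4.0, 10.0]])
add_multiple(M, 1, 0, -2.0)   # eliminate x from the second equation
scale(M, 1, 0.5)              # normalize the new pivot
print(M)                      # [[1. 1. 3.], [0. 1. 2.]]  ->  y = 2, x = 1
```

Because each operation is applied to the whole augmented row, both sides of each equation are transformed together, exactly as the excerpt requires.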
- L. Ridgway Scott, Terry W. Clark, Babak Bagheri (Authors)
- 2021 (Publication Date)
- Princeton University Press (Publisher)
Chapter Twelve Linear Systems In scientific computing, performance is a constraint, not an objective (one of the authors). One of the most basic numerical computations is the solution of linear equations by direct methods. Gaussian¹ elimination is the familiar technique of adding a suitable multiple of one equation to the other equations to reduce to a smaller system of equations. Done repeatedly, this eventually produces a triangular system that can be solved easily. We consider basic algorithms for parallelizing Gaussian Elimination and solution of triangular systems of equations. The solution of triangular systems represents one of the greatest challenges in parallel computing, especially for the sparse matrix case. We analyze several algorithms in part to provide some more challenging examples of parallel codes. The inner loops of these codes will be seen to be of the type that can be executed efficiently by the BLAS (basic linear algebra subroutines) [97, 104]. 12.1 Gaussian Elimination We consider parallelization of the fundamental algorithm for the solution of linear equations based on Gaussian Elimination. We begin by reviewing the sequential algorithm and its dependences. Then, using information gleaned from the latter, we describe two of the many ways of deriving a parallelization. 12.1.1 Dependences in Gaussian Elimination Consider a system of equations $\sum_{j=1}^{n} a_{ij} x_j = f_i$ for all $i = 1, \ldots, n$. (12.1.1) ¹ Johann Carl Friedrich Gauss (1777–1855) was one of the giants of mathematics and indeed of all science. Gauss developed least squares approximation, the concept of complex numbers, and gave the first proof of the fundamental theorem of algebra, to list just three topics from a remarkable range of contributions. [Figure 12.1: The iteration space for the standard sequential algorithm for Gaussian Elimination forms a trapezoidal region with square cross-section in the i, j plane.]
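The dependence structure the authors analyze can be read off the standard sequential triple loop. The following schematic sketch (an illustration, not the book's code) annotates which loops carry dependences and which are free to run in parallel:

```python
def forward_elimination(a, f, n):
    """Sequential kij Gaussian elimination for sum_j a[i][j] x[j] = f[i].

    Dependences: step k must finish before step k+1 starts, because
    each step reads row k as updated by all earlier steps. For a
    fixed k, however, the updates over i (and over j within a row)
    are mutually independent: this is the parallelism a distributed
    or BLAS-based solver exploits.
    """
    for k in range(n - 1):              # sequential: pivot steps
        for i in range(k + 1, n):       # independent across i
            m = a[i][k] / a[k][k]
            for j in range(k, n):       # independent across j
                a[i][j] -= m * a[k][j]
            f[i] -= m * f[k]
```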
- Gerard Meurant (Author)
- 1999 (Publication Date)
- North Holland (Publisher)
2 Gaussian Elimination for General Linear Systems
2.1 Introduction to Gaussian Elimination
The problem we are concerned with in this chapter is obtaining by a direct method the numerical solution of a linear system Ax = b, (2.1) where A is a square non-singular matrix (i.e. det(A) ≠ 0) of order n, b is a given vector, and x is the solution vector. Of course, the solution x of (2.1) is given by x = A⁻¹b, where A⁻¹ denotes the inverse of A. Unfortunately, in most cases A⁻¹ is not explicitly known, except for some special problems and/or for small values of n. But it is well known that the solution can be expressed by Cramer's formulae (see [601]): x_i = det(A_i)/det(A), i = 1, …, n, where A_i is A with its i-th column replaced by b. (2.2) The computation of the solution x by (2.2) requires the evaluation of n + 1 determinants of order n. This implies that this method will require more than (n + 1)! operations (multiplications and additions) to compute the solution. Fortunately, there are much better methods than Cramer's rule, which is almost never used. As we said in Chapter 1, direct methods obtain the solution by making some combinations and modifications of the equations. Of course, as computer floating point operations are only performed to a certain precision (see Chapter 1), the computed solution is generally different from the exact solution. We shall return to this point later. The most widely used direct methods for general matrices belong to a class collectively known as Gaussian Elimination. There are many variations of the same basic idea and we shall describe some of them in the next sections.
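Cramer's formulae are easy to state in code, which makes the cost comparison concrete. A toy sketch (ours; it uses NumPy's determinant routine rather than the classical cofactor expansion, so the factorial blow-up quoted above applies to the classical evaluation, not to this shortcut):

```python
import numpy as np

def cramer(A, b):
    """Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
    column i replaced by b. Requires n + 1 determinant evaluations,
    which is why it is almost never used beyond tiny n."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer(A, b))           # -> [0.8 1.4]
print(np.linalg.solve(A, b))  # same answer via elimination
```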
2.1.1 Gaussian Elimination without Permutations
In this section we describe Gaussian Elimination without permutations. We give the necessary and sufficient conditions for a matrix to have an LU factorization, where L (resp. U) is a lower (resp. upper) triangular matrix. Then we shall introduce permutations to handle the general case. The first step of the algorithm is the elimination of the unknown x_1 in equations 2 to n. This can be done through n − 1 steps. Suppose that α_{1,1} ≠ 0; α_{1,1} is then called the first pivot. To eliminate x_1 from the second equation, we (left) multiply A
- Tirupathi Chandrupatla, Ashok Belegundu (Authors)
- 2021 (Publication Date)
- Cambridge University Press (Publisher)
2 Gaussian Elimination 2.1 Introduction The analysis of engineering problems by the finite element method (FEA) reduces to solving simultaneous equations Ax = b or an eigenvalue problem Ay = λBy. In this chapter, after presenting a few basic concepts in matrix algebra, we discuss the Gaussian Elimination method and its variants to handle sparse matrices. These variants include LU decomposition, pivoting, symmetric banded, skyline, and sparse matrices. The concept of fill-in is discussed. The discussion and accompanying computer codes are useful in themselves, and also give insight into many of the routines and commands that one can use in MATLAB® or similar software. Commands to use in-built MATLAB® routines are also given. Some code listings are presented in this chapter with the aim of explaining the algorithm steps, while other codes are on the publisher's website (see Preface). For more information related to the content of this chapter, see Pissanetsky (1984) and Duff et al. (1986), which have helped in the writing of this chapter, and Nguyen (2006) for the reader interested in parallel computing. 2.2 Basic Concepts in Matrix Algebra 2.2.1 Simultaneous Equations The study of matrices here is largely motivated by the need to solve systems of simultaneous equations of the form a_{11}x_1 + a_{12}x_2 + … + a_{1n}x_n = b_1, a_{21}x_1 + a_{22}x_2 + … + a_{2n}x_n = b_2, …, a_{n1}x_1 + a_{n2}x_2 + … + a_{nn}x_n = b_n, (2.1) where x_1, x_2, …, x_n are the unknowns. Equation (2.1) can be conveniently expressed in matrix form as Ax = b, where A is a square matrix of dimensions (n × n), and x and b are column arrays of dimension (n × 1). The multiplication of two matrices, A and x, is implicitly defined above: the dot product of the i-th row of A with x is equated to b_i, resulting in the i-th equation of eq. (2.1). The product of an (m × n) matrix A and an (n × p) matrix B results in an (m × p) matrix C.
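The product convention described here is exactly what NumPy's @ operator implements; a quick check (our example, not the book's code):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # (3 x 2)
B = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 2.0]])    # (2 x 3)
C = A @ B                     # (3 x 3), per (m x n)(n x p) = (m x p)
print(C.shape)                # (3, 3)

# Row i of A dotted with x gives equation i of Ax = b:
x = np.array([1.0, 1.0])
print(A @ x)                  # [ 3.  7. 11.]
```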
- Nabil Nassif, Dolly Khuwayri Fayyad (Authors)
- 2016 (Publication Date)
- Chapman and Hall/CRC (Publisher)
Chapter 3 Solving Systems of Linear Equations by Gaussian Elimination 3.1 Mathematical Preliminaries 3.2 Computer Storage and Data Structures for Matrices 3.3 Back Substitution for Upper Triangular Systems 3.4 Gauss Reduction 3.4.1 Naive Gauss Elimination 3.4.2 Partial Pivoting: Unscaled and Scaled Partial Pivoting 3.5 LU Decomposition 3.5.1 Computing the Determinant of a Matrix 3.5.2 Computing the Inverse of A 3.5.3 Solving Linear Systems Using LU Factorization 3.6 Exercises 3.7 Computer Projects 3.1 Mathematical Preliminaries This chapter assumes basic knowledge of linear algebra, in particular elementary matrix algebra, as one can find these notions in a multitude of textbooks such as [32]. Thus, we consider the problem of computing the solution of a system of n linear equations in n unknowns. The scalar form of that system is as follows: (S) a_{11}x_1 + a_{12}x_2 + … + a_{1n}x_n = b_1; a_{21}x_1 + a_{22}x_2 + … + a_{2n}x_n = b_2; …; a_{n1}x_1 + a_{n2}x_2 + … + a_{nn}x_n = b_n. Written in matrix form, (S) is equivalent to Ax = b, (3.1) where the coefficient matrix A ∈ ℝ^{n,n} is square and the column vectors x, b ∈ ℝ^{n,1} ≅ ℝ^n. Specifically, A = (a_{ij}) is the n × n array of coefficients, x = (x_1, x_2, …, x_n)^T and b = (b_1, b_2, …, b_n)^T. We assume that the basic linear algebra properties for systems of linear equations like (3.1) are satisfied. Specifically: Proposition 3.1 The following statements are equivalent: (i) system (3.1) has a unique solution; (ii) det(A) ≠ 0; (iii) A is invertible. Our objective is to present the basic ideas of a linear system solver, which consists of two main procedures allowing one to solve (3.1) with the least number of floating point arithmetic operations (flops). The first, referred to as Gauss elimination (or reduction), reduces (3.1) to an equivalent system of linear equations whose matrix is upper triangular
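As a rough illustration of the LU idea in Section 3.5, here is a sketch of Doolittle factorization without pivoting (our code, under the assumption that every pivot is nonzero; it is not the book's listing). It also shows the determinant shortcut of Section 3.5.1: det(A) is the product of U's diagonal.

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization A = LU without pivoting.

    L is unit lower triangular, U upper triangular. Assumes all
    pivots are nonzero; otherwise row interchanges are required.
    """
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # store the multiplier
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate below the pivot
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_no_pivot(A)
print(L @ U)                  # reproduces A
print(np.prod(np.diag(U)))    # det(A) = product of U's diagonal = -6
```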
- Howard Anton, Chris Rorres (Authors)
- 2014 (Publication Date)
- Wiley (Publisher)
There are various techniques for minimizing roundoff error and instability. For example, it can be shown that for large linear systems Gauss–Jordan elimination involves roughly 50% more operations than Gaussian Elimination, so most computer algorithms are based on the latter method. Some of these matters will be considered in Chapter 9. Concept Review • Reduced row echelon form • Row echelon form • Leading 1 • Leading variables • Free variables • General solution to a linear system • Gaussian Elimination • Gauss–Jordan elimination • Forward phase • Backward phase • Homogeneous linear system • Trivial solution • Nontrivial solution • Dimension Theorem for Homogeneous Systems • Back-substitution Skills • Recognize whether a given matrix is in row echelon form, reduced row echelon form, or neither. • Construct solutions to linear systems whose corresponding augmented matrices are in row echelon form or reduced row echelon form. • Use Gaussian Elimination to find the general solution of a linear system. • Use Gauss–Jordan elimination to find the general solution of a linear system. • Analyze homogeneous linear systems using the Free Variable Theorem for Homogeneous Systems. True-False Exercises In parts (a)–(i) determine whether the statement is true or false, and justify your answer. (a) If a matrix is in reduced row echelon form, then it is also in row echelon form. (b) If an elementary row operation is applied to a matrix that is in row echelon form, the resulting matrix will still be in row echelon form. (c) Every matrix has a unique row echelon form. (d) A homogeneous linear system in n unknowns whose corresponding augmented matrix has a reduced row echelon form with r leading 1's has n − r free variables. (e) All leading 1's in a matrix in row echelon form must occur in different columns. (f) If every column of a matrix in row echelon form has a leading 1 then all entries that are not leading 1's are zero.
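The 50% figure comes from the standard leading-order operation counts: about n³/3 multiplications for Gaussian elimination with back substitution versus about n³/2 for Gauss–Jordan. A quick numerical illustration of those standard counts (our sketch, not the book's):

```python
n = 1000
ge = n**3 / 3      # leading term for Gaussian elimination
gj = n**3 / 2      # leading term for Gauss-Jordan elimination
print(gj / ge)     # 1.5 -> "roughly 50% more operations"
```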
- Dale Anderson, John C. Tannehill, Richard H. Pletcher, Ramakanth Munipalli, Vijaya Shankar (Authors)
- 2020 (Publication Date)
- CRC Press (Publisher)
The third equation in the altered system is then used as the next pivot equation, and the process continues until only an upper triangular form remains: [Figures 4.32 and 4.33: Gaussian Elimination with u_1, then u_1 and u_2, eliminated below the main diagonal.] (We must always interchange rows if necessary to avoid division by zero.) a_{11}u_1 + a_{12}u_2 + ⋯ = c_1; a′_{22}u_2 + a′_{23}u_3 + ⋯ = c′_2; a″_{33}u_3 + ⋯ = c″_3; …; a^{(n−1)}_{nn}u_n = c^{(n−1)}_n. (4.140) At this point, only one unknown appears in the last equation, two in the next to last equation, etc., so a solution can be obtained by back substitution. Consider the following system of three equations as a specific numerical example: U_1 + 4U_2 + U_3 = 7; U_1 + 6U_2 − U_3 = 13; 2U_1 − U_2 + 2U_3 = 5. Using the top equation as a pivot, we can eliminate U_1 from the lower two equations: U_1 + 4U_2 + U_3 = 7; 2U_2 − 2U_3 = 6; −9U_2 + 0U_3 = −9. Now using the second equation as a pivot, we obtain the upper triangular form: U_1 + 4U_2 + U_3 = 7; 2U_2 − 2U_3 = 6; −9U_3 = 18. Back substitution yields U_3 = −2, U_2 = 1, U_1 = 5. Block-iterative methods for Laplace's equation (Section 4.3.4) lead to systems of simultaneous algebraic equations which have a tridiagonal matrix of coefficients. This was also observed in Sections 4.1 and 4.2 for implicit formulations of PDEs for marching problems. To illustrate how Gaussian Elimination can be efficiently modified to take advantage of the tridiagonal form of the coefficient matrix, we will consider the simple implicit scheme for the heat equation as an example: ∂u/∂t = α ∂²u/∂x², discretized as (u_j^{n+1} − u_j^n)/Δt = α (u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j−1}^{n+1})/(Δx)². In terms of the format used earlier for algebraic equations, this can be rewritten as b_j u_{j−1}^{n+1} + d_j u_j^{n+1} + a_j u_{j+1}^{n+1} = c_j
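The tridiagonal special case described here is usually implemented as what is commonly called the Thomas algorithm. The following minimal sketch (ours, reusing the excerpt's b, d, a, c naming for the sub-diagonal, diagonal, super-diagonal, and right-hand side) shows why it runs in O(n) rather than O(n³):

```python
import numpy as np

def thomas(b, d, a, c):
    """Solve a tridiagonal system with sub-diagonal b, diagonal d,
    super-diagonal a, and right-hand side c (b[0], a[-1] unused).

    Forward elimination touches only the nonzero band, so the cost
    is O(n). Assumes no pivoting is needed, e.g. for diagonally
    dominant systems such as the implicit heat-equation scheme.
    """
    n = len(d)
    d = d.astype(float)
    c = c.astype(float)
    for j in range(1, n):
        m = b[j] / d[j - 1]          # multiplier for row j
        d[j] -= m * a[j - 1]         # update the diagonal
        c[j] -= m * c[j - 1]         # update the right-hand side
    u = np.empty(n)
    u[-1] = c[-1] / d[-1]
    for j in range(n - 2, -1, -1):   # back substitution
        u[j] = (c[j] - a[j] * u[j + 1]) / d[j]
    return u

# Example: -u_{j-1} + 2 u_j - u_{j+1} = c_j with three unknowns
b = np.array([0.0, -1.0, -1.0])
d = np.array([2.0, 2.0, 2.0])
a = np.array([-1.0, -1.0, 0.0])
c = np.array([1.0, 0.0, 1.0])
print(thomas(b, d, a, c))  # -> [1. 1. 1.]
```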
- Elizabeth S. Meckes, Mark W. Meckes (Authors)
- 2018 (Publication Date)
- Cambridge University Press (Publisher)
Extending this intuition, we also have the following terminology. Definition An m × n linear system is called overdetermined if m > n. The idea here is that an overdetermined system – which has more equations than variables – typically contains too many restrictions to have any solution at all. (For example, x + y + z = 0 and x + y + z = 1.) We will return to this idea later. KEY IDEAS • A matrix is a doubly indexed collection of numbers: A = [a_{ij}], 1 ≤ i ≤ m, 1 ≤ j ≤ n. • Linear systems can be written as augmented matrices, with the matrix of coefficients on the left and the numbers b_i on the right. • Gaussian Elimination is a systematic way of using row operations on an augmented matrix to solve a linear system. The allowed operations are: adding a multiple of one row to another (R1), multiplying a row by a nonzero number (R2), or swapping two rows (R3). • A matrix is in row-echelon form (REF) if zero rows are at the bottom, the first nonzero entry in every row is a 1, and the first nonzero entry in any row is to the right of the first nonzero entry in the rows above. • A pivot is the first nonzero entry in its row in a matrix in REF. The matrix is in reduced row-echelon form (RREF) if every pivot is alone in its column. • Columns (except the last one) in the augmented matrix of a linear system correspond to variables. Pivot variables are those whose columns are pivot columns after row-reducing the matrix; the other variables are called free variables. • A system is consistent if and only if there is no pivot in the last column of the RREF of its augmented matrix. A consistent system has a unique solution if and only if there is a pivot in every column but the last, i.e., if every variable is a pivot variable. EXERCISES 1.2.1 Identify which row operations are being performed in each step below.
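These definitions can be checked mechanically with a computer algebra system. A small sketch using SymPy's Matrix.rref (our example, not from the book) shows the RREF, the pivot columns, and the consistency test described above:

```python
from sympy import Matrix

# Augmented matrix [A : b] for the system
#   x + 2y + z  = 1
#   2x + 4y + 3z = 3
M = Matrix([[1, 2, 1, 1],
            [2, 4, 3, 3]])

R, pivots = M.rref()   # reduced row-echelon form + pivot column indices
print(R)               # Matrix([[1, 2, 0, 0], [0, 0, 1, 1]])
print(pivots)          # (0, 2): x and z are pivot variables, y is free

n_vars = M.cols - 1
consistent = n_vars not in pivots   # no pivot in the last column
print(consistent)                   # True: the system has solutions
```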
- Martin Anthony, Michele Harvey (Authors)
- 2012 (Publication Date)
- Cambridge University Press (Publisher)
2 Systems of linear equations Being able to solve systems of many linear equations in many unknowns is a vital part of linear algebra. We use matrices and vectors as essential elements in obtaining and expressing the solutions. We begin by expressing a system in matrix form and defining elementary row operations on a related matrix, known as the augmented matrix. These operations mimic the standard operations we would use to solve systems of equations by eliminating variables. We then learn a precise algorithm to apply these operations in order to put the matrix in a special form known as reduced echelon form, from which the general solution to the system is readily obtained. The method of manipulating matrices in this way to obtain the solution is known as Gaussian Elimination. We then examine the forms of solutions to systems of linear equations and look at their properties, defining what is meant by a homogeneous system and the null space of a matrix. 2.1 Systems of linear equations A system of m linear equations in n unknowns x_1, x_2, …, x_n is a set of m equations of the form a_{11}x_1 + a_{12}x_2 + ⋯ + a_{1n}x_n = b_1; a_{21}x_1 + a_{22}x_2 + ⋯ + a_{2n}x_n = b_2; …; a_{m1}x_1 + a_{m2}x_2 + ⋯ + a_{mn}x_n = b_m. The numbers a_{ij} are known as the coefficients of the system. Example 2.1 The set of equations x_1 + x_2 + x_3 = 3, 2x_1 + x_2 + x_3 = 4, x_1 − x_2 + 2x_3 = 5 is a system of three linear equations in the three unknowns x_1, x_2, x_3. Systems of linear equations occur naturally in a number of applications. We say that s_1, s_2, …, s_n is a solution of the system if all m equations hold true when x_1 = s_1, x_2 = s_2, …, x_n = s_n. Sometimes a system of linear equations is known as a set of simultaneous equations; such terminology emphasises that a solution is an assignment of values to each of the n unknowns such that each and every equation holds with this assignment.
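A quick numerical check of Example 2.1 (our addition; NumPy's solver performs LU factorization with partial pivoting, i.e. Gaussian elimination, internally):

```python
import numpy as np

# Example 2.1: three equations in three unknowns
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 1.0],
              [1.0, -1.0, 2.0]])
b = np.array([3.0, 4.0, 5.0])

print(np.linalg.solve(A, b))  # -> [1. 0. 2.]
```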
- D. Vaughan Griffiths, I.M. Smith (Authors)
- 2006 (Publication Date)
- Chapman and Hall/CRC (Publisher)
Chapter 2 Linear Algebraic Equations 2.1 Introduction One of the commonest numerical tasks facing engineers is the solution of sets of linear algebraic equations of the form a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1; a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2; a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3, (2.1) commonly written [A]{x} = {b}, (2.2) where [A] is a "matrix" and {x} and {b} are "vectors". In these equations the a_{ij} are constant known quantities, as are the b_i. The problem is to determine the unknown x_i. In this chapter we shall consider two different solution techniques, usually termed "direct" and "iterative" methods. The direct methods are considered first and are based on row by row "elimination" of terms, a process usually called "Gaussian Elimination". 2.2 Gaussian Elimination We begin with a specific set of equations: 10x_1 + x_2 − 5x_3 = 1 (a); −20x_1 + 3x_2 + 20x_3 = 2 (b); 5x_1 + 3x_2 + 5x_3 = 6 (c). (2.3) To "eliminate" terms, we could, for example, multiply equation (a) by two and add it to equation (b). This would produce an equation from which the term in x_1 had been eliminated. Similarly, we could multiply equation (a) by 0.5 and subtract it from equation (c). This would also eliminate the term in x_1, leaving an equation in (at most) x_2 and x_3. We could formally write this process as (b) − (−20/10) × (a) → 5x_2 + 10x_3 = 4 (d); (c) − (5/10) × (a) → 2.5x_2 + 7.5x_3 = 5.5 (e). (2.4) One more step of the same procedure would be (e) − (2.5/5) × (d) → 2.5x_3 = 3.5. (2.5) Thus, for sets of n simultaneous equations, however big n might be, after n steps of this process a single equation involving only the unknown x_n would remain. Working backwards from equation (2.5), a procedure usually called "back-substitution", x_3 can first be found as 3.5/2.5 or 1.4. Knowing x_3, substitution in equation 2.4(d) gives x_2 as −2.0, and finally substitution in equation 2.3(a) gives x_1 as 1.0.
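A quick numerical check of this worked example (our addition) confirms the hand elimination:

```python
import numpy as np

A = np.array([[10.0, 1.0, -5.0],
              [-20.0, 3.0, 20.0],
              [5.0, 3.0, 5.0]])
b = np.array([1.0, 2.0, 6.0])

# Reproduces the back-substituted values x3 = 1.4, x2 = -2.0, x1 = 1.0
print(np.linalg.solve(A, b))  # -> [ 1.  -2.   1.4]
```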
Classical and Modern Numerical Analysis
Theory, Methods and Practice
- Azmy S. Ackleh, Edward James Allen, R. Baker Kearfott, Padmanabhan Seshaiyer (Authors)
- 2009 (Publication Date)
- Chapman and Hall/CRC (Publisher)
This strategy requires row and column interchanges. Note: In practice, partial pivoting in most cases is adequate. Note: For some classes of matrices, no pivoting strategy is required for a stable elimination procedure. For example, no pivoting is required for a real symmetric positive definite matrix or for a strictly diagonally dominant matrix [99]. We now present a formal algorithm for Gaussian Elimination with partial pivoting. In reading this algorithm, recall that a_{11}x_1 + a_{12}x_2 + ⋯ + a_{1n}x_n = b_1; a_{21}x_1 + a_{22}x_2 + ⋯ + a_{2n}x_n = b_2; …; a_{n1}x_1 + a_{n2}x_2 + ⋯ + a_{nn}x_n = b_n. ALGORITHM 3.3 (Solution of a linear system of equations with Gaussian Elimination with partial pivoting and back-substitution) INPUT: A ∈ L(ℝⁿ) and b ∈ ℝⁿ. OUTPUT: an approximate solution x to Ax = b. FOR k = 1, 2, …, n − 1: 1. Find ℓ such that |a_{ℓk}| = max_{k ≤ j ≤ n} |a_{jk}| (k ≤ ℓ ≤ n). 2. Interchange row k with row ℓ: {c_j ← a_{kj}; a_{kj} ← a_{ℓj}; a_{ℓj} ← c_j} for j = 1, 2, …, n, and {d ← b_k; b_k ← b_ℓ; b_ℓ ← d}. 3. FOR i = k + 1, …, n: (a) m_{ik} ← a_{ik}/a_{kk}; (b) FOR j = k, k + 1, …, n: a_{ij} ← a_{ij} − m_{ik}a_{kj}; (c) b_i ← b_i − m_{ik}b_k. END FOR. END FOR. 4. Back-substitution: (a) x_n ← b_n/a_{nn}, and (b) x_k ← (b_k − Σ_{j=k+1}^{n} a_{kj}x_j)/a_{kk} for k = n − 1, n − 2, …, 1. END ALGORITHM 3.3. REMARK 3.27 In Algorithm 3.3, the computations are arranged "serially," that is, they are arranged so each individual addition and multiplication is done separately. However, it is efficient on modern machines, which have "pipelined" operations and usually also have more than one processor, to think of the operations as being done on vectors. Furthermore, we don't necessarily need to change entire rows, but just keep track of a set of indices indicating which rows are interchanged; for large systems, this saves a significant number of storage and retrieval operations.
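A direct transcription of Algorithm 3.3 into Python might look like the following. This is our sketch of the stated algorithm, not code from the book; the example has a zero in the (1,1) position, so the row interchange of step 2 is actually exercised:

```python
import numpy as np

def solve_gepp(A, b):
    """Gaussian elimination with partial pivoting plus back
    substitution, following the structure of Algorithm 3.3."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Step 1: row l with the largest pivot candidate |a_lk|.
        l = k + np.argmax(np.abs(A[k:, k]))
        # Step 2: interchange rows k and l in A and in b.
        if l != k:
            A[[k, l]] = A[[l, k]]
            b[[k, l]] = b[[l, k]]
        # Step 3: eliminate below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Step 4: back substitution.
    x = np.empty(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    return x

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 1.0],
              [2.0, 1.0, -1.0]])
b = np.array([3.0, 3.0, 2.0])
print(solve_gepp(A, b))  # pivoting handles A[0, 0] = 0; -> [1. 1. 1.]
```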
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.