Mathematics

Iterative Methods

Iterative methods in mathematics are algorithms used to approximate solutions to equations or systems of equations. Rather than producing an exact solution in a fixed, finite number of steps the way direct methods do, an iterative method repeatedly refines an initial guess, generating a sequence of approximations that converges toward the true solution. These methods are commonly used in numerical analysis and are particularly useful for solving large, complex problems.

Written by Perlego with AI-assistance
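
As a minimal, self-contained illustration of this update-and-repeat idea (a sketch in Python, not taken from the excerpts below), the Babylonian method approximates a square root by repeatedly averaging a guess x with a/x until successive guesses agree:

    # Babylonian (Heron's) method: iteratively refine a guess for sqrt(a).
    def babylonian_sqrt(a, x0=1.0, tol=1e-12, max_iter=100):
        x = x0
        for _ in range(max_iter):
            x_new = 0.5 * (x + a / x)    # update rule: average x and a/x
            if abs(x_new - x) < tol:     # successive iterates agree: stop
                return x_new
            x = x_new
        return x

    print(babylonian_sqrt(2.0))   # 1.4142135623730951, close to sqrt(2)

The same pattern, an update rule plus a stopping criterion, underlies all of the methods in the excerpts that follow.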

3 Key excerpts on "Iterative Methods"

  • Applied Linear Algebra and Optimization Using MATLAB
    Chapter 2
    Iterative Methods for Linear Systems
    2.1 Introduction
    The methods discussed in Chapter 1 for the solution of the system of linear equations have been direct, which required a finite number of arithmetic operations. The elimination methods for solving such systems usually yield sufficiently accurate solutions for approximately 20 to 25 simultaneous equations, where most of the unknowns are present in all of the equations. When the coefficient matrix is sparse (has many zeros), a considerably larger number of equations can be handled by the elimination methods. But these methods are generally impractical when many hundreds or thousands of equations must be solved simultaneously.
    There are, however, several methods that can be used to solve large numbers of simultaneous equations. These methods, called iterative methods, yield an approximation to the solution of a system of linear equations. Iterative methods are used most often for large, sparse systems of linear equations, and they are efficient in terms of computer storage and time requirements. Systems of this type arise frequently in the numerical solution of boundary value problems and partial differential equations. Unlike the direct methods, iterative methods may not always yield a solution, even if the determinant of the coefficient matrix is not zero.
    Iterative methods to solve the system of linear equations

        Ax = b                                            (2.1)

    start with an initial approximation x^(0) to the solution x of the linear system (2.1) and generate a sequence of vectors {x^(k)}_{k=0}^∞ that converges to x. Most of these iterative methods involve a process that converts the system (2.1) into an equivalent system of the form

        x = Tx + c                                        (2.2)

    for some square matrix T and vector c. After the initial vector x^(0) is selected, the sequence of approximate solutions is generated by computing x^(k+1) = Tx^(k) + c for each k = 0, 1, 2, ....
  • Nonlinear Ordinary Differential Equations in Transport Processes
    4 APPROXIMATE METHODS
    Introduction
    By approximate methods we shall mean analytical procedures for developing solutions in the form of functions which are close, in some sense, to the exact, but usually unknown, solution of the nonlinear problem. Therefore numerical methods fall into a separate category (see Chapter 5) since they result in tables of values rather than functional forms. Approximate methods may be divided into three broad interrelated categories: “iterative,” “asymptotic,” and “weighted residual.”
    The iterative methods include the development of series, methods of successive approximation, rational approximations, and the like. Some form of repetitive calculation via some operation F whose character is

        u_{n+1} = F[u_n, u_{n-1}, ...]

    successively improves the approximation. Transformation of the equation to an integral equation leads to a natural iterative method.
    Asymptotic procedures have at their foundation a desire to develop solutions that are approximately valid when a physical parameter (or variable) of the problem is very small, very large, or in close proximity to some characteristic value. Typical of these methods are the perturbation procedures, both regular and singular.
    The weighted residual methods, probably originating in the calculus of variations, require that the approximate solution be close to the exact solution in the sense that the difference between them (residual) is somehow minimized. Collocation insists that the residual vanish at a predetermined set of points while Galerkin’s method is so formulated that weighted integrals of the residual vanish. These error distribution techniques are sometimes called direct methods of the calculus of variations although they need not be related to a variational problem.
    Since these procedures are approximate, an important question concerning the accuracy of approximation must be asked.
  • Parallel Iterative Algorithms
    eBook - PDF


    From Sequential to Grid Computing

    • Jacques Mohcine Bahi, Sylvain Contassot-Vivier, Raphael Couturier (Authors)
    • 2007 (Publication Date)
    Direct algorithms lead to the solution after a finite number of elementary operations. The exact solution is theoretically reached if we suppose that there is no round-off error. In direct algorithms, the number of elementary operations can be predicted independently of the precision of the approximate solution. Iterative algorithms proceed by successive approximations and consist in the construction of a sequence {x^(k)}_{k ∈ N} the limit of which is the solution of (2.1):

        lim_{k→∞} x^(k) = A^(-1)b.

    The iterations are stopped when the desired precision is obtained. See Chapter 4 for more developments. Linear iterative algorithms can be expressed in the form

        x^(k+1) = Tx^(k) + c, with a known initial guess x^(0).      (2.2)

    Jacobi, Gauss-Seidel, overrelaxation and Richardson algorithms are linear iterative algorithms. If the mapping T does not depend on the current iteration k, then the algorithm is called stationary. In the opposite case, the algorithm is called nonstationary. The iterations generated by (2.2) correspond to the Picard successive approximations method associated to T. To obtain such algorithms, the fixed point of T has to coincide with the solution of (2.1). For that, the matrix A is partitioned into

        A = M − N                                                    (2.3)

    where M is a nonsingular matrix. The linear system (2.1) can thus be written Mx = Nx + b, or equivalently as the fixed point equation

        x = M^(-1)Nx + M^(-1)b.                                      (2.4)

    From this last equation, the following iterative algorithm is deduced:

        x^(k+1) = M^(-1)Nx^(k) + M^(-1)b, with a given x^(0),

    which has the form of the linear iterative algorithm (Algorithm 2.2).
    DEFINITION 2.1 The linear iterative algorithm (Algorithm 2.2) is convergent to the solution of the linear system (2.1) if, given any x^(0) ∈ R^n, lim_{k→+∞} x^(k) = A^(-1)b.
    The following theorem, whose proof is a deduction of Theorem 1.1 of Chapter 1, is essential for the study of the convergence of iterative algorithms; see, e.g., [113].
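
To make the splitting and fixed-point form described in the excerpts concrete, here is a minimal sketch of the Jacobi method in Python, assuming NumPy; the function name, tolerance, and test system are illustrative choices, not taken from the sources. Choosing M = diag(A) in the splitting A = M − N gives the stationary iteration x^(k+1) = M^(-1)Nx^(k) + M^(-1)b, which converges for any starting vector when the spectral radius of T = M^(-1)N is less than 1.

    import numpy as np

    def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
        """Stationary iteration x_{k+1} = T x_k + c with M = diag(A),
        N = M - A, T = M^{-1} N, and c = M^{-1} b."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        d = np.diag(A)                            # diagonal of A; M = diag(d), assumed nonsingular
        T = (np.diagflat(d) - A) / d[:, None]     # T = M^{-1} N: divide each row i by d[i]
        c = b / d                                 # c = M^{-1} b
        if np.max(np.abs(np.linalg.eigvals(T))) >= 1.0:
            print("warning: spectral radius of T is >= 1; iteration may not converge")
        x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
        for k in range(max_iter):
            x_new = T @ x + c
            if np.linalg.norm(x_new - x, ord=np.inf) < tol:   # desired precision reached
                return x_new, k + 1
            x = x_new
        return x, max_iter

    # Illustrative diagonally dominant test system (spectral radius of T < 1):
    A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
    b = [15.0, 10.0, 10.0]
    x, iters = jacobi(A, b)
    print(x, iters)   # agrees with np.linalg.solve(A, b) to the tolerance

Gauss-Seidel and overrelaxation fit the same template with a different choice of M; only the splitting changes, not the fixed-point structure.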
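
The Picard successive-approximations idea mentioned in the excerpts also applies to differential equations rewritten as integral equations: y' = f(t, y) with y(0) = y0 becomes y(t) = y0 + ∫₀ᵗ f(s, y(s)) ds, and each iterate feeds the previous approximation into the integral. A hedged numerical sketch, again in Python with NumPy, using the model problem y' = y, y(0) = 1 (exact solution e^t) and a simple trapezoidal rule on a grid; all names and the model problem are illustrative assumptions.

    import numpy as np

    def picard(f, y0, t, n_iter=20):
        """Successive approximations y_{n+1}(t) = y0 + integral_0^t f(s, y_n(s)) ds,
        with the integral evaluated cumulatively by the trapezoidal rule on grid t."""
        y = np.full_like(t, y0)            # y_0(t) = y0: the constant initial guess
        for _ in range(n_iter):
            g = f(t, y)                    # integrand sampled along the grid
            steps = 0.5 * (g[1:] + g[:-1]) * np.diff(t)          # trapezoid area per interval
            y = y0 + np.concatenate(([0.0], np.cumsum(steps)))   # running integral from t[0]
        return y

    t = np.linspace(0.0, 1.0, 201)
    y = picard(lambda s, u: u, 1.0, t)     # model problem y' = y, y(0) = 1
    print(y[-1], np.exp(1.0))              # y(1) approaches e ≈ 2.718281828...

Each sweep improves the approximation (for this model problem the n-th iterate reproduces the degree-n Taylor polynomial of e^t, up to quadrature error), mirroring the u_{n+1} = F[u_n, u_{n-1}, ...] pattern quoted above.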
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.