Newton's Method
Newton's Method is an iterative numerical technique used to find successively better approximations to the roots of a real-valued function. It starts with an initial guess and then uses the function's derivative to refine the estimate. This method is widely used for solving equations and optimization problems due to its rapid convergence.
Written by Perlego with AI-assistance
10 Key excerpts on "Newton's Method"
- eBook - ePub
Maths in Chemistry
Numerical Methods for Physical and Analytical Chemistry
- Prerna Bansal(Author)
- 2020(Publication Date)
- De Gruyter(Publisher)
Chapter 9 Numerical root-finding methods

9.1 Introduction
In chemistry, we often come across lengthy and complicated polynomial equations that are difficult to solve analytically. In algebra, a root is a zero of the function, that is, a point where the function f(x) is zero. There are three ways to solve equations: analytically, graphically and numerically. Numerical root-finding methods are the most robust, solving even very complicated equations with a great degree of ease. The most important technique in any numerical method is iteration: an approximation of the expected value is taken, and an algorithm is applied that improves the approximation. This step is repeated until successive approximations yield almost the same value. Numerical methods are particularly useful when solving intensive polynomials for their roots. The following numerical methods can be used to find the roots of an equation:
- Newton–Raphson method
- Iteration method
- Binary bisection method
- Secant method
- Regula-Falsi method
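Of the methods listed above, binary bisection is the simplest to sketch. The fragment below is an illustrative sketch, not code from the book; the cubic f and the bracket [1, 2] are assumed example inputs.

```python
# Minimal sketch of the binary bisection method: repeatedly halve an
# interval [a, b] on which f changes sign, keeping the half that still
# brackets the root.

def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:       # root lies in [a, m]
            b, fb = m, fm
        else:                 # root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

# Illustrative equation: x^3 - x - 2 = 0 has a root between 1 and 2
root = bisect(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Each iteration halves the interval, so the error is divided by a constant factor per step — the slow but guaranteed convergence the later excerpts contrast with Newton's method.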
9.2 Newton–Raphson method
Newton–Raphson (also called Newton's iteration or Newton's technique) is the most widely used root-finding algorithm for nonlinear equations, that is, real-valued single-variable functions with f(x) = 0. It approaches the root of the equation iteratively, starting from an arbitrarily chosen estimate close to the real root. The N–R method converges quadratically as we approach the root. It needs only one initial guess for the root. This method involves expansion of the Taylor series of a function f(x
- eBook - PDF
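The iteration described in this excerpt can be sketched as follows. This is a minimal illustration, not code from the book; the test function x² − 2 and the starting guess 1.5 are assumed examples.

```python
# Newton-Raphson sketch: one initial guess, then repeat
# x_{n+1} = x_n - f(x_n)/f'(x_n) until the update is tiny.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # breaks down if f'(x) is (near) zero
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Illustrative use: root of x^2 - 2 (i.e. sqrt(2)) from the guess 1.5
r = newton(lambda x: x*x - 2, lambda x: 2*x, 1.5)
```

The quadratic convergence the excerpt mentions shows up in practice as the number of correct digits roughly doubling per iteration once the iterate is close to the root.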
- Bellman(Author)
- 1970(Publication Date)
- Academic Press(Publisher)
This method is a very famous one, called the Newton-Raphson method. Sometimes, and quite unjustly as far as Raphson is concerned, it is abbreviated and called merely "Newton's method." We shall assume an elementary knowledge of differential calculus in what follows, namely that required to determine the equation of a tangent to the curve y = f(x) at a specified point. We confess that it is a bit embarrassing that the simplest examples we can conjure up to illustrate the method of successive approximations require more mathematical training than we require for the subsequent treatment of Eq. (1.1). This illustrates, however, a fundamental point: that "elementary" is not synonymous with "simple." In compensation, however, we feel that it is important for the student both to appreciate the versatility of the method of successive approximations and to become aware of the problems usually encountered in its application.

7. The Newton-Raphson Method: Square Roots

To illustrate the method in one of its simplest and most ancient forms, let us consider the problem of finding the square root of a positive number a. In analytic terms, we wish to find the positive solution of the equation

x² = a   (1)

In Fig. 3, we have a graph of the parabola

y = x² − a   (2)

Our objective is to determine numerically the coordinates of P, the point (√a, 0). Since (1) does not have the form x = f(x), we cannot immediately apply the algorithm discussed in §6. Suppose we make an initial guess at the x-coordinate of P, say x₁. For example, if a = 5, we might guess x₁ = 3 (see Fig. 4). How do we find a better estimate? We suppose that the parabola is, approximately, for our purposes, the straight line which is tangent to it at the point Q₁. If the first guess is good enough, this is a reasonable procedure, as we shall see.
- James F. Epperson(Author)
- 2014(Publication Date)
- Wiley(Publisher)
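The square-root iteration in the Bellman passage above — Newton's method applied to x² = a, with a = 5 and the first guess 3 — can be sketched as below. The function name is illustrative, not from the text.

```python
def newton_sqrt(a, x=3.0, tol=1e-12):
    # Newton applied to f(x) = x**2 - a gives
    # x_{n+1} = x_n - (x_n**2 - a) / (2 * x_n) = (x_n + a / x_n) / 2,
    # the ancient "Babylonian" square-root recipe.
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)
    return x

s = newton_sqrt(5.0)   # the excerpt's example: a = 5, first guess 3
```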
Even if we could theoretically write a formula for f from which f′ could be constructed, is this a practical task to set for ourselves? One obvious way to deal with this problem is to use an approximation to the derivative in the Newton formula. For example, in §2.2 we saw that f′(x) ≈ (f(x + h) − f(x))/h, so we could use this in (3.7) to get the new iteration

x_{n+1} = x_n − f(x_n) · h / (f(x_n + h) − f(x_n))

We will, in fact, analyze the convergence of this method in §3.11.2. In this section, we want to pursue a similar, but slightly different idea. Newton's Method is derived, geometrically, by drawing a tangent line from the current approximate root down to the axis. The derivative is required because we use the tangent line. If, instead, we used a secant line (i.e., one passing through two points on the curve instead of just one), then no derivative would be required. Let x₀ and x₁ be given — thus we have two initial guesses — and consider Figure 3.6. Construct the line that passes through (x₀, f(x₀)) and (x₁, f(x₁)) and use its root to define the next iterate, x₂. The line is defined by the formula

y − f(x₁) = [(f(x₁) − f(x₀))/(x₁ − x₀)] (x − x₁)

so that the next iterate is given by (setting y = 0, and solving for x = x₂ in the equation above):

x₂ = x₁ − f(x₁) (x₁ − x₀) / (f(x₁) − f(x₀)).

Figure 3.6 Illustration of secant method.

More generally, then, we have

x_{n+1} = x_n − f(x_n) (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))

This is the secant method. Note that it is consistent with Newton's Method, where we use the approximation

f′(x_n) ≈ (f(x_n) − f(x_{n−1})) / (x_n − x_{n−1})

The error in this derivative approximation is proportional to x_n − x_{n−1}. Thus, if we assume that the iteration is converging (so that x_n − x_{n−1} → 0), then the secant method becomes more and more like Newton's Method. Hence, we expect rapid convergence for x₀ near α. The secant method has a number of advantages over Newton's Method.
- eBook - PDF
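The secant recurrence derived in this excerpt can be sketched as below; it is an illustrative fragment, and the test cubic and starting pair (2, 3) are assumed examples.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).

    Needs two initial guesses but no derivative."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("secant line is horizontal")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        # Shift the window: the newest two points define the next secant
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("secant iteration did not converge")

# Illustrative use: Newton's classic example x^3 - 2x - 5 = 0
r = secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
```

Only one new function evaluation per step is needed (the previous f value is reused), which is one of the advantages over Newton's Method the excerpt alludes to.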
- L.T. Watson, J.A. Ford, M. Bartholomew-Biggs(Authors)
- 2001(Publication Date)
- North Holland(Publisher)
Journal of Computational and Applied Mathematics 124 (2000) 25–44. The theory of Newton's Method. A. Galantai, Institute of Mathematics, The University of Miskolc, 3515 Miskolc-Egyetemvaros, Hungary. Received 18 June 1999; received in revised form 31 January 2000.

Abstract. We review the most important theoretical results on Newton's Method concerning the convergence properties, the error estimates, the numerical stability and the computational complexity of the algorithm. We deal with the convergence for smooth and nonsmooth equations, underdetermined equations, and equations with singular Jacobians. Only those extensions of the Newton method are investigated where a generalized derivative and/or a generalized inverse is used. © 2000 Elsevier Science B.V. All rights reserved.

1. Introduction

The Newton or Newton–Raphson method has the form

x_{k+1} = x_k − [F′(x_k)]⁻¹ F(x_k),   k = 0, 1, …   (1)

for the solution of the nonlinear equation

F(x) = 0   (F : X → Y),   (2)

where X and Y are Banach spaces and F′ is the Fréchet derivative of F. The geometric interpretation of the Newton method is well known if F is a real function. In such a case x_{k+1} is the point where the tangent line y − F(x_k) = F′(x_k)(x − x_k) of the function F(x) at the point (x_k, F(x_k)) intersects the x-axis. The geometric interpretation of the complex Newton method (F : C → C) is given by Yau and Ben-Israel [70]. In the general case F(x) is approximated at the point x_k as

F(x) ≈ L_k(x) = F(x_k) + F′(x_k)(x − x_k).   (3)

The zero of L_k(x) = 0 defines the new approximation x_{k+1}. Variants of the Newton method are the damped Newton method

x_{k+1} = x_k − t_k [F′(x_k)]⁻¹ F(x_k)   (t_k > 0, k = 0, 1, 2, …)   (4)

- Owen Jones, Robert Maillardet, Andrew Robinson(Authors)
- 2014(Publication Date)
- Chapman and Hall/CRC(Publisher)
At iteration 10 value of x is: 20.39797
At iteration 11 value of x is: 23.4134
At iteration 12 value of x is: 26.56671
At iteration 13 value of x is: 29.84637
At iteration 14 value of x is: 33.24243
At iteration 15 value of x is: 36.74626
At iteration 16 value of x is: 40.3503
At iteration 17 value of x is: 44.04789
At iteration 18 value of x is: 47.83317
At iteration 19 value of x is: 51.70089
At iteration 20 value of x is: 55.64637
Algorithm failed to converge
NULL

This example illustrates that as a method for finding roots, the fixed-point method has some disadvantages. One needs to convert the problem into fixed-point form, but there are many ways to do this, each of which will have different convergence properties and some of which will not converge at all. We consider the question of the best way of converting a root-finding problem to a fixed-point problem in Exercise 7. It also turns out that the fixed-point method is relatively slow, in that the error is usually divided by a constant factor at each iteration. Both of our next two algorithms, the Newton–Raphson method and the secant method, converge more quickly because they make informed guesses as to where to find a better approximation to the root.

10.3 The Newton–Raphson method

Suppose our function f is differentiable with continuous derivative f′ and a root a. Let x₀ ∈ R and think of x₀ as our current 'guess' at a. Now the straight line through the point (x₀, f(x₀)) with slope f′(x₀) is the best straight-line approximation to the function f(x) at the point x₀ (this is the meaning of the derivative). The equation of this straight line is given by

f′(x₀) = (f(x₀) − y) / (x₀ − x).

Now this straight line crosses the x-axis at a point x₁, which should be a better approximation than x₀ to a. To find x₁ we observe

f′(x₀) = (f(x₀) − 0) / (x₀ − x₁)

and so

x₁ = x₀ − f(x₀) / f′(x₀).
- eBook - PDF
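A Newton–Raphson loop that reports its progress in the style of the iteration printout above can be sketched as follows. This is an illustrative Python fragment (the book itself works in R); the quadratic test function and starting point are assumed examples.

```python
def newton_raphson(f, fprime, x0, tol=1e-9, max_iter=100, verbose=False):
    """Newton-Raphson with an optional per-iteration trace.

    Returns None on failure, mirroring the NULL in the excerpt's printout."""
    x = x0
    for n in range(max_iter):
        if abs(f(x)) < tol:
            return x
        x = x - f(x) / fprime(x)
        if verbose:
            print(f"At iteration {n + 1} value of x is: {x}")
    print("Algorithm failed to converge")
    return None

# Illustrative use: root of x^2 - 5 from a deliberately poor guess
root = newton_raphson(lambda x: x**2 - 5, lambda x: 2*x, 10.0)
```

Unlike the diverging fixed-point trace shown above, the same starting point converges here in a handful of steps, because each update uses the slope f′ to make an informed guess.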
Optimization
Foundations and Applications
- H. Ronald Miller(Author)
- 2011(Publication Date)
- Wiley-Interscience(Publisher)
Note that the procedure breaks down if f′(x*) = 0, reflecting exactly the same kind of geometric problem as was identified earlier for the secant approach.

The Modified Newton Method

While evaluation of the derivative at each step may not be difficult for f(x), it can become computationally burdensome for a function of several variables, f(X). For that reason, the modified Newton method replaces f′(xᵏ) at each step with f′(x⁰), so (5.3) is simplified to (5.4).

(d) The series of increasingly wide initial intervals of uncertainty that was used in the previous example. As in Table 5.2, comparable starting values for x⁰, x¹, and x² are employed for Brent's method, and the first element in each column of the table summarizes the iteration count for the initial points from Table 5.3. (Again, the iteration counts reported are values of k−1 for all methods except Brent's, where it is k−2.) You can make the same kinds of comparisons as were done with the results in Tables 5.1 and 5.2. For this function and these starting points, the secant method is usually best, followed fairly closely sometimes by bisection and sometimes by Brent; and regula falsi comes in (again) a poor fourth.

FIGURE 5.5 (a) The Newton-Raphson method; (b) the modified Newton method.

This may not seem intuitively so appealing, but Newton methods are most effectively employed near to a root, when the step xᵏ⁺¹ − xᵏ is small, so that the first-order Taylor series approximation is relatively accurate. In that case, it is quite possible that the derivative does not change greatly for successive steps. This method is illustrated in Figure 5.5b. [For reasons made clear by the figure, this is also known as the parallel chord method.]

The Quasi-Newton Method

This variation on Newton-Raphson can be described as building on the first-order Taylor series approximation, replacing the derivative in (5.3) with a forward-difference approximation, but evaluated at xᵏ⁻¹.
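The modified ("parallel chord") Newton method described above can be sketched as below. This is an illustrative fragment under assumed inputs, not the book's own code.

```python
def modified_newton(f, fprime, x0, tol=1e-10, max_iter=500):
    """Parallel chord method: freeze the derivative at f'(x0) and reuse it,
    trading quadratic convergence for a cheaper step (linear convergence)."""
    d0 = fprime(x0)           # evaluated once, at the starting point only
    x = x0
    for _ in range(max_iter):
        step = f(x) / d0
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("modified Newton iteration did not converge")

# Illustrative use: x^2 - 2 from x0 = 1.5; the chord slope 2*1.5 = 3
# is close enough to the slope at the root for linear convergence.
r = modified_newton(lambda x: x*x - 2, lambda x: 2*x, 1.5)
```

The trade-off matches the text: each step avoids a derivative evaluation, but convergence is only linear, so the variant pays off mainly when derivatives are expensive and the start is already near a root.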
- Willi-Hans Steeb, Yorick Hardy;Alexandre Hardy;Ruedi Stoop;(Authors)
- 2004(Publication Date)
- WSPC(Publisher)
Consider the equation

f(x) = 0   (1)

where it is assumed that f is at least twice differentiable. Let I be some interval containing a root of f. The Newton-Raphson method can be derived by taking the tangent line to the curve y = f(x) at the point (x_n, f(x_n)) corresponding to the current estimate, x_n, of the root. The intersection of this line with the x-axis gives the next estimate to the root, x_{n+1}. The gradient of the curve y = f(x) at the point (x_n, f(x_n)) is f′(x_n). The tangent line at this point has the form y = f′(x_n)x + b. Since it passes through (x_n, f(x_n)) we see that b = f(x_n) − x_n f′(x_n). Therefore the tangent line is y = f′(x_n)x + f(x_n) − x_n f′(x_n). To determine where this line cuts the x-axis, we set y = 0. Taking this point of intersection as the next estimate, x_{n+1}, to the root we have 0 = f′(x_n)x_{n+1} + f(x_n) − x_n f′(x_n). It follows that

x_{n+1} = x_n − f(x_n) / f′(x_n)   (2)

This is the Newton-Raphson method. This method has the form: next estimate = current estimate + correction term. The correction term is −f(x_n)/f′(x_n), and this must be small when x_n is close to the root if convergence is to be achieved. This will depend on the behaviour of f′(x) near the root and, in particular, difficulty will be encountered when f′(x) and f(x) have roots close together. Since the method is of the form x_{n+1} = g(x_n) with g(x) = x − f(x)/f′(x), the order of the method can be examined. Differentiating g leads to g′(x) = (f(x)f″(x))/(f′(x))². For convergence we require that

|f(x)f″(x)/(f′(x))²| < 1   (3)

for all x in some interval I containing the root. Since f(a) = 0, the above condition is satisfied at the root x = a provided that f′(a) ≠ 0. Then, provided that g′(x) is continuous, an interval I must exist in the neighbourhood of the root over which (3) is satisfied. Difficulty is sometimes encountered when the interval I is small because the initial guess must be taken from the interval.
- eBook - ePub
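The convergence condition (3) in this excerpt is easy to check numerically. The sketch below does so for the illustrative choice f(x) = x² − 2 (an assumed example, not from the text): g′ vanishes exactly at the root and stays well below 1 nearby.

```python
# Checking |f(x) f''(x) / f'(x)**2| < 1, i.e. |g'(x)| < 1 for
# g(x) = x - f(x)/f'(x), near the root of an example function.

def g_prime(f, f1, f2, x):
    """g'(x) = f(x) f''(x) / f'(x)**2 for the Newton iteration function g."""
    return f(x) * f2(x) / f1(x) ** 2

f  = lambda x: x * x - 2      # example function, root at sqrt(2)
f1 = lambda x: 2 * x          # f'
f2 = lambda x: 2.0            # f''

val_at_root = g_prime(f, f1, f2, 2 ** 0.5)   # f vanishes, so g' is ~0
val_nearby  = abs(g_prime(f, f1, f2, 1.2))   # still comfortably below 1
```

That g′(a) = 0 at the root (when f′(a) ≠ 0) is precisely why the method's correction term shrinks so fast — it is the source of the quadratic convergence discussed throughout these excerpts.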
An Introduction to Numerical Methods
A MATLAB® Approach
- Abdelwahab Kharab, Ronald Guenther(Authors)
- 2023(Publication Date)
- Chapman and Hall/CRC(Publisher)
Rather than using a secant line, the method uses a tangent line to the curve. Figure 3.8 gives a graphical interpretation of the method. To use the method, we begin with an initial guess x₀, sufficiently close to the root α. The next approximation x₁ is given by the point at which the tangent line to f at (x₀, f(x₀)) crosses the x-axis. It is clear that the value x₁ is much closer to α than the original guess x₀. If x_{n+1} denotes the value obtained by the succeeding iterations, that is, the x-intercept of the tangent line to f at (x_n, f(x_n)), then the formula relating x_n and x_{n+1}, known as Newton's method, is given by

x_{n+1} = x_n − f(x_n) / f′(x_n),   n ≥ 0   (3.12)

provided f′(x_n) is not zero.

FIGURE 3.8 Newton's method and the first two approximations to its zero α.

To derive Eqn. (3.12), notice that the equation of the tangent line at (x₀, f(x₀)) is

y − f(x₀) = f′(x₀)(x − x₀).

If x₁ denotes the point where this line intersects the x-axis, then y = 0 at x = x₁, that is,

−f(x₀) = f′(x₀)(x₁ − x₀),

and solving for x₁ gives

x₁ = x₀ − f(x₀) / f′(x₀).

By repeating the process using the tangent line at (x₁, f(x₁)), we obtain for x₂

x₂ = x₁ − f(x₁) / f′(x₁).

By writing the preceding equation in more general terms, one gets Eqn. (3.12). A suitable way of terminating the iterations is by using one of the stopping criteria suggested for the secant method. An algorithmic statement of Newton's method is given below.

Given a continuously differentiable function f and an initial value x₀
for n = 0, 1, …, ITMAX
    x_{n+1} ← x_n − f(x_n) / f′(x_n)

EXAMPLE 3.9 Use Newton's method to compute a root of x³ − x² − 1 = 0 to an accuracy of 10⁻⁴. Use x₀ = 1. The derivative of f is f′(x) = 3x² − 2x
- eBook - PDF
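Example 3.9 above can be worked through with a short sketch (Python here, rather than the book's MATLAB; the stopping rule on the step size is one of the criteria the text suggests):

```python
def newton(f, fprime, x0, tol=1e-4, max_iter=50):
    """Newton's method, stopping when successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("ITMAX exceeded")

# Example 3.9: root of x^3 - x^2 - 1 = 0 to 1e-4, starting from x0 = 1,
# with f'(x) = 3x^2 - 2x
root = newton(lambda x: x**3 - x**2 - 1,
              lambda x: 3*x**2 - 2*x,
              1.0)
```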
- Endre Süli, David F. Mayers(Authors)
- 2003(Publication Date)
- Cambridge University Press(Publisher)
Instead, the paper contains an example of a cubic polynomial whose roots are found by purely algebraic and rather complicated substitutions. In 1690, Joseph Raphson (1648–1715), in the Preface to his Analysis aequationum universalis, describes his version of Newton's method as 'not only, I believe, not of the same origin, but also, certainly, not with the same development' as Newton's method. Further improvements to the method, and its form as we know it today, were given by Thomas Simpson in his Essays in Mathematicks (1740). Simpson presents it as 'a new method for the solution of equations' using the 'method of fluxions', i.e., derivatives. It is argued in Ypma's article that Simpson's contributions to this subject have been underestimated, and 'it would seem that the Newton–Raphson–Simpson method is a designation more nearly representing facts of history of this method which lurks inside millions of modern computer programs and is printed with Newton's name attached in so many textbooks'. The convergence analysis of Newton's method was initiated in the first half of the twentieth century by L.V. Kantorovich.¹ More recently, Smale,² Dedieu and Shub,³ and others have provided significant insight into the properties of Newton's method. A full discussion of the global behaviour of the logistic equation (1.33), and other examples, will be found in P.G. Drazin, Nonlinear Systems, Cambridge University Press, Cambridge, 1992, particularly Chapters 1 and 3. The secant method is also due to Newton (cf. Section 3 of Ypma's paper cited above), and is found in a collection of unpublished notes termed 'Newton's Waste Book', written around 1665. In this chapter, we have been concerned with the iterative solution of equations for a real-valued function of a single real variable. In Chapter 4, we shall discuss the iterative solution of nonlinear systems of equations. ¹ L.V. Kantorovich, Functional analysis and applied mathematics, Uspekhi Mat.
- Ioannis K. Argyros, Yeol J. Cho, Saïd Hilout(Authors)
- 2012(Publication Date)
- CRC Press(Publisher)
Chapter 5 Gauss–Newton Method

The Gauss–Newton method is an alternative to Newton's method. We study in this chapter the convergence of the Gauss–Newton method under Lipschitz and average Lipschitz-type conditions.

5.1 Convergence

We establish in this section a new semilocal convergence analysis of the Gauss–Newton method (GNM) for solving nonlinear equations in the Euclidean space. Using our new idea of recurrent functions and a combination of center-Lipschitz and Lipschitz conditions, we provide, under the same or weaker hypotheses than before (cf. [226], [469]), a tighter convergence analysis. The results can be extended to the case where outer or generalized inverses are used. We are concerned with the problem of finding x⋆ ∈ Rⁱ minimizing the objective function given by

G(x) := (1/2)‖F(x)‖² = (1/2)F(x)ᵀF(x),   (5.1)

where ‖·‖ denotes the Euclidean norm and F is a Fréchet-differentiable function, defined on a convex subset D of Rⁱ with values in Rʲ (i ≤ j). Many problems in applied mathematics and also in engineering are solved by finding such solutions x⋆ (cf. [95], [586]). Except in special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to the solution of the equation. Iteration methods are also used for solving optimization problems like (5.1); the iteration sequences converge to an optimal solution of the problem at hand. In particular, for x⋆ to be a local minimum it is necessary that it be a zero of the gradient ∇G of G, too:

∇G(x⋆) = Jᵀ(x⋆)F(x⋆) = 0,   (5.2)

with

J(x) = F′(x)   (x ∈ D).   (5.3)

The iterative method for computing such a zero is the so-called Gauss–Newton method (GNM), as introduced by Ben-Israel (cf. [226]):

x_{n+1} = x_n − J⁺(x_n)F(x_n)   (n ≥ 0, x₀ ∈ D),   (5.4)

where J⁺ denotes the well-known Moore–Penrose pseudoinverse of J (cf. [163]) (see also Definition 5.1).
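The GNM iteration (5.4) can be sketched with NumPy's pseudoinverse as below. The residual function here — fitting y = exp(x·t) to synthetic data with true parameter 0.5 — is an illustrative assumption, not an example from the text.

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-10, max_iter=100):
    """Gauss-Newton iteration x_{n+1} = x_n - pinv(J(x_n)) @ F(x_n),
    minimizing (1/2)||F(x)||^2 via the Moore-Penrose pseudoinverse."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.pinv(J(x)) @ F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative least-squares problem: fit y = exp(x * t) to data
t = np.array([0.0, 1.0, 2.0])
y = np.exp(0.5 * t)                                   # true parameter: 0.5
F = lambda x: np.exp(x[0] * t) - y                    # residual vector (3,)
J = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)   # Jacobian (3, 1)

x_star = gauss_newton(F, J, [1.0])
```

Because the Jacobian here is a tall 3×1 matrix, the pseudoinverse step is exactly the least-squares Newton step the excerpt describes; for a zero-residual problem like this one, convergence near the solution is rapid.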
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.









