
Secant Method

The Secant Method is a numerical method used to find the roots of a function. It is similar to the Newton-Raphson method, but instead of using the derivative of the function it approximates the derivative from two points on the curve. The method is iterative and converges to a root of the function given two sufficiently good initial guesses.

Written by Perlego with AI-assistance

10 Key excerpts on "Secant Method"

  • Numerical Analysis: An Introduction
    • Timo Heister, Leo G. Rebholz, Fei Xue (Authors)
    • 2019 (Publication Date)
    • De Gruyter (Publisher)
    … This defines the following algorithm:
    Algorithm 46 (Secant Method).
    Given: f, tol, x_0, x_1
    while |x_{k+1} − x_k| > tol:
        x_{k+1} = x_k − f(x_k) / [ (f(x_k) − f(x_{k−1})) / (x_k − x_{k−1}) ].
    Hence we may think of the Secant Method as Newton’s method, but with the derivative term f′(x_k) replaced by the backward difference
        (f(x_k) − f(x_{k−1})) / (x_k − x_{k−1}).
    It is tedious (but not hard) to prove that the Secant Method converges superlinearly, with rate p = (1 + √5)/2 ≈ 1.618.
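A minimal Python sketch of this iteration (an illustration, not the book's code; the function, tolerance, and iteration cap are placeholders), stopping when successive iterates agree to within tol:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration: x_{k+1} = x_k - f(x_k)*(x_k - x_{k-1})/(f(x_k) - f(x_{k-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update
        if abs(x2 - x1) <= tol:                # stop when |x_{k+1} - x_k| <= tol
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Example: the root of x**3 - 2*x - 5 near 2.0946
print(secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0))
```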

    5.6 Comparing bisection, Newton, Secant Method

    We now compare the bisection, Newton, and Secant Methods in the following table.
    (1) The Secant Method can be extended to higher dimensions, but the approximation of the derivative requires more work. There are several similar methods known as “Quasi-Newton” or “Jacobian-free Newton” methods.
    (2) The user needs to supply the derivative to the algorithm, which can be problematic, for example, if it is difficult to compute by hand or not accessible.
    (3) To achieve the given convergence rate there are more technical requirements on f
  • Maths in Chemistry: Numerical Methods for Physical and Analytical Chemistry
    • Prerna Bansal (Author)
    • 2020 (Publication Date)
    • De Gruyter (Publisher)
    … Hence, in the 16th iteration, the root converges to 1.7320, which is correct up to the fourth decimal place.
    9.5 Secant Method
    As the name suggests, secant implies a line which passes through two points of the curve. It is an algorithm for finding the roots of a scalar-valued function of a single variable x when no information about the derivative is given. The Secant Method is an algorithm used to find the roots of nonlinear functions. Let x_0 and x_1 be the two initial guesses for the root of f(x) = 0; then f(x_0) and f(x_1), respectively, are their function values. A line is drawn between the two guess approximations, and the point where it crosses the x-axis (Figure 9.6) will be the approximate root of f(x) using the approximate guesses (x_0 and x_1). The Secant Method assumes that the equation or function is approximately linear in the region of interest.
    [Figure 9.6: Secant Method.]
    Hence, the slope m can be written as
        (9.27)  m = (y − f(x_1))/(x − x_1) = (f(x_1) − f(x_0))/(x_1 − x_0).
    So, in the above figure, it can be seen that the line between the two guess values touches the x-axis at x_2, and here the value of y = 0 (the root is the value of the variable where the function becomes zero); therefore
        (9.28)  m = (0 − f(x_1))/(x_2 − x_1) = (f(x_1) − f(x_0))/(x_1 − x_0)
        (9.29)  −f(x_1) = [(f(x_1) − f(x_0))/(x_1 − x_0)] (x_2 − x_1),
    which on rearranging gives
        (9.30)  x_2 = x_1 − f(x_1) (x_1 − x_0)/(f(x_1) − f(x_0)).
    Hence, x_2 is an approximation to the root rather than the exact root: it is not truly a root, but it lies quite near the original root.
    [Figure 9.7: Secant Method.]
    To do the same, a new secant would be drawn (Figure 9.7)
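As a quick numerical illustration of equation (9.30), one secant step in Python (the function f(x) = x² − 3 and the two guesses are illustrative, chosen so that the root is the value 1.7320 mentioned above):

```python
# One secant step, eq. (9.30): x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0))
f = lambda x: x**2 - 3        # illustrative function; root at sqrt(3) ≈ 1.7320
x0, x1 = 1.0, 2.0             # illustrative initial guesses
x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
print(x2)                     # 1.666..., already much closer to 1.7320 than either guess
```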
  • Numerical Methods for Equations and its Applications
    • Ioannis K. Argyros, Yeol J. Cho, Saïd Hilout (Authors)
    • 2012 (Publication Date)
    • CRC Press (Publisher)
    3.5 Directional Secant Methods. A semilocal convergence analysis for directional Secant-type methods in n variables is provided in this section. Using weaker hypotheses than in the related work by An and Bai (cf. [10]), and motivated by optimization considerations, we provide, at the same computational cost, a semilocal convergence analysis with the following advantages: weaker convergence conditions; a larger convergence domain; finer error estimates on the distances involved; and at least as precise information on the location of the solution. A numerical example where our results apply to solve an equation, but the ones in [10] do not, is also provided. In a second example, we show how to implement the method. We are concerned with the problem of approximating a solution x⋆ of equation (2.1), where F is a differentiable mapping defined on a convex subset D of Rⁿ (n a natural number) with values in R. In computer graphics, we usually compute the intersection C = A ∩ B of two surfaces A and B in R³ (cf. [228], [510]). If the two surfaces are explicitly given by A = {(u, v, w)ᵀ : w = F₁(u, v)} and B = {(u, v, w)ᵀ : w = F₂(u, v)}, then the solution x⋆ = (u⋆, v⋆, w⋆)ᵀ ∈ C must satisfy the nonlinear equation F₁(u⋆, v⋆) = F₂(u⋆, v⋆) and w⋆ = F₁(u⋆, v⋆). Hence, we must solve a nonlinear equation in two variables x = (u, v)ᵀ of the form F(x) = F₁(x) − F₂(x) = 0, which is a special case of equation (2.1). In mathematical programming (cf. [592]), for an equality-constrained optimization problem, e.g., min ψ(x) s.t. F(x) = 0, where ψ, F : D ⊆ Rⁿ → R are nonlinear mappings, we usually seek a feasible point to start a numerical algorithm, which again requires the determination of x⋆.
  • Introduction to Scientific Programming and Simulation Using R
    • Owen Jones, Robert Maillardet, Andrew Robinson (Authors)
    • 2014 (Publication Date)
    If the derivative is hard to compute or does not exist, then we can use the Secant Method, which only requires that the function f is continuous. Like the Newton–Raphson method, the Secant Method is based on a linear approximation to the function f. Suppose that f has a root at a. For this method we assume that we have two current ‘guesses’, x0 and x1, for the value of a. We will think of x0 as an older guess and we want to replace the pair x0, x1 by the pair x1, x2, where x2 is a new guess. To find a good new guess x2 we first draw the straight line from (x0, f(x0)) to (x1, f(x1)), which is called a secant of the curve y = f(x). Like the tangent, the secant is a linear approximation of the behaviour of y = f(x) in the region of the points x0 and x1. As the new guess we will use the x-coordinate x2 of the point at which the secant crosses the x-axis (Figure 10.4). Now the equation of the secant is given by
        (y − f(x1))/(x − x1) = (f(x0) − f(x1))/(x0 − x1)
    and so x2 can be found from
        (0 − f(x1))/(x2 − x1) = (f(x0) − f(x1))/(x0 − x1),
    which gives
        x2 = x1 − f(x1) (x0 − x1)/(f(x0) − f(x1)).
    Repeating this we get a second-order recurrence relation (each new value depends on the previous two):
        Secant Method: x_{n+1} = x_n − f(x_n) (x_n − x_{n−1})/(f(x_n) − f(x_{n−1})).
    Note that if x_n and x_{n−1} are close together, then
        f′(x_n) ≈ (f(x_n) − f(x_{n−1}))/(x_n − x_{n−1}),
    so we can view the Secant Method as an approximation of the Newton–Raphson method, where we substitute (f(x_n) − f(x_{n−1}))/(x_n − x_{n−1}) for f′(x_n). The convergence properties of the Secant Method are similar to those of the Newton–Raphson method. If f is well behaved at a and you start with x0 and x1 sufficiently close to a, then x_n will converge to a quickly, though not quite as fast as the Newton–Raphson method. As for the Newton–Raphson method, we cannot guarantee convergence.
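A tiny numerical check of the substitution above (the function and the two points are illustrative): when x_n and x_{n−1} are close, the secant slope is a good stand-in for f′(x_n).

```python
import math

f = lambda x: math.cos(x) - x            # illustrative function with a root near 0.739
xn, xn_prev = 0.74, 0.739                # two nearby iterates (illustrative)
secant_slope = (f(xn) - f(xn_prev)) / (xn - xn_prev)
exact_slope = -math.sin(xn) - 1.0        # f'(x) = -sin(x) - 1
print(secant_slope, exact_slope)         # the two slopes agree closely
```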
  • Numerical Methods in Engineering and Science
    x) is approximated by a secant line but at each iteration, two most recent approximations to the root are used to find the next approximation. Also it is not necessary that the interval must contain the root.
    Taking x0, x1 as the initial limits of the interval, we write the equation of the chord joining these as
        y − f(x1) = [(f(x1) − f(x0))/(x1 − x0)] (x − x1).
    Then the abscissa of the point where it crosses the x-axis (y = 0) is given by
        x2 = x1 − f(x1) (x1 − x0)/(f(x1) − f(x0)),
    which is an approximation to the root. The general formula for successive approximations is, therefore, given by
        x_{n+1} = x_n − f(x_n) (x_n − x_{n−1})/(f(x_n) − f(x_{n−1})), n ≥ 1.
    Rate of Convergence. If at any iteration f(x_n) = f(x_{n−1}), this method fails, which shows that it does not necessarily converge. This is a drawback of the Secant Method compared with the method of false position, which always converges. But once the Secant Method converges, its rate of convergence is 1.6, which is faster than that of the method of false position.
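That failure mode is a zero denominator in the update formula; a defensive sketch (names illustrative) checks for it before dividing and leaves the fallback (for example, bisection or false position) to the caller:

```python
def secant_step(f, x_prev, x_curr):
    """Return the next secant iterate, or None if f(x_curr) == f(x_prev)
    (horizontal secant: the method fails at this step)."""
    f_prev, f_curr = f(x_prev), f(x_curr)
    denom = f_curr - f_prev
    if denom == 0.0:
        return None   # caller should fall back to a bracketing method
    return x_curr - f_curr * (x_curr - x_prev) / denom
```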
    EXAMPLE 2.23
    Find a root of the equation x³ − 2x − 5 = 0 using the Secant Method correct to three decimal places.
    Solution:
    Let f(x) = x³ − 2x − 5 so that f(2) = −1 and f(3) = 16.
    ∴ Taking initial approximations x0 = 2 and x1 = 3, by the Secant Method, we have
        x2 = x1 − f(x1) (x1 − x0)/(f(x1) − f(x0)) = 3 − 16(3 − 2)/(16 − (−1)) = 3 − 16/17 = 2.058824.
    Now                f(x2) = −0.390799
    Hence the root is 2.094, correct to three decimal places.
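A short Python check of Example 2.23 (a plain re-implementation for verification, not the book's code):

```python
f = lambda x: x**3 - 2*x - 5
x0, x1 = 2.0, 3.0                 # f(2) = -1, f(3) = 16
for _ in range(6):
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    print(round(x1, 6))
# prints 2.058824, 2.081264, ... and settles at about 2.09455
```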
    EXAMPLE 2.24
    Find the root of the equation xe^x = cos x using the Secant Method correct to four decimal places.
    Solution:
    Let                f(x) = cos x − xe^x = 0.
    Taking the initial approximations x0 = 0, x1 = 1
    so that f(x0) = 1, f(x1) = cos 1 − e
  • Introduction to Numerical Programming: A Practical Guide for Scientists and Engineers Using Python and C/C++
    • Titus A. Beu (Author)
    • 2014 (Publication Date)
    • CRC Press (Publisher)
    A significant simplification is that the derivative no longer enters the successive corrections, which are, nevertheless, obtained from a recurrence relation involving three (instead of two) successive approximations of the root. The function f(x) is assumed to be nearly linear in the vicinity of the root, and the chord determined by the points (x_{i−1}, f(x_{i−1})) and (x_i, f(x_i)) corresponding to the most recent approximations (see Figure 6.7) is used instead of the tangent line to provide the new approximation x_{i+1} by its x-intercept. Alternatively, the recurrence relation for the Secant Method can be obtained by simply approximating the derivative in the Newton–Raphson formula,
        x_{i+1} = x_i − f(x_i)/f′(x_i),
    by the finite-difference expression involving the most recent approximations:
        f′(x_i) ≈ (f(x_i) − f(x_{i−1}))/(x_i − x_{i−1}).   (6.28)
    [FIGURE 6.7: Sequence of approximate roots in the Secant Method.]
    The recurrence relation thus obtained,
        x_{i+1} = x_i − f(x_i) (x_i − x_{i−1})/(f(x_i) − f(x_{i−1})),   (6.29)
    enables the calculation of the approximation x_{i+1} of the root on the basis of the previous approximations x_i and x_{i−1}. The iterative process is terminated, as for all other methods described in this chapter, when the relative correction of the root decreases below a predefined tolerance ε:
        |x_{i+1} − x_i| ≤ ε |x_{i+1}|.   (6.30)
    For initiating the recurrence, one can take, in principle, any arbitrary initial approximations x_0 and x_1. However, starting from a single initial approximation x_0, one can obtain the second approximation by applying a single step of the method of successive approximations: x_1 = x_0 − f(x_0).
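A brief Python sketch of this variant (illustrative, not the book's code): it uses the relative stopping criterion (6.30) and, when only one starting value is given, the bootstrap x_1 = x_0 − f(x_0) described above.

```python
def secant_rel(f, x0, x1=None, eps=1e-10, max_iter=100):
    if x1 is None:
        x1 = x0 - f(x0)                       # single-step bootstrap for the second guess
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # recurrence (6.29)
        if abs(x2 - x1) <= eps * abs(x2):     # relative criterion (6.30)
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1
```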
  • An Introduction to Numerical Methods and Analysis
    • James F. Epperson (Author)
    • 2014 (Publication Date)
    • Wiley (Publisher)
    In addition, we can easily establish that a relationship similar to (3.9) holds for the secant iterates as well as the Newton iterates (the definition of the constant C_n is slightly different); thus, we can use the difference of consecutive approximations as a stopping criterion.
    Algorithm 3.3 (Secant Method)
        input x0, x1, n, tol
        external f
        f0 = f(x0)
        f1 = f(x1)
        for i = 1 to n do
            x  = x1 − f1*(x1 − x0)/(f1 − f0)
            fx = f(x)
            x0 = x1
            f0 = f1
            x1 = x
            f1 = fx
            if abs(x1 − x0) < tol then
                root = x1
                stop
            endif
        endfor
    We close this section with an informal statement of the convergence result for the Secant Method. If f, f′, and f″ are all continuous near the root, and if f′ does not equal zero at the root, then the Secant Method will converge whenever the initial guess is sufficiently close to the root. Moreover, this convergence will be superlinear, in the sense that
        lim_{n→∞} (α − x_{n+1})/(α − x_n) = 0.
    Note that, fundamentally, this is almost the same as for Newton's method.
    Exercises:
    1. Do three steps of the Secant Method for f(x) = x³ − 2, using x0 = 0 and x1 = 1.
    2. Repeat the above using x0 = 1, x1 = 0. Comment.
    3. Apply the Secant Method to the same functions as in Problem 3 of §3.1, using x0, x1 equal to the endpoints of the given interval. Stop the iteration when the error as estimated by |x_n − x_{n−1}| is less than 10⁻⁶. Compare to your results for Newton and bisection in the earlier exercises.
    4. For the Secant Method, prove that α − x_{n+1} = C_n(x_{n+1} − x_n), where C_n → 1 as n → ∞, so long as the iteration converges. Hint: Follow what we did in §3.3 for Newton's method.
    5. Assume (3.29) and prove that if the Secant Method converges, then it is superlinear.
    6. Assume (3.29) and consider a function f such that: (a) there is a unique root on the interval [0, 4]; (b) |f″(x)| < 2 for all x ∈ [0, 4]; (c) f′(x) > 5 for all x ∈ [0, 4].
  • Classical and Modern Numerical Analysis: Theory, Methods and Practice
    • Azmy S. Ackleh, Edward James Allen, R. Baker Kearfott, Padmanabhan Seshaiyer (Authors)
    • 2009 (Publication Date)
    Geometrically (see Figure 2.7), to obtain x_{k+1}, the secant to the curve through (x_{k−1}, f(x_{k−1})) and (x_k, f(x_k)) is followed to the x-axis.
    [FIGURE 2.7: Geometric interpretation of the Secant Method.]
    REMARK 2.13 The Secant Method is not a fixed-point method, since x_{k+1} = g(x_k, x_{k−1}). We now consider convergence of the Secant Method.
    THEOREM 2.8 (Convergence of the Secant Method) Let G be a subset of R containing a zero z of f(x). Assume f ∈ C²(G) and there exists an M ≥ 0 such that
        M = max_{x∈G} |f″(x)| / (2 min_{x∈G} |f′(x)|).
    Let x_0 and x_1 be two initial guesses to z and let K(z) = (z − ϵ, z + ϵ) ⊆ G, where ϵ = δ/M and δ < 1. Let x_0, x_1 ∈ K(z). Then the iterates x_2, x_3, x_4, … remain in K(z) and converge to z with error
        |x_k − z| ≤ (1/M) δ^((1+√5)/2)^k.
    REMARK 2.14 (1 + √5)/2 ≈ 1.618, a fractional order of convergence between 1 and 2. For Newton's method |x_k − z| ≤ q^(2^k) with q < 1. Compare this with the preceding bound.
    PROOF (of Theorem 2.8) Subtracting z from both sides of the Secant Method and multiplying by −1,
        z − x_{k+1} = z − x_k + f(x_k) (x_k − x_{k−1})/(f(x_k) − f(x_{k−1})).
    This can be written as
        (z − x_{k+1}) = [−(z − x_{k−1})(z − x_k)] C_k / D_k,
    where (since f(z) = 0)
        C_k = [ (f(z) − f(x_k))/(z − x_k) − (f(x_k) − f(x_{k−1}))/(x_k − x_{k−1}) ] / (z − x_{k−1})
    and
        D_k = (f(x_k) − f(x_{k−1}))/(x_k − x_{k−1}).
    Now, let e_k = z − x_k. Then from the above equation,
        |e_{k+1}| ≤ |e_{k−1}| |e_k| |C_k| / |D_k|.
    However, the Mean Value Theorem gives D_k = f′(ξ_k) for some ξ_k in the interval containing x_{k−1} and x_k. Similarly, C_k = f″(ζ_k)/2 for some ζ_k in the interval containing x_k, x_{k−1}, and z. To see this, suppose that z > x_k > x_{k−1} (although the argument is true in general).
  • Elements of Numerical Analysis
    The next iteration will give the same value of x as in the previous iteration. Or we may stop at the third iteration.
    3.6 Secant Method
    In the Secant Method also two values x_1 and x_2 are taken in the neighbourhood of the root, but they need not be on opposite sides of the root as in the Regula–Falsi method. That is, f(x_1) and f(x_2) may have the same sign, or opposite signs. Then a straight line (secant) is drawn through (x_1, y_1) and (x_2, y_2), intersecting the x-axis at a point x. One of the points, say (x_1, y_1), is discarded and again a line is drawn through (x_2, y_2) and (x, y). In actual computations (x_1, y_1) is replaced by (x_2, y_2) and the new point (x, y) replaces (x_2, y_2). The process is repeated until two successive values of x agree within the desired accuracy. To start with we may choose x_2 to be closer to the root. See Fig. 3.2.
    [Figure 3.2: Secant Method (superscript shows iteration number).]
    Example 3.4 Using the Secant Method find the positive root of x² − 6e⁻ˣ = 0 correct up to two places of decimal. Take the initial values x_1 = 2.5 and x_2 = 2.
    Solution It is the same problem as Example 3.3. We will compute up to 3 decimals and check the accuracy rounding to 2 decimal places.
        Iteration   x_1       y_1 = f(x_1)   x_2       y_2 = f(x_2)   x         y = f(x)
        1           2.5       5.758          2         3.188          1.380     0.395
        2           2         3.188          1.380     0.395          1.292     0.021
        3           1.380     0.395          1.292     0.021          1.287     −0.0002
        4           1.292     0.021          1.287     −0.0002        1.287
    As discussed in Example 3.3, the answer has been obtained correct up to 3 decimal places, i.e., the root is 1.287.
    3.7 Convergence of Secant/Regula–Falsi Methods
    Let x_n denote the n-th iterate for the root of f(x) = 0 or zero of the function y = f(x). If α is the exact root, let α = x_n + ε_n where ε_n is the error in x_n.
  • Numerical Analysis for Engineers and Scientists
    Example 6.5 Find a root of f(x) in (6.5) with the Secant Method, using starting guesses x^(0) = 10 and x^(1) = 0.5.
    Solution The results of the sequence are given in Table 6.3. The order obtained is approximately equal to the theoretical value, as shown in Figure 6.3.
    Table 6.3: Secant solution to root of (x − 1)(x − 10⁻³) with x^(0) = 10 and x^(1) = 0.5; Example 6.5.
        n     x^(n)                        f(x^(n))
        2     +5.2626592272870831e-01      -2.4883636722593816e-01
        3     +1.0374960937662511e+01      +9.7255478559422812e+01
        4     +5.5140033335736582e-01      -2.4690940606410897e-01
        5     +5.7627694986754130e-01      -2.4375810386877206e-01
        6     +2.5005217525919123e+00      +3.7505867608408248e+00
        7     +6.9370553462383866e-01      -2.1217187139071661e-01
        8     +7.9044510768471543e-01      -1.6543208452969868e-01
        9     +1.1328478078844779e+00      +1.5036350013630473e-01
        10    +9.6981554600972641e-01      -2.9243168273592260e-02
        11    +9.9636010535285280e-01      -3.6230059194576137e-03
        12    +1.0001138323706684e+00      +1.1373149610647453e-04
        13    +9.9999958377831788e-01      -4.1580528708736776e-07
        14    +9.9999999995257838e-01      -4.7374090327706453e-11
        15    +9.9999999999999989e-01      -6.5052130349130266e-19
    [Figure 6.3: Order of convergence for the Secant Method (e_{n+1} versus e_n on log scales, with a reference line of slope (1 + √5)/2); Example 6.5.]
    Note that in Figure 6.3 the first few iterations do not lie on the line of slope 1.618 because we are not yet in the asymptotic regime (i.e., we are not in a regime where the assumptions of the local convergence rate analysis apply, because higher-order terms cannot be neglected). Note also that the effective order near convergence is also not 1.618, because numerical errors dominate. The theory presented above did not account for numerical error.
    6.5 Newton–Raphson
    The Newton–Raphson iteration is designed to exploit the Taylor series analysis of local convergence described in Section 6.1.
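In the same spirit as Example 6.5, a small Python experiment (a re-implementation, not the book's code) that runs the secant iteration on f(x) = (x − 1)(x − 10⁻³) from x^(0) = 10 and x^(1) = 0.5 and estimates the order p from successive errors via p ≈ log(e_{n+1}/e_n) / log(e_n/e_{n−1}):

```python
import math

f = lambda x: (x - 1.0) * (x - 1.0e-3)
xs = [10.0, 0.5]                              # x^(0) and x^(1)
for _ in range(12):
    x0, x1 = xs[-2], xs[-1]
    xs.append(x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0)))

errs = [abs(x - 1.0) for x in xs]             # errors relative to the root x = 1
for n in range(2, len(errs) - 1):
    e_prev, e_cur, e_next = errs[n - 1], errs[n], errs[n + 1]
    if 0 < e_next < e_cur < e_prev:           # only meaningful while errors shrink
        p = math.log(e_next / e_cur) / math.log(e_cur / e_prev)
        print(n, round(p, 3))                 # approaches (1 + sqrt(5))/2 ≈ 1.618
```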
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.