Bisection Method

The bisection method is a numerical technique used to find a root of a continuous function within a given interval. It works by repeatedly halving the interval and selecting the subinterval in which the root must lie. Because the interval shrinks by half at each step, the method reliably converges to the root, making it a valuable tool in numerical analysis and engineering applications.
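The halving procedure described above can be sketched in a few lines of Python. This is an editorial illustration, not code from any of the excerpts below; the function name `bisect`, the tolerance, and the test function are illustrative choices.

```python
def bisect(f, a, b, tol=1e-10):
    """Find a root of a continuous f on [a, b], assuming f(a) and f(b)
    have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        c = (a + b) / 2          # midpoint of the current interval
        if f(a) * f(c) <= 0:     # root lies in the left half (or c is it)
            b = c
        else:                    # root lies in the right half
            a = c
    return (a + b) / 2

# Example: the positive root of x^2 - 2 is sqrt(2)
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Note that only the sign of `f` is ever used, which is why the method needs continuity and a sign change but nothing else (no derivatives, no smoothness).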

Written by Perlego with AI-assistance

3 Key excerpts on "Bisection Method"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.
  • MATLAB® Essentials

    A First Course for Engineers and Scientists

    • William Bober (Author)
    • 2017 (Publication Date)
    • CRC Press (Publisher)

    ...In this book, we will give a brief discussion of the Bisection Method, but emphasize MATLAB’s fzero and roots functions. The search method is especially useful if there is more than one real root. The equation whose roots are to be determined should be put into the following standard form: f(x) = 0 (6.4). We proceed as follows: first we subdivide the x domain into N equal subdivisions of width Δx, giving x_1, x_2, x_3, …, x_{N+1} with x_{i+1} = x_i + Δx. Then, determine where f(x) changes sign (see Figure 6.1). This occurs when the signs of two consecutive values of f(x) are different, that is, f(x_i) f(x_{i+1}) < 0. The sign change usually indicates that a real root has been passed. However, it may also indicate a discontinuity in the function. (Example: tan x is discontinuous at x = π/2.) A brief description of the Bisection Method follows: FIGURE 6.1 The root of f(x) lies between x_2 and x_3. 6.3 Bisection Method Suppose it has been established by the search method that a root lies between x_i and x_{i+1}. The concept in the Bisection Method is to cut the interval containing the root in half, determine which half contains the root, cut that interval in half, determine which half contains the root, and continue the process until the interval containing the root is sufficiently small, so that any point within the last interval is a very good approximation for the root. A more detailed description follows: Let x_{i+1/2} be the midpoint position of the first cut; then x_{i+1/2} = x_i + (Δx/2) (see Figure 6.2). Now compute f(x_i) f(x_{i+1/2}): Case 1: If f(x_i) f(x_{i+1/2}) < 0, then the root lies between x_i and x_{i+1/2}. Case 2: If f(x_i) f(x_{i+1/2}) > 0, then the root lies between x_{i+1/2} and x_{i+1}. Case 3: If f(x_i) f(x_{i+1/2}) = 0, then x_i or x_{i+1/2} is a real root. For cases 1 and 2, select the interval containing the root and repeat the process...
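The search-then-bisect procedure Bober describes can be sketched in Python. This is a hedged illustration, not the book's code; the helper name `find_sign_changes`, the test function, and the choice N = 39 are editorial assumptions (N is chosen so that no grid point lands exactly on a root, since a product of exactly zero would slip past the strict `< 0` test).

```python
def find_sign_changes(f, x_min, x_max, n):
    """Search step: subdivide [x_min, x_max] into n equal subdivisions of
    width dx and keep each pair (x_i, x_{i+1}) where f changes sign,
    i.e. f(x_i) * f(x_{i+1}) < 0."""
    dx = (x_max - x_min) / n
    brackets = []
    for i in range(n):
        a = x_min + i * dx
        b = a + dx
        if f(a) * f(b) < 0:
            brackets.append((a, b))
    return brackets

def bisect(f, a, b, tol=1e-10):
    """Repeatedly halve [a, b], keeping the half whose endpoints still
    bracket the root (cases 1 and 2 of the excerpt)."""
    while b - a > tol:
        c = (a + b) / 2
        if f(a) * f(c) <= 0:   # root in left half (or c is the root)
            b = c
        else:                  # root in right half
            a = c
    return (a + b) / 2

# f(x) = x^3 - x has three real roots: -1, 0, and 1; the search step
# brackets each one, then bisection refines each bracket.
f = lambda x: x**3 - x
roots = [bisect(f, a, b) for a, b in find_sign_changes(f, -2.0, 2.0, 39)]
```

This is why the search step matters when there may be more than one real root: plain bisection on [-2, 2] would find at most one of the three.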

  • Linear Optimization and Duality
    ...Chapter 17 Introduction to Nonlinear Programming Algorithms Preview: The standard elementary nonlinear unconstrained optimization algorithms search for local optima, by either setting the derivative to zero, or iteratively improving their solution. The former depend on a method such as bisection or Newton’s that numerically solves an equation g(x) = 0. The latter, in each iteration, compute a direction d to move from the current solution y, choose a step size α, and update the solution to y + αd. For optimization problems subject to linear equality constraints, a promising direction d that violates feasibility can be altered by projection to retain feasibility. For problems with nonlinear or inequality constraints, the KKT conditions provide a finite number of cases to check for local optima. This chapter describes bisection, Newton (or Newton-Raphson), quasi-Newton, and gradient algorithms for single and multi-variable problems. 17.1 Bisection Search Bisection is a basic search method used in both continuous and discrete settings. In discrete settings it is usually called binary search. Suppose we seek a zero of a continuous function f : ℜ ↦ ℜ, that is, an x such that f(x) = 0. Suppose further we have numbers a, b such that f(a) < 0 and f(b) > 0. Bolzano’s theorem [30] states the seemingly obvious fact that f(x) = 0 for at least one x between a and b. Bisection defines c = (a + b)/2 and computes f(c). In the unlikely event f(c) = 0, terminate. Otherwise, if f(c) < 0 set a to c; if f(c) > 0 set b to c. In either case the inequalities f(a) < 0 < f(b) are preserved. Bisection halves |b − a| at each iteration. Hence the value c must be within 2^{−n} |b − a| of a zero of f during the nth iteration. Hence, bisection guarantees one more place of binary accuracy per iteration. A rigorous proof is not trivial...
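The invariant f(a) < 0 < f(b) and the one-binary-digit-per-iteration claim from this excerpt can be checked with a short sketch (an editorial illustration; the function name and test problem are assumptions). Starting from an interval of width 1, reaching a tolerance of 2⁻²⁰ should take exactly 20 halvings.

```python
import math

def bisect_count(f, a, b, tol):
    """Bisection loop from the excerpt, also counting iterations so the
    2^-n * |b - a| width claim can be verified."""
    assert f(a) < 0 < f(b)           # the invariant the excerpt maintains
    n = 0
    while b - a > tol:
        c = (a + b) / 2
        if f(c) == 0:                # unlikely event: c is exactly a zero
            return c, n
        if f(c) < 0:
            a = c                    # keeps f(a) < 0
        else:
            b = c                    # keeps f(b) > 0
        n += 1
    return (a + b) / 2, n

# f(x) = x - pi/4 on [0, 1]: f(0) < 0 < f(1), and the width 1 interval
# halves each step, so tol = 2^-20 is reached after 20 iterations.
root, steps = bisect_count(lambda x: x - math.pi / 4, 0.0, 1.0, 2.0 ** -20)
```

One binary digit per iteration means roughly 3.3 iterations per decimal digit of accuracy, which is why bisection is called reliable rather than fast.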

  • Theory of Waveguides and Transmission Lines
    • Edward F. Kuester (Author)
    • 2020 (Publication Date)
    • CRC Press (Publisher)

    ...Suppose we know that a root x_e of F(x) lies between the lower limit x_l and the upper limit x_u. We might know this if the values of F(x_l) and F(x_u) have opposite signs, but this might not be the case if the function approaches infinity within the interval (x_l, x_u) (as for the function tan x at x = π/2, for example). We therefore restrict consideration to functions that are bounded and continuous on the interval (x_l, x_u). Now we consider the new point x_n = (x_l + x_u)/2 at the center of the interval. If F(x_n) has the same sign as F(x_l), then a root of F must lie within the smaller interval (x_n, x_u); if F(x_n) has the same sign as F(x_u), then a root of F must lie within (x_l, x_n). In either case, the region containing a root has been cut in half, and we can repeat this process until the length of the interval containing a root has become small enough to determine the root to a prespecified accuracy. K.2 Newton’s Method The bisection process is simple and reliable, but not particularly rapidly convergent. A more rapidly convergent (but sometimes less reliable) method is Newton’s method. Here, we do not need an interval known to contain a root, but only an initial estimate x_0 of the location of a root. Suppose that we can compute not only the value of the function F at x_0, but also its derivative F′(x_0). This can be done either analytically if the form of the function is available, or by the finite difference approximation F′(x) ≃ [F(x + δ) − F(x)]/δ (K.1) for some sufficiently small quantity δ (though not too small, so that we avoid problems with computer roundoff error). Then the function F is approximated near x_0 by a linear function obtained from the first two terms of its Taylor series...
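Newton's method with the finite-difference derivative of Eq. (K.1) can be sketched as follows. This is an editorial illustration, not Kuester's code; the function name `newton_fd`, the choice of δ, the stopping tolerance, and the test problem are all assumptions.

```python
def newton_fd(F, x0, delta=1e-6, tol=1e-10, max_iter=50):
    """Newton's method, approximating the derivative by the finite
    difference F'(x) ~ (F(x + delta) - F(x)) / delta, as in Eq. (K.1)."""
    x = x0
    for _ in range(max_iter):
        Fx = F(x)
        if abs(Fx) < tol:
            break
        dF = (F(x + delta) - F(x)) / delta   # Eq. (K.1) approximation
        x = x - Fx / dF                      # step along the tangent line
    return x

# Example: root of F(x) = x^2 - 2 from the initial estimate x0 = 1.5
root = newton_fd(lambda x: x * x - 2, 1.5)
```

The trade-off the excerpt mentions shows up directly: this converges in a handful of iterations where bisection needs dozens, but a poor initial estimate or a near-zero derivative can make it diverge, which never happens with a valid bisection bracket.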