Bisection and Fixed Point Iteration
Bisection and fixed-point iteration are the first two root-finding methods worth separating carefully. Bisection asks for very little: a continuous function and a sign change. In exchange, it gives a guaranteed shrinking interval. Fixed-point iteration can be much faster, but only after the equation has been rearranged into a map whose repeated application actually pulls guesses toward the solution.
The pair also introduces a pattern that appears throughout numerical analysis. A robust method usually has a theorem that protects it from bad initial guesses, while a fast method usually needs local assumptions. Good practical solvers often combine both ideas: first locate a safe region, then switch to a faster local iteration.
Definitions
A root of $f$ is a number $r$ such that $f(r) = 0$. A bracket for a scalar root is an interval $[a, b]$ with $f(a)f(b) < 0$. If $f$ is continuous, the Intermediate Value Theorem guarantees at least one root in $[a, b]$. The guarantee is existential: it does not say the root is unique, and it does not say the graph behaves nicely inside the interval.
The bisection method forms midpoints

$$p_n = \frac{a_n + b_n}{2}$$

and keeps the half interval on which the sign change remains. The method does not estimate the slope or curvature of $f$; it only asks which subinterval still contains a guaranteed sign change. That is why it is slow but difficult to break.
A fixed point of a function $g$ is a number $p$ such that $g(p) = p$. To solve $f(x) = 0$, one may rewrite the equation as $x = g(x)$ and iterate

$$p_{n+1} = g(p_n).$$
The rearrangement matters. The same equation can produce a convergent fixed-point map, a divergent map, or an oscillating map depending on how $g$ is chosen. Fixed-point iteration is therefore a method for a pair consisting of the equation and a map, not merely for the equation alone.
Key results
For bisection, after $n$ iterations the root is contained in an interval of length

$$\frac{b - a}{2^n}.$$

The midpoint approximation therefore satisfies

$$|p_n - r| \le \frac{b - a}{2^{n+1}}.$$

To guarantee $|p_n - r| \le \varepsilon$, it is enough to choose $n$ so that

$$\frac{b - a}{2^{n+1}} \le \varepsilon.$$

Taking logarithms gives a practical iteration estimate,

$$n \ge \log_2\!\left(\frac{b - a}{\varepsilon}\right) - 1.$$
This is a worst-case guarantee, not merely an observed trend. It remains valid even if the function is flat, steep, or badly scaled, provided the sign change and continuity assumptions hold.
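The iteration estimate can be computed directly; as a small sketch, `bisection_iterations` is a hypothetical helper name, not part of any library:

```python
import math

def bisection_iterations(a, b, eps):
    # Smallest n with (b - a) / 2**(n + 1) <= eps, so the midpoint
    # after n bisection steps has worst-case error at most eps.
    return max(0, math.ceil(math.log2((b - a) / eps) - 1))

# For [1, 2] and eps = 1e-6: log2(1e6) - 1 ~ 18.93, so 19 steps suffice.
print(bisection_iterations(1.0, 2.0, 1e-6))  # 19
```

Note that the bound is on the worst case: the actual error is often far smaller, but no smaller number of steps can be guaranteed in advance.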
For fixed-point iteration, a standard contraction theorem says: if $g$ maps $[a, b]$ into itself and there is a constant $k < 1$ with

$$|g'(x)| \le k \quad \text{for all } x \in [a, b],$$

then $g$ has a unique fixed point in $[a, b]$, and iteration from any starting value in the interval converges to it. A local version says that if $|g'(p)| < 1$, convergence is expected near $p$; if $|g'(p)| > 1$, the fixed point is locally repelling. When $g'(p) = 0$, fixed-point iteration can be faster than linear, which is one way to understand why Newton's method has quadratic convergence at simple roots.
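One hedged way to check the contraction hypothesis numerically is to sample a finite-difference estimate of $|g'|$ over the interval. `contraction_estimate` is an illustrative helper, and the map shown is the rearrangement used later on this page; a sampled maximum is only evidence, not a proof of contraction:

```python
def contraction_estimate(g, a, b, samples=101):
    # Crude numerical estimate of max |g'| over [a, b] via central
    # differences; a value well below 1 suggests g contracts there.
    h = 1e-6
    xs = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    return max(abs(g(x + h) - g(x - h)) / (2 * h) for x in xs)

g = lambda x: ((10 - x**3) / 4) ** 0.5
print(contraction_estimate(g, 1.0, 1.5))  # roughly 0.66, below 1
```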
A reliable way to use these results is to keep the analysis tied to the actual numerical question rather than to the formula alone. For bisection and fixed-point iteration, the input record should include the bracket or fixed-point map, the starting value, and the stopping rule. Without that record, two computations that look similar on paper may have different numerical meanings. The same formula can be a safe production tool in one scaling and a fragile experiment in another. This is why the examples on this page show the intermediate arithmetic: the goal is not only to reach a number, but to expose what assumptions made that number meaningful.
The next record is the verification record. Useful diagnostics for this topic include interval width, residual size, iterate change, and derivative or contraction estimates. A diagnostic should be chosen before the computation is trusted, not after a pleasing answer appears. When an exact answer is unavailable, compare two independent approximations, refine the mesh or tolerance, check a residual, or test the method on a neighboring problem with known behavior. If several diagnostics disagree, treat the disagreement as information about conditioning, stability, or implementation rather than as a nuisance to be averaged away.
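A minimal sketch of such a verification record, using the cubic and map from the worked examples on this page, might track two independent diagnostics per step; `fixed_point_with_diagnostics` is a hypothetical name:

```python
def fixed_point_with_diagnostics(g, f, p0, tol=1e-10, max_iter=100):
    # Track both the iterate change |p_{n+1} - p_n| and the residual
    # |f(p_{n+1})| of the original equation; requiring both guards
    # against a stalled iteration masquerading as convergence.
    p = float(p0)
    for n in range(1, max_iter + 1):
        q = g(p)
        change, residual = abs(q - p), abs(f(q))
        if change < tol and residual < tol * 100:
            return q, n, change, residual
        p = q
    raise RuntimeError("did not converge")

f = lambda x: x**3 + 4 * x**2 - 10
g = lambda x: ((10 - x**3) / 4) ** 0.5
root, steps, change, residual = fixed_point_with_diagnostics(g, f, 1.5)
```

The residual threshold is looser than the change threshold because the residual is scaled by the slope of $f$ near the root; the factor 100 here is an arbitrary illustrative choice.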
The cost record matters as well. In this topic the dominant costs are usually function evaluations and guaranteed digits gained per iteration. Numerical analysis is full of methods that are mathematically attractive but computationally mismatched to the problem size. A dense factorization may be acceptable for a classroom matrix and impossible for a PDE grid. A high-order rule may use fewer steps but more expensive stages. A guaranteed method may take many iterations but provide a bound that a faster method cannot. The right comparison is therefore cost to reach a verified tolerance, not order or elegance in isolation.
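One illustrative way to keep the cost record honest is a counting wrapper around the function; `CountingFunction` is a made-up name, here driving a bare bisection loop:

```python
class CountingFunction:
    # Wraps f and counts evaluations, the dominant cost for these methods.
    def __init__(self, f):
        self.f, self.calls = f, 0
    def __call__(self, x):
        self.calls += 1
        return self.f(x)

cf = CountingFunction(lambda x: x**3 + 4 * x**2 - 10)
a, b = 1.0, 2.0
fa, fb = cf(a), cf(b)          # two evaluations to confirm the bracket
assert fa * fb < 0, "need a sign change"
while b - a > 1e-6:            # then one evaluation per bisection step
    p = 0.5 * (a + b)
    fp = cf(p)
    if fa * fp < 0:
        b, fb = p, fp
    else:
        a, fa = p, fp
print(cf.calls)  # 22: 2 endpoint evaluations plus 20 halvings
```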
Finally, every method here has a recognizable failure mode: missing sign changes, noncontractive maps, and domain violations in the rearranged function. These failures are not edge cases to memorize; they are signals that the hypotheses behind the result have been violated or that a different numerical model is needed. A good implementation makes such failures visible through exceptions, warnings, residual reports, or conservative stopping rules. A good hand solution does the same thing in prose by naming the assumption being used and checking it at the point where it matters.
For study purposes, the most useful habit is to separate four layers: the continuous mathematical problem, the discrete approximation, the algebraic or iterative algorithm used to compute it, and the diagnostic used to judge the result. Many mistakes come from mixing these layers. A small algebraic residual may not mean a small modeling error. A small step-to-step change may not mean the discrete equations are solved. A high-order truncation formula may not help when the data are noisy or the arithmetic is unstable. Keeping the layers separate makes the results on this page portable to larger examples.
Visual
| Method | Required input | Guarantee | Typical rate | Main weakness |
|---|---|---|---|---|
| Bisection | sign-changing bracket | interval always shrinks | linear, factor $\tfrac{1}{2}$ in width | slow near simple roots |
| Fixed point | map and starting value | contraction gives convergence | linear unless special structure | bad rearrangements diverge |
| Hybrid use | bracket plus local iteration | safety plus speed | method-dependent | more implementation logic |
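The hybrid row of the table can be sketched as a safeguarded loop that accepts a fixed-point step only when it stays inside the current bracket and otherwise falls back to bisection. `safeguarded_solve` is a hypothetical name, and the sketch assumes a valid initial sign change:

```python
def safeguarded_solve(f, g, a, b, tol=1e-10, max_iter=200):
    # Hybrid sketch: the bracket guarantees progress, the fixed-point
    # map supplies speed. Assumes f(a) and f(b) have opposite signs.
    fa = f(a)
    p = 0.5 * (a + b)
    for n in range(1, max_iter + 1):
        q = g(p)
        if not (a < q < b):      # unsafe step: bisect instead
            q = 0.5 * (a + b)
        fq = f(q)
        if abs(fq) < tol or (b - a) < tol:
            return q, n
        if fa * fq < 0:          # keep the sign-changing half
            b = q
        else:
            a, fa = q, fq
        p = q
    raise RuntimeError("no convergence")

f = lambda x: x**3 + 4 * x**2 - 10
g = lambda x: ((10 - x**3) / 4) ** 0.5
root, steps = safeguarded_solve(f, g, 1.0, 2.0)
```

Production solvers such as Brent's method use the same bracket-plus-fast-step idea with more careful step acceptance logic.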
Worked example 1: bisection for a cubic
Problem. Solve

$$x^3 + 4x^2 - 10 = 0$$

on $[1, 2]$ by bisection until the interval width is at most $0.125$.
Method. Track signs and keep the subinterval with a sign change.
- Start with $f(1) = 1 + 4 - 10 = -5$ and $f(2) = 8 + 16 - 10 = 14$.
  So a root lies in $[1, 2]$.
- First midpoint: $p_1 = 1.5$, with $f(1.5) = 3.375 + 9 - 10 = 2.375$.
  Since $f(1)\,f(1.5) < 0$, keep $[1, 1.5]$.
- Second midpoint: $p_2 = 1.25$, with $f(1.25) = 1.953125 + 6.25 - 10 = -1.796875$.
  Now the sign change is in $[1.25, 1.5]$.
- Third midpoint: $p_3 = 1.375$, with $f(1.375) = 2.599609375 + 7.5625 - 10 = 0.162109375$.
  Keep $[1.25, 1.375]$.
Checked answer. The interval width is $0.125$, so the root is in $[1.25, 1.375]$ and the midpoint $1.3125$ has error at most $0.0625$. Continuing the same process gives the more accurate root $r \approx 1.36523$.
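The hand arithmetic above can be checked in a few lines (every value here is exactly representable in binary floating point, so the comparisons are exact):

```python
f = lambda x: x**3 + 4 * x**2 - 10

# Reproduce the hand steps: endpoint signs, then the three midpoints.
print(f(1.0), f(2.0))   # -5.0 14.0
print(f(1.5))           # 2.375
print(f(1.25))          # -1.796875
print(f(1.375))         # 0.162109375
```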
Worked example 2: choosing a fixed-point map
Problem. Use a fixed-point map for the same cubic and check whether convergence is plausible near the root.
Method. Rearrange the equation as

$$x = g(x) = \sqrt{\frac{10 - x^3}{4}} = \tfrac{1}{2}\sqrt{10 - x^3}.$$
- Evaluate $g$ from $p_0 = 1.5$: $g(1.5) = \sqrt{(10 - 3.375)/4} = \sqrt{1.65625} \approx 1.28695$.
- Next iterate: $p_2 = g(1.28695) = \sqrt{(10 - 2.13150)/4} \approx \sqrt{1.96713} \approx 1.40254$.
- The derivative is

$$g'(x) = \frac{-3x^2}{8\,g(x)}.$$

  At the root $p \approx 1.36523$, where $g(p) = p$, this simplifies to $g'(p) = -3p/8 \approx -0.512$.
- Because this value is well below $1$ in magnitude, errors are reduced by about half per iteration once the iterates are near the fixed point.
Checked answer. The derivative magnitude is about $0.512$, below $1$ near the solution, so local convergence is plausible. It will be linear and slower than Newton's method, but it is a valid fixed-point rearrangement near this root.
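The derivative check can be verified numerically; the root value below is the accurate root quoted in the bisection example:

```python
p = 1.3652300134140969              # accurate root of x^3 + 4x^2 - 10
g = lambda x: ((10 - x**3) / 4) ** 0.5

# At the fixed point g(p) = p, so g'(p) = -3p^2 / (8 g(p)) = -3p / 8.
gprime = -3 * p / 8
print(gprime)                        # about -0.512
```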
Code
```python
import math

def bisection(f, a, b, tol=1e-12, max_iter=100):
    """Shrink a sign-changing bracket [a, b] until the midpoint is within tol."""
    fa = f(a)
    fb = f(b)
    if fa * fb > 0:
        raise ValueError("bisection requires a sign change")
    for iteration in range(1, max_iter + 1):
        p = 0.5 * (a + b)
        fp = f(p)
        if abs(fp) < tol or 0.5 * (b - a) < tol:
            return p, iteration
        if fa * fp < 0:
            b, fb = p, fp
        else:
            a, fa = p, fp
    return 0.5 * (a + b), max_iter

def fixed_point(g, p0, tol=1e-12, max_iter=100):
    """Iterate p <- g(p) until successive iterates agree to within tol."""
    p = float(p0)
    for iteration in range(1, max_iter + 1):
        q = g(p)
        if abs(q - p) < tol:
            return q, iteration
        p = q
    raise RuntimeError("fixed-point iteration did not converge")

f = lambda x: x**3 + 4 * x**2 - 10
g = lambda x: math.sqrt((10 - x**3) / 4)
print(bisection(f, 1.0, 2.0))
print(fixed_point(g, 1.5))
```
Common pitfalls
- Starting bisection without a sign change. Continuity plus opposite signs is the guarantee.
- Assuming bisection finds every root in the interval. If there are multiple roots, it converges to only one of them, a root near which the sign change persists.
- Using the iterate change $|p_{n+1} - p_n|$ alone as a stopping test for fixed-point iteration. Slow or stalled iterations can make this test misleading.
- Choosing a fixed-point form without checking $|g'|$ or a contraction bound.
- Forgetting that a repeated root may not create a sign change, so bisection may not apply even when a root is present.
- Ignoring domain restrictions in $g$. A square-root rearrangement, for example, can become undefined even when the original polynomial is defined.
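The last pitfall is easy to demonstrate with the square-root map used above, which is undefined once $x^3 > 10$ even though the original cubic is defined everywhere:

```python
import math

g = lambda x: math.sqrt((10 - x**3) / 4)

# x = 2.5 gives (10 - 15.625) / 4 < 0, so the rearranged map fails
# even though f(2.5) = 2.5**3 + 4 * 2.5**2 - 10 is perfectly well defined.
try:
    g(2.5)
except ValueError as e:
    print("domain violation:", e)
```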