Adaptive and Gaussian Quadrature
Composite quadrature spends evaluation points uniformly. That is reasonable when the integrand is equally easy everywhere, but wasteful when most of the interval is smooth and only a small region is difficult. Adaptive quadrature refines the interval only where an error indicator says refinement is needed.
Gaussian quadrature takes a different approach. Instead of equally spacing the nodes, it chooses both nodes and weights to maximize polynomial exactness. For smooth integrands, a small number of Gaussian nodes can outperform much denser Newton-Cotes rules. The price is that the nodes are less intuitive and basic Gauss rules are not naturally nested.
Definitions
Adaptive Simpson quadrature compares one Simpson estimate $S(a,b)$ on $[a,b]$ with two Simpson estimates on $[a,m]$ and $[m,b]$, where $m = (a+b)/2$. If

$$\left| S(a,m) + S(m,b) - S(a,b) \right|$$

is small enough, the refined estimate is accepted. Otherwise the interval is subdivided again. The correction

$$\frac{S(a,m) + S(m,b) - S(a,b)}{15}$$

comes from the fourth-order error model for Simpson's rule.
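The factor of 15 follows from the error model. Writing $I$ for the exact integral and $h = b - a$, a sketch of the argument:

```latex
% Simpson's rule error model: coarse panel error ~ C h^5,
% each half panel contributes C (h/2)^5.
S(a,b)          = I + C h^5 + O(h^7)
S(a,m) + S(m,b) = I + 2C\left(\tfrac{h}{2}\right)^5 + O(h^7)
                = I + \tfrac{C h^5}{16} + O(h^7)
% Subtracting eliminates the unknown I:
S(a,m) + S(m,b) - S(a,b) = -\tfrac{15}{16}\, C h^5 + O(h^7)
% so the refined estimate's error is about 1/15 of the observed
% difference, and adding (difference)/15 cancels it to leading order.
```

This is why the acceptance test in the code below compares the paired-rule difference against $15$ times the local tolerance.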
The $n$-point Gauss-Legendre rule on $[-1,1]$ has the form

$$\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i\, f(x_i),$$

where the nodes $x_i$ are roots of the Legendre polynomial $P_n$ and the weights $w_i$ are chosen for exactness. On a general interval $[a,b]$, use the change of variables

$$\int_a^b f(x)\,dx = \frac{b-a}{2}\int_{-1}^{1} f\!\left(\frac{b-a}{2}\,t + \frac{a+b}{2}\right) dt.$$
Key results
An $n$-point Gauss-Legendre rule is exact for every polynomial of degree at most $2n-1$. This is much higher than what is possible with $n$ fixed equally spaced nodes. Orthogonality of Legendre polynomials is the reason: the error component orthogonal to all lower-degree polynomials is pushed to a higher degree.
Adaptive methods depend on local error estimation. The estimate is not a proof unless the smoothness assumptions behind the model are satisfied. A narrow spike, endpoint singularity, or oscillation between sample points can fool a local comparison. Therefore robust adaptive codes include recursion limits, minimum interval widths, and diagnostic failure returns.
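A minimal sketch of how a narrow spike can fool the local comparison. The Gaussian bump below is an illustrative choice: it is centered at $x = 0.3$, away from every point the first adaptive comparison samples.

```python
import math

def simpson_panel(f, a, b):
    # One Simpson estimate on [a, b].
    m = 0.5 * (a + b)
    return (b - a) * (f(a) + 4.0 * f(m) + f(b)) / 6.0

# Narrow Gaussian spike at x = 0.3: invisible at the sample points
# 0, 0.25, 0.5, 0.75, 1 used by the first coarse-vs-refined comparison.
spike = lambda x: math.exp(-10000.0 * (x - 0.3) ** 2)

coarse = simpson_panel(spike, 0.0, 1.0)
refined = simpson_panel(spike, 0.0, 0.5) + simpson_panel(spike, 0.5, 1.0)
delta = refined - coarse                  # tiny: adaptation would stop here
exact = math.sqrt(math.pi / 10000.0)      # ~0.0177, since the bump lies well inside [0, 1]

print(delta)   # essentially zero: the indicator sees nothing
print(exact)   # the mass the sample points never saw
```

The indicator reports near-perfect agreement while the true integral is about $0.018$, which is exactly why production codes pair the local estimate with recursion limits and diagnostics.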
Gaussian quadrature is excellent for smooth integrands on finite intervals. When the integrand has singular behavior, discontinuities, or strong endpoint features, a change of variables or adaptive subdivision may be needed first. Gauss-Kronrod rules extend Gauss nodes with extra nodes to produce nested pairs, which is why many production quadrature routines use adaptive Gauss-Kronrod rather than plain Gauss-Legendre alone.
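As a usage sketch, SciPy's `quad` wraps QUADPACK's adaptive Gauss-Kronrod machinery and returns an error estimate alongside the value (assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# Adaptive Gauss-Kronrod subdivides near the sqrt singularity at x = 0,
# where a single fixed Gauss-Legendre rule would converge slowly.
value, err_estimate = quad(np.sqrt, 0.0, 1.0)
print(value, err_estimate)   # value close to 2/3, plus a reported error estimate
```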
A reliable way to use these results is to keep the analysis tied to the actual numerical question rather than to the formula alone. For adaptive and Gaussian quadrature, the input record should include the integrand, interval, smoothness expectations, and cost of each function evaluation. Without that record, two computations that look similar on paper may have different numerical meanings. The same formula can be a safe production tool in one scaling and a fragile experiment in another. This is why the examples on this page show the intermediate arithmetic: the goal is not only to reach a number, but to expose what assumptions made that number meaningful.
The next record is the verification record. Useful diagnostics for this topic include a paired-rule difference, recursion depth, and comparison with a known special case when one is available. A diagnostic should be chosen before the computation is trusted, not after a pleasing answer appears. When an exact answer is unavailable, compare two independent approximations, refine the mesh or tolerance, check a residual, or test the method on a neighboring problem with known behavior. If several diagnostics disagree, treat the disagreement as information about conditioning, stability, or implementation rather than as a nuisance to be averaged away.
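One such paired diagnostic, sketched here, compares Gauss rules of two different sizes and treats their difference as an error indicator; the integrand and sizes are illustrative, and the indicator is a model, not a bound.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss(f, a, b, n):
    # n-point Gauss-Legendre estimate on [a, b].
    t, w = leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(w, f(x))

f = lambda x: np.exp(-x) * np.cos(4.0 * x)
coarse = gauss(f, 0.0, 2.0, 5)
fine = gauss(f, 0.0, 2.0, 10)
indicator = abs(fine - coarse)   # paired-rule difference as the diagnostic
print(fine, indicator)
```

Choosing this diagnostic before trusting the computation, as the text suggests, means deciding in advance how small `indicator` must be for `fine` to be accepted.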
The cost record matters as well. In this topic the dominant costs are usually number of integrand evaluations and how they are distributed over the interval. Numerical analysis is full of methods that are mathematically attractive but computationally mismatched to the problem size. A dense factorization may be acceptable for a classroom matrix and impossible for a PDE grid. A high-order rule may use fewer steps but more expensive stages. A guaranteed method may take many iterations but provide a bound that a faster method cannot. The right comparison is therefore cost to reach a verified tolerance, not order or elegance in isolation.
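A sketch of the cost bookkeeping: wrapping the integrand in a counter makes the number of evaluations explicit. The counter class is illustrative, not part of any library.

```python
import math

class CountedFunction:
    """Wraps f and counts how many times it is evaluated."""
    def __init__(self, f):
        self.f = f
        self.calls = 0
    def __call__(self, x):
        self.calls += 1
        return self.f(x)

def simpson_panel(f, a, b):
    m = 0.5 * (a + b)
    return (b - a) * (f(a) + 4.0 * f(m) + f(b)) / 6.0

g = CountedFunction(math.sin)
composite = sum(simpson_panel(g, k * 0.1, (k + 1) * 0.1) for k in range(10))
print(composite, g.calls)   # 30 calls: 3 per panel, shared endpoints recomputed
```

Comparing `g.calls` across methods at a fixed verified tolerance is the comparison the paragraph above recommends.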
Finally, every method here has a recognizable failure mode: hidden spikes, endpoint singularities, and non-nested Gaussian rules. These failures are not edge cases to memorize; they are signals that the hypotheses behind the result have been violated or that a different numerical model is needed. A good implementation makes such failures visible through exceptions, warnings, residual reports, or conservative stopping rules. A good hand solution does the same thing in prose by naming the assumption being used and checking it at the point where it matters.
For study purposes, the most useful habit is to separate four layers: the continuous mathematical problem, the discrete approximation, the algebraic or iterative algorithm used to compute it, and the diagnostic used to judge the result. Many mistakes come from mixing these layers. A small algebraic residual may not mean a small modeling error. A small step-to-step change may not mean the discrete equations are solved. A high-order truncation formula may not help when the data are noisy or the arithmetic is unstable. Keeping the layers separate makes the results on this page portable to larger examples.
Visual
| Method | Node placement | Error information | Strength | Weakness |
|---|---|---|---|---|
| Composite Simpson | uniform | global order estimate | simple and predictable | wastes points on easy regions |
| Adaptive Simpson | refined locally | local embedded estimate | targets difficult regions | can miss hidden features |
| Gauss-Legendre | optimal interior nodes | polynomial exactness | very efficient for smooth functions | basic rules not nested |
| Gauss-Kronrod | nested paired nodes | difference of paired rules | practical adaptive driver | tabulated rules needed |
Worked example 1: three-point Gauss rule for $\int_{-1}^{1} x^4\,dx$
Problem. Use the three-point Gauss-Legendre rule on $[-1,1]$ to integrate $f(x) = x^4$.
Method. The three-point nodes and weights are $x_{1,3} = \pm\sqrt{3/5}$, $x_2 = 0$, with weights $w_{1,3} = 5/9$ and $w_2 = 8/9$.
- Evaluate the function values: $f(\pm\sqrt{3/5}) = (3/5)^2 = 9/25$ and $f(0) = 0$.
- Apply the rule: $\frac{5}{9}\cdot\frac{9}{25} + \frac{8}{9}\cdot 0 + \frac{5}{9}\cdot\frac{9}{25}$.
- Simplify: $2\cdot\frac{5}{9}\cdot\frac{9}{25} = 2\cdot\frac{1}{5} = \frac{2}{5}$.
- The exact integral is $\int_{-1}^{1} x^4\,dx = \frac{2}{5}$.
Checked answer. The three-point Gauss rule gives the exact value because it is exact through degree five.
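This boundary can be checked numerically: the same rule is still exact for $x^5$ (odd, so the integral is zero) but fails on $x^6$.

```python
import math

# Three-point Gauss-Legendre nodes and weights on [-1, 1].
nodes = [-math.sqrt(0.6), 0.0, math.sqrt(0.6)]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def gauss3(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

print(gauss3(lambda x: x**4))   # 0.4 = 2/5, exact
print(gauss3(lambda x: x**6))   # 0.24, while the exact value is 2/7 ~ 0.2857
```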
Worked example 2: adaptive Simpson decision
Problem. For $f(x) = \sqrt{x}$ on $[0,1]$, compare one Simpson panel with two half panels.
Method. The exact integral is $\int_0^1 \sqrt{x}\,dx = 2/3$, but the derivative of $\sqrt{x}$ is singular at the left endpoint, so adaptation is useful.
- One Simpson panel gives $S(0,1) = \frac{1}{6}\left(\sqrt{0} + 4\sqrt{1/2} + \sqrt{1}\right) = \frac{2\sqrt{2} + 1}{6}$.
Numerically, $S(0,1) \approx 0.63807$.
- On $[0, 1/2]$, $S(0, 1/2) = \frac{1}{12}\left(\sqrt{0} + 4\sqrt{1/4} + \sqrt{1/2}\right) \approx 0.22559$.
- On $[1/2, 1]$, $S(1/2, 1) = \frac{1}{12}\left(\sqrt{1/2} + 4\sqrt{3/4} + \sqrt{1}\right) \approx 0.43093$.
- The refined sum is $\approx 0.65653$, so the difference from the coarse panel is about $0.0185$.
Checked answer. The large difference signals that the interval should be subdivided, especially near $x = 0$ where the derivative is unbounded.
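The arithmetic above can be reproduced directly:

```python
import math

def simpson_panel(f, a, b):
    # One Simpson estimate on [a, b].
    m = 0.5 * (a + b)
    return (b - a) * (f(a) + 4.0 * f(m) + f(b)) / 6.0

coarse = simpson_panel(math.sqrt, 0.0, 1.0)
left = simpson_panel(math.sqrt, 0.0, 0.5)
right = simpson_panel(math.sqrt, 0.5, 1.0)
refined = left + right

print(coarse)             # ~0.63807
print(refined)            # ~0.65653
print(refined - coarse)   # ~0.0185, far above a tight tolerance: subdivide
```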
Code
```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def simpson_panel(f, a, b):
    # One Simpson estimate on [a, b] from the endpoints and midpoint.
    m = 0.5 * (a + b)
    return (b - a) * (f(a) + 4.0 * f(m) + f(b)) / 6.0

def adaptive_simpson(f, a, b, tol=1e-10, max_depth=20):
    whole = simpson_panel(f, a, b)
    def recurse(left, right, whole_value, local_tol, depth):
        mid = 0.5 * (left + right)
        s_left = simpson_panel(f, left, mid)
        s_right = simpson_panel(f, mid, right)
        delta = s_left + s_right - whole_value
        # Accept when the paired-rule difference fits the error model or the
        # recursion limit is hit; delta / 15 is the fourth-order correction.
        if depth == 0 or abs(delta) <= 15.0 * local_tol:
            return s_left + s_right + delta / 15.0
        # Otherwise subdivide, splitting the tolerance between the halves.
        return (recurse(left, mid, s_left, local_tol / 2.0, depth - 1)
                + recurse(mid, right, s_right, local_tol / 2.0, depth - 1))
    return recurse(a, b, whole, tol, max_depth)

def gauss_legendre(f, a, b, n):
    # Map the n Gauss-Legendre nodes from [-1, 1] onto [a, b] and
    # scale by the interval half-width (b - a) / 2.
    nodes, weights = leggauss(n)
    mapped = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(weights, f(mapped))

print(gauss_legendre(lambda x: x**4, -1.0, 1.0, 3))      # 0.4 = 2/5, exact
print(adaptive_simpson(np.sqrt, 0.0, 1.0), 2.0 / 3.0)
```
Common pitfalls
- Assuming an adaptive error estimate is a rigorous bound. It is a model-based indicator.
- Allowing unbounded recursion near singularities without reporting failure.
- Using Gaussian quadrature on a nonsmooth integrand without subdivision or transformation.
- Forgetting the interval change-of-variables factor $(b-a)/2$.
- Comparing Gaussian rules only by node count. Smoothness, cost per function evaluation, and nesting also matter.