Mathematical Preliminaries and Error Analysis
Numerical analysis begins with a simple tension: calculus and linear algebra describe exact objects, while a computer stores finitely many numbers and performs finitely many operations. The purpose of error analysis is not to make this tension disappear. It is to make the size, direction, and consequences of the error visible enough that an algorithm can be trusted for the intended problem.
This page sits before root finding, interpolation, quadrature, differential equations, and matrix algorithms because every later method uses the same vocabulary. We compare exact and approximate quantities, track how local truncation errors become global errors, and distinguish mathematical convergence from practical reliability.
Definitions
An exact value is the mathematical quantity being approximated, here denoted $p$. An approximation is a computed value, denoted $\hat{p}$. The absolute error and relative error are
$$E_{\text{abs}} = \lvert p - \hat{p} \rvert, \qquad E_{\text{rel}} = \frac{\lvert p - \hat{p} \rvert}{\lvert p \rvert} \quad (p \neq 0).$$
Relative error is usually more meaningful when the scale of the exact answer matters. An absolute error of $10^{-3}$ is excellent for a number near $10^{6}$ and poor for a number near $10^{-6}$. When $p$ is zero or very small, relative error can be undefined or misleading, so the absolute scale must be reported.
A sequence $\{p_n\}$ converges to $p$ if for every tolerance $\varepsilon > 0$ there is an index $N$ such that $\lvert p_n - p \rvert < \varepsilon$ for all $n \ge N$. In computations, this definition is turned into a stopping rule such as $\lvert p_{n+1} - p_n \rvert < \tau$, $\lvert p_{n+1} - p_n \rvert / \lvert p_{n+1} \rvert < \tau$, or an estimated error bound below $\tau$. These tests are not equivalent, so the chosen test should match the question being answered.
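As a sketch of how the tests differ, the loop below applies three stopping rules (absolute increment, relative increment, and residual) to the same fixed-point iteration; the helper `fixed_point` and its `test` parameter are illustrative names, not a standard API.

```python
import math

def fixed_point(g, x0, tol, test, max_iter=100):
    """Iterate x <- g(x) until the chosen stopping test passes."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if test == "absolute" and abs(x_new - x) < tol:
            return x_new, n
        if test == "relative" and abs(x_new - x) < tol * abs(x_new):
            return x_new, n
        if test == "residual" and abs(g(x_new) - x_new) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

g = lambda x: math.cos(x)  # contraction with fixed point near 0.739085
for test in ("absolute", "relative", "residual"):
    root, its = fixed_point(g, 1.0, 1e-8, test)
    print(f"{test:9s} stopped after {its:2d} iterations at {root:.10f}")
```

The three rules typically stop after slightly different iteration counts, which is exactly why the chosen test must match the question being asked.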
The notation $g(h) = O(h^q)$ means that, as $h \to 0$, the magnitude of the term is bounded by a constant times $h^q$. More precisely, $g(h) = O(h^q)$ if there are constants $C > 0$ and $h_0 > 0$ such that $\lvert g(h) \rvert \le C h^q$ whenever $0 < h \le h_0$. This notation is central to Taylor formulas, finite differences, interpolation remainders, and quadrature error estimates.
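The definition can be checked numerically: if $g(h) = O(h^q)$, the ratio $\lvert g(h) \rvert / h^q$ should stay bounded as $h \to 0$. A minimal sketch, using the standard Taylor fact that $\sin h - h = O(h^3)$:

```python
import math

# If g(h) = O(h^3), the ratio |g(h)| / h**3 stays bounded as h -> 0;
# here it settles near 1/6, the first neglected Taylor coefficient.
for h in [0.1, 0.05, 0.025, 0.0125]:
    g = math.sin(h) - h
    print(f"h = {h:6.4f}   |g(h)|/h^3 = {abs(g) / h**3:.6f}")
```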
Key results
Taylor's theorem is the main bridge between exact analysis and numerical formulas. If $f$ has $n+1$ continuous derivatives near $x_0$, then
$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k + R_n(x),$$
where one common form of the remainder is
$$R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1}$$
for some $\xi$ between $x_0$ and $x$. The theorem explains why the forward difference $\bigl(f(x+h) - f(x)\bigr)/h$ has first-order truncation error, the centered difference $\bigl(f(x+h) - f(x-h)\bigr)/(2h)$ has second-order truncation error, and Simpson's rule is exact for cubic polynomials.
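A short sketch makes the difference-quotient claims concrete: halving $h$ should roughly halve the forward-difference error and quarter the centered-difference error. The test function and evaluation point here are arbitrary choices.

```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h            # truncation error O(h)

def centered_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # truncation error O(h^2)

exact = math.exp(1.0)  # derivative of exp at x = 1 is exp(1)
for h in [0.1, 0.05, 0.025]:
    ef = abs(forward_diff(math.exp, 1.0, h) - exact)
    ec = abs(centered_diff(math.exp, 1.0, h) - exact)
    print(f"h = {h:5.3f}   forward {ef:.3e}   centered {ec:.3e}")
```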
A typical convergence statement has three parts: the hypotheses under which the result is true, the limiting object, and the rate. If
$$\lim_{n \to \infty} \frac{\lvert p_{n+1} - p \rvert}{\lvert p_n - p \rvert} = \lambda, \qquad 0 < \lambda < 1,$$
then convergence is linear. If
$$\lim_{n \to \infty} \frac{\lvert p_{n+1} - p \rvert}{\lvert p_n - p \rvert^{q}} = \lambda, \qquad \lambda > 0, \ q > 1,$$
then the method has order $q$ near the limit. These asymptotic rates describe late-stage behavior; early iterations can be dominated by poor scaling, bad starting values, or roundoff.
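When the limit is known, the order can be estimated directly from the error sequence via $q \approx \log(e_{n+1}/e_n) / \log(e_n/e_{n-1})$. A minimal sketch using Newton's method for $\sqrt{2}$, which should report $q \approx 2$:

```python
import math

# Newton iteration x <- (x + 2/x)/2 for sqrt(2); the limit is known,
# so the error sequence e_n = |x_n - sqrt(2)| is available directly.
p = math.sqrt(2.0)
x = 2.0
errors = []
for _ in range(4):  # stop before errors reach machine precision
    x = 0.5 * (x + 2.0 / x)
    errors.append(abs(x - p))

# q ~ log(e_{n+1} / e_n) / log(e_n / e_{n-1})
for e0, e1, e2 in zip(errors, errors[1:], errors[2:]):
    print("estimated order", math.log(e2 / e1) / math.log(e1 / e0))
```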
Error analysis also separates truncation error from roundoff error. Truncation error is caused by replacing an infinite or exact mathematical process with a finite one, such as replacing a derivative by a difference quotient. Roundoff error is caused by storing and operating on finite precision numbers. Reducing the step size often lowers truncation error but can increase roundoff amplification, so the best step is frequently a balance rather than the smallest representable number.
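The balance is easy to observe. In the sketch below, the forward-difference error for $\sin'(1) = \cos(1)$ falls with $h$ until roundoff, which grows roughly like $\varepsilon_{\text{mach}}/h$, takes over near $h \approx \sqrt{\varepsilon_{\text{mach}}} \approx 10^{-8}$; shrinking $h$ further makes the answer worse.

```python
import math

# Truncation error falls like h, roundoff grows like eps/h, so the
# total error is typically smallest near h ~ sqrt(eps) ~ 1e-8.
exact = math.cos(1.0)
for k in range(1, 16, 2):
    h = 10.0 ** (-k)
    approx = (math.sin(1.0 + h) - math.sin(1.0)) / h
    print(f"h = 1e-{k:02d}   error = {abs(approx - exact):.3e}")
```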
A reliable way to use these results is to keep the analysis tied to the actual numerical question rather than to the formula alone. For mathematical preliminaries and error analysis, the input record should include the exact quantity, approximation, norm, scale, and stopping criterion. Without that record, two computations that look similar on paper may have different numerical meanings. The same formula can be a safe production tool in one scaling and a fragile experiment in another. This is why the examples on this page show the intermediate arithmetic: the goal is not only to reach a number, but to expose what assumptions made that number meaningful.
The next record is the verification record. Useful diagnostics for this topic include absolute error, relative error, residuals, and observed convergence rates. A diagnostic should be chosen before the computation is trusted, not after a pleasing answer appears. When an exact answer is unavailable, compare two independent approximations, refine the mesh or tolerance, check a residual, or test the method on a neighboring problem with known behavior. If several diagnostics disagree, treat the disagreement as information about conditioning, stability, or implementation rather than as a nuisance to be averaged away.
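One of these diagnostics, comparing two refinements, can be sketched briefly. Assuming only that the composite trapezoid rule is second order, the difference between a coarse mesh and a doubled mesh estimates the error of the finer result without an exact answer; `trapezoid` is a hand-rolled helper, not a library routine.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

# For a second-order rule, halving h cuts the error by about 4, so
# (coarse - fine) / 3 estimates the error of the fine result.
coarse = trapezoid(math.exp, 0.0, 1.0, 64)
fine = trapezoid(math.exp, 0.0, 1.0, 128)
print("estimated error", (coarse - fine) / 3.0)
print("true error     ", fine - (math.e - 1.0))  # exact answer known here
```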
The cost record matters as well. In this topic the dominant costs are usually iterations, function evaluations, and the balance between truncation and roundoff. Numerical analysis is full of methods that are mathematically attractive but computationally mismatched to the problem size. A dense factorization may be acceptable for a classroom matrix and impossible for a PDE grid. A high-order rule may use fewer steps but more expensive stages. A guaranteed method may take many iterations but provide a bound that a faster method cannot. The right comparison is therefore cost to reach a verified tolerance, not order or elegance in isolation.
Finally, every method here has a recognizable failure mode: using the wrong error scale, overreading asymptotic notation, and ignoring roundoff. These failures are not edge cases to memorize; they are signals that the hypotheses behind the result have been violated or that a different numerical model is needed. A good implementation makes such failures visible through exceptions, warnings, residual reports, or conservative stopping rules. A good hand solution does the same thing in prose by naming the assumption being used and checking it at the point where it matters.
For study purposes, the most useful habit is to separate four layers: the continuous mathematical problem, the discrete approximation, the algebraic or iterative algorithm used to compute it, and the diagnostic used to judge the result. Many mistakes come from mixing these layers. A small algebraic residual may not mean a small modeling error. A small step-to-step change may not mean the discrete equations are solved. A high-order truncation formula may not help when the data are noisy or the arithmetic is unstable. Keeping the layers separate makes the results on this page portable to larger examples.
Visual
| Error idea | Typical symbol | What it measures | Common control knob | Warning sign |
|---|---|---|---|---|
| Absolute error | $\lvert p - \hat{p} \rvert$ | Physical distance from exact value | More iterations, smaller step | Misleading across scales |
| Relative error | $\lvert p - \hat{p} \rvert / \lvert p \rvert$ | Error compared with answer size | Scaling and normalization | Bad when $p \approx 0$ |
| Truncation error | $O(h^q)$ | Error from a finite formula | Decrease $h$ or raise order | Competes with roundoff |
| Roundoff error | about $\varepsilon_{\text{mach}} \approx 2.2 \times 10^{-16}$ per rounded operation | Error from finite precision | Stable formula, rescaling | Cancellation, overflow |
| Residual | $r = b - A\hat{x}$ or $f(\hat{x})$ | Equation mismatch | Iterative refinement | Small residual can hide ill-conditioning |
Worked example 1: absolute and relative error
Problem. A computation gives $\hat{p} = 3.1416$ for $p = \pi$. Find the absolute and relative error, and interpret the result.
Method. Use the definitions directly.
- Compute the absolute error: $\lvert p - \hat{p} \rvert = \lvert \pi - 3.1416 \rvert = \lvert 3.14159265\ldots - 3.1416 \rvert \approx 7.35 \times 10^{-6}$.
- Compute the relative error: $\lvert p - \hat{p} \rvert / \lvert p \rvert \approx 7.35 \times 10^{-6} / 3.14159265 \approx 2.34 \times 10^{-6}$.
- Convert to a percent if desired: $2.34 \times 10^{-6} \times 100\% \approx 2.34 \times 10^{-4}\,\%$.
Checked answer. The approximation has absolute error about $7.35 \times 10^{-6}$ and relative error about $2.34 \times 10^{-6}$. The final digit is not exact, but the first five significant digits are reliable for ordinary reporting.
Worked example 2: Taylor order check
Problem. Show that $\cos h + \tfrac{1}{2} h^2 = 1 + O(h^4)$ as $h \to 0$.
Method. Expand $\cos h$ about $h = 0$ through the fourth-degree term.
- Taylor's theorem gives $\cos h = 1 - \frac{h^2}{2} + \frac{h^4}{24}\cos\xi$ for some $\xi$ between $0$ and $h$.
- Add $\tfrac{1}{2} h^2$ to both sides: $\cos h + \tfrac{1}{2} h^2 = 1 + \frac{h^4}{24}\cos\xi$.
- Since $\lvert \cos\xi \rvert \le 1$, $\bigl| \cos h + \tfrac{1}{2} h^2 - 1 \bigr| \le \frac{h^4}{24}$.
Checked answer. The remainder is bounded by a constant times $h^4$, so $\cos h + \tfrac{1}{2} h^2 = 1 + O(h^4)$. This check is typical: identify the first nonzero neglected Taylor term, then bound its coefficient.
Code
```python
import math

def absolute_error(true_value, approx):
    return abs(true_value - approx)

def relative_error(true_value, approx):
    if true_value == 0:
        raise ValueError("relative error is undefined when the exact value is zero")
    return abs(true_value - approx) / abs(true_value)

def observed_order(errors, hs):
    """Estimate q from error ~= C h**q using consecutive data points."""
    orders = []
    for e1, e2, h1, h2 in zip(errors, errors[1:], hs, hs[1:]):
        orders.append(math.log(e2 / e1) / math.log(h2 / h1))
    return orders

# Worked example 1: error of 3.1416 as an approximation to pi.
p = math.pi
p_hat = 3.1416
print("absolute", absolute_error(p, p_hat))
print("relative", relative_error(p, p_hat))

# Worked example 2: cos(h) + h^2/2 - 1 should shrink like h^4.
hs = [0.2, 0.1, 0.05, 0.025]
errors = [abs((math.cos(h) + 0.5 * h * h) - 1.0) for h in hs]
print("errors", errors)
print("observed orders", observed_order(errors, hs))
```
Common pitfalls
- Reporting a residual as if it were automatically a forward error. A small residual only says the equation is nearly satisfied; conditioning determines how far the computed answer may be from the exact answer (see the sketch after this list).
- Using relative error when the exact value is zero or close to zero. In that case, report absolute error or use a problem-specific scale.
- Treating $O(h^q)$ as an equality. It describes an asymptotic bound, not the exact leading constant.
- Decreasing $h$ without considering roundoff. Difference formulas often get worse after $h$ becomes too small.
- Stopping an iteration because consecutive iterates are close even though the residual is still large. A stalled sequence can pass an increment test.
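The first pitfall can be seen in a few lines. A minimal sketch with a hand-picked, nearly singular $2 \times 2$ system: the candidate answer leaves a tiny residual while being nowhere near the true solution $(1, 1)$.

```python
# Rows of the system are nearly parallel, so many very different
# candidate "solutions" leave an almost-zero residual.
a11, a12, b1 = 1.000, 1.000, 2.000
a21, a22, b2 = 1.000, 1.001, 2.001   # exact solution: x = (1, 1)

x1, x2 = 2.0, 0.0                     # a badly wrong candidate
r1 = b1 - (a11 * x1 + a12 * x2)
r2 = b2 - (a21 * x1 + a22 * x2)
print("residual", (r1, r2))              # (0.0, 0.001): nearly satisfied
print("error   ", (x1 - 1.0, x2 - 1.0))  # (1.0, -1.0): answer is far off
```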