Eigenvalue Methods
Eigenvalue algorithms approximate scalars $\lambda$ and vectors $x$ satisfying $Ax = \lambda x$. They are central because eigenvalues describe stability, vibration modes, principal components, graph connectivity, and convergence rates of iterative methods. Numerical eigenvalue methods are built from the same matrix transformations used in linear systems, but their goal is spectral information rather than a solution to one right-hand side.
The power method finds a dominant eigenvalue. Inverse iteration targets an eigenvalue near a shift. Householder reductions and QR iteration form the backbone of dense eigenvalue software. The algorithms differ in scope, but they all try to reveal eigenvalues while controlling the effect of roundoff.
Definitions
An eigenpair of a square matrix $A$ is a scalar-vector pair $(\lambda, x)$ with $x \neq 0$ such that $Ax = \lambda x$.
The Rayleigh quotient of a nonzero vector $x$ is $\rho(x) = \dfrac{x^T A x}{x^T x}$.
For symmetric matrices, the Rayleigh quotient is a natural eigenvalue estimate associated with $x$.
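As a quick numeric illustration, the Rayleigh quotient can be computed directly; the matrix below is the symmetric test matrix used in this page's worked examples:

```python
import numpy as np

# Rayleigh quotient rho(x) = (x^T A x) / (x^T x) for an assumed test matrix.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
x = np.array([1.0, 0.0])
rho = float(x @ A @ x) / float(x @ x)
print(rho)  # 2.0, which lies between the true eigenvalues 1 and 3
```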
The power method repeats $x_{k+1} = \dfrac{A x_k}{\|A x_k\|}$.
Inverse iteration with shift $\mu$ solves $(A - \mu I)\, y_{k+1} = x_k$ and normalizes $x_{k+1} = y_{k+1}/\|y_{k+1}\|$. It converges toward an eigenvector whose eigenvalue is near $\mu$ when the shifted system is well defined.
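A minimal sketch of shifted inverse iteration, assuming a dense matrix small enough for direct solves (the function name and iteration count are illustrative, not from a library):

```python
import numpy as np

def inverse_iteration(A, mu, x0, iters=20):
    """Sketch: solve (A - mu*I) y = x each step, then normalize."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        y = np.linalg.solve(A - mu * np.eye(n), x)  # amplifies the component nearest mu
        x = y / np.linalg.norm(y)
    return x, float(x @ A @ x)  # eigenvector estimate and Rayleigh quotient

A = np.array([[2.0, 1.0], [1.0, 2.0]])
vec, lam = inverse_iteration(A, mu=2.9, x0=[1.0, 0.0])
print(lam)  # close to the eigenvalue 3 nearest the shift
```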
The QR algorithm forms $A_k = Q_k R_k$ and then $A_{k+1} = R_k Q_k$.
Key results
If $A$ has eigenvalues ordered by magnitude with $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_n|$, and the starting vector has a nonzero component in the dominant eigenvector direction, the power method converges linearly with asymptotic factor $|\lambda_2|/|\lambda_1|$.
A small spectral gap (a ratio close to $1$) leads to slow convergence.
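The linear rate can be observed numerically. The sketch below uses the symmetric matrix from this page's worked examples, whose eigenvalues are $3$ and $1$, so successive eigenvector errors should shrink by roughly $1/3$ per step:

```python
import numpy as np

# Eigenvalues of this assumed test matrix are 3 and 1, so the power-method
# eigenvector error should shrink by about |lambda_2|/|lambda_1| = 1/3 per step.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
v1 = np.array([1.0, 1.0]) / np.sqrt(2.0)  # dominant eigenvector, normalized
x = np.array([1.0, 0.0])
errs = []
for _ in range(8):
    x = A @ x
    x = x / np.linalg.norm(x)
    errs.append(np.linalg.norm(x - v1))  # x stays in the positive quadrant here
ratios = [errs[k + 1] / errs[k] for k in range(len(errs) - 1)]
print(ratios[-1])  # close to 1/3
```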
QR iteration preserves eigenvalues because $A_{k+1} = R_k Q_k = Q_k^T A_k Q_k$,
so $A_{k+1}$ is orthogonally similar to $A_k$. Orthogonal similarity is preferred because it preserves norms and is numerically stable. Practical QR algorithms first reduce a dense matrix to Hessenberg form, or to tridiagonal form in the symmetric case, and then use shifts to accelerate convergence.
For any computed eigenpair $(\hat{\lambda}, \hat{x})$, the residual $r = A\hat{x} - \hat{\lambda}\hat{x}$
is an essential diagnostic. A small residual is necessary, but eigenvalue sensitivity also depends on eigenvector conditioning, especially for nonsymmetric matrices.
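A residual check costs one matrix-vector product. A sketch, using an exact eigenpair of an assumed test matrix so the residual is essentially zero:

```python
import numpy as np

# Residual r = A x_hat - lam_hat x_hat for a computed (here: exact) eigenpair.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam_hat = 3.0
x_hat = np.array([1.0, 1.0]) / np.sqrt(2.0)
residual = np.linalg.norm(A @ x_hat - lam_hat * x_hat)
print(residual)  # at roundoff level, since the pair is exact
```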
A reliable way to use these results is to keep the analysis tied to the actual numerical question rather than to the formula alone. For eigenvalue methods, the input record should include the target part of the spectrum, symmetry, starting vector, and shift strategy. Without that record, two computations that look similar on paper may have different numerical meanings. The same formula can be a safe production tool in one scaling and a fragile experiment in another. This is why the examples on this page show the intermediate arithmetic: the goal is not only to reach a number, but to expose what assumptions made that number meaningful.
The next record is the verification record. Useful diagnostics for this topic include eigenpair residuals, Rayleigh quotients, and changes in invariant subspaces. A diagnostic should be chosen before the computation is trusted, not after a pleasing answer appears. When an exact answer is unavailable, compare two independent approximations, refine the mesh or tolerance, check a residual, or test the method on a neighboring problem with known behavior. If several diagnostics disagree, treat the disagreement as information about conditioning, stability, or implementation rather than as a nuisance to be averaged away.
The cost record matters as well. In this topic the dominant costs are usually matrix-vector products, factorizations, and orthogonal reductions. Numerical analysis is full of methods that are mathematically attractive but computationally mismatched to the problem size. A dense factorization may be acceptable for a classroom matrix and impossible for a PDE grid. A high-order rule may use fewer steps but more expensive stages. A guaranteed method may take many iterations but provide a bound that a faster method cannot. The right comparison is therefore cost to reach a verified tolerance, not order or elegance in isolation.
Finally, every method here has a recognizable failure mode: small spectral gaps, nonnormal sensitivity, and unshifted iterations that stagnate. These failures are not edge cases to memorize; they are signals that the hypotheses behind the result have been violated or that a different numerical model is needed. A good implementation makes such failures visible through exceptions, warnings, residual reports, or conservative stopping rules. A good hand solution does the same thing in prose by naming the assumption being used and checking it at the point where it matters.
For study purposes, the most useful habit is to separate four layers: the continuous mathematical problem, the discrete approximation, the algebraic or iterative algorithm used to compute it, and the diagnostic used to judge the result. Many mistakes come from mixing these layers. A small algebraic residual may not mean a small modeling error. A small step-to-step change may not mean the discrete equations are solved. A high-order truncation formula may not help when the data are noisy or the arithmetic is unstable. Keeping the layers separate makes the results on this page portable to larger examples.
Eigenvalue computations should also report normalization and phase conventions for eigenvectors. The vector $x$ and any nonzero multiple $\alpha x$ represent the same eigenvector, so comparisons must use angles, residuals, or normalized signs rather than raw component equality. This is especially important when testing iterative methods against library routines.
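One way to compare eigenvectors up to scale and sign is through the angle between them; the helper name below is illustrative:

```python
import numpy as np

def same_direction(u, v, tol=1e-8):
    """Compare eigenvectors up to scaling and sign via the angle between them."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return abs(abs(float(u @ v)) - 1.0) < tol  # |cos(angle)| = 1 means same line

u = np.array([1.0, 1.0])
v = np.array([-2.0, -2.0])  # same eigenvector, different scale and sign
print(same_direction(u, v))  # True, even though u != v componentwise
```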
Visual
| Method | Target | Main cost | Convergence driver | Warning |
|---|---|---|---|---|
| Power method | largest-magnitude eigenvalue | matrix-vector products | gap ratio $\lvert\lambda_2\rvert/\lvert\lambda_1\rvert$ | fails if no dominant magnitude |
| Inverse iteration | eigenvalue near shift $\mu$ | linear solves | closeness of $\mu$ to an eigenvalue | shifted system may be ill-conditioned |
| Rayleigh quotient iteration | symmetric eigenpair | linear solves | Rayleigh-quotient shift (often cubic locally) | needs a good starting vector |
| QR algorithm | full spectrum | orthogonal factorizations | shift strategy | unshifted version can be slow |
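Rayleigh quotient iteration from the table can be sketched as follows (assumed symmetric input; the `try/except` guards the nearly singular shifted solve near convergence):

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, iters=5):
    """Sketch of RQI for symmetric A: shift by the current Rayleigh quotient."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    lam = float(x @ A @ x)
    for _ in range(iters):
        try:
            y = np.linalg.solve(A - lam * np.eye(n), x)
        except np.linalg.LinAlgError:
            break  # shift hit an eigenvalue exactly: x is numerically converged
        x = y / np.linalg.norm(y)
        lam = float(x @ A @ x)
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, x = rayleigh_quotient_iteration(A, [1.0, 0.5])
print(lam)  # converges very rapidly to an eigenvalue (3 here)
```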
Worked example 1: one power iteration
Problem. Apply one power-method step to $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ from $x_0 = (1, 0)^T$ using the Euclidean norm.
Method. Multiply and normalize.
- Compute $y = A x_0 = (2, 1)^T$.
- Its norm is $\|y\|_2 = \sqrt{2^2 + 1^2} = \sqrt{5}$.
- Normalize: $x_1 = y/\|y\|_2 = \left(2/\sqrt{5},\ 1/\sqrt{5}\right)^T$.
- The Rayleigh quotient is $\rho(x_1) = x_1^T A x_1 = \tfrac{1}{5}(2, 1)\begin{pmatrix} 5 \\ 4 \end{pmatrix} = \tfrac{14}{5} = 2.8$.
Checked answer. The dominant eigenvalue is $\lambda_1 = 3$, with eigenvector proportional to $(1, 1)^T$. The first Rayleigh estimate $2.8$ is already moving toward $3$.
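The hand arithmetic above can be checked mechanically:

```python
import numpy as np

# Verify one power step from x0 = (1, 0)^T by direct computation.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
x0 = np.array([1.0, 0.0])
y = A @ x0                   # should be (2, 1)^T
x1 = y / np.linalg.norm(y)   # (2, 1)^T / sqrt(5)
rho = float(x1 @ A @ x1)     # Rayleigh quotient, should be 14/5
print(y, rho)                # y = [2. 1.], rho ~= 2.8
```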
Worked example 2: one QR similarity step
Problem. For a square matrix $A_0$ (for instance the symmetric matrix from Worked example 1), state why a QR step preserves eigenvalues.
Method. Factor $A_0 = Q_0 R_0$ with $Q_0$ orthogonal, then set $A_1 = R_0 Q_0$.
- Since $A_0 = Q_0 R_0$, multiply by $Q_0^T$ on the left: $R_0 = Q_0^T A_0$.
- The next QR iterate is $A_1 = R_0 Q_0 = Q_0^T A_0 Q_0$.
- This is a similarity transformation because $Q_0^{-1} = Q_0^T$.
- Similar matrices have the same characteristic polynomial: $\det(A_1 - \lambda I) = \det\big(Q_0^T (A_0 - \lambda I)\, Q_0\big) = \det(A_0 - \lambda I)$.
Checked answer. A QR step changes the matrix entries but preserves the eigenvalues. Repeated shifted QR steps drive the matrix toward triangular or diagonal form where eigenvalues are visible.
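A one-step numerical check, using the symmetric matrix from Worked example 1 as an assumed input:

```python
import numpy as np

# One QR similarity step: the entries change, the eigenvalues do not.
A0 = np.array([[2.0, 1.0], [1.0, 2.0]])
Q, R = np.linalg.qr(A0)
A1 = R @ Q  # equals Q^T A0 Q, an orthogonal similarity
before = np.sort(np.linalg.eigvals(A0).real)
after = np.sort(np.linalg.eigvals(A1).real)
print(before, after)  # both equal [1. 3.]
```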
Code
```python
import numpy as np

def power_method(A, x0=None, tol=1e-12, max_iter=1000):
    """Power iteration: returns (eigenvalue estimate, eigenvector, iterations)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    x = np.ones(n) if x0 is None else np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    last = None
    for k in range(1, max_iter + 1):
        y = A @ x
        x = y / np.linalg.norm(y)
        lam = float(x @ A @ x)  # Rayleigh quotient estimate
        if last is not None and abs(lam - last) < tol:
            return lam, x, k
        last = lam
    return last, x, max_iter

def qr_algorithm(A, tol=1e-12, max_iter=1000):
    """Unshifted QR iteration: repeat A <- R Q until nearly triangular."""
    Ak = np.array(A, dtype=float, copy=True)
    for _ in range(max_iter):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q  # orthogonal similarity, eigenvalues preserved
        if np.linalg.norm(np.tril(Ak, -1)) < tol:
            break
    return np.diag(Ak), Ak

A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(power_method(A, [1.0, 0.0]))
print(qr_algorithm(A)[0])
```
Common pitfalls
- Using the power method when the dominant eigenvalue is not unique in magnitude.
- Reporting an eigenvalue without an eigenvector residual.
- Forgetting that nonsymmetric eigenvalue problems can be much more sensitive than symmetric ones.
- Running unshifted QR and expecting production-level performance.
- Refactoring the shifted matrix at every inverse-iteration step when a single factorization can be computed once and reused across solves.
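On the last point: if SciPy is available, the shifted matrix can be factored once and the factorization reused across inverse-iteration solves (a sketch, not a production routine):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor (A - mu*I) once, then reuse the LU factors across iterations.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
mu = 2.9
lu, piv = lu_factor(A - mu * np.eye(2))  # one O(n^3) factorization
x = np.array([1.0, 0.0])
for _ in range(10):
    x = lu_solve((lu, piv), x)  # each step is only an O(n^2) solve
    x = x / np.linalg.norm(x)
print(float(x @ A @ x))  # Rayleigh quotient near the eigenvalue 3
```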