Eigenvalues and Eigenvectors
Eigenvectors reveal directions that a matrix does not turn. Along those directions, the matrix only stretches, compresses, or reverses orientation. Eigenvalues measure that scalar action. This idea explains long-term dynamics, diagonalization, powers of matrices, differential equations, Markov chains, and spectral geometry.
The definition is simple, but its consequences are broad. Instead of asking how a matrix acts on every vector at once, eigenanalysis looks for special directions where the action is one-dimensional. If enough such directions exist, the matrix can be understood by studying independent scalar multiplications.
Definitions
Let $A$ be an $n \times n$ matrix. A nonzero vector $v$ is an eigenvector of $A$ if
$$Av = \lambda v$$
for some scalar $\lambda$. The scalar $\lambda$ is the corresponding eigenvalue.
Equivalently,
$$(A - \lambda I)v = 0.$$
Thus $\lambda$ is an eigenvalue exactly when $A - \lambda I$ is singular:
$$\det(A - \lambda I) = 0.$$
The polynomial
$$p(\lambda) = \det(A - \lambda I),$$
or equivalently $\det(\lambda I - A)$ up to sign, is the characteristic polynomial. The eigenspace for $\lambda$ is
$$E_\lambda = \{\, v : Av = \lambda v \,\} = \operatorname{null}(A - \lambda I).$$
Because eigenvectors are required to be nonzero, the zero vector is not itself an eigenvector. However, the eigenspace includes the zero vector because it is a subspace.
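As a concrete illustration of the definition, the following sketch (assuming SymPy is available; the matrix is the same one used in the worked example and code below) builds the characteristic polynomial directly from $\det(A - \lambda I)$:

```python
import sympy as sp

lam = sp.symbols("lam")
A = sp.Matrix([[4, 1], [2, 3]])

# Characteristic polynomial straight from the definition: det(A - lam*I).
p = (A - lam * sp.eye(2)).det()
print(sp.expand(p))   # lam**2 - 7*lam + 10
print(sp.factor(p))   # (lam - 5)*(lam - 2)
```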
Key results
The eigenvalues of $A$ are the roots of the characteristic polynomial. For an $n \times n$ matrix, the characteristic polynomial has degree $n$, though it may have repeated roots or complex roots.
If $A$ is triangular, its eigenvalues are the diagonal entries. This follows because $A - \lambda I$ is triangular, and its determinant is the product of its diagonal entries:
$$\det(A - \lambda I) = (a_{11} - \lambda)(a_{22} - \lambda) \cdots (a_{nn} - \lambda).$$
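A quick numerical check of this fact (a minimal sketch; the specific triangular matrix is chosen only for illustration):

```python
import numpy as np

# An upper-triangular matrix: its eigenvalues should be the diagonal entries.
T = np.array([[2.0, 7.0, 1.0],
              [0.0, -3.0, 4.0],
              [0.0, 0.0, 5.0]])

print(np.sort(np.linalg.eigvals(T)))  # [-3.  2.  5.]
print(np.sort(np.diag(T)))            # [-3.  2.  5.]
```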
Eigenvectors corresponding to distinct eigenvalues are linearly independent. A proof sketch for two eigenvectors is direct. Suppose $Av_1 = \lambda_1 v_1$ and $Av_2 = \lambda_2 v_2$, with $\lambda_1 \neq \lambda_2$. If $c_1 v_1 + c_2 v_2 = 0$, apply $A$ and compare with multiplying the original equation by $\lambda_1$:
$$c_1 \lambda_1 v_1 + c_2 \lambda_2 v_2 = 0 \quad \text{and} \quad c_1 \lambda_1 v_1 + c_2 \lambda_1 v_2 = 0.$$
Subtracting gives $c_2 (\lambda_2 - \lambda_1) v_2 = 0$, so $c_2 = 0$, and then $c_1 = 0$. The general case is similar.
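The independence can also be observed numerically: for a matrix with distinct eigenvalues, the matrix whose columns are the computed eigenvectors has full rank (a sketch using the example matrix from the code section below):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigenvalues 5 and 2 are distinct, so the eigenvector matrix has full rank.
values, vectors = np.linalg.eig(A)
print(np.linalg.matrix_rank(vectors))  # 2
```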
The determinant and trace encode eigenvalue information. For a $2 \times 2$ matrix, the characteristic polynomial can be written
$$\det(A - \lambda I) = \lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A),$$
so the sum of eigenvalues is the trace and the product is the determinant, counting algebraic multiplicity.
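These identities make a cheap sanity check in code as well (a sketch, again with the example matrix from the code section):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

values = np.linalg.eigvals(A)
print(np.isclose(values.sum(), np.trace(A)))        # True: 5 + 2 = 7
print(np.isclose(values.prod(), np.linalg.det(A)))  # True: 5 * 2 = 10
```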
The eigenspace is a subspace because it is the null space of $A - \lambda I$. This matters when solving by hand: after finding an eigenvalue, one should row-reduce $A - \lambda I$ and describe the full solution space, not just one vector. Any nonzero vector in that subspace is an eigenvector, and any scalar multiple represents the same eigendirection.
Eigenvalues can be zero. A zero eigenvalue means there is a nonzero vector $v$ such that $Av = 0$. Therefore $A$ has a nontrivial null space and is singular. Conversely, if $A$ is singular, then $\det(A - 0 \cdot I) = \det(A) = 0$, so $\lambda = 0$ is a root of the characteristic polynomial in either sign convention. This gives a useful bridge between invertibility and spectral information.
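For instance (a minimal sketch; the rank-one matrix here is chosen only to make the singularity obvious):

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row is twice the first: rank 1, singular

print(np.linalg.eigvals(S))  # one eigenvalue is 0, the other is 5
print(np.linalg.det(S))      # 0.0 (up to rounding)
```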
Complex eigenvalues also have real consequences. A real rotation matrix by an angle that is not $0$ or $\pi$ has no real eigenvectors because no real direction is left on its own line. Over the complex numbers it has complex eigenvalues. In applications, complex eigenvalues often indicate rotation or oscillation, while their magnitudes indicate growth or decay.
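The rotation case is easy to see numerically. The eigenvalues of a rotation by $\theta$ come out as $\cos\theta \pm i\sin\theta$, a standard fact, and both have magnitude $1$:

```python
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

values = np.linalg.eigvals(R)
print(values)          # cos(theta) +/- i*sin(theta): complex, not real
print(np.abs(values))  # [1. 1.]: pure rotation, no growth or decay
```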
For repeated eigenvalues, the characteristic polynomial alone does not tell the whole story. If $\lambda$ has algebraic multiplicity $m$, its eigenspace might have any dimension from $1$ up to $m$. The dimension of the eigenspace determines how many independent eigenvectors are available for diagonalization. Thus the workflow is always: find eigenvalues, then find eigenspaces, then count independent eigenvectors.
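Both extremes occur already for $2 \times 2$ matrices. In the sketch below, both matrices have $\lambda = 2$ with algebraic multiplicity $2$, but the eigenspace dimensions differ:

```python
import numpy as np

J = np.array([[2.0, 1.0],
              [0.0, 2.0]])  # geometric multiplicity 1 (defective)
D = np.array([[2.0, 0.0],
              [0.0, 2.0]])  # geometric multiplicity 2

for M in (J, D):
    # dim of the eigenspace for lam = 2 is n - rank(M - 2I)
    print(2 - np.linalg.matrix_rank(M - 2 * np.eye(2)))  # prints 1, then 2
```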
In dynamical systems, the absolute values of eigenvalues control long-term behavior when the matrix is diagonalizable or nearly so. Eigenvalues with $|\lambda| < 1$ correspond to decaying modes. Eigenvalues with $|\lambda| > 1$ correspond to growing modes. Real eigenvalues with negative sign introduce alternating direction. This modal interpretation is one of the main reasons eigenvectors are more than a calculation exercise.
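Iterating a map makes the modal picture concrete. The sketch below uses the update matrix from worked example 2 further down, which has eigenvalues $1$ and $0.7$; after many steps the $0.7$ mode has decayed and the state aligns with the $\lambda = 1$ eigenvector:

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])  # eigenvalues 1 and 0.7

x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x

# The |lambda| = 0.7 mode has decayed; what remains is the lam = 1 direction.
print(x)  # approximately [0.667, 0.333]
```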
Visual
| Object | How it is found | Meaning |
|---|---|---|
| Eigenvalue $\lambda$ | root of $\det(A - \lambda I) = 0$ | scalar stretch factor |
| Eigenvector $v$ | nonzero solution of $(A - \lambda I)v = 0$ | direction preserved by $A$ |
| Eigenspace $E_\lambda$ | null space of $A - \lambda I$ | all vectors stretched by $\lambda$, plus zero |
| Algebraic multiplicity | multiplicity of $\lambda$ as polynomial root | characteristic-polynomial count |
| Geometric multiplicity | dimension of the eigenspace $E_\lambda$ | number of independent eigenvectors for $\lambda$ |
Worked example 1: Find eigenvalues and eigenvectors of a 2 by 2 matrix
Problem: find the eigenvalues and eigenspaces of
$$A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}.$$
Step 1: compute the characteristic equation.
$$\det(A - \lambda I) = \det\begin{pmatrix} 4 - \lambda & 1 \\ 2 & 3 - \lambda \end{pmatrix} = (4 - \lambda)(3 - \lambda) - 2.$$
Thus
$$\lambda^2 - 7\lambda + 10 = 0.$$
Step 2: factor.
$$(\lambda - 5)(\lambda - 2) = 0.$$
So the eigenvalues are $\lambda_1 = 5$ and $\lambda_2 = 2$.
Step 3: find $E_5$. Solve $(A - 5I)v = 0$:
$$A - 5I = \begin{pmatrix} -1 & 1 \\ 2 & -2 \end{pmatrix}.$$
The equation is $-x_1 + x_2 = 0$, so $x_2 = x_1$. Therefore
$$E_5 = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\}.$$
Step 4: find $E_2$. Solve $(A - 2I)v = 0$:
$$A - 2I = \begin{pmatrix} 2 & 1 \\ 2 & 1 \end{pmatrix}.$$
The equation is $2x_1 + x_2 = 0$, so $x_2 = -2x_1$. Therefore
$$E_2 = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ -2 \end{pmatrix} \right\}.$$
Checked answer: $A \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 5 \\ 5 \end{pmatrix} = 5 \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $A \begin{pmatrix} 1 \\ -2 \end{pmatrix} = \begin{pmatrix} 2 \\ -4 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ -2 \end{pmatrix}$.
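The same computation can be reproduced symbolically (a sketch assuming SymPy is available; note that SymPy scales its basis vectors differently, which is fine since any nonzero multiple works):

```python
import sympy as sp

A = sp.Matrix([[4, 1], [2, 3]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis).
for lam, mult, basis in A.eigenvects():
    print(lam, mult, [list(b) for b in basis])
# lam = 2 yields a multiple of (1, -2); lam = 5 yields a multiple of (1, 1).
```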
Worked example 2: Interpret eigenvectors in a repeated process
Problem: suppose a process updates states by
$$x_{k+1} = P x_k, \qquad P = \begin{pmatrix} 0.9 & 0.2 \\ 0.1 & 0.8 \end{pmatrix}.$$
Find a steady direction, meaning an eigenvector for $\lambda = 1$.
Step 1: solve $(P - I)v = 0$.
The equation is
$$\begin{pmatrix} -0.1 & 0.2 \\ 0.1 & -0.2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Step 2: clear decimals: multiplying the first row by $10$ gives $-x_1 + 2x_2 = 0$, so $x_1 = 2x_2$.
Step 3: choose $x_2 = 1$, giving $x_1 = 2$. A steady eigenvector is
$$v = \begin{pmatrix} 2 \\ 1 \end{pmatrix}.$$
Step 4: normalize to a probability vector if desired:
$$\frac{1}{3} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 2/3 \\ 1/3 \end{pmatrix}.$$
Checked answer:
$$P \begin{pmatrix} 2/3 \\ 1/3 \end{pmatrix} = \begin{pmatrix} 0.9 \cdot 2/3 + 0.2 \cdot 1/3 \\ 0.1 \cdot 2/3 + 0.8 \cdot 1/3 \end{pmatrix} = \begin{pmatrix} 2/3 \\ 1/3 \end{pmatrix}.$$
The direction is unchanged by the update.
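A one-line numerical confirmation (a sketch with NumPy, using the update matrix above):

```python
import numpy as np

P = np.array([[0.9, 0.2],
              [0.1, 0.8]])
v = np.array([2/3, 1/3])

print(np.allclose(P @ v, v))  # True: the direction is fixed by the update
```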
Code
```python
import numpy as np

A = np.array([[4, 1],
              [2, 3]], dtype=float)

# Column i of `vectors` is a unit-norm eigenvector paired with values[i].
values, vectors = np.linalg.eig(A)
print(values)
print(vectors)

# Verify A v = lambda v for each computed eigenpair.
for i, lam in enumerate(values):
    v = vectors[:, i]
    print(np.allclose(A @ v, lam * v))
```
Numerical eigenvectors are usually scaled differently from hand-computed ones: NumPy returns unit-norm columns, while hand computation typically picks small integer entries. Any nonzero scalar multiple of an eigenvector is still an eigenvector for the same eigenvalue.
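To compare a NumPy eigenvector with a hand answer, rescale it so one entry matches (a sketch; which eigenvalue comes first in the output is not guaranteed):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
values, vectors = np.linalg.eig(A)

# Rescale each unit-norm column so its first entry is 1, then compare with
# the hand results (1, 1) for lam = 5 and (1, -2) for lam = 2.
# (Safe here: neither eigenvector of this matrix has a zero first entry.)
for i, lam in enumerate(values):
    v = vectors[:, i]
    print(lam, v / v[0])
```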
Common pitfalls
- Allowing the zero vector as an eigenvector. It is in every eigenspace but is not called an eigenvector.
- Solving $\det(A - \lambda I) = 0$ and stopping before finding eigenspaces.
- Confusing algebraic multiplicity with geometric multiplicity.
- Assuming every real matrix has real eigenvalues. Some real matrices have complex eigenvalues.
- Forgetting that eigenvectors are directions: scalar multiples represent the same eigendirection.
- Using $A - \lambda$ instead of $A - \lambda I$. The identity matrix is required.
A reliable eigenvalue workflow is sequential. First compute the characteristic polynomial carefully. Second solve $\det(A - \lambda I) = 0$ for candidate eigenvalues. Third, for each eigenvalue, row-reduce $A - \lambda I$ and describe the null space. Fourth, check at least one eigenvector $v$ by multiplying $Av$ and comparing it with $\lambda v$. Skipping the final check is risky because sign errors in characteristic polynomials are common.
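That final check is easy to automate. The helper below is a hypothetical convenience function, not part of any library:

```python
import numpy as np

def check_eigenpair(A, lam, v, tol=1e-10):
    """Workflow step 4: does A v actually equal lam v?"""
    return np.allclose(A @ v, lam * v, atol=tol)

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(check_eigenpair(A, 5.0, np.array([1.0, 1.0])))   # True
print(check_eigenpair(A, 5.0, np.array([1.0, -2.0])))  # False: wrong pairing
```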
When a problem asks for "the eigenvectors" for an eigenvalue, it usually expects the eigenspace, not just one vector. Since every nonzero scalar multiple is also an eigenvector, listing a single vector without saying "span" hides the full answer. A clean response is
$$E_\lambda = \operatorname{span}\{v_1, \ldots, v_k\}.$$
Then the eigenvectors are the nonzero vectors in that span.
Repeated eigenvalues require extra care. A repeated root of the characteristic polynomial may produce one independent eigenvector or several. The only way to know is to solve the null-space problem. This distinction controls diagonalization, powers of matrices, and qualitative dynamics.
In applied settings, eigenvectors often represent modes. A mode is a pattern that keeps its shape while its size changes. In a population model, a steady distribution is an eigenvector for eigenvalue $1$. In a vibration model, eigenvectors describe natural shapes of motion. In a data covariance matrix, eigenvectors identify principal directions of variation. The same algebraic equation $Av = \lambda v$ supports all of these interpretations.
Eigenvalue checks can use determinant and trace for small matrices. For a $2 \times 2$ matrix, if you find eigenvalues $\lambda_1$ and $\lambda_2$, then their sum should equal the trace and their product should equal the determinant. These checks do not replace the computation, but they are excellent at catching arithmetic errors.
For triangular matrices, do not expand the characteristic determinant unnecessarily. The eigenvalues are already on the diagonal because $A - \lambda I$ remains triangular. This includes diagonal matrices as the simplest case. The eigenspaces still require solving $(A - \lambda I)v = 0$; only the eigenvalue step is immediate.
If a matrix is symmetric, its eigenvectors have extra structure: eigenvectors from distinct eigenspaces are orthogonal, and an orthonormal eigenbasis exists. That stronger result belongs to the spectral theorem, but it is useful to remember because symmetric eigenvalue problems are common and especially well behaved.
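For the symmetric case, NumPy provides `np.linalg.eigh`, which exploits symmetry and returns real eigenvalues in ascending order with orthonormal eigenvectors (a minimal sketch):

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric

values, vectors = np.linalg.eigh(S)
print(values)  # [1. 3.]

# Eigenvectors for the two distinct eigenvalues are orthogonal.
print(np.isclose(vectors[:, 0] @ vectors[:, 1], 0.0))  # True
```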