Algebraic Graph Theory Basics
Algebraic graph theory studies graphs through matrices, eigenvalues, vector spaces, and polynomials. The main idea is to encode adjacency or incidence in a matrix and then use linear algebra to reveal structure: walks, connected components, spanning trees, expansion, regularity, and symmetry.
This viewpoint does not replace combinatorial reasoning; it complements it. A path proof may show why a graph is connected, while a Laplacian eigenvalue proves the same fact numerically. Matrix methods become especially powerful when graphs are large, regular, or generated by algebraic rules.
Definitions
For a graph $G$ with vertices $v_1, \dots, v_n$, the adjacency matrix $A$ is the $n \times n$ matrix with $A_{ij} = 1$ if $v_i$ and $v_j$ are adjacent and $A_{ij} = 0$ otherwise.
For a simple undirected graph, $A$ is symmetric.
The degree matrix $D$ is diagonal, with $D_{ii} = \deg(v_i)$. The Laplacian matrix is $L = D - A$.
The spectrum of a matrix is its multiset of eigenvalues. The adjacency spectrum and Laplacian spectrum are graph invariants: isomorphic graphs have the same spectra. The converse is false; nonisomorphic graphs can be cospectral.
An incidence matrix records which vertices are incident with which edges. For oriented edges, the signed incidence matrix $B$ has one $+1$ and one $-1$ in each column, and $L = B B^{\mathsf T}$.
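The identity $L = B B^{\mathsf T}$ can be checked directly in a few lines. The following sketch uses an oriented triangle; the vertex labels and edge orientations are chosen here purely for illustration (the product is independent of the orientation):

```python
# Oriented triangle on vertices 0, 1, 2 with edges 0->1, 1->2, 0->2.
# Each column of B has one +1 (tail) and one -1 (head).
B = [
    [1, 0, 1],
    [-1, 1, 0],
    [0, -1, -1],
]

Bt = [list(col) for col in zip(*B)]  # transpose
L = [[sum(B[i][k] * Bt[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]
print(L)  # [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]: degrees 2 on the diagonal
```

The diagonal of the product recovers the degrees and the off-diagonal entries are $-1$ exactly on edges, as the Laplacian definition requires.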
Key results
Walk-counting theorem. The entry $(A^k)_{ij}$ equals the number of walks of length $k$ from $v_i$ to $v_j$.
Proof sketch: matrix multiplication sums over all possible intermediate vertices. Each product term records a valid next step exactly when the corresponding adjacencies exist.
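The theorem is easy to cross-check by brute force on a small graph. The sketch below takes a path on four vertices (an illustrative choice) and compares every entry of $A^3$ against direct enumeration of vertex sequences:

```python
from itertools import product

# Illustrative graph: a path on vertices 0-1-2-3.
A = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_power(M, k):
    result = M
    for _ in range(k - 1):
        result = matmul(result, M)
    return result

def count_walks_brute(M, k, i, j):
    # Enumerate every vertex sequence of length k+1 from i to j and
    # keep those whose consecutive vertices are adjacent.
    n = len(M)
    total = 0
    for mid in product(range(n), repeat=k - 1):
        seq = (i, *mid, j)
        if all(M[seq[t]][seq[t + 1]] == 1 for t in range(k)):
            total += 1
    return total

A3 = mat_power(A, 3)
assert all(A3[i][j] == count_walks_brute(A, 3, i, j)
           for i in range(4) for j in range(4))
print(A3[0][3])  # 1: the only length-3 walk from 0 to 3 is 0-1-2-3
```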
Laplacian component theorem. The multiplicity of $0$ as a Laplacian eigenvalue equals the number of connected components of $G$.
In particular, a graph is connected if and only if the Laplacian has exactly one zero eigenvalue.
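Since the multiplicity of the eigenvalue $0$ equals the dimension of the kernel of $L$, it can be computed without any eigenvalue solver as $n - \operatorname{rank}(L)$. A dependency-free sketch, using exact rational row reduction on an illustrative two-component graph:

```python
from fractions import Fraction

# Two components chosen for illustration: a path 0-1-2 and an edge 3-4.
edges = [(0, 1), (1, 2), (3, 4)]
n = 5
L = [[0] * n for _ in range(n)]
for u, v in edges:
    L[u][u] += 1
    L[v][v] += 1
    L[u][v] -= 1
    L[v][u] -= 1

def matrix_rank(matrix):
    """Row-reduce over the rationals and count pivot rows."""
    A = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, rows):
            factor = A[i][c] / A[r][c]
            for j in range(c, cols):
                A[i][j] -= factor * A[r][j]
        r += 1
    return r

# Multiplicity of eigenvalue 0 is n - rank(L) = number of components.
components = n - matrix_rank(L)
print(components)  # 2
```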
Matrix-tree theorem. If $G$ has Laplacian $L$, then the number of spanning trees of $G$ equals any cofactor of $L$: delete one row and the corresponding column, then take the determinant.
Regular graph eigenvalue fact. If $G$ is $d$-regular, then the all-ones vector $\mathbf{1}$ is an adjacency eigenvector with eigenvalue $d$: $A\mathbf{1} = d\mathbf{1}$.
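This fact amounts to the observation that each row of $A$ sums to $d$. A quick sketch on a 5-cycle, which is $2$-regular (the graph choice is illustrative):

```python
# A 5-cycle: each vertex i is adjacent to i-1 and i+1 modulo 5.
n = 5
A = [[1 if abs(i - j) % n in (1, n - 1) else 0 for j in range(n)]
     for i in range(n)]

# Multiply A by the all-ones vector; each entry is a row sum.
ones = [1] * n
Av = [sum(A[i][j] * ones[j] for j in range(n)) for i in range(n)]
print(Av)  # [2, 2, 2, 2, 2] = 2 * ones, i.e. eigenvalue d = 2
```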
Quadratic form of the Laplacian. For any vector $x \in \mathbb{R}^n$,
$$x^{\mathsf T} L x = \sum_{\{i,j\} \in E} (x_i - x_j)^2.$$
This identity explains why $L$ is positive semidefinite. It also explains the component theorem: $x^{\mathsf T} L x = 0$ exactly when $x$ is constant on every connected component.
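The identity is easy to verify numerically. The sketch below builds the Laplacian of a small illustrative graph and compares the quadratic form against the sum of squared differences across edges for an arbitrary test vector:

```python
# Small illustrative graph and an arbitrary test vector x.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n = 4
L = [[0] * n for _ in range(n)]
for u, v in edges:
    L[u][u] += 1
    L[v][v] += 1
    L[u][v] -= 1
    L[v][u] -= 1

x = [3, 1, 4, 1]

# Left side: x^T L x.  Right side: sum over edges of (x_i - x_j)^2.
quad_form = sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))
edge_sum = sum((x[u] - x[v]) ** 2 for u, v in edges)
print(quad_form, edge_sum)  # both 23
```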
Algebraic connectivity. The second-smallest Laplacian eigenvalue is often denoted $\lambda_2$. It is positive exactly when the graph is connected. Larger values indicate stronger connectivity in a spectral sense: the graph is harder to separate into weakly connected pieces. This number is also called the Fiedler value.
Trace and closed walks. Since $(A^k)_{ii}$ counts closed walks of length $k$ starting and ending at $v_i$, the trace
$$\operatorname{tr}(A^k) = \sum_{i} (A^k)_{ii}$$
counts all closed walks of length $k$. For example, in a simple graph, each triangle contributes $6$ closed walks of length $3$: three choices of starting vertex and two directions.
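It follows that a simple graph has exactly $\operatorname{tr}(A^3)/6$ triangles. A sketch on $K_4$, which has four triangles (one per omitted vertex):

```python
# K4: every off-diagonal entry of A is 1.
n = 4
A = [[0 if i == j else 1 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Each triangle contributes 6 closed walks of length 3.
A3 = matmul(matmul(A, A), A)
trace = sum(A3[i][i] for i in range(n))
print(trace, trace // 6)  # 24 closed walks of length 3, hence 4 triangles
```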
Normalized matrices. For irregular graphs, the adjacency matrix can overemphasize high-degree vertices. The normalized Laplacian $\mathcal{L} = D^{-1/2} L D^{-1/2}$ and the random-walk matrix $P = D^{-1} A$ correct for degree. The random-walk matrix gives the transition probabilities for moving from a vertex to a uniformly chosen neighbor. This connects algebraic graph theory with Markov chains and spectral clustering.
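A minimal sketch of the random-walk matrix, using exact rationals so that the row sums come out as exactly $1$ (the path graph here is an illustrative choice):

```python
from fractions import Fraction

# Path 0-1-2: the middle vertex has two neighbors, the ends have one.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
deg = [sum(row) for row in A]

# Random-walk matrix P = D^{-1} A: row i is divided by deg(i).
P = [[Fraction(A[i][j], deg[i]) for j in range(3)] for i in range(3)]
assert all(sum(row) == 1 for row in P)  # each row is a probability distribution
print(P[1])  # from the middle vertex: probability 1/2 to each neighbor
```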
Cospectral caution. Spectra are powerful invariants because they are independent of vertex labels, but they are not complete invariants. Two nonisomorphic graphs can have the same adjacency spectrum or Laplacian spectrum. Therefore eigenvalues can disprove isomorphism when they differ, but matching eigenvalues do not prove isomorphism.
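The classic smallest example is the star on five vertices versus the disjoint union of a 4-cycle and an isolated vertex: both have adjacency spectrum $\{2, 0, 0, 0, -2\}$, yet their degree sequences differ. Since equal traces of $A^k$ for $k = 1, \dots, n$ force equal spectra (the power sums determine the characteristic polynomial), cospectrality can be checked with integer arithmetic alone:

```python
# Star K_{1,4}: vertex 0 adjacent to vertices 1-4.
star = [
    [0, 1, 1, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
]
# 4-cycle 0-1-2-3 plus the isolated vertex 4.
cycle_plus_point = [
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power_traces(A, kmax):
    """tr(A^k) for k = 1..kmax."""
    traces, P = [], A
    for _ in range(kmax):
        traces.append(sum(P[i][i] for i in range(len(A))))
        P = matmul(P, A)
    return traces

t1 = power_traces(star, 5)
t2 = power_traces(cycle_plus_point, 5)
print(t1 == t2)  # True: identical spectra, but the graphs are not isomorphic
```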
When matrices help. Matrix methods are especially effective when a question involves repeated walks, global connectivity, numbers of spanning trees, or regular patterns. They are less convenient for a single hand-drawn path or a local bridge check. Choosing between combinatorial and algebraic methods is part of the skill.
Visual
For any concrete graph, each of these matrices can be written down once a vertex order is fixed. The table summarizes their roles.
| Matrix | Encodes | Common use |
|---|---|---|
| $A$ | adjacency | walk counts, spectra |
| $D$ | degrees | normalization |
| $L = D - A$ | differences across edges | connectivity, spanning trees |
| $B$ | signed incidence | oriented edge endpoints |
Worked example 1: Count walks with $A^2$
Problem. For the graph with edges $\{1,2\}, \{2,3\}, \{3,4\}$, count the number of walks of length $2$ from vertex $2$ to vertex $4$.
Method.
Vertex $2$ can move in one step to vertices $1$ or $3$. To arrive at vertex $4$ in exactly two steps, the intermediate vertex must be a neighbor of $2$ that is also adjacent to $4$.
- The neighbors of $2$ are $\{1, 3\}$.
- The only neighbor of $4$ is $3$.
- The common vertex in these two sets is $3$.
- Therefore there is exactly one walk of length $2$ from $2$ to $4$: $2 \to 3 \to 4$.
Matrix check: with vertex order $1, 2, 3, 4$, $(A^2)_{24} = \sum_{m} A_{2m} A_{m4}$.
Only $m = 3$ contributes $A_{23} A_{34} = 1$, so $(A^2)_{24} = 1$.
Checked answer. There is exactly one length-$2$ walk from vertex $2$ to vertex $4$.
The same matrix can count other walks at the same time. For instance, $(A^2)_{22}$ equals the number of length-$2$ closed walks from vertex $2$. Since vertex $2$ has two neighbors, the walks $2 \to 1 \to 2$ and $2 \to 3 \to 2$ show $(A^2)_{22} = 2$.
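The matrix check can be reproduced in a few lines. Here the path graph with edges $\{1,2\}, \{2,3\}, \{3,4\}$ is taken as the concrete instance, with vertices $1$ through $4$ stored at indices $0$ through $3$:

```python
# Path graph with edges {1,2}, {2,3}, {3,4}; vertex i lives at index i-1.
edges = [(1, 2), (2, 3), (3, 4)]
n = 4
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u - 1][v - 1] = A[v - 1][u - 1] = 1

# Square the adjacency matrix to count length-2 walks.
A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
print(A2[1][3])  # (A^2)_{2,4} = 1: the single walk 2 -> 3 -> 4
print(A2[1][1])  # (A^2)_{2,2} = 2: closed walks 2 -> 1 -> 2 and 2 -> 3 -> 2
```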
Worked example 2: Use the matrix-tree theorem on $K_4$
Problem. Compute the number of spanning trees of $K_4$ using the Laplacian.
Method.
In $K_4$, every vertex has degree $3$, and every pair of distinct vertices is adjacent. Thus
$$L = \begin{pmatrix} 3 & -1 & -1 & -1 \\ -1 & 3 & -1 & -1 \\ -1 & -1 & 3 & -1 \\ -1 & -1 & -1 & 3 \end{pmatrix}.$$
Delete the fourth row and fourth column:
$$L' = \begin{pmatrix} 3 & -1 & -1 \\ -1 & 3 & -1 \\ -1 & -1 & 3 \end{pmatrix}.$$
Compute the determinant by expansion along the first row:
$$\det L' = 3(9 - 1) + 1(-3 - 1) - 1(1 + 3) = 24 - 4 - 4 = 16.$$
Check. Cayley's formula gives $4^{4-2} = 16$ spanning trees of $K_4$, so the matrix-tree result agrees.
The full Laplacian determinant would not work here: $\det L = 0$.
That zero is not a failure; it reflects the fact that the rows of sum to zero. The matrix-tree theorem deliberately removes one row and one column to get a nonsingular cofactor for connected graphs.
Code
```python
from fractions import Fraction

def determinant(matrix):
    """Exact determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in matrix]
    n = len(A)
    det = Fraction(1)
    for i in range(n):
        # Find a nonzero pivot in column i at or below row i.
        pivot = next((r for r in range(i, n) if A[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)  # singular matrix
        if pivot != i:
            A[i], A[pivot] = A[pivot], A[i]
            det *= -1  # a row swap flips the sign
        det *= A[i][i]
        pivot_value = A[i][i]
        for j in range(i, n):
            A[i][j] /= pivot_value
        for r in range(i + 1, n):
            factor = A[r][i]
            for j in range(i, n):
                A[r][j] -= factor * A[i][j]
    return det

# Cofactor of the K4 Laplacian: delete the fourth row and column.
minor = [
    [3, -1, -1],
    [-1, 3, -1],
    [-1, -1, 3],
]
print(determinant(minor))  # 16 spanning trees
```
For larger graphs, one would normally use a numerical or symbolic linear algebra library. The fraction-based determinant above is included to keep the snippet exact and dependency-free. Exact arithmetic matters for matrix-tree computations because the final answer is an integer, and floating-point roundoff can obscure that integrality.
Before applying a matrix theorem, verify the matrix convention. Some texts put loops, multiple edges, or directed arcs into adjacency matrices differently. The theorem being used determines the convention; for instance, the simple undirected Laplacian in the matrix-tree theorem assumes degrees and adjacencies are counted consistently.
Also check dimensions early. An adjacency or Laplacian matrix for an $n$-vertex graph must be $n \times n$, while an incidence matrix has one row per vertex and one column per edge. Dimension mismatches usually indicate that vertices and edges have been mixed.
This simple bookkeeping catches many matrix setup errors before any determinant or eigenvalue is computed.
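This bookkeeping can be automated. A minimal sketch (the helper name is illustrative, not a standard API) that validates shapes and the one-$+1$-one-$-1$ convention for signed incidence columns:

```python
def check_graph_matrices(A, B, n_vertices, n_edges):
    # Adjacency or Laplacian must be square, n x n.
    assert len(A) == n_vertices and all(len(row) == n_vertices for row in A)
    # Incidence has one row per vertex and one column per edge.
    assert len(B) == n_vertices and all(len(row) == n_edges for row in B)
    # Each signed-incidence column has exactly one +1 and one -1.
    for j, col in enumerate(zip(*B)):
        assert sorted(c for c in col if c != 0) == [-1, 1], f"bad column {j}"

# Triangle: 3 vertices, 3 oriented edges.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
B = [[1, 0, 1], [-1, 1, 0], [0, -1, -1]]
check_graph_matrices(A, B, 3, 3)
print("shapes ok")
```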
For matrix-based problems, write the vertex order next to every matrix. Relabelling vertices permutes rows and columns, so two correct matrices for the same graph may look different. The order also controls how vector entries should be interpreted. Without it, a computed walk count, eigenvector, or cofactor cannot be matched back to the graph reliably.
Common pitfalls
- Forgetting that adjacency matrix order depends on the chosen vertex order. The matrix changes under relabelling, though spectra do not.
- Assuming identical spectra imply isomorphism. Cospectral nonisomorphic graphs exist.
- Using the adjacency matrix instead of the Laplacian in the matrix-tree theorem.
- Taking the determinant of the full Laplacian for a connected graph. It is $0$; use a cofactor.
- Counting paths when $(A^k)_{ij}$ counts walks. Walks may repeat vertices and edges.
- Ignoring multiple edges or loops when building matrices. Matrix conventions must match the graph class.