Fourier Series
Fourier series represent periodic functions as sums of sines and cosines. The engineering interpretation is modal decomposition: a complicated periodic signal, force, temperature distribution, or vibration shape is built from pure harmonic components. Each coefficient measures how much of one frequency is present.
The same idea solves boundary-value PDEs. If the boundary conditions match sine or cosine modes, the initial shape can be expanded in that basis and each mode evolves independently. Fourier series therefore connect approximation, signal analysis, heat flow, wave motion, and Sturm-Liouville theory.
Definitions
For a $2\pi$-periodic function $f$,

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos nx + b_n \sin nx\right).$$

The coefficients are

$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx.$$

For period $2L$,

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L}\right),$$

where

$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx, \qquad b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx.$$
A half-range sine series on $(0, L)$ uses the odd extension of $f$. A half-range cosine series uses the even extension.
Key results
Orthogonality is the source of the coefficient formulas. On $[-\pi, \pi]$,

$$\int_{-\pi}^{\pi} \cos mx \cos nx\,dx = \begin{cases} 0, & m \neq n, \\ \pi, & m = n \geq 1, \end{cases}$$
and similar identities hold for sine-sine and sine-cosine products. Orthogonality lets each coefficient be computed independently by projection.
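As a quick numerical sanity check of these identities (the grid size and the specific mode numbers are illustrative choices, not from the text), the orthogonality integrals can be approximated with a fine Riemann sum:

```python
import numpy as np

# Riemann-sum approximation of inner products on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]

def inner(f, g):
    # Crude quadrature; adequate for smooth trigonometric integrands.
    return np.sum(f(x) * g(x)) * dx

print(inner(lambda t: np.cos(2 * t), lambda t: np.cos(3 * t)))  # ~ 0
print(inner(lambda t: np.cos(2 * t), lambda t: np.cos(2 * t)))  # ~ pi
print(inner(lambda t: np.sin(2 * t), lambda t: np.cos(2 * t)))  # ~ 0
```

Distinct modes integrate to approximately zero; a mode against itself gives approximately $\pi$, which is exactly the normalization appearing in the coefficient formulas.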
If $f$ is even, then all sine coefficients vanish because the product of an even function and $\sin nx$ is odd. If $f$ is odd, then all cosine coefficients and the constant term vanish. Symmetry can cut the work in half and reduce errors.
At a point where $f$ is sufficiently smooth, the Fourier series converges to $f(x)$. At a jump discontinuity, it converges to the midpoint of the one-sided limits:

$$\frac{f(x^-) + f(x^+)}{2}.$$
Near jumps, partial sums show Gibbs overshoot. The overshoot does not disappear in height as more terms are added; it becomes narrower. This distinction matters in signal reconstruction and PDE solutions with discontinuous initial data.
Fourier coefficients also measure smoothness. Smooth periodic functions usually have rapidly decaying coefficients. Functions with jumps have slower decay, often like $1/n$. Corner-like derivative jumps give intermediate decay. Coefficient decay is therefore a diagnostic for how many modes are needed.
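A hedged sketch of this diagnostic (the two test functions and the grid are illustrative choices): $f(x) = x$ has an odd periodic extension with jumps, so its sine coefficients fall like $1/n$, while $f(x) = x(\pi - x)$ has a continuous odd extension with a continuous derivative, so its coefficients fall like $1/n^3$.

```python
import numpy as np

# Numerical b_n = (2/pi) * integral_0^pi f(x) sin(nx) dx via Riemann sum.
x = np.linspace(0.0, np.pi, 100001)
dx = x[1] - x[0]

def bn(f, n):
    return (2.0 / np.pi) * np.sum(f(x) * np.sin(n * x)) * dx

for n in (1, 3, 9):
    jumpy = abs(bn(lambda t: t, n))                 # extension jumps: ~ 1/n
    smooth = abs(bn(lambda t: t * (np.pi - t), n))  # smoother extension: ~ 1/n^3
    print(n, jumpy, smooth)
```

Tripling $n$ divides the first column by roughly 3 and the second by roughly 27, matching the claimed decay rates.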
The partial Fourier sum is the best approximation in the mean-square sense among trigonometric polynomials of the same degree. This does not guarantee uniform accuracy near discontinuities, but it explains why Fourier series are so useful for energy and least-squares approximations.
Boundary conditions determine which family of modes is natural. Fixed-end string or zero-temperature boundary conditions use sine modes because they vanish at endpoints. Insulated-end heat problems use cosine modes because their derivatives vanish at endpoints. Choosing modes that already satisfy the boundary conditions leaves only the initial data to expand.
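A minimal numeric sketch of this mode-boundary matching (the values of $L$ and $n$ are illustrative): sine modes vanish at both ends of $[0, L]$, while the derivative of a cosine mode vanishes there.

```python
import numpy as np

# Sine modes satisfy fixed-end (zero-value) conditions at x = 0 and x = L;
# cosine modes satisfy insulated-end (zero-derivative) conditions there.
L, n = 2.0, 5
sine_ends = (np.sin(n * np.pi * 0.0 / L), np.sin(n * np.pi * L / L))

def cos_mode_deriv(x):
    # d/dx of cos(n*pi*x/L)
    return -(n * np.pi / L) * np.sin(n * np.pi * x / L)

cos_deriv_ends = (cos_mode_deriv(0.0), cos_mode_deriv(L))
print(sine_ends, cos_deriv_ends)   # all ~ 0
```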
The coefficient $a_0/2$ is the average value of the function over one period. In signal language it is the DC component. Removing the mean before analyzing oscillations is often useful because it separates the constant offset from the dynamic harmonic content. In heat problems, the average temperature may be the steady mode while higher modes decay.
Fourier series are periodic by construction. A function given only on an interval must be extended periodically once a full-range or half-range choice is made. Discontinuities can appear at the endpoints of the periodic extension even if the original function is smooth inside the interval. These endpoint jumps are responsible for many surprising plots of partial sums.
The sine-cosine form can also be written in amplitude-phase form. A combination

$$a_n \cos nx + b_n \sin nx$$

can be expressed as

$$R_n \cos(nx - \phi_n),$$

where $R_n = \sqrt{a_n^2 + b_n^2}$ and $\phi_n = \operatorname{atan2}(b_n, a_n)$. This form is common in signal processing because it separates magnitude and phase. The coefficient pair $(a_n, b_n)$ contains the same information; the best representation depends on the application.
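A small check of the amplitude-phase identity (the coefficient values are illustrative), using `atan2` so the phase lands in the correct quadrant:

```python
import numpy as np

# a cos(nx) + b sin(nx) = R cos(nx - phi), R = hypot(a, b), phi = atan2(b, a).
def amp_phase(a, b):
    return np.hypot(a, b), np.arctan2(b, a)

a, b, n = 3.0, 4.0, 2
R, phi = amp_phase(a, b)
x = np.linspace(0.0, 2.0 * np.pi, 1000)
err = np.max(np.abs(a * np.cos(n * x) + b * np.sin(n * x) - R * np.cos(n * x - phi)))
print(R, err)   # R = 5.0, err ~ 0
```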
Complex Fourier series use exponentials:

$$f(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{inx}, \qquad c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x) e^{-inx}\,dx.$$
This notation is compact and connects directly to Fourier transforms, but the real sine-cosine form is often easier for real boundary-value problems. Euler's formula translates between the two. Complex coefficients also make convolution and frequency shifting cleaner in later analysis.
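The translation between the two forms for a single harmonic is the identity $a \cos nx + b \sin nx = c\,e^{inx} + \bar{c}\,e^{-inx}$ with $c = (a - ib)/2$, which can be verified numerically (the coefficient values here are illustrative):

```python
import numpy as np

# One real harmonic equals a conjugate pair of complex exponentials.
a, b, n = 1.5, -0.7, 3
c = (a - 1j * b) / 2
x = np.linspace(-np.pi, np.pi, 1000)
real_form = a * np.cos(n * x) + b * np.sin(n * x)
complex_form = (c * np.exp(1j * n * x) + np.conj(c) * np.exp(-1j * n * x)).real
print(np.max(np.abs(real_form - complex_form)))  # ~ 0
```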
The convergence theorem is not a license to ignore regularity. The usual elementary result assumes piecewise smoothness or similar hypotheses. Wild functions may require more advanced notions of convergence. Engineering data are usually finite-resolution measurements, so the practical issue is often not whether an infinite series converges, but how many modes should be kept and how noise affects high-frequency coefficients.
Truncation acts like filtering. Keeping only low-frequency terms smooths sharp features, while high-frequency terms represent rapid variation. In numerical PDEs, unresolved high frequencies can produce oscillations or aliasing. Sampling theory explains how discrete data can misrepresent frequencies if the grid is too coarse, a theme that continues in Fourier transform and numerical analysis.
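A sketch of the aliasing point (grid size and frequencies are illustrative): on an $N$-point uniform periodic grid, frequency $k + N$ produces exactly the same samples as frequency $k$, so a too-coarse grid cannot tell them apart.

```python
import numpy as np

# At sample points x_j = 2*pi*j/N, sin((k+N) x_j) = sin(k x_j + 2*pi*j)
# = sin(k x_j): the high frequency aliases onto the low one.
N, k = 16, 3
xj = 2.0 * np.pi * np.arange(N) / N
print(np.max(np.abs(np.sin((k + N) * xj) - np.sin(k * xj))))  # ~ 0
```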
Parseval's identity connects coefficients to energy. For suitable $2\pi$-periodic functions,

$$\frac{1}{\pi}\int_{-\pi}^{\pi} f(x)^2\,dx = \frac{a_0^2}{2} + \sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right).$$

This says that mean-square size in physical space equals sum-of-squares size in coefficient space. It justifies interpreting coefficient magnitudes as modal energy.
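The identity can be checked for $f(x) = x$ on $(-\pi, \pi)$, whose sine coefficients are $b_n = 2(-1)^{n+1}/n$ (derived in worked example 1 below); the truncation length is an illustrative choice:

```python
import numpy as np

# Parseval for f(x) = x: (1/pi) * integral of x^2 over (-pi, pi) = 2*pi^2/3
# should equal the sum of b_n^2 = 4/n^2 (all a_n vanish by odd symmetry).
lhs = 2.0 * np.pi ** 2 / 3.0
n = np.arange(1, 200001)
rhs = np.sum((2.0 / n) ** 2)   # tail beyond N contributes ~ 4/N
print(lhs, rhs)
```

The slow $1/n$ coefficient decay shows up here as slow convergence of the energy sum, consistent with the jump in the periodic extension of $x$.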
Visual
| Function feature | Coefficient effect | Practical meaning |
|---|---|---|
| Even symmetry | Cosine series only ($b_n = 0$) | Half the integrals to compute |
| Odd symmetry | Sine series only ($a_n = 0$) | Half the integrals to compute |
| Jump discontinuity | Slow decay | Gibbs phenomenon |
| Smooth periodic data | Fast decay | Few modes may suffice |
| Endpoint boundary condition | Select sine or cosine | Built-in PDE constraints |
Worked example 1: Sine series for $f(x) = x$
Problem. Find the sine series of $f(x) = x$ on $(0, \pi)$.
Method.
- Use a half-range sine series:

$$x \sim \sum_{n=1}^{\infty} b_n \sin nx.$$

- Coefficients are

$$b_n = \frac{2}{\pi}\int_0^{\pi} x \sin nx\,dx.$$

- Integrate by parts with $u = x$ and $dv = \sin nx\,dx$:

$$\int_0^{\pi} x \sin nx\,dx = \left[-\frac{x \cos nx}{n}\right]_0^{\pi} + \frac{1}{n}\int_0^{\pi} \cos nx\,dx = \frac{\pi(-1)^{n+1}}{n}.$$

- Then

$$b_n = \frac{2(-1)^{n+1}}{n}.$$

- Therefore

$$x \sim 2\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\sin nx, \qquad 0 < x < \pi.$$

Answer.

$$x = 2\left(\sin x - \frac{\sin 2x}{2} + \frac{\sin 3x}{3} - \cdots\right) \quad \text{on } (0, \pi).$$
Check. The sine series vanishes at the endpoints in the periodic odd extension, so at $x = \pi$ it converges to the midpoint of the jump, $0$, not to $\pi$.
This example is a useful warning about endpoint interpretation. The original problem asks only for $0 < x < \pi$, where the series represents $f(x) = x$ at interior points. The sine expansion implicitly creates an odd $2\pi$-periodic extension, which jumps from $\pi$ to $-\pi$ at the endpoint. The formula is correct, but the periodic extension must be remembered when evaluating limits at the boundary.
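The endpoint behavior can be seen directly: every term of the series contains $\sin n\pi = 0$, so any partial sum is exactly zero at $x = \pi$ (the truncation length below is an illustrative choice).

```python
import numpy as np

# At x = pi each term of the sine series of x vanishes, so the series
# returns 0 there: the midpoint of the jump from pi to -pi.
n = np.arange(1, 1001)
value_at_pi = np.sum(2.0 * (-1.0) ** (n + 1) / n * np.sin(n * np.pi))
print(value_at_pi)   # ~ 0, not pi
```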
Worked example 2: Cosine series for $f(x) = 1$
Problem. Find the half-range cosine series of $f(x) = 1$ on $(0, \pi)$.
Method.
- Write

$$1 \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos nx.$$

- Compute

$$a_0 = \frac{2}{\pi}\int_0^{\pi} 1\,dx = 2.$$

- For $n \geq 1$,

$$a_n = \frac{2}{\pi}\int_0^{\pi} \cos nx\,dx.$$

- Integrate:

$$a_n = \frac{2}{\pi}\left[\frac{\sin nx}{n}\right]_0^{\pi}.$$

- Evaluate endpoints: $\sin n\pi = \sin 0 = 0$, so $a_n = 0$ for all $n \geq 1$.

Thus the series reduces to its constant term.
Answer. The cosine series is the single constant term $\frac{a_0}{2} = 1$.
Check. A constant function is already the zero-frequency cosine mode.
This example looks trivial, but it explains the normalization. The coefficient is $a_0 = 2$, yet the series uses $\frac{a_0}{2}$, so the constant term is $1$. Many coefficient mistakes are off by a factor of two precisely at the zero-frequency term.
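The normalization can be confirmed numerically (grid size and the sampled mode are illustrative choices):

```python
import numpy as np

# Riemann-sum cosine coefficients of f(x) = 1 on (0, pi):
# a_0 = 2 (so the constant term a_0/2 = 1) while a_n ~ 0 for n >= 1.
x = np.linspace(0.0, np.pi, 100001)
dx = x[1] - x[0]
a0 = (2.0 / np.pi) * np.sum(np.ones_like(x)) * dx
a3 = (2.0 / np.pi) * np.sum(np.cos(3 * x)) * dx
print(a0, a0 / 2, a3)   # ~2, ~1, ~0
```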
Code
```python
import numpy as np

def sine_series_x(x, N):
    # Partial sum of the sine series of f(x) = x, with b_n = 2(-1)^(n+1)/n.
    n = np.arange(1, N + 1)[:, None]   # harmonic indices as a column for broadcasting
    coeff = 2.0 * (-1.0) ** (n + 1) / n
    return np.sum(coeff * np.sin(n * x), axis=0)

# Evaluate away from the endpoints, where the odd periodic extension jumps.
grid = np.linspace(0.05, np.pi - 0.05, 200)
approx = sine_series_x(grid, 20)
print(np.max(np.abs(approx - grid)))
```
The error is largest near endpoints because the odd periodic extension jumps there. Increasing $N$ improves the interior approximation faster than it removes endpoint oscillation. This is the numerical signature of Gibbs behavior.
For a fair numerical test, avoid evaluating exactly at discontinuities of the periodic extension unless the midpoint value is intended. A grid that includes endpoints may show a large error even though the series is behaving correctly according to the convergence theorem.
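The fixed-height overshoot can be measured directly. In this sketch (the $N$ values and search window are illustrative choices), the peak of the partial sum just left of the jump at $x = \pi$ stays near $2\,\mathrm{Si}(\pi) \approx 3.70$ rather than approaching $\pi \approx 3.14$:

```python
import numpy as np

def partial_sum(x, N):
    # Partial sum of the sine series of f(x) = x.
    n = np.arange(1, N + 1)[:, None]
    return np.sum(2.0 * (-1.0) ** (n + 1) / n * np.sin(n * x), axis=0)

# Search for the Gibbs peak on a fine grid just left of the jump at x = pi.
x = np.linspace(np.pi - 1.0, np.pi, 20001)
for N in (20, 80, 320):
    print(N, np.max(partial_sum(x, N)))   # peak stays near 3.70, not pi
```

The peak narrows and moves toward $x = \pi$ as $N$ grows, but its height does not shrink, which is exactly the Gibbs statement in the text.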
Common pitfalls
- Using full-range coefficient formulas for a half-range problem.
- Forgetting the factor $\frac{1}{L}$ (full-range) or $\frac{2}{L}$ (half-range) in arbitrary-period coefficient formulas.
- Assuming a Fourier series equals the original function at a jump rather than the midpoint value.
- Ignoring parity and doing twice as much integration as necessary.
- Choosing cosine modes for fixed-zero endpoint conditions or sine modes for insulated endpoint conditions without checking the boundary behavior.
- Treating pointwise convergence and mean-square convergence as the same concept.
- Plotting too few modes and mistaking truncation artifacts for features of the function.
- Forgetting that the constant term is $\frac{a_0}{2}$, not $a_0$.
- Comparing coefficient magnitudes across different normalizations without adjusting for the interval length.
- Assuming high-frequency coefficients are always meaningful when the input data are noisy or undersampled.
- Forgetting that a half-range expansion chooses an extension outside the original interval.
- Using degrees instead of radians in computational sine and cosine functions.
- Assuming a finite partial sum automatically preserves positivity, monotonicity, or other shape constraints of the original function.
- Dropping units when interpreting frequency; $n$ is an index, while $\frac{n\pi}{L}$ has spatial frequency units.
- Ignoring phase information in sine-cosine pairs.
- Overrounding coefficients.