Laplace Transfer Functions and Linearization
Transfer functions are the main algebraic language of classical control. Nise introduces them after a Laplace-transform refresher because the transform converts constant-coefficient differential equations into polynomial equations in $s$. Once inputs and outputs are expressed in the same domain, subsystems can be multiplied, divided, and connected with block-diagram rules.
This page combines the frequency-domain modeling foundation with the small-signal linearization step needed before transfer functions are valid. The unspoken discipline is important: a transfer function is not the physical system itself. It is a linear, time-invariant, zero-initial-condition input-output model that is useful only over the operating range where the assumptions are defensible.
Definitions
The one-sided Laplace transform of a causal signal $f(t)$ is

$$F(s) = \mathcal{L}\{f(t)\} = \int_{0^-}^{\infty} f(t)\,e^{-st}\,dt.$$
For linear time-invariant systems described by a differential equation,

$$a_n \frac{d^n c}{dt^n} + \cdots + a_1 \frac{dc}{dt} + a_0 c(t) = b_m \frac{d^m r}{dt^m} + \cdots + b_1 \frac{dr}{dt} + b_0 r(t),$$

the transfer function is the ratio of the Laplace transform of the output to the Laplace transform of the input under zero initial conditions:

$$G(s) = \frac{C(s)}{R(s)} = \frac{b_m s^m + \cdots + b_1 s + b_0}{a_n s^n + \cdots + a_1 s + a_0}.$$
Writing $G(s) = N(s)/D(s)$, the roots of the denominator $D(s)$ are poles and the roots of the numerator $N(s)$ are zeros. Poles determine the modes available in the natural response. Zeros shape how the input excites and combines those modes.
A system is linear if it satisfies superposition and homogeneity. Superposition means the response to $r_1(t) + r_2(t)$ is the sum of the separate responses. Homogeneity means the response to $\alpha r(t)$ is $\alpha$ times the response to $r(t)$. Real components often violate linearity through saturation, dead zone, backlash, Coulomb friction, or geometric nonlinearities.
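A quick numeric sketch of these two tests, using illustrative static maps chosen here (not from the text):

```python
# Spot-check superposition and homogeneity for static maps at a few points.
def is_linear(f, a=2.0, b=3.0, alpha=1.5):
    """Return True if f passes both linearity tests at the sample points."""
    superposition = abs(f(a + b) - (f(a) + f(b))) < 1e-9
    homogeneity = abs(f(alpha * a) - alpha * f(a)) < 1e-9
    return superposition and homogeneity

print(is_linear(lambda x: 2.0 * x))        # pure gain: True
print(is_linear(lambda x: x**2))           # square law: False
print(is_linear(lambda x: 2.0 * x + 1.0))  # offset breaks homogeneity: False
```

Note that an affine map with a constant offset also fails, which is why deviation variables around an operating point are needed before a transfer function is valid.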
Linearization replaces a nonlinear relation $f(x)$ with its first-order Taylor approximation about an operating point $(x_0, f(x_0))$:

$$f(x) \approx f(x_0) + \left.\frac{df}{dx}\right|_{x = x_0} (x - x_0).$$

If $\delta x = x - x_0$ and $\delta f = f(x) - f(x_0)$, then

$$\delta f \approx m_{x_0}\,\delta x, \qquad m_{x_0} = \left.\frac{df}{dx}\right|_{x = x_0}.$$
Key results
The transform derivative property explains why Laplace methods simplify modeling. For zero initial conditions,

$$\mathcal{L}\left\{\frac{d^n f}{dt^n}\right\} = s^n F(s).$$

Thus each derivative becomes multiplication by $s$. A differential equation becomes an algebraic equation, and an input-output relation becomes a rational function.
Common transform pairs used throughout control include:
| Time signal | Laplace transform | Control use |
|---|---|---|
| $\delta(t)$ | $1$ | impulse response, natural modes |
| $u(t)$ | $\frac{1}{s}$ | step commands |
| $t\,u(t)$ | $\frac{1}{s^2}$ | ramp tracking |
| $e^{-at}u(t)$ | $\frac{1}{s+a}$ | first-order decay |
| $\sin(\omega t)\,u(t)$ | $\frac{\omega}{s^2+\omega^2}$ | frequency response |
| $\cos(\omega t)\,u(t)$ | $\frac{s}{s^2+\omega^2}$ | sinusoidal testing |
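These table entries can be spot-checked symbolically; a minimal sketch using SymPy's `laplace_transform` (symbol names chosen here for illustration):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
a, w = sp.symbols("a omega", positive=True)

# Transform pairs from the table; noconds=True drops convergence conditions.
pairs = {
    sp.Heaviside(t): 1/s,
    t: 1/s**2,
    sp.exp(-a*t): 1/(s + a),
    sp.sin(w*t): w/(s**2 + w**2),
    sp.cos(w*t): s/(s**2 + w**2),
}
for f, expected in pairs.items():
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, "->", F, "matches:", sp.simplify(F - expected) == 0)
```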
For physical modeling, the transfer-function procedure is:
- Choose input and output variables.
- Write the governing differential equation using physical laws.
- Take the Laplace transform with zero initial conditions.
- Solve algebraically for the ratio $G(s) = C(s)/R(s)$.
- Factor or inspect the numerator and denominator to identify poles and zeros.
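The steps above can be sketched for a hypothetical first-order RC low-pass filter, $RC\,\dot{v}_{out} + v_{out} = v_{in}$ (component symbols are assumptions for illustration):

```python
import sympy as sp

s = sp.symbols("s")
R, C = sp.symbols("R C", positive=True)

# Steps 1-2: input v_in, output v_out, governed by R*C*dv_out/dt + v_out = v_in.
# Step 3: zero initial conditions turn d/dt into multiplication by s, giving
#         (R*C*s + 1) * Vout(s) = Vin(s).
G = 1 / (R*C*s + 1)               # Step 4: solve for Vout(s)/Vin(s)
poles = sp.solve(sp.denom(G), s)  # Step 5: single pole at s = -1/(R*C), no zeros
print("G(s) =", G)
print("pole:", poles)
```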
For nonlinear systems, insert a preliminary step: choose the equilibrium or nominal operating point and express variables as nominal values plus small deviations. Only the deviation variables belong in the transfer function. For example, if $x(t) = x_0 + \delta x(t)$ and $f(t) = f_0 + \delta f(t)$, the small-signal transfer function relates $\delta F(s)$ and $\delta X(s)$, not the total variables.
Linearization can change stability conclusions depending on the operating point. A pendulum near the downward equilibrium behaves like a stable small-angle oscillator, while a pendulum near the upright equilibrium has a locally unstable linear model. The same nonlinear system can therefore yield different transfer functions around different equilibria.
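A minimal symbolic sketch of this sign flip, keeping only the pendulum's $-\sin\theta$ restoring term (mass, gravity, and length constants omitted):

```python
import sympy as sp

theta = sp.symbols("theta")
# Small-signal "stiffness" is the slope of the restoring term -sin(theta)
# evaluated at the operating angle.
slope = sp.diff(-sp.sin(theta), theta)   # -cos(theta)

k_down = slope.subs(theta, 0)      # downward equilibrium: -1, restoring (stable)
k_up = slope.subs(theta, sp.pi)    # upright equilibrium: +1, repelling (unstable)
print("slope at theta = 0:", k_down)
print("slope at theta = pi:", k_up)
```

The same nonlinear term yields opposite-sign linear coefficients at the two equilibria, which is exactly why the two local transfer functions differ in stability.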
A transfer function also hides initial-condition effects by definition. If a capacitor begins charged, or a mass begins with nonzero velocity, the Laplace-transformed differential equation contains extra terms. Those terms act like additional inputs to the same zero-state model, but they are not part of . This distinction matters in experiments: an impulse-like test may be used to excite natural modes, while a transfer-function calculation from command to output assumes the stored energy is initially zero.
Properness is another important modeling check. A physically realizable causal transfer function normally has denominator degree at least numerator degree. If the numerator degree is larger, the model differentiates the input more times than the plant dynamics can support, which usually indicates that the chosen idealization is too aggressive. A differentiator may appear as part of a controller design, but real implementations filter it because high-frequency noise would otherwise be amplified without bound.
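A small sketch of the properness check (the example transfer functions are illustrative):

```python
import sympy as sp

s = sp.symbols("s")

def is_proper(G):
    """True when denominator degree >= numerator degree (causal, realizable)."""
    return sp.degree(sp.denom(G), s) >= sp.degree(sp.numer(G), s)

print(is_proper((2*s + 8) / (s**2 + 5*s + 6)))  # strictly proper: True
print(is_proper((s**2 + 1) / (s + 1)))          # improper, differentiates input: False
```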
Pole-zero cancellation should be treated carefully. Algebra may simplify $\frac{(s+a)\,N(s)}{(s+a)\,D(s)}$ to $\frac{N(s)}{D(s)}$, but a physical cancellation requires exact matching of dynamics. If the cancelled pole is stable and far from the dominant response, the simplification may be useful. If the cancelled pole is unstable, cancellation hides an internal mode that can grow even though the simplified input-output transfer function looks stable. This is one reason state-space realizations and robustness checks remain important.
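A short sketch of how algebraic cancellation can hide an unstable pole (the pole at $s = +1$ is a made-up example):

```python
import sympy as sp

s = sp.symbols("s")
# Unstable pole at s = +1 exactly cancelled by a zero at the same location.
G_full = (s - 1) / ((s - 1) * (s + 2))

G_cancelled = sp.cancel(G_full)   # 1/(s + 2): looks stable after simplification
print("after cancellation:", G_cancelled)

# The unstable internal mode is still present in the unsimplified denominator.
print("poles before cancelling:", sp.solve(sp.expand((s - 1) * (s + 2)), s))
```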
Linearization with several variables uses partial derivatives. For a nonlinear relation $f(x_1, x_2)$ near the operating point $(x_{1,0}, x_{2,0})$, the small-signal approximation is

$$\delta f \approx \left.\frac{\partial f}{\partial x_1}\right|_{0} \delta x_1 + \left.\frac{\partial f}{\partial x_2}\right|_{0} \delta x_2.$$
The coefficients are slopes evaluated at the operating point. They are not universal constants. For example, aerodynamic drag proportional to $v^2$, say $F = c\,v^2$, has small-signal slope $2cv_0$ around speed $v_0$, so the linear damping changes with operating speed.
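This speed dependence can be sketched symbolically (the symbols `c` and `v0` are illustrative assumptions):

```python
import sympy as sp

v, c, v0 = sp.symbols("v c v0", positive=True)
F = c * v**2                          # quadratic drag force

# Small-signal damping: slope of F at the operating speed v0.
b_small = sp.diff(F, v).subs(v, v0)   # 2*c*v0, grows with operating speed
print("small-signal damping:", b_small)
print("at v0 = 10, c = 1/2:", b_small.subs({c: sp.Rational(1, 2), v0: 10}))
```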
When using a transfer function in design, always keep three questions attached to it: what input and output does it relate, what operating point and amplitude range justify it, and what dynamics were intentionally neglected? These questions prevent the common mistake of treating $G(s)$ as a timeless property of the hardware rather than as a scoped model created for a particular analysis.
A final check is dimensional consistency. The variable $s$ has units of inverse time, so polynomial coefficients carry units that make each term compatible. Normalizing a denominator into standard form is fine, but the resulting constants still represent time constants, natural frequencies, or gains with physical dimensions. Unit mistakes often show up as impossible pole locations or gains that cannot match measured hardware.
Visual
| Nonlinearity | Physical example | Small-signal handling |
|---|---|---|
| Saturation | amplifier rail limit | linear only below the rail |
| Dead zone | motor static friction | invalid for commands inside the dead band |
| Backlash | loose gears | approximate only for one contact direction or very small range |
| Trigonometric geometry | pendulum torque | use slope at the operating angle |
| Product terms | fluid or aerodynamic force | linearize partial derivatives around nominal flow |
Worked example 1: transfer function from a differential equation
Problem: A system is governed by

$$\frac{d^2 c}{dt^2} + 5\frac{dc}{dt} + 6c(t) = 2\frac{dr}{dt} + 8r(t).$$

Find $G(s) = C(s)/R(s)$ under zero initial conditions and identify poles and zeros.
Method:
- Apply the Laplace transform to each derivative: $\mathcal{L}\{\ddot{c}\} = s^2 C(s)$, $\mathcal{L}\{\dot{c}\} = sC(s)$, and $\mathcal{L}\{\dot{r}\} = sR(s)$.
- Transform the differential equation: $s^2 C(s) + 5sC(s) + 6C(s) = 2sR(s) + 8R(s)$.
- Factor $C(s)$ and $R(s)$: $(s^2 + 5s + 6)\,C(s) = (2s + 8)\,R(s)$.
- Divide by $R(s)$ and by $(s^2 + 5s + 6)$: $G(s) = \dfrac{C(s)}{R(s)} = \dfrac{2s + 8}{s^2 + 5s + 6}$.
- Factor: $G(s) = \dfrac{2(s + 4)}{(s + 2)(s + 3)}$.
Checked answer: the zero is at $s = -4$ and the poles are at $s = -2$ and $s = -3$. The poles are in the left half-plane, so this standalone transfer function is stable.
Worked example 2: linearizing a nonlinear spring
Problem: A mass is attached to a nonlinear spring whose force (in newtons, with $x$ in meters) is

$$f(x) = 3x^2 + 2x.$$

The operating displacement is $x_0 = 1$ m. Find the small-signal spring stiffness and write the linearized force relation.
Method:
- Evaluate the force at the operating point: $f(x_0) = 3(1)^2 + 2(1) = 5$ N.
- Differentiate with respect to $x$: $\dfrac{df}{dx} = 6x + 2$.
- Evaluate the slope at $x_0 = 1$: $k = 6(1) + 2 = 8$ N/m.
- Let $\delta x = x - x_0$ and $\delta f = f - f(x_0)$. The first-order approximation is $\delta f \approx 8\,\delta x$.
Checked answer: the nonlinear spring behaves locally like an 8 N/m spring around $x_0 = 1$ m. The total approximate force is $f \approx 5 + 8(x - 1)$ N, valid only for small excursions around $x_0 = 1$ m.
Code
```python
import sympy as sp

s, x = sp.symbols("s x")

# Worked example 1: factor G(s) and locate its poles and zeros.
G = (2*s + 8) / (s**2 + 5*s + 6)
print("factored transfer function:", sp.factor(G))
print("poles:", sp.solve(sp.denom(G), s))
print("zeros:", sp.solve(sp.numer(G), s))

# Worked example 2: small-signal stiffness of f(x) = 3x^2 + 2x at x0 = 1.
f = 3*x**2 + 2*x
x0 = 1
k_small = sp.diff(f, x).subs(x, x0)
linearized = f.subs(x, x0) + k_small * (x - x0)
print("small-signal stiffness:", k_small)
print("linearized force:", sp.expand(linearized))
```
Common pitfalls
- Forgetting the zero-initial-condition assumption. Nonzero initial energy appears as additional terms, not in the transfer function itself.
- Treating $G(s)$ as valid for every amplitude. Transfer functions from linearization apply near the chosen operating point.
- Linearizing the total variable but then interpreting the result as a large-signal model.
- Dropping numerator dynamics. Zeros can strongly affect overshoot and inverse response even though poles dominate stability.
- Mixing one-sided and two-sided transform conventions without checking initial-condition terms.
- Assuming all positive coefficients imply stability for high-order systems. Routh-Hurwitz is needed beyond simple cases.
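A sketch of that last pitfall: the polynomial below (chosen for illustration) has all positive coefficients yet right-half-plane poles, since $s^3 + s^2 + 2s + 8 = (s + 2)(s^2 - s + 4)$:

```python
import sympy as sp

s = sp.symbols("s")
# All coefficients positive, but the quadratic factor s**2 - s + 4 puts
# a complex pole pair at s = 1/2 +/- j*sqrt(15)/2, in the right half-plane.
poles = sp.solve(s**3 + s**2 + 2*s + 8, s)
print("poles:", poles)
print("max real part:", max(sp.re(p) for p in poles))
```

A Routh-Hurwitz table on the coefficients flags the same instability without solving for the roots.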
Connections
- Engineering math Laplace transform gives the broader transform toolkit.
- Complex functions and analyticity supports the $s$-plane viewpoint.
- Physical system modeling applies these ideas to electrical, mechanical, and motor systems.
- State-space modeling provides an alternative model form when internal variables matter.
- Signals and systems studies the same transforms with a signal-processing emphasis.