Power Series and Taylor Polynomials
Power series represent functions as infinite polynomials. Taylor polynomials use derivatives at a point to build finite polynomial approximations. Together they turn difficult functions into algebraic objects that can be differentiated, integrated, and evaluated approximately.
The key idea is local representation. A Taylor polynomial near a center $a$ is designed to match a function's value and derivatives at $a$. A Taylor series asks whether the infinite version actually converges back to the function, and on which interval that is true.
Definitions
A power series centered at $a$ has the form
$$\sum_{n=0}^{\infty} c_n (x - a)^n.$$
For each fixed $x$, it becomes an ordinary numerical series. The set of $x$ values for which it converges is the interval of convergence. The radius of convergence is the number $R$ such that the series converges for $|x - a| < R$ and diverges for $|x - a| > R$. Endpoints must be tested separately.
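A quick numeric illustration (the helper below is a hypothetical sketch, not a library function): partial sums of the geometric series $\sum_{n \ge 0} x^n$, which has radius $R = 1$ about $0$, settle down inside the radius and blow up outside it.

```python
def geometric_partial_sum(x, terms):
    """Sum the first `terms` terms of the geometric series sum x**n."""
    return sum(x**n for n in range(terms))

# Inside the radius (|x| < 1) the partial sums settle near 1/(1 - x).
inside = geometric_partial_sum(0.5, 50)
print(inside, 1 / (1 - 0.5))  # both close to 2.0

# Outside the radius (|x| > 1) the partial sums grow without bound.
outside = geometric_partial_sum(2.0, 50)
print(outside)  # huge: the series diverges
```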
The Taylor series of a function $f$ centered at $a$ is
$$\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n.$$
The $n$th Taylor polynomial is
$$P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (x - a)^k.$$
When $a = 0$, the Taylor series is called a Maclaurin series.
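A minimal sketch of evaluating a Taylor polynomial from a list of derivative values at the center (`taylor_poly` is a hypothetical helper). For $f(x) = e^x$ every derivative at $0$ equals $1$, so a list of ones gives the Maclaurin polynomial.

```python
from math import factorial, exp

def taylor_poly(derivs, a, x):
    """Evaluate sum_{k} derivs[k]/k! * (x - a)**k, where derivs[k] = f^(k)(a)."""
    return sum(d / factorial(k) * (x - a)**k for k, d in enumerate(derivs))

# Degree-8 Maclaurin polynomial for e^x, evaluated at x = 1.
approx = taylor_poly([1.0] * 9, 0.0, 1.0)
print(approx, exp(1.0))  # already agrees to about 6 decimal places
```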
Common Maclaurin series include
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}, \qquad \sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}, \qquad \cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!}, \qquad \frac{1}{1-x} = \sum_{n=0}^{\infty} x^n.$$
Key results
The Ratio Test often finds the radius of convergence. For
$$\sum_{n=0}^{\infty} c_n (x - a)^n,$$
compute
$$L = \lim_{n \to \infty} \left| \frac{c_{n+1} (x - a)^{n+1}}{c_n (x - a)^n} \right|.$$
The series converges when $L < 1$ and diverges when $L > 1$. This usually produces an inequality involving $|x - a|$.
Inside its interval of convergence, a power series can be differentiated and integrated term by term:
$$\frac{d}{dx} \sum_{n=0}^{\infty} c_n (x - a)^n = \sum_{n=1}^{\infty} n c_n (x - a)^{n-1}$$
and
$$\int \sum_{n=0}^{\infty} c_n (x - a)^n \, dx = C + \sum_{n=0}^{\infty} \frac{c_n}{n+1} (x - a)^{n+1}.$$
The radius of convergence stays the same after differentiation or integration, although endpoint behavior may change.
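The term-by-term rules can be sketched on coefficient lists, where index $n$ holds the coefficient of $(x - a)^n$ (helper names are hypothetical):

```python
def diff_series(coeffs):
    """Term-by-term derivative: sum c_n x^n becomes sum n*c_n x^(n-1)."""
    return [n * c for n, c in enumerate(coeffs)][1:]

def integrate_series(coeffs):
    """Term-by-term antiderivative (constant 0): sum c_n x^n becomes sum c_n/(n+1) x^(n+1)."""
    return [0.0] + [c / (n + 1) for n, c in enumerate(coeffs)]

# Geometric series 1/(1-x): all coefficients equal 1.
geo = [1.0] * 6
print(diff_series(geo))       # [1, 2, 3, 4, 5] -> series for 1/(1-x)^2
print(integrate_series(geo))  # [0, 1, 1/2, 1/3, ...] -> series for -ln(1-x)
```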
Taylor's theorem gives an error formula. One common form is
$$f(x) = P_n(x) + R_n(x),$$
where
$$R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!} (x - a)^{n+1}$$
for some $c$ between $a$ and $x$, assuming the required derivatives exist. This formula explains why Taylor approximations improve near the center and why higher-degree polynomials usually improve accuracy when derivatives remain controlled.
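The Lagrange remainder turns into a concrete bound once a ceiling $M$ on $|f^{(n+1)}|$ is known. A minimal sketch, assuming $f(x) = e^x$ on $[0, 1]$, where every derivative is at most $e$ (the helper name is hypothetical):

```python
from math import factorial, e

def lagrange_bound(M, x, a, n):
    """Bound |R_n(x)| <= M * |x - a|**(n+1) / (n+1)!, where M bounds |f^(n+1)|."""
    return M * abs(x - a)**(n + 1) / factorial(n + 1)

# For f(x) = e^x on [0, 1], M = e works; the bound shrinks fast as n grows.
for n in (2, 4, 8):
    print(n, lagrange_bound(e, 1.0, 0.0, n))
```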
For alternating Taylor series such as $\sin x$ and $\cos x$, the alternating series error estimate often gives a simple bound: the error is no larger than the first omitted term, provided the term magnitudes decrease.
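The bound is easy to verify numerically for sine: the actual error of each partial sum stays below the magnitude of the first omitted term (sketch; the helper name is hypothetical):

```python
from math import factorial, sin

def sin_partial(x, k):
    """Sum the first k nonzero terms of the sine Maclaurin series."""
    return sum((-1)**n * x**(2*n + 1) / factorial(2*n + 1) for n in range(k))

x = 0.8  # term magnitudes decrease here, so the estimate applies
for k in range(1, 5):
    err = abs(sin_partial(x, k) - sin(x))
    first_omitted = x**(2*k + 1) / factorial(2*k + 1)
    print(k, err, first_omitted, err <= first_omitted)
```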
A function may have a Taylor series that converges but not to the function at every point. Taylor series representation requires more than having derivatives of all orders. In first calculus examples, standard functions behave well, but the distinction matters in advanced analysis.
Power series behave like polynomials inside their intervals of convergence. They can be added, multiplied, differentiated, and integrated term by term, provided the operations are performed where convergence is valid. This makes them useful for creating new series from known ones. For example, starting from
$$\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n, \qquad |x| < 1,$$
substituting $-x^2$ for $x$ gives
$$\frac{1}{1 + x^2} = \sum_{n=0}^{\infty} (-1)^n x^{2n}.$$
Integrating term by term produces the Maclaurin series for $\arctan x$:
$$\arctan x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{2n + 1}.$$
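The resulting arctangent series can be checked against the library function (sketch; `arctan_series` is a hypothetical name):

```python
from math import atan

def arctan_series(x, terms):
    """Partial sum of sum (-1)^n x^(2n+1) / (2n+1), valid for |x| <= 1."""
    return sum((-1)**n * x**(2*n + 1) / (2*n + 1) for n in range(terms))

x = 0.5
approx = arctan_series(x, 10)
print(approx, atan(x), abs(approx - atan(x)))  # error well under 1e-6
```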
Endpoint behavior can change after integration. The geometric series diverges at $x = 1$, but the integrated series for $\arctan x$ converges at $x = 1$. This is why endpoints must always be tested separately for the specific series, not copied from a related series without checking.
Taylor polynomials are local approximations. The center $a$ is the point where the polynomial matches the function most tightly. Moving farther from $a$ usually increases error. Choosing a center near the input value can dramatically reduce the number of terms needed: a series centered close to the evaluation point converges much faster than one centered far from it.
The order of contact explains why Taylor polynomials are powerful. If $P_n$ is the Taylor polynomial for $f$ at $a$, then
$$P_n^{(k)}(a) = f^{(k)}(a) \qquad \text{for } k = 0, 1, \dots, n.$$
The polynomial is not just close in value; its first $n$ derivatives match at the center.
Radius of convergence is controlled by coefficient growth. Factorials in the denominator, as in the series for $e^x$, usually lead to convergence for all real $x$. Powers such as $r^n$ in the denominator often lead to a finite radius. Coefficients that grow too quickly can make the radius zero. The Ratio Test turns these growth rates into an explicit interval.
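A sketch of how the Ratio Test reads growth rates off the coefficients, assuming the coefficient formulas $c_n = 1/3^n$ (finite radius $3$) and $c_n = 1/n!$ (infinite radius). The helper is hypothetical and uses a single large index as a stand-in for the limit $R = \lim |c_n / c_{n+1}|$:

```python
from math import factorial

def ratio_radius(coeff, n=50):
    """Estimate the radius of convergence as |c_n / c_{n+1}| at a large index n."""
    return abs(coeff(n) / coeff(n + 1))

# c_n = 1/3**n: the ratio is exactly 3, so R = 3 (finite radius from a power).
print(ratio_radius(lambda n: 1 / 3**n))
# c_n = 1/n!: the ratio is n + 1, growing without bound, so R is infinite.
print(ratio_radius(lambda n: 1 / factorial(n), n=20))
```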
Power series also provide a systematic way to approximate integrals. If an integrand has a known series, integrate term by term and then evaluate the resulting polynomial or series. For example, the integrand in
$$\int_0^1 e^{-x^2} \, dx$$
has no elementary antiderivative, but the integral
$$\int_0^1 \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{n!} \, dx = \sum_{n=0}^{\infty} \frac{(-1)^n}{n! \, (2n + 1)}$$
can be approximated by summing several terms. The accuracy can often be bounded using alternating or absolute convergence estimates.
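A sketch of this estimate, assuming the integrand $e^{-x^2}$: integrating its Maclaurin series over $[0, 1]$ term by term gives the alternating series $\sum (-1)^n / (n!\,(2n+1))$, which can be checked against the closed form $\frac{\sqrt{\pi}}{2}\,\mathrm{erf}(1)$:

```python
from math import factorial, erf, pi, sqrt

def integral_series(terms):
    """Partial sum of sum (-1)^n / (n! (2n+1)), the term-by-term integral over [0, 1]."""
    return sum((-1)**n / (factorial(n) * (2*n + 1)) for n in range(terms))

approx = integral_series(10)
exact = sqrt(pi) / 2 * erf(1.0)  # closed form via the error function
print(approx, exact, abs(approx - exact))
```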
Remainder bounds should be tied to the requested accuracy. If a problem asks for four decimal places, the error should be below $5 \times 10^{-5}$ or another appropriate tolerance. Giving many polynomial terms without an error bound may produce a good number, but it does not prove the approximation is good enough.
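For an alternating series whose term magnitudes (eventually) decrease, the tolerance translates directly into a required number of terms. A sketch for the sine series with a four-decimal-place target of $5 \times 10^{-5}$ (the helper name is hypothetical):

```python
from math import factorial

def terms_needed(x, tol):
    """Smallest number k of nonzero sine-series terms whose first omitted
    term x**(2k+1)/(2k+1)! is below tol (the alternating error bound,
    valid once the term magnitudes are decreasing)."""
    k = 0
    while x**(2*k + 1) / factorial(2*k + 1) >= tol:
        k += 1
    return k

print(terms_needed(0.2, 5e-5))  # near the center: very few terms
print(terms_needed(2.0, 5e-5))  # farther from the center: more terms
```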
Algebraic manipulation of series should stay inside common convergence intervals. If two series are multiplied, the resulting expression is guaranteed by the standard power-series rules only where both original series converge absolutely. Endpoint questions may need fresh testing after the manipulation.
A clean final answer should state center, radius, interval, approximation, and error bound when those items are relevant to the problem being solved.
This prevents local approximations from being mistaken for global identities later on.
Visual
| Series | Formula | Interval | Use |
|---|---|---|---|
| Geometric | $\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n$ | $-1 < x < 1$ | builds many other series |
| Exponential | $e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$ | all real $x$ | growth and differential equations |
| Sine | $\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{(2n+1)!}$ | all real $x$ | oscillation approximation |
| Cosine | $\cos x = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n)!}$ | all real $x$ | oscillation approximation |
| Logarithm | $\ln(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} x^n}{n}$ | $-1 < x \le 1$ | slow-growth approximation |
Worked example 1: radius and interval of convergence
Problem. Find the radius and interval of convergence of
$$\sum_{n=1}^{\infty} \frac{x^n}{n}.$$
Method.
- Let $a_n = \dfrac{x^n}{n}$.
- Use the Ratio Test:
$$\left| \frac{a_{n+1}}{a_n} \right| = \left| \frac{x^{n+1}}{n + 1} \cdot \frac{n}{x^n} \right|$$
- Simplify:
$$= |x| \cdot \frac{n}{n + 1}$$
- Take the limit:
$$L = \lim_{n \to \infty} |x| \cdot \frac{n}{n + 1} = |x|$$
- Convergence requires $|x| < 1$.
Thus $R = 1$.
- Test the left endpoint $x = -1$:
$$\sum_{n=1}^{\infty} \frac{(-1)^n}{n},$$
which converges by the Alternating Series Test.
- Test the right endpoint $x = 1$:
$$\sum_{n=1}^{\infty} \frac{1}{n},$$
the harmonic series, which diverges.
Checked answer. The radius is $R = 1$, and the interval of convergence is $[-1, 1)$.
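For a series of this type, for example $\sum_{n \ge 1} x^n / n$, whose sum is $-\ln(1 - x)$ on $[-1, 1)$, the conclusion can be sanity-checked numerically (hypothetical helper):

```python
from math import log

def series_sum(x, terms):
    """Partial sum of sum_{n>=1} x**n / n, which equals -ln(1 - x) for -1 <= x < 1."""
    return sum(x**n / n for n in range(1, terms + 1))

# Inside the interval the partial sums approach -ln(1 - x).
print(series_sum(0.5, 60), -log(1 - 0.5))
# At the left endpoint x = -1 the alternating harmonic series converges to -ln 2.
print(series_sum(-1.0, 100000), -log(2))
```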
Worked example 2: Taylor approximation for
Problem. Use a Taylor polynomial to approximate $\sin(0.2)$ with error less than $10^{-6}$.
Method.
- Use the Maclaurin series:
$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
- At $x = 0.2$, the term magnitudes decrease quickly:
$$0.2, \quad \frac{0.2^3}{6} \approx 1.33 \times 10^{-3}, \quad \frac{0.2^5}{120} \approx 2.67 \times 10^{-6}, \quad \dots$$
- Compute through the $x^5$ term:
$$P_5(0.2) = 0.2 - \frac{0.2^3}{3!} + \frac{0.2^5}{5!}$$
- Evaluate pieces:
$$\frac{0.2^3}{6} \approx 0.00133333$$
and
$$\frac{0.2^5}{120} \approx 0.00000267.$$
- Combine:
$$P_5(0.2) \approx 0.1986693.$$
- The next omitted term has magnitude
$$\frac{0.2^7}{7!} \approx 2.5 \times 10^{-9}.$$
This is less than $10^{-6}$.
Checked answer. $\sin(0.2) \approx 0.1986693$, with error below $10^{-6}$ by the alternating series estimate.
The approximation is efficient because $0.2$ is close to the center $0$ and the factorials grow quickly. If the input were much larger, reducing the angle using periodicity or choosing another method might be better. Taylor polynomials are local tools, so the distance from the center is part of the error analysis.
For a nonalternating series, Taylor's theorem with remainder or another bound on derivatives may be needed instead. The method changes, but the goal is the same: control the part of the function not captured by the finite polynomial.
Code
```python
from math import factorial, sin

def sin_taylor(x, degree):
    """Evaluate the Maclaurin polynomial of sin up to the given degree."""
    total = 0.0
    # Sine has only odd-power terms: n = 0 gives x, n = 1 gives -x**3/3!, ...
    for n in range((degree + 1) // 2):
        total += (-1)**n * x**(2*n + 1) / factorial(2*n + 1)
    return total

x = 0.2
approx = sin_taylor(x, 5)
print(approx, sin(x), abs(approx - sin(x)))
```
Common pitfalls
- Finding the radius of convergence but forgetting to test endpoints separately.
- Assuming a power series converges at both endpoints because it converges inside the radius.
- Applying term-by-term differentiation at an endpoint without checking endpoint behavior.
- Confusing Taylor polynomial degree with number of nonzero terms.
- Forgetting factorials in Taylor coefficients.
- Using an alternating error bound when the term magnitudes are not decreasing.
- Assuming every smooth function equals its Taylor series everywhere.
Connections
- Sequences and Series: power series convergence uses series tests.
- Applications of Derivatives: Taylor polynomials refine linear approximation and L'Hopital-style limits.
- Integration Techniques and Improper Integrals: power series can approximate integrals without elementary antiderivatives.
- Exponential Log and Inverse Functions: standard Taylor series include $e^x$, $\ln(1 + x)$, sine, and cosine.