Vectors in $\mathbb{R}^n$
Vectors in $\mathbb{R}^n$ are ordered lists of real numbers, but they should be read as more than lists. They encode displacement, data, forces, coefficients, states, and solutions. Linear algebra studies operations that preserve the two essential vector actions: adding vectors and scaling them.
This page is the concrete starting point for the rest of the subject. Later pages abstract these same operations to general vector spaces, but the intuition usually comes from $\mathbb{R}^2$ and $\mathbb{R}^3$: arrows can be added tip-to-tail, stretched by scalars, and combined to reach new locations. The same algebra also describes columns of a matrix, feature vectors in data, and state vectors in applied models.
Definitions
A vector in $\mathbb{R}^n$ is an ordered $n$-tuple of real numbers:

$$x = (x_1, x_2, \dots, x_n), \qquad x_i \in \mathbb{R}.$$
Vector addition and scalar multiplication are componentwise:

$$x + y = (x_1 + y_1, \dots, x_n + y_n), \qquad c\,x = (c\,x_1, \dots, c\,x_n).$$
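The componentwise rules can be checked directly in NumPy; the vector values here are arbitrary illustrations:

```python
import numpy as np

# Two example vectors in R^3 (arbitrary illustrative values)
a = np.array([1.0, 2.0, -1.0])
b = np.array([3.0, 0.0, 4.0])

# Addition and scaling act component by component
print(a + b)      # componentwise sum: (4, 2, 3)
print(2.5 * a)    # each entry scaled: (2.5, 5, -2.5)
```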
A linear combination of vectors $v_1, \dots, v_k$ is

$$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k, \qquad c_1, \dots, c_k \in \mathbb{R}.$$

The span of a set of vectors is the set of all their linear combinations. In $\mathbb{R}^2$, one nonzero vector spans a line through the origin. Two nonparallel vectors span the whole plane.
The standard basis of $\mathbb{R}^n$ is $e_1, \dots, e_n$, where $e_i$ has a $1$ in position $i$ and zeros elsewhere. Every vector has coordinates relative to this basis:

$$x = x_1 e_1 + x_2 e_2 + \cdots + x_n e_n.$$
The dot product and norm are

$$u \cdot v = \sum_{i=1}^{n} u_i v_i, \qquad \|u\| = \sqrt{u \cdot u}.$$
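Both formulas are one call each in NumPy; a quick sketch with illustrative vectors:

```python
import numpy as np

u = np.array([2.0, -1.0, 3.0])
v = np.array([0.0, 4.0, 1.0])

# Dot product: sum of componentwise products, 2*0 + (-1)*4 + 3*1 = -1
dot = float(np.dot(u, v))

# Norm: square root of the dot product of a vector with itself, sqrt(14)
norm_u = float(np.linalg.norm(u))

print(dot, norm_u)
```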
Key results
The vector operations in satisfy the usual algebraic laws: addition is commutative and associative, there is a zero vector, every vector has an additive inverse, scalar multiplication distributes over vector addition, and scalar multiplication is compatible with multiplication of scalars.
Every vector in $\mathbb{R}^n$ has a unique standard-basis expansion:

$$x = \sum_{i=1}^{n} x_i e_i.$$

Proof sketch: the right-hand side has $i$th component $x_i$, so it equals $x$. Uniqueness follows because equality of vectors means equality component by component.
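The expansion can be verified numerically; the standard basis vectors are the columns of the identity matrix, and the vector here is an arbitrary example:

```python
import numpy as np

x = np.array([5.0, -2.0, 7.0])
n = len(x)

# Standard basis vectors e_i are the columns of the identity matrix
E = np.eye(n)

# Rebuild x as x_1 e_1 + x_2 e_2 + x_3 e_3
rebuilt = sum(x[i] * E[:, i] for i in range(n))
print(np.allclose(rebuilt, x))  # True
```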
The line through a point $p_0$ in direction $d \neq \mathbf{0}$ is

$$\{\, p_0 + t\,d : t \in \mathbb{R} \,\}.$$
The plane through $p_0$ in directions $u$ and $v$ is

$$\{\, p_0 + s\,u + t\,v : s, t \in \mathbb{R} \,\},$$

provided $u$ and $v$ are not scalar multiples of each other.
Linear dependence in $\mathbb{R}^n$ means that at least one vector in a list can be built from the others. Equivalently, the equation

$$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = \mathbf{0}$$

has a solution where not all $c_i$ are zero. Dependence is a geometric warning that the listed directions contain redundancy.
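Dependence can be detected by stacking the vectors as matrix columns and comparing the rank to the number of vectors; a sketch with an obviously redundant list:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([2.0, 4.0, 6.0])   # 2 * v1, so the list is dependent
v3 = np.array([0.0, 1.0, 0.0])

# Dependent exactly when rank < number of columns
A = np.column_stack([v1, v2, v3])
rank = np.linalg.matrix_rank(A)
print(rank, rank < A.shape[1])   # 2 True
```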
Visual
ASCII sketch of span in $\mathbb{R}^2$:

```
      one nonzero vector            two nonparallel vectors

           ^                              ^
           |    span{v}                   |        / u+v
           |   /                          |       /
   --------+--/------->           --------+------/------->
          /  v                            |     /
         /                                |    /  v
                                          |   /
                                  u ------+  /
      line through origin         whole plane by combinations su + tv
```
| Concept | Algebraic test | Geometric meaning |
|---|---|---|
| Scalar multiple | $w = c\,v$ for some scalar $c$ | same or opposite line direction |
| Span of one nonzero vector | $\{\, t\,v : t \in \mathbb{R} \,\}$ | line through origin |
| Span of two nonparallel vectors in $\mathbb{R}^2$ | $s\,u + t\,v = w$ is solvable for every $w$ | entire plane |
| Linear dependence | nontrivial combination gives $\mathbf{0}$ | redundant directions |
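The scalar-multiple test in the table can be automated; one sketch (the helper name `is_scalar_multiple` is an invention for illustration) uses matrix rank, since parallel columns give rank at most 1:

```python
import numpy as np

def is_scalar_multiple(a, b):
    """True when b = c*a for some scalar c (or either vector is zero)."""
    # Parallel vectors stacked as columns form a matrix of rank <= 1
    return np.linalg.matrix_rank(np.column_stack([a, b])) <= 1

print(is_scalar_multiple(np.array([1.0, 2.0]), np.array([-3.0, -6.0])))  # True
print(is_scalar_multiple(np.array([1.0, 2.0]), np.array([2.0, 1.0])))    # False
```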
Worked example 1: Combine vectors and check a span
Problem: let

$$u = \begin{pmatrix} 2 \\ -1 \\ 3 \end{pmatrix}, \qquad v = \begin{pmatrix} 0 \\ 4 \\ 1 \end{pmatrix}, \qquad w = \begin{pmatrix} 5 \\ 1 \\ 4 \end{pmatrix}.$$

Compute $3u - 2v$, and decide whether $w$ lies in $\operatorname{span}\{u, v\}$.

Step 1: compute the linear combination componentwise.

$$3u - 2v = \begin{pmatrix} 6 \\ -3 \\ 9 \end{pmatrix} - \begin{pmatrix} 0 \\ 8 \\ 2 \end{pmatrix} = \begin{pmatrix} 6 \\ -11 \\ 7 \end{pmatrix}.$$

Step 2: set up the span equation $a\,u + b\,v = w$.

Step 3: compare components:

$$2a = 5, \qquad -a + 4b = 1, \qquad 3a + b = 4.$$

The first equation gives $a = 5/2$. The second gives $b = \tfrac{1}{4}\left(1 + \tfrac{5}{2}\right) = 7/8$.

The third equation would require $b = 4 - 3a = 4 - \tfrac{15}{2} = -7/2$.

The two required values of $b$ disagree. Checked answer: $w$ is not in the span of $u$ and $v$.
Worked example 2: Parametrize a line and a plane
Problem: find a parametric equation for the line through

$$p_0 = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}$$

in the direction

$$d = \begin{pmatrix} 2 \\ -1 \\ 3 \end{pmatrix},$$

then test whether $q = (5, -2, 8)$ lies on it.

Step 1: write the vector equation.

$$x(t) = p_0 + t\,d = \begin{pmatrix} 1 + 2t \\ -t \\ 2 + 3t \end{pmatrix}.$$

Step 2: write component equations:

$$x = 1 + 2t, \qquad y = -t, \qquad z = 2 + 3t.$$

Step 3: test $q = (5, -2, 8)$. From $5 = 1 + 2t$, $t = 2$.

With $t = 2$, the $y$ coordinate is $-2$, and the $z$ coordinate is $2 + 6 = 8$. Both match $q$.

Checked answer: $q$ lies on the line, and it corresponds to $t = 2$.
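This membership test translates directly to code; a sketch with illustrative values $p_0 = (1, 0, 2)$, direction $d = (2, -1, 3)$, and query point $q = (5, -2, 8)$:

```python
import numpy as np

p0 = np.array([1.0, 0.0, 2.0])   # point on the line (illustrative values)
d = np.array([2.0, -1.0, 3.0])   # direction vector
q = np.array([5.0, -2.0, 8.0])   # point to test

# Solve the first component equation for t, then check all components
t = (q[0] - p0[0]) / d[0]
on_line = bool(np.allclose(p0 + t * d, q))
print(t, on_line)                # 2.0 True
```

Solving one component for $t$ and checking the rest mirrors the hand computation; for a direction with a zero first component, a different component would be used to solve for $t$.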
Code
```python
import numpy as np

u = np.array([2, -1, 3], dtype=float)
v = np.array([0, 4, 1], dtype=float)
w = np.array([5, 1, 4], dtype=float)

# Linear combination from worked example 1
print(3 * u - 2 * v)

# Span test: solve a*u + b*v = w in the least-squares sense
A = np.column_stack([u, v])
coeffs, residuals, rank, singular_values = np.linalg.lstsq(A, w, rcond=None)
print(coeffs)
print(A @ coeffs)
print(np.allclose(A @ coeffs, w))
print(rank, singular_values)
```
The least-squares call returns the coefficients of the closest vector in the span. `allclose` checks whether the match is exact up to floating-point tolerance; here it reports `False`, so the span equation is not exactly solvable.
Common pitfalls
- Treating vectors as points only. A vector can represent a point, displacement, data record, or coefficient list depending on context.
- Adding vectors of different dimensions. Operations in $\mathbb{R}^n$ require matching lengths.
- Confusing span with a finite list of combinations. Span contains all scalar combinations, usually infinitely many.
- Assuming two nonzero vectors in $\mathbb{R}^3$ span all of $\mathbb{R}^3$. Two directions can span at most a plane.
- Forgetting that the zero vector cannot determine a line direction by itself.
- Deciding span membership by visual guess instead of solving the coefficient equation.
A strong habit is to translate every span question into a linear system. To ask whether $w$ lies in the span of $v_1, \dots, v_k$, set

$$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = w$$
and solve for the coefficients. If the system is consistent, the vector is in the span. If it is inconsistent, the vector is outside the span. This method scales from pictures in to high-dimensional data where visualization is impossible.
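One sketch of this consistency check compares ranks: appending $w$ as an extra column leaves the rank unchanged exactly when the system is consistent. Reusing `u`, `v`, `w` from the Code section:

```python
import numpy as np

u = np.array([2.0, -1.0, 3.0])
v = np.array([0.0, 4.0, 1.0])
w = np.array([5.0, 1.0, 4.0])

# w is in span{u, v} exactly when appending w as a column
# does not increase the rank
rank_A = np.linalg.matrix_rank(np.column_stack([u, v]))
rank_Aw = np.linalg.matrix_rank(np.column_stack([u, v, w]))
print(rank_A, rank_Aw, rank_A == rank_Aw)  # 2 3 False
```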
Linear combinations also provide the first intuition for coordinates. In the standard basis, the coordinates of are its entries. In a nonstandard basis, coordinates are the weights needed to build the same vector from different basis directions. This distinction becomes essential when studying change of basis, diagonalization, and linear transformations.
The zero vector deserves special attention. It belongs to every span because all coefficients can be chosen as zero. It is also linearly dependent with any list that contains it, because $1 \cdot \mathbf{0} + 0 \cdot v_1 + \cdots + 0 \cdot v_k = \mathbf{0}$ is a nontrivial relation. But it cannot serve as a direction vector for a line, since $p_0 + t\,\mathbf{0} = p_0$ never moves away from the starting point.
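Both facts about the zero vector can be checked numerically; a small sketch with an arbitrary example vector:

```python
import numpy as np

zero = np.zeros(3)
v1 = np.array([2.0, -1.0, 3.0])

# The zero vector is in every span: choose all coefficients zero
print(np.allclose(0 * v1, zero))  # True

# A list containing the zero vector is dependent:
# the rank of [v1, 0] is less than the number of columns
A = np.column_stack([v1, zero])
print(np.linalg.matrix_rank(A) < A.shape[1])  # True
```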
In applications, vector entries should have a consistent interpretation. A vector might represent measurements, but adding two measurement vectors only makes sense if corresponding entries refer to the same quantities and units. Linear algebra supplies formal operations; modeling supplies the meaning of the components.
Vector equations are often cleaner than coordinate equations. The parametric line

$$x(t) = p_0 + t\,d$$

encodes three component equations in $\mathbb{R}^3$ and makes the direction visible. Similarly, a plane equation using two direction vectors shows immediately which movements stay inside the plane. Coordinate equations are useful for solving; vector equations are useful for structure.
The idea of a linear combination is the seed of matrix multiplication. If $A$ has columns $a_1, \dots, a_n$, then

$$A x = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n.$$

Thus every matrix-vector product is a column combination. This viewpoint makes systems, column spaces, rank, and linear transformations feel like continuations of vector arithmetic rather than separate topics.
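The column-combination view can be confirmed directly; a sketch with an arbitrary $3 \times 2$ matrix:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])
x = np.array([2.0, -1.0])

# A @ x equals x_1 * (column 1) + x_2 * (column 2)
by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]
print(np.allclose(A @ x, by_columns))  # True
```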
Magnitude and direction should also be kept separate. Scaling by a positive scalar changes length but not direction; scaling by a negative scalar reverses direction; adding vectors changes both. Later, normalization will create unit vectors that preserve direction while simplifying length calculations.
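Normalization, previewed here, is a one-line computation; a sketch with the classic $(3, 4)$ example:

```python
import numpy as np

v = np.array([3.0, 4.0])

# Dividing by the norm preserves direction and sets the length to 1
unit = v / np.linalg.norm(v)   # (0.6, 0.8)
print(unit, np.linalg.norm(unit))
```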
A final useful distinction is between equality of vectors and equality of representations. In $\mathbb{R}^n$, equality means every corresponding component is equal. But the same geometric vector may be represented with different coordinates after a change of basis. The entries are not the essence of the vector; they are its coordinates in a chosen coordinate system.
Many later computations are just organized vector arithmetic. Solving a system asks whether one vector is a linear combination of others. Finding a basis asks for a nonredundant spanning list. Orthogonal projection asks for the closest vector inside a span. Keeping the basic vector operations clear makes those later topics much easier to connect.
Before moving on, be able to move between three descriptions of the same object: a column of entries, an arrow or displacement, and a linear combination of basis vectors. Most later errors come from forgetting which description is being used in a given argument.
Also practice checking dimensions aloud: only vectors in the same $\mathbb{R}^n$ can be added, and only matching component positions should be compared or combined.
That discipline prevents silent coordinate mistakes.