Summarizing Distributions
Numerical summaries compress a distribution into a few interpretable quantities. The Lane text treats central tendency, variability, percentiles, and shapes of distributions as foundational because later inference depends on them. A confidence interval for a mean uses $\bar{x}$ and $s$; a z-score uses a mean and a standard deviation; an ANOVA partitions variability; regression measures how much variability is explained by a line.
Compression always loses information. A mean does not show skewness, a standard deviation does not show outliers, and a percentile does not show whether nearby values are common or rare. Good summary work therefore pairs numbers with a graph and chooses statistics whose meanings fit the distribution shape and measurement level.
Definitions
The mean of $n$ observations $x_1, x_2, \dots, x_n$ is

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$
It is the balance point of the data and uses every value. The median is the middle value after sorting, or the average of the two middle values when $n$ is even. It is resistant to extreme values. The mode is the most frequent value or category. A distribution can be unimodal, bimodal, multimodal, or have no repeated value.
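A quick check of these three measures, using Python's standard library on a small dataset (the values here are made up purely for illustration):

```python
import statistics

data = [2, 3, 3, 5, 7, 9]  # hypothetical sample, n = 6 (even)

mean = statistics.mean(data)        # balance point: 29 / 6
median = statistics.median(data)    # average of 3 and 5, since n is even
modes = statistics.multimode(data)  # most frequent value(s)

print(mean, median, modes)
```

With an even number of observations, `statistics.median` averages the two middle sorted values, matching the definition above.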
The range is $\max - \min$. The interquartile range is $\mathrm{IQR} = Q_3 - Q_1$, the width of the middle half of the data. The sample variance is

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$$

and the sample standard deviation is $s = \sqrt{s^2}$. The denominator $n - 1$ gives the usual unbiased estimator of population variance under random sampling.
A percentile indicates relative standing. The 80th percentile is a value at or below which about 80% of observations fall. A z-score standardizes an observation:

$$z = \frac{x - \mu}{\sigma}$$

for a population, or approximately

$$z \approx \frac{x - \bar{x}}{s}$$

when using sample summaries descriptively. Positive z-scores are above the mean; negative z-scores are below.
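Both ideas can be sketched with NumPy; the score vector below is hypothetical:

```python
import numpy as np

scores = np.array([55, 60, 62, 70, 74, 78, 80, 85, 90, 95])  # hypothetical scores

# 80th percentile: a value with about 80% of the data at or below it
p80 = np.percentile(scores, 80)

# descriptive z-scores built from sample summaries
z = (scores - scores.mean()) / scores.std(ddof=1)

print(p80)
print(np.round(z, 2))
```

By construction the z-scores average to zero, and the positive ones mark scores above the class mean.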
Skewness describes asymmetry. A right-skewed distribution has a long right tail and often has mean greater than median. A left-skewed distribution has a long left tail and often has mean less than median. Kurtosis describes tail heaviness or peakedness relative to a reference distribution, though in introductory work it is more important to recognize outliers and tail behavior visually than to memorize a single kurtosis rule.
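The mean-versus-median pattern in skewed data is easy to demonstrate with a simulated right-skewed sample (an exponential distribution, chosen here only because it has a long right tail):

```python
import numpy as np

rng = np.random.default_rng(1)
right_skewed = rng.exponential(scale=10, size=50_000)  # long right tail

print(right_skewed.mean())      # pulled upward by the tail
print(np.median(right_skewed))  # resistant; sits below the mean here
```

For this distribution the few large values drag the mean well above the median, the signature of right skew described above.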
Key results
The mean is sensitive to linear transformations. If every observation is transformed by $y_i = a + b x_i$, then

$$\bar{y} = a + b\,\bar{x}.$$

The standard deviation is affected by scale but not by shifts:

$$s_y = |b|\, s_x.$$
Adding 10 points to every exam score raises the mean and median by 10 but leaves the standard deviation and IQR unchanged. Multiplying every measurement by 2 doubles the mean, median, standard deviation, and IQR.
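These shift and scale rules can be verified numerically; the exam-score vector below is hypothetical:

```python
import numpy as np

scores = np.array([62.0, 70.0, 75.0, 81.0, 92.0])  # hypothetical exam scores

shifted = scores + 10  # add 10 points to everyone
scaled = 2 * scores    # double every measurement

# Shift: center moves by 10, spread unchanged
assert np.isclose(shifted.mean(), scores.mean() + 10)
assert np.isclose(shifted.std(ddof=1), scores.std(ddof=1))

# Scale: both center and spread double
assert np.isclose(scaled.mean(), 2 * scores.mean())
assert np.isclose(scaled.std(ddof=1), 2 * scores.std(ddof=1))
```

The assertions pass for any data vector, since they are just the $\bar{y} = a + b\bar{x}$ and $s_y = |b|s_x$ rules with $(a, b) = (10, 1)$ and $(0, 2)$.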
The sample variance can be computed from deviations:

$$s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}, \qquad \sum_{i=1}^{n}(x_i - \bar{x}) = 0.$$
This identity explains why squared deviations are used: simple deviations always cancel around the mean. Squaring makes distances positive and gives larger penalties to observations far from the center.
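The cancellation claim is easy to confirm numerically: deviations from the mean always sum to zero (up to floating-point error), so an unsquared "average deviation" carries no information about spread. The data values below are arbitrary:

```python
import numpy as np

x = np.array([4.0, 8.0, 15.0, 16.0, 23.0, 42.0])  # arbitrary values
dev = x - x.mean()

print(dev.sum())         # ~0: plain deviations cancel around the mean
print((dev ** 2).sum())  # squared deviations accumulate, weighting far points heavily
```

The squared-deviation total is what the variance formula divides by $n - 1$.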
For symmetric unimodal data without strong outliers, mean and standard deviation are usually informative. For skewed data or data with outliers, the median and IQR often describe typical values better. For nominal categorical data, the mode and proportions are meaningful, while means and standard deviations of arbitrary category codes are not.
The empirical rule applies approximately to bell-shaped distributions: about 68% of observations fall within one standard deviation of the mean, about 95% within two standard deviations, and about 99.7% within three.
This rule is descriptive and approximate. It should not be applied blindly to strongly skewed, bounded, discrete, or multimodal distributions.
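A sketch of the rule on simulated bell-shaped data (this assumes a normal sample; the observed fractions are approximate and depend on the seed):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=50, scale=10, size=100_000)  # simulated bell-shaped data

for k in (1, 2, 3):
    inside = np.mean(np.abs(x - x.mean()) <= k * x.std())
    print(f"within {k} sd: {inside:.3f}")  # expect roughly 0.683, 0.954, 0.997
```

Running the same check on the skewed or multimodal data warned about above would show fractions that drift well away from 68–95–99.7.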
Summary choice should also follow the decision being made. If a city reports household income to describe the "typical" resident, the median is often preferred because a few very high incomes can pull the mean upward. If a factory monitors fill weights from a stable machine, the mean and standard deviation are often more useful because symmetric random variation around a target is expected. If a teacher reports class performance, the median, quartiles, and score distribution may be more informative than the mean alone. The right summary is the one that preserves the feature of the data most relevant to the question while making its limitations visible.
For any numerical summary, attach units and sample context. A standard deviation of 4 means very different things if the unit is seconds, dollars, kilograms, or points on a 5-point scale. Likewise, a median computed from 12 observations should be presented more cautiously than a median computed from 12,000 observations collected by a careful sampling design.
Visual
| Summary | Formula or rule | Resistant to outliers? | Best use |
|---|---|---|---|
| Mean | $\bar{x} = \frac{1}{n}\sum x_i$ | No | Symmetric quantitative data |
| Median | middle sorted value | Yes | Skewed quantitative or ordinal data |
| Mode | most frequent value | Often | Categorical data, repeated values |
| Range | $\max - \min$ | No | Quick total spread |
| IQR | $Q_3 - Q_1$ | Yes | Middle spread, box plots |
| Standard deviation | $s = \sqrt{\frac{\sum (x_i - \bar{x})^2}{n-1}}$ | No | Typical distance for roughly symmetric data |
| z-score | $z = \frac{x - \bar{x}}{s}$ | No | Relative standing on a common scale |
Mean vs median in a right-skewed distribution
frequency
^
| #####
| #########
| ############
| ##########
| ######
| ###
| #
+--------------------------------> value
median mean
Worked example 1: Mean, median, variance, and standard deviation
Problem: A small lab records the number of minutes needed to process seven samples: 9, 10, 10, 11, 12, 13, 19.
Find the mean, median, sample variance, and sample standard deviation. Comment on the high value 19.
Method:
- Add the observations: $9 + 10 + 10 + 11 + 12 + 13 + 19 = 84$.
- Divide by $n = 7$: $\bar{x} = 84/7 = 12$ minutes.
- The data are already sorted. With seven values, the median is the 4th value: $\text{median} = 11$ minutes.
- Compute deviations from the mean and square them:

| $x_i$ | $x_i - \bar{x}$ | $(x_i - \bar{x})^2$ |
|---|---|---|
| 9 | -3 | 9 |
| 10 | -2 | 4 |
| 10 | -2 | 4 |
| 11 | -1 | 1 |
| 12 | 0 | 0 |
| 13 | 1 | 1 |
| 19 | 7 | 49 |

- Sum squared deviations: $\sum (x_i - \bar{x})^2 = 68$.
- Divide by $n - 1 = 6$: $s^2 = 68/6 \approx 11.33$.
- Take the square root: $s = \sqrt{11.33} \approx 3.37$.
Answer: The mean is 12 minutes, the median is 11 minutes, the sample variance is about 11.33 square minutes, and the sample standard deviation is about 3.37 minutes. The value 19 pulls the mean above the median and contributes $49/68 \approx 72\%$ of the squared-deviation total, so it strongly affects the standard deviation.
Checked answer: The deviations add to $(-3) + (-2) + (-2) + (-1) + 0 + 1 + 7 = 0$, which confirms the mean arithmetic.
Worked example 2: Percentiles and z-scores
Problem: A student scored 86 on an exam. The class mean was 74 and the sample standard deviation was 8. Another exam in a different course had mean 62 and standard deviation 12, and the same student scored 80. On which exam was the student farther above the class average?
Method:
- Standardize the first score: $z_1 = \frac{86 - 74}{8} = \frac{12}{8} = 1.5$.
- Standardize the second score: $z_2 = \frac{80 - 62}{12} = \frac{18}{12} = 1.5$.
- Compare the z-scores, not the raw score differences.
Answer: The student was equally far above average on both exams: 1.5 standard deviations above the mean. The first raw difference was 12 points and the second was 18 points, but the second course had more spread, so the standardized standing is the same.
Checked answer: Both standardized differences reduce to $z = 1.5$. If the distributions are roughly bell-shaped, a score 1.5 standard deviations above the mean is around the upper tail but not extremely rare.
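The comparison in this example reduces to two one-line standardizations (the helper function here is just for illustration):

```python
def z_score(x, mean, sd):
    """Standardize a raw score: distance from the mean in sd units."""
    return (x - mean) / sd

z1 = z_score(86, 74, 8)   # first exam
z2 = z_score(80, 62, 12)  # second exam
print(z1, z2)             # both 1.5: equal relative standing
```

Comparing raw differences (12 vs 18 points) would have been misleading precisely because the two exams have different spreads.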
Code
```python
import numpy as np
from scipy import stats

x = np.array([9, 10, 10, 11, 12, 13, 19])

mean = x.mean()
median = np.median(x)
sample_var = x.var(ddof=1)  # denominator n - 1
sample_sd = x.std(ddof=1)
z_scores = (x - mean) / sample_sd

print({"mean": mean, "median": median, "variance": sample_var, "sd": sample_sd})
print("z-scores:", np.round(z_scores, 2))
print("skewness:", stats.skew(x, bias=False))
print("kurtosis excess:", stats.kurtosis(x, bias=False))
```
The argument ddof=1 requests the sample variance denominator $n - 1$. Without it, NumPy uses the population denominator $n$, which is appropriate only when the data are the whole population being summarized.
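A minimal illustration of the two denominators, using the lab data from worked example 1:

```python
import numpy as np

x = np.array([9, 10, 10, 11, 12, 13, 19])

ss = ((x - x.mean()) ** 2).sum()         # sum of squared deviations = 68
print(x.var(ddof=1), ss / (len(x) - 1))  # sample variance: denominator n - 1
print(x.var(ddof=0), ss / len(x))        # population variance: denominator n
```

With only seven observations the two answers differ noticeably (68/6 versus 68/7); the gap shrinks as $n$ grows.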
Common pitfalls
- Reporting only the mean for a skewed distribution where the median better represents a typical case.
- Forgetting that variance is in squared units while standard deviation is in the original units.
- Mixing population and sample formulas without thinking about whether the data are a full population or a sample.
- Treating percentiles as percentages correct. A percentile is a location in a distribution, not a test score scale.
- Applying the empirical rule to a strongly skewed or multimodal distribution.
- Deleting high or low observations because they affect the mean, instead of investigating whether they are valid.