ANOVA
Analysis of variance, or ANOVA, compares means by partitioning variability. Instead of testing every pair of means separately at the start, a one-way ANOVA asks whether group membership explains a meaningful share of the total variation in a quantitative response. The Lane text presents ANOVA after tests for means because ANOVA generalizes mean comparison to three or more groups and to factorial designs.
The name can be confusing: ANOVA tests hypotheses about means by analyzing variances. If group means are far apart relative to variation within groups, the between-group mean square becomes large compared with the within-group mean square, producing a large $F$ statistic. If group means differ only by the amount expected from within-group noise, the $F$ statistic tends to be near 1.
Definitions
In a one-way ANOVA, one categorical explanatory variable, called a factor, has levels or groups. The response variable is quantitative. The null hypothesis is

$$H_0: \mu_1 = \mu_2 = \cdots = \mu_k$$
The alternative is that not all group means are equal.
Let $y_{ij}$ be observation $i$ in group $j$. Let $\bar{y}_j$ be the mean of group $j$, $n_j$ be its sample size, and $\bar{y}$ be the grand mean across all $N$ observations. Total variation is measured by

$$SS_{\text{total}} = \sum_{j=1}^{k} \sum_{i=1}^{n_j} (y_{ij} - \bar{y})^2$$

Between-group variation is

$$SS_{\text{between}} = \sum_{j=1}^{k} n_j (\bar{y}_j - \bar{y})^2$$

Within-group variation, also called error variation, is

$$SS_{\text{within}} = \sum_{j=1}^{k} \sum_{i=1}^{n_j} (y_{ij} - \bar{y}_j)^2$$

These satisfy the decomposition

$$SS_{\text{total}} = SS_{\text{between}} + SS_{\text{within}}$$

Mean squares divide sums of squares by degrees of freedom:

$$MS_{\text{between}} = \frac{SS_{\text{between}}}{k - 1}, \qquad MS_{\text{within}} = \frac{SS_{\text{within}}}{N - k}$$

The ANOVA test statistic is

$$F = \frac{MS_{\text{between}}}{MS_{\text{within}}}$$

Under the null hypothesis and assumptions, $F$ follows an $F$ distribution with $k - 1$ and $N - k$ degrees of freedom.
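The sums of squares and the $F$ statistic can be computed directly from these definitions. A minimal sketch, using a small made-up dataset (group labels and values are illustrative only):

```python
from statistics import mean

# Hypothetical dataset: three groups of a quantitative response.
groups = {
    "g1": [4.0, 5.0, 6.0],
    "g2": [7.0, 8.0, 9.0],
    "g3": [4.0, 6.0, 8.0],
}

all_obs = [y for ys in groups.values() for y in ys]
grand = mean(all_obs)       # grand mean across all N observations
k = len(groups)             # number of groups
N = len(all_obs)            # total sample size

# Between-group sum of squares: group size times squared mean deviation.
ss_between = sum(len(ys) * (mean(ys) - grand) ** 2 for ys in groups.values())
# Within-group sum of squares: deviations from each group's own mean.
ss_within = sum((y - mean(ys)) ** 2 for ys in groups.values() for y in ys)

ms_between = ss_between / (k - 1)
ms_within = ss_within / (N - k)
f_stat = ms_between / ms_within
print(ss_between, ss_within, f_stat)
```

The same decomposition holds for any one-way layout: $SS_{\text{between}} + SS_{\text{within}}$ always reproduces the total sum of squares.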
Key results
ANOVA assumptions include independent observations, approximately normal response distributions within groups or sufficient sample sizes, and reasonably similar within-group variances for the classical fixed-effects one-way ANOVA. The method is fairly robust to mild normality departures when group sizes are balanced, but strong skewness, outliers, dependence, or severe unequal variances can distort conclusions.
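One common informal check on the similar-variances assumption (a diagnostic choice of this sketch, not prescribed by the text) is Levene's test, applied here to the scores from the worked example below:

```python
from scipy import stats

# Scores for three groups (same numbers as the worked example below).
a = [78, 82, 80, 79]
b = [85, 86, 84, 87]
c = [88, 90, 92, 91]

# Levene's test: a small p-value would flag unequal within-group variances.
stat, p_value = stats.levene(a, b, c)
print(stat, p_value)
```

A large p-value here does not prove the variances are equal; with small groups the test has little power, so it should supplement, not replace, a look at the group spreads.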
A significant one-way ANOVA tells us that at least one population mean differs from another. It does not identify which means differ. Follow-up comparisons, such as Tukey's HSD, planned contrasts, or adjusted pairwise tests, are needed for specific group differences.
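A sketch of a Tukey HSD follow-up using statsmodels, with hypothetical scores for three groups:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical scores in three groups (illustrative numbers only).
scores = np.array([78, 82, 80, 79, 85, 86, 84, 87, 88, 90, 92, 91])
groups = ["A"] * 4 + ["B"] * 4 + ["C"] * 4

# Tukey's HSD controls the family-wise error rate across all pairwise comparisons.
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())
```

The summary reports each pairwise mean difference with a simultaneous confidence interval and a reject/fail-to-reject decision, which is exactly the information the omnibus test omits.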
Two-way ANOVA includes two factors and can test main effects and an interaction. A main effect asks whether the mean response differs across levels of one factor, averaging over the other factor. An interaction asks whether the effect of one factor depends on the level of the other. Interactions are often more scientifically interesting than main effects because they reveal conditional patterns.
An effect-size measure for one-way ANOVA is eta squared:

$$\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}}$$
It estimates the proportion of observed total variation accounted for by group membership. Like all effect sizes, it should be interpreted in context, not by a universal label alone.
ANOVA is closely related to regression. A one-way ANOVA can be written as a linear model with indicator variables for the groups, and the omnibus test compares a model with group indicators against a model with only an intercept. This connection matters because it unifies methods that can otherwise seem separate. Regression, ANOVA, and many designed-experiment analyses all ask whether a model explains enough variation to justify its complexity. The difference is mainly in how predictors are coded and which comparisons are planned in advance.
Planned comparisons should be distinguished from exploratory comparisons. If researchers specify before collecting data that Method C should exceed the average of Methods A and B, that contrast can be tested directly. If researchers inspect all group means and then test only the largest-looking difference, the Type I error rate is no longer the one advertised by a single comparison. ANOVA workflows therefore need both statistical calculation and a transparent comparison plan.
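A planned contrast like "Method C exceeds the average of Methods A and B" can be tested directly on the fitted model. A sketch using the worked example's scores, with the contrast treated as a hypothetical pre-registered plan:

```python
import pandas as pd
from statsmodels.formula.api import ols

# Scores from the worked example; the contrast is a hypothetical planned comparison.
scores = [78, 82, 80, 79, 85, 86, 84, 87, 88, 90, 92, 91]
method = ["A"] * 4 + ["B"] * 4 + ["C"] * 4
df = pd.DataFrame({"score": scores, "method": method})

model = ols("score ~ C(method)", data=df).fit()
# With treatment coding the parameters are (Intercept, beta_B, beta_C), so
# mean_C - (mean_A + mean_B)/2 equals beta_C - 0.5 * beta_B.
contrast = model.t_test([0.0, -0.5, 1.0])
print(contrast)
```

Because this single contrast was specified in advance, it carries the advertised Type I error rate; testing the same comparison after inspecting all the means would not.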
Visual
| Source | Sum of squares | df | Mean square | Role |
|---|---|---|---|---|
| Between groups | $SS_{\text{between}}$ | $k - 1$ | $MS_{\text{between}} = SS_{\text{between}} / (k - 1)$ | variation explained by group |
| Within groups | $SS_{\text{within}}$ | $N - k$ | $MS_{\text{within}} = SS_{\text{within}} / (N - k)$ | residual variation |
| Total | $SS_{\text{total}}$ | $N - 1$ | not used in $F$ | total observed variation |
Worked example 1: One-way ANOVA by hand
Problem: Three study methods are compared with four students per method. Scores are:
| Method A | Method B | Method C |
|---|---|---|
| 78 | 85 | 88 |
| 82 | 86 | 90 |
| 80 | 84 | 92 |
| 79 | 87 | 91 |
Compute the ANOVA table and test whether the means differ.
Method:
- Group means: $\bar{y}_A = 79.75$, $\bar{y}_B = 85.5$, $\bar{y}_C = 90.25$
- Grand mean: $\bar{y} = 1022 / 12 \approx 85.17$
- Between-group sum of squares:
$$SS_{\text{between}} = 4\left[(79.75 - 85.17)^2 + (85.5 - 85.17)^2 + (90.25 - 85.17)^2\right] \approx 221.17$$
- Within-group sum of squares:
For A, squared deviations from 79.75 are $3.0625$, $5.0625$, $0.0625$, $0.5625$, summing to 8.75.
For B, squared deviations from 85.5 are $0.25$, $0.25$, $2.25$, $2.25$, summing to 5.00.
For C, squared deviations from 90.25 are $5.0625$, $0.0625$, $3.0625$, $0.5625$, summing to 8.75.
Thus $SS_{\text{within}} = 8.75 + 5.00 + 8.75 = 22.5$.
- Degrees of freedom: $k - 1 = 2$ and $N - k = 12 - 3 = 9$
- Mean squares: $MS_{\text{between}} = 221.17 / 2 \approx 110.58$ and $MS_{\text{within}} = 22.5 / 9 = 2.5$
- $F$ statistic: $F = MS_{\text{between}} / MS_{\text{within}} = 110.58 / 2.5 \approx 44.23$
Answer: The $F$ statistic is about 44.23 with $2$ and $9$ degrees of freedom, giving a very small p-value. Reject the null hypothesis of equal means. The study methods differ in mean score.
Checked answer: Within-group variation is small compared with the separation among group means, so a large $F$ statistic is expected.
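The hand computation can be confirmed with SciPy's one-way ANOVA routine:

```python
from scipy import stats

# The three study-method groups from the worked example.
a = [78, 82, 80, 79]
b = [85, 86, 84, 87]
c = [88, 90, 92, 91]

# f_oneway returns the F statistic and its p-value for a one-way ANOVA.
f_stat, p_value = stats.f_oneway(a, b, c)
print(f_stat, p_value)
```

The reported $F$ agrees with the by-hand value of about 44.23, and the p-value is far below conventional significance levels.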
Worked example 2: Interpreting a two-way interaction
Problem: A two-way study compares mean task score by training type, online versus in-person, and practice schedule, spaced versus massed. The cell means are:
| Spaced practice | Massed practice | |
|---|---|---|
| Online | 84 | 78 |
| In-person | 88 | 87 |
Decide whether the pattern suggests an interaction.
Method:
- Compute the effect of practice schedule within online training: $84 - 78 = 6$.
Spaced practice is 6 points higher than massed practice online.
- Compute the effect of practice schedule within in-person training: $88 - 87 = 1$.
Spaced practice is 1 point higher than massed practice in person.
- Compare these simple effects: $6 - 1 = 5$.
- Because the practice effect differs by training type, the means suggest an interaction.
Answer: Yes, the table suggests an interaction: spaced practice appears much more beneficial in the online condition than in the in-person condition. Formal significance would require the within-cell variability, sample sizes, and ANOVA interaction test.
Checked answer: Main effects alone would average over the other factor and could hide the conditional pattern. The interaction check compares differences of differences.
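The difference-of-differences check from this example is a few lines of arithmetic:

```python
# Cell means from the two-way example.
cell_means = {
    ("online", "spaced"): 84, ("online", "massed"): 78,
    ("in_person", "spaced"): 88, ("in_person", "massed"): 87,
}

# Simple effect of practice schedule within each training type.
online_effect = cell_means[("online", "spaced")] - cell_means[("online", "massed")]
in_person_effect = cell_means[("in_person", "spaced")] - cell_means[("in_person", "massed")]

# The interaction contrast is the difference of these simple effects.
interaction = online_effect - in_person_effect
print(online_effect, in_person_effect, interaction)
```

A nonzero interaction contrast in the sample suggests an interaction, but as the answer notes, judging whether it is statistically significant requires the within-cell variability and a formal test.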
Code
```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Scores from the worked example, grouped by study method.
scores = [78, 82, 80, 79, 85, 86, 84, 87, 88, 90, 92, 91]
method = ["A"] * 4 + ["B"] * 4 + ["C"] * 4
df = pd.DataFrame({"score": scores, "method": method})

# Fit the one-way ANOVA as a linear model with group indicators.
model = ols("score ~ C(method)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)

# Eta squared: proportion of total variation explained by group membership.
ss_between = anova_table.loc["C(method)", "sum_sq"]
ss_total = ss_between + anova_table.loc["Residual", "sum_sq"]
print("eta squared:", ss_between / ss_total)
```
Statsmodels uses formula notation: `C(method)` tells the model to treat `method` as a categorical factor rather than a numeric predictor.
Common pitfalls
- Running many unadjusted pairwise tests instead of starting with a planned ANOVA or adjusted comparisons.
- Thinking a significant ANOVA identifies exactly which groups differ.
- Ignoring interactions in two-way designs and interpreting main effects alone.
- Forgetting that ANOVA assumes independent observations within and across groups.
- Treating ordinal ratings with few levels as if they were precise normal measurements without checking whether that is defensible.
- Reporting only the p-value and omitting group means, variability, and an effect-size measure.