Bayesian Learning
Bayesian learning is both an algorithmic family and an analytical lens. Mitchell uses it to describe classifiers that explicitly compute probabilities, but also to explain why some non-Bayesian procedures can be interpreted as optimizing a probabilistic objective. This chapter links prior beliefs, observed data, likelihood, posterior probability, maximum a posteriori hypotheses, maximum likelihood hypotheses, and minimum description length.
Historically, this treatment is important because it places machine learning inside probability theory rather than only symbolic search or numerical optimization. Modern probabilistic modeling, Bayesian networks, topic models, Gaussian mixtures, and calibrated classifiers all build on the same foundations, even when implemented with newer algorithms.
Definitions
Bayes' theorem states:

$$P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)}$$

Here $h$ is a hypothesis and $D$ is observed data.
| Term | Meaning |
|---|---|
| $P(h)$ | Prior probability of hypothesis $h$ before observing $D$ |
| $P(D \mid h)$ | Likelihood of observing $D$ if $h$ were true |
| $P(D)$ | Evidence or marginal likelihood of the data |
| $P(h \mid D)$ | Posterior probability of $h$ after observing $D$ |
A maximum a posteriori (MAP) hypothesis is:

$$h_{MAP} = \arg\max_{h \in H} P(h \mid D) = \arg\max_{h \in H} \frac{P(D \mid h)\,P(h)}{P(D)}$$

Because $P(D)$ is constant with respect to $h$:

$$h_{MAP} = \arg\max_{h \in H} P(D \mid h)\,P(h)$$

A maximum likelihood (ML) hypothesis is:

$$h_{ML} = \arg\max_{h \in H} P(D \mid h)$$
Maximum likelihood is the special case of MAP when all hypotheses have equal prior probability.
The Bayes optimal classifier predicts the class with greatest posterior probability after averaging across all hypotheses:

$$v_{OB} = \arg\max_{v_j \in V} \sum_{h_i \in H} P(v_j \mid h_i)\,P(h_i \mid D)$$
The Gibbs algorithm samples one hypothesis from the posterior and uses it to classify. Mitchell presents it because, under suitable assumptions, it has an expected error at most twice that of the Bayes optimal classifier.
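A minimal sketch of both procedures over a small discrete hypothesis space; the posterior values and per-hypothesis predictions below are illustrative assumptions, not from the text:

import numpy as np

rng = np.random.default_rng(0)

# Assumed posterior P(h_i | D) over three hypotheses (illustrative numbers).
posterior = np.array([0.5, 0.3, 0.2])
# Each hypothesis's prediction for one query instance: 1 = positive, 0 = negative.
predictions = np.array([1, 0, 1])

# Bayes optimal: sum posterior mass behind each class, pick the heavier one.
p_positive = posterior[predictions == 1].sum()
bayes_optimal = 1 if p_positive >= 0.5 else 0

# Gibbs: draw one hypothesis from the posterior and classify with it alone.
sampled = rng.choice(len(posterior), p=posterior)
gibbs_prediction = predictions[sampled]

print("Bayes optimal:", bayes_optimal)      # positive, backed by mass 0.7
print("Gibbs sample:", gibbs_prediction)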
Key results
Bayesian analysis clarifies concept learning. If the learner assumes deterministic, noise-free labels and assigns equal prior probability to each hypothesis, then all consistent hypotheses have equal posterior probability and all inconsistent hypotheses have posterior probability zero. Under those assumptions, the version space is precisely the set of hypotheses with nonzero posterior probability.
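A brute-force sketch of this correspondence, assuming a tiny domain of three instances where the hypothesis space is every possible labeling (an illustrative setup, not Mitchell's notation):

import itertools
import numpy as np

# Domain: 3 instances; hypothesis space: all 2^3 labelings of them.
H = list(itertools.product([0, 1], repeat=3))

# Observed noise-free labels for instances 0 and 1 (instance 2 unseen).
data = {0: 1, 1: 0}

# Uniform prior; likelihood is 1 if h matches every observed label, else 0.
prior = np.full(len(H), 1.0 / len(H))
likelihood = np.array([
    1.0 if all(h[i] == label for i, label in data.items()) else 0.0
    for h in H
])

scores = prior * likelihood
posterior = scores / scores.sum()

# The consistent hypotheses (the version space) share equal posterior mass;
# every inconsistent hypothesis gets posterior zero.
for h, p in zip(H, posterior):
    print(h, round(p, 3))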
MAP and ML also connect to common loss functions. Suppose examples have real-valued targets and independent Gaussian noise with constant variance:

$$d_i = f(x_i) + e_i, \qquad e_i \sim \mathcal{N}(0, \sigma^2)$$

Maximizing likelihood is equivalent to minimizing sum of squared errors:

$$h_{ML} = \arg\min_{h \in H} \sum_{i=1}^{m} \left( d_i - h(x_i) \right)^2$$
This gives a probabilistic justification for the squared-error objectives used with linear units and many neural-network examples.
For classification with Bernoulli outputs, maximizing likelihood leads to cross-entropy-style objectives. Mitchell's treatment anticipates modern logistic classification, although his notation is older and his applications smaller in scale.
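A minimal numeric sketch of that link, assuming each example's target is Bernoulli with a hypothesis-supplied probability (the targets and probabilities are illustrative):

import numpy as np

# Binary targets and one candidate hypothesis's predicted probabilities.
d = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.7, 0.6])

# Bernoulli log likelihood of the data under the hypothesis.
log_lik = np.sum(d * np.log(p) + (1 - d) * np.log(1 - p))

# Cross-entropy objective: the negative of the same quantity,
# so maximizing likelihood minimizes cross entropy.
cross_entropy = -log_lik

print("log likelihood:", log_lik)
print("cross entropy:", cross_entropy)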
Minimum description length (MDL) is another face of Bayesian preference. It chooses the hypothesis that gives the shortest combined encoding of the hypothesis and the data given the hypothesis:

$$h_{MDL} = \arg\min_{h \in H} \left[ L_{C_1}(h) + L_{C_2}(D \mid h) \right]$$
This connects Occam's razor to probability: simpler hypotheses are preferred when they compress the data well, not simply because they are short.
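A toy sketch of the MDL comparison, with made-up bit counts standing in for $L_{C_1}(h)$ and $L_{C_2}(D \mid h)$:

# (bits to encode hypothesis, bits to encode the data given the hypothesis)
candidates = {
    "simple_h": (10, 50),    # short hypothesis, but many unexplained examples
    "complex_h": (40, 5),    # longer hypothesis that explains the data well
}

# MDL picks the hypothesis minimizing total description length.
best = min(candidates, key=lambda name: sum(candidates[name]))
print("MDL choice:", best)   # complex_h: 40 + 5 = 45 bits beats simple_h's 60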
Bayesian learning also clarifies what it means to combine prior knowledge with data. The prior can encode a preference for simpler hypotheses, smoother functions, smaller weights, known causal structures, or expert rules. The likelihood measures how well each hypothesis predicts the observed evidence. A strong prior can dominate when data are scarce; abundant data can overwhelm a weak prior when the likelihood strongly favors another hypothesis.
The difference between MAP and Bayes optimal prediction is a difference between choosing and averaging. MAP chooses the single most probable hypothesis and predicts with it. Bayes optimal prediction averages over the posterior distribution of hypotheses. Averaging accounts for uncertainty, but it is often computationally expensive because it requires summing or integrating over many hypotheses. This is why MAP and approximations such as Gibbs sampling appear in practical discussions.
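A concrete contrast: suppose three hypotheses have posteriors $0.4$, $0.3$, and $0.3$, and the first predicts positive while the other two predict negative. The MAP hypothesis predicts positive, but the Bayes optimal prediction is negative, because $P(-\mid D) = 0.3 + 0.3 = 0.6$ outweighs $P(+\mid D) = 0.4$.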
Mitchell's connection between likelihood and squared error is a recurring bridge across chapters. The LMS rule from Chapter 1 and the delta rule from Chapter 4 can be interpreted as optimizing a likelihood under Gaussian noise assumptions. This does not mean squared error is always right. It means each loss function carries an implicit noise model. Choosing a loss is partly choosing a belief about how targets are generated around the ideal function.
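A small sketch of that point, using a standard fact not spelled out in the text: minimizing squared error over a constant prediction yields the mean (the Gaussian-noise ML answer), while minimizing absolute error yields the median (the Laplace-noise ML answer):

import numpy as np

# Targets with one outlier (illustrative data).
d = np.array([1.0, 1.1, 0.9, 1.0, 5.0])

print("squared-error minimizer (mean):", d.mean())       # pulled toward the outlier
print("absolute-error minimizer (median):", np.median(d))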
Bayesian classifiers also invite a distinction between decision making and probability estimation. A classifier may choose the right class even if its probabilities are poorly calibrated. Conversely, a well-calibrated model may sometimes choose the wrong class because the evidence is genuinely ambiguous. Mitchell's presentation of Bayes optimal classification focuses on minimizing expected classification error, but the posterior probabilities can also support decisions with unequal costs, such as medical screening where false negatives may be more expensive than false positives.
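A minimal sketch of a cost-sensitive decision, with an assumed posterior and an illustrative cost matrix:

import numpy as np

# Assumed posterior over classes for one patient: [P(disease | e), P(healthy | e)].
posterior = np.array([0.3, 0.7])

# cost[decision][true class]: a false negative (call diseased "healthy")
# is ten times as expensive as a false positive.
cost = np.array([
    [0.0, 1.0],    # decide "disease": cost 0 if diseased, 1 if healthy
    [10.0, 0.0],   # decide "healthy": cost 10 if diseased, 0 if healthy
])

# Choose the decision with minimum expected cost, not maximum probability.
expected_cost = cost @ posterior
decision = int(np.argmin(expected_cost))
print("expected costs:", expected_cost)                  # [0.7, 3.0]
print("decision:", ["disease", "healthy"][decision])     # "disease" despite P = 0.3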
The assumption that the hypothesis space is known is another important simplification. In practice, model structure, features, and priors are designed by the engineer. A Bayesian calculation can be exact relative to those choices and still miss the real data-generating process. That is why the Bayesian view complements, rather than replaces, evaluation on held-out data.
This is also why Bayesian results in the book often serve two roles at once. They give algorithms for learners that explicitly manipulate probabilities, and they give explanations for why other algorithms behave sensibly under certain assumptions. Reading the chapter both ways helps connect symbolic consistency, squared-error training, probabilistic classification, and Occam-style simplicity.
Visual
| Principle | Optimization form | Intuition | Mitchell connection |
|---|---|---|---|
| MAP | $\arg\max_{h} P(D \mid h)\,P(h)$ | Fit data while respecting prior belief | Bayesian concept learning |
| ML | $\arg\max_{h} P(D \mid h)$ | Choose the hypothesis that makes data most probable | Squared-error and likelihood derivations |
| Bayes optimal | $\arg\max_{v} \sum_{h} P(v \mid h)\,P(h \mid D)$ | Average predictions using posterior uncertainty instead of one winner | Theoretical ideal classifier |
| Gibbs | Sample from posterior | Cheaper randomized approximation | Error bound relative to Bayes optimal |
| MDL | $\arg\min_{h} \left[ L_{C_1}(h) + L_{C_2}(D \mid h) \right]$ | Prefer concise total explanations | Occam-style bias and pruning |
Worked example 1: Compute a MAP hypothesis
Problem: Two hypotheses $h_1$ and $h_2$ can explain a dataset $D$.
| Hypothesis | Prior $P(h)$ | Likelihood $P(D \mid h)$ |
|---|---|---|
| $h_1$ | 0.70 | 0.20 |
| $h_2$ | 0.30 | 0.60 |
Find the MAP hypothesis and normalized posterior probabilities.
Method:
- Compute unnormalized posterior scores.
- Compute evidence by summing scores over the two hypotheses.
- Normalize.
- Choose the largest posterior.
Answer: The unnormalized scores are $0.70 \times 0.20 = 0.14$ for $h_1$ and $0.30 \times 0.60 = 0.18$ for $h_2$, so the evidence is $P(D) = 0.32$. $h_2$ is the MAP hypothesis with posterior probability $0.18 / 0.32 \approx 0.56$. The data favor $h_2$ enough to overcome its smaller prior.
Worked example 2: Show why Gaussian noise gives squared error
Problem: Suppose $D = \{\langle x_i, d_i \rangle\}_{i=1}^{m}$ and $d_i = f(x_i) + e_i$ with independent Gaussian noise $e_i \sim \mathcal{N}(0, \sigma^2)$. Show why maximizing likelihood is equivalent to minimizing squared error.
Method:
- Write the likelihood for one example:

$$p(d_i \mid h) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(d_i - h(x_i))^2}{2\sigma^2} \right)$$

- Independence makes the full likelihood a product:

$$p(D \mid h) = \prod_{i=1}^{m} p(d_i \mid h)$$

- Take the log likelihood:

$$\ln p(D \mid h) = -\frac{m}{2} \ln(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{m} \left( d_i - h(x_i) \right)^2$$

- Separate constants from the hypothesis-dependent part.
- Maximize the log likelihood.

Since the leading constant and $\sigma^2$ do not depend on $h$, maximizing the log likelihood is equivalent to minimizing:

$$\sum_{i=1}^{m} \left( d_i - h(x_i) \right)^2$$
Answer: Under independent constant-variance Gaussian noise, maximum likelihood learning is least-squares learning.
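A quick numeric check of this equivalence, assuming a one-parameter hypothesis class $h_w(x) = w x$ and synthetic data (all values illustrative):

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
d = 2.0 * x + rng.normal(0, 0.1, size=x.shape)   # targets with Gaussian noise

ws = np.linspace(0, 4, 401)                      # candidate hypotheses h_w(x) = w * x
sse = np.array([np.sum((d - w * x) ** 2) for w in ws])
sigma2 = 0.1 ** 2
log_lik = -len(x) / 2 * np.log(2 * np.pi * sigma2) - sse / (2 * sigma2)

# The same w maximizes the log likelihood and minimizes squared error.
print("w maximizing likelihood:", ws[np.argmax(log_lik)])
print("w minimizing squared error:", ws[np.argmin(sse)])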
Code
import numpy as np

# Prior P(h) and likelihood P(D | h) for h1 and h2 from worked example 1.
priors = np.array([0.70, 0.30])
likelihoods = np.array([0.20, 0.60])

# Unnormalized posterior scores P(D | h) * P(h).
scores = priors * likelihoods
# Normalize by the evidence P(D), the sum of the scores.
posteriors = scores / scores.sum()

map_index = int(np.argmax(posteriors))
print("posterior probabilities:", posteriors)   # [0.4375, 0.5625]
print("MAP hypothesis:", f"h{map_index + 1}")   # h2
Common pitfalls
- Confusing $P(h \mid D)$ with $P(D \mid h)$. The posterior and likelihood answer different questions.
- Dropping the prior without noticing. Maximum likelihood is not always appropriate when prior knowledge is meaningful.
- Treating the evidence as irrelevant in all contexts. It is irrelevant for comparing fixed hypotheses by MAP, but important for normalized probabilities and model comparison.
- Saying "Bayesian" when only a point estimate is used. MAP is Bayesian in its objective, but it still returns one hypothesis.
- Interpreting MDL as "shortest hypothesis wins." MDL minimizes the code length of hypothesis plus remaining data, so an overly simple hypothesis can lose if it fails to explain the data.
- Assuming all posterior computations are tractable. Bayes optimal classification is often a theoretical standard rather than a practical algorithm.