Machine Learning

These notes follow the chapter scope of Tom M. Mitchell's Machine Learning (McGraw-Hill, 1997), a foundational pre-deep-learning textbook. The book presents machine learning as the study of programs that improve with experience, then develops that idea through symbolic concept learning, decision trees, neural networks, statistical evaluation, Bayesian learning, computational learning theory, instance-based methods, evolutionary search, rule learning, analytical learning, and reinforcement learning.

Mitchell's treatment sits historically before today's large neural networks, GPU training, foundation models, and deep reinforcement learning systems. That context matters: the examples are small and the algorithms are presented in their clean classical forms. Many of the ideas, however, remain central. Backpropagation is still backpropagation. Decision trees still matter. PAC learning and VC dimension still teach sample complexity. Bayesian modeling still provides a language for priors, likelihoods, and uncertainty. Reinforcement learning still revolves around delayed reward, exploration, and value functions.

Use this section as a classic machine-learning map. For modern extensions, cross-reference the deeper SJ Wiki sections on deep learning, reinforcement learning, data mining, probability, and statistics.

  1. Learning Problems and System Design
  2. Concept Learning and Version Spaces
  3. Decision Tree Learning
  4. Artificial Neural Networks
  5. Evaluating Hypotheses
  6. Bayesian Learning
  7. Bayesian Classifiers, Networks, and EM
  8. Computational Learning Theory
  9. Instance-Based Learning
  10. Genetic Algorithms
  11. Rule Learning and ILP
  12. Analytical Learning
  13. Combining Inductive and Analytical Learning
  14. Reinforcement Learning