Cambridge University Press, 2002. — 123 p. — ISBN: 0-521-81357-3.
The first version of these lecture notes was composed for a final-year undergraduate course at Chalmers University of Technology in the spring semester of 2000. I wrote a revised and expanded version for the same course one year later. This is the third and final (?) version.
The notes are intended to be sufficiently self-contained that they can be read without any supplementary material, by anyone who has previously taken (and passed) some basic course in probability or mathematical statistics, plus some introductory course in computer programming.
The core material falls naturally into two parts: Chapters 2–6 on the basic theory of Markov chains, and Chapters 7–13 on applications to a number of randomized algorithms.
Markov chains are a class of random processes exhibiting a certain “memoryless property”, and the study of these – sometimes referred to as Markov theory – is one of the main areas in modern probability theory. This area cannot be avoided by a student aiming to learn how to design and implement randomized algorithms, because Markov chains are a fundamental ingredient in the study of such algorithms. In fact, any randomized algorithm can (often fruitfully) be viewed as a Markov chain.
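As a minimal sketch of the memoryless property described above (not taken from the book): in a Markov chain, the distribution of the next state depends only on the current state, not on the path taken to reach it. The two-state weather chain and its transition probabilities below are hypothetical illustrations.

```python
import random

# Hypothetical two-state transition matrix: P[s][t] is the probability
# of moving from state s to state t in one step.
P = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.2, "sun": 0.8},
}

def step(state):
    """Take one step of the chain: the next state's distribution
    depends only on `state` (the memoryless property)."""
    u, acc = random.random(), 0.0
    for nxt, p in P[state].items():
        acc += p
        if u < acc:
            return nxt
    return nxt  # guard against floating-point rounding

def simulate(start, n, seed=0):
    """Simulate n steps from `start`, returning the full trajectory."""
    random.seed(seed)
    path = [start]
    for _ in range(n):
        path.append(step(path[-1]))
    return path

print(simulate("sun", 10))
```

The same step-by-step structure is what makes a randomized algorithm viewable as a Markov chain: the algorithm's state after each iteration, together with fresh randomness, determines the next state.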
1. Basics of probability theory
2. Markov chains
3. Computer simulation of Markov chains
4. Irreducible and aperiodic Markov chains
5. Stationary distributions
6. Reversible Markov chains
7. Markov chain Monte Carlo
8. Fast convergence of MCMC algorithms
9. Approximate counting
10. The Propp–Wilson algorithm
11. Sandwiching
12. Propp–Wilson with read-once randomness
13. Simulated annealing
14. Further reading