Lulu.com, 2008. — 250 p. — ISBN10: 0557019907; ISBN13: 978-0557019908.
There is no such thing as a law of averages. If you are watching a roulette wheel and it has just come up red twelve times in a row, then in no way is black “due” to show. That wheel has no memory; it cannot recall that it just came up red for so long. There are no hidden forces that will nudge it back to black. Evidence and logic tell us that the probability black will be next is the same no matter how many times we saw red.
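If you like, you can check this with a short simulation. The sketch below is my own illustration, not an example from the book: it assumes a wheel with 18 red, 18 black, and 1 green pocket, and compares the overall frequency of black with its frequency on the spin immediately following a run of twelve reds. Up to simulation noise, the two numbers agree.

import random

# Illustrative sketch (not from the book): a wheel with 18 red, 18 black,
# and 1 green pocket, spun independently each time.
COLORS = ["red"] * 18 + ["black"] * 18 + ["green"]

def estimate(n_spins=10_000_000, run_length=12):
    run = 0                       # consecutive reds seen so far
    after_run = black_after = 0   # spins observed right after a 12-red run
    total = black_total = 0       # all spins, for the unconditional rate
    for _ in range(n_spins):
        color = random.choice(COLORS)
        total += 1
        black_total += color == "black"
        if run >= run_length:     # the previous 12+ spins were all red
            after_run += 1
            black_after += color == "black"
        run = run + 1 if color == "red" else 0
    print(f"P(black), all spins: {black_total / total:.4f}")
    if after_run:
        # Runs of twelve reds are rare, so this estimate is noisier.
        print(f"P(black | twelve reds just ran): {black_after / after_run:.4f}")

estimate()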
Real-life events, like a ball landing on a certain color in roulette, will not always “even out.” Just because it’s possible to win the lottery does not mean, unfortunately, that if you keep playing you will eventually win. No, there is no law of averages. But there is such a thing as being too sure of yourself, as you will be if you try to make decisions under this mythical law.
You might then be surprised to learn that much of probability and statistics, as taught in college courses all over the world, is designed around a law-of-averages-like set of procedures. This means that if you use those traditional methods, then you will be too sure of your results, just as you were too certain that black would show next.
This book is different from others in two major ways. The first is its focus on what is called objective Bayesian probability. This is a logical, non-subjective, evidence-driven way to understand probability. Chapter 1 details the merits of this approach, and the demerits of older ideas.
The second difference is more difficult to explain, and it will become clearer as you progress. Briefly, to create a probability requires fixing certain mathematical objects called parameters. Almost every statistical method in use focuses solely on these parameters. But here’s the thing: these parameters do not exist; they cannot be measured, seen, touched, or tasted. Isn’t it strange, then, that nearly every statistical method in use is designed to make statements about parameters? This book will show you how to remove the influence of parameters and bring the focus back to reality, to real, tangible, measurable observables. Things you can touch and see. Doing so will give us a much clearer, and fairer, picture of what is going on in any problem.
Incidentally, in mathematical circles, this approach goes by the fancy term predictive inference (e.g. Geisser, 1993; Lee et al., 1996).
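To give a taste of the difference in a few lines of code, here is an illustrative sketch of the predictive idea under a flat prior; it is my own example, not one taken from the book. Having seen k successes in n past trials, a parameter-centric analysis reports an estimate of the unobservable success parameter; the predictive approach integrates that parameter out and states the probability of j successes in m future trials, a statement entirely about observables. With a uniform prior this is the beta-binomial distribution.

from math import comb

# Illustrative sketch of predictive inference (not the book's code):
# probability of j successes in m future trials, having observed k
# successes in n past trials, under a uniform prior on the unobservable
# success parameter.  The parameter has been integrated out, so the
# statement is only about things we could actually see.
def beta_binomial(j, m, k, n):
    return comb(m, j) * comb(n, k) * (n + 1) / (comb(n + m, k + j) * (n + m + 1))

# Example: 7 successes in 10 trials.  The probability the very next trial
# succeeds is (k + 1) / (n + 2) = 8/12, not the naive 7/10.
print(beta_binomial(1, 1, 7, 10))   # about 0.667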
Probability, and its stepchild statistics, exist to do one thing: help us understand and quantify uncertainty. A lot of uncertainty can be quantified, and some cannot. We’ll learn how to identify both situations, to know when we can use math and computers and when we will be left with nothing but our intuitions.