John Wiley & Sons, 2006. — 192 p. — ISBN: 0471667269, 978-0471667261.
This publication examines the distinct philosophical foundations of the major modes of parametric statistical inference. Unlike many other texts that concentrate on methodology and applications, this book addresses the theoretical and foundational aspects that underlie the field of statistical inference. Readers gain a deeper understanding of the evolution and underlying logic of each mode as well as each mode's strengths and weaknesses.
The book begins with fascinating highlights from the history of statistical inference, offering historical examples of statistical reasoning applied to practical problems that arose over the centuries. It then scrutinizes four major modes of statistical inference: frequentist, likelihood, fiducial, and Bayesian. The author provides specific examples and counterexamples of situations and datasets in which the modes yield both similar and dissimilar results, including a classic violation of the likelihood principle in which frequentist conclusions differ from those reached by likelihood and Bayesian methods. Each example is followed by a detailed discussion of why the results vary from one mode to another, helping the reader gain a clearer understanding of each mode and how it works. The author also provides considerable mathematical detail at certain points to highlight key aspects of the theoretical development.
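As a concrete illustration of the kind of discrepancy at issue (this particular example is not drawn from the book), consider the classic coin-tossing experiment in which 9 heads and 3 tails are observed. The likelihood is proportional to theta^9 * (1 - theta)^3 whether the experimenter fixed 12 tosses in advance or tossed until the third tail appeared, so likelihood-based and Bayesian conclusions coincide, yet the frequentist p-value for the hypothesis theta = 1/2 depends on the stopping rule. A minimal sketch in Python, assuming SciPy is available:

from scipy.stats import binom, nbinom

heads, tails = 9, 3   # observed data: 9 heads, 3 tails
theta0 = 0.5          # null hypothesis: the coin is fair

# Design 1: the number of tosses n = 12 was fixed in advance (binomial sampling).
# One-sided p-value: probability of 9 or more heads in 12 tosses under theta0.
p_binomial = 1 - binom.cdf(heads - 1, heads + tails, theta0)

# Design 2: tossing continued until the 3rd tail appeared (negative binomial sampling).
# In SciPy's parameterization the "successes" being waited for are the tails,
# so the number of heads is the count of failures before the 3rd success.
p_neg_binomial = 1 - nbinom.cdf(heads - 1, tails, 1 - theta0)

print(f"fixed-n (binomial) p-value:               {p_binomial:.4f}")      # about 0.073
print(f"stop-at-3rd-tail (neg. binomial) p-value: {p_neg_binomial:.4f}")  # about 0.033

# Both designs yield a likelihood proportional to theta**9 * (1 - theta)**3, so
# likelihood and Bayesian inferences are unchanged, while the frequentist p-values
# (and significance at the conventional 5% level) disagree.

At the 5% level the stop-at-the-third-tail design rejects fairness while the fixed-n design does not, which is precisely the sort of divergence between modes that the book dissects.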
The author's writing style and use of examples make the text clear and engaging. This book is essential reading for graduate-level students in statistics as well as anyone with an interest in the foundations of statistics and the principles underlying statistical inference, including students of mathematics and the philosophy of science. Readers with a background in theoretical statistics will find the text both accessible and absorbing.
Contents:
A Forerunner: Probabilistic Inference — An Early Example
Frequentist Analysis: Testing Using Relative Frequency, Principles Guiding Frequentism, Further Remarks on Tests of Significance
Likelihood: Law of Likelihood, Forms of the Likelihood Principle (LP), Likelihood and Significance Testing, The 2 x 2 Table, Sampling Issues, Other Principles
Testing Hypotheses: Hypothesis Testing via the Repeated Sampling Principle, Remarks on Size, Uniformly Most Powerful Tests, Neyman-Pearson Fundamental Lemma, Monotone Likelihood Ratio Property, Decision Theory, Two-Sided Tests
Unbiased and Invariant Tests: Unbiased Tests, Admissibility and Tests Similar on the Boundary, Neyman Structure and Completeness, Invariant Tests, Locally Best Tests, Test Construction, Remarks on N-P Theory, Further Remarks on N-P Theory, Law of the Iterated Logarithm (LIL), Sequential Analysis, Sequential Probability Ratio Test (SPRT)
Elements of Bayesianism: Bayesian Testing, Testing a Composite vs. a Composite, Some Remarks on Priors for the Binomial, Coherence, Model Selection
Theories of Estimation: Elements of Point Estimation, Point Estimation, Estimation Error Bounds, Efficiency and Fisher Information, Interpretations of Fisher Information, The Information Matrix, Sufficiency, The Blackwell-Rao Result, Bayesian Sufficiency, Maximum Likelihood Estimation, Consistency of the MLE, Asymptotic Normality and Efficiency of the MLE, Sufficiency Principles
Set and Interval Estimation: Confidence Intervals (Sets), Criteria for Confidence Intervals, Conditioning, Bayesian Intervals (Sets), Highest Probability Density (HPD) Intervals, Fiducial Inference, Relation Between Fiducial and Bayesian Distributions, Several Parameters, The Fisher-Behrens Problem, Confidence Solutions, The Fieller-Creasy Problem