3rd ed. — Springer, 2005. — 800 p. — ISBN 0387988645.
The third edition of Testing Statistical Hypotheses updates and expands the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions are a rigorous treatment of large-sample optimality, together with the requisite tools, and an introduction to the theory of resampling methods such as the bootstrap. The sections on multiple testing and goodness-of-fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760.
Problems
Notes
Uniformly Most Powerful Tests:
Stating the Problem
The Neyman–Pearson Fundamental Lemma
p-values
Distributions with Monotone Likelihood Ratio
Confidence Bounds
A Generalization of the Fundamental Lemma
Two-Sided Hypotheses
Least Favorable Distributions
Applications to Normal Distributions
Univariate Normal Models
Multivariate Normal Models
Problems
Notes
Unbiasedness. Theory and First Applications:
Unbiasedness for Hypothesis Testing
One-Parameter Exponential Families
Similarity and Completeness
UMP Unbiased Tests for Multiparameter Exponential Families
Comparing Two Poisson or Binomial Populations
Testing for Independence in a 2 × 2 Table
Alternative Models for 2 × 2 Tables
Some Three-Factor Contingency Tables
The Sign Test
Problems
Notes
Unbiasedness. Applications to Normal Distributions:
Statistics Independent of a Sufficient Statistic
Testing the Parameters of a Normal Distribution
Comparing the Means and Variances of Two Normal Distributions
Confidence Intervals and Families of Tests
Unbiased Confidence Sets
Regression
Bayesian Confidence Sets
Permutation Tests
Most Powerful Permutation Tests
Randomization as a Basis for Inference
Permutation Tests and Randomization
Randomization Model and Confidence Intervals
Testing for Independence in a Bivariate Normal Distribution
Problems
Notes
Invariance:
Symmetry and Invariance
Maximal Invariants
Most Powerful Invariant Tests
Sample Inspection by Variables
Almost Invariance
Unbiasedness and Invariance
Admissibility
Rank Tests
The Two-Sample Problem
The Hypothesis of Symmetry
Equivariant Confidence Sets
Average Smallest Equivariant Confidence Sets
Confidence Bands for a Distribution Function
Problems
Notes
Linear Hypotheses:
A Canonical Form
Linear Hypotheses and Least Squares
Tests of Homogeneity
Two-Way Layout: One Observation per Cell
Two-Way Layout: m Observations per Cell
Regression
Random-Effects Model: One-way Classification
Nested Classifications
Multivariate Extensions
Problems
Notes
The Minimax Principle:
Tests with Guaranteed Power
Examples
Comparing Two Approximate Hypotheses
Maximin Tests and Invariance
The Hunt–Stein Theorem
Most Stringent Tests
Problems
Notes
Multiple Testing and Simultaneous Inference:
Introduction and the FWER
Maximin Procedures
The Hypothesis of Homogeneity
Scheffé’s S-Method: A Special Case
Scheffé’s S-Method for General Linear Models
Problems
Notes
Conditional Inference:
Mixtures of Experiments
Ancillary Statistics
Optimal Conditional Tests
Relevant Subsets
Problems
Notes
Part II: Large-Sample Theory
Basic Large Sample Theory:
Basic Convergence Concepts
Weak Convergence and Central Limit Theorems
Convergence in Probability and Applications
Almost Sure Convergence
Robustness of Some Classical Tests
Effect of Distribution
Effect of Dependence
Robustness in Linear Models
Nonparametric Mean
Edgeworth Expansions
The t-test
A Result of Bahadur and Savage
Alternative Tests
Problems
Notes
Quadratic Mean Differentiable Families:
Quadratic Mean Differentiability (q.m.d.)
Contiguity
Likelihood Methods in Parametric Models
Efficient Likelihood Estimation
Wald Tests and Confidence Regions
Rao Score Tests
Likelihood Ratio Tests
Problems
Notes
Large Sample Optimality:
Testing Sequences, Metrics, and Inequalities
Asymptotic Relative Efficiency
AUMP Tests in Univariate Models
Asymptotically Normal Experiments
Applications to Parametric Models
One-sided Hypotheses
Equivalence Hypotheses
Multi-sided Hypotheses
Applications to Nonparametric Models
Nonparametric Mean
Nonparametric Testing of Functionals
Problems
Notes
Testing Goodness of Fit:
The Kolmogorov–Smirnov Test
Simple Null Hypothesis
Extensions of the Kolmogorov–Smirnov Test
Pearson’s Chi-squared Statistic
Simple Null Hypothesis
Chi-squared Test of Uniformity
Composite Null Hypothesis
Neyman’s Smooth Tests
Fixed k Asymptotics
Neyman’s Smooth Tests with Large k
Weighted Quadratic Test Statistics
Global Behavior of Power Functions
Problems
Notes
General Large Sample Methods:
Permutation and Randomization Tests
The Basic Construction
Asymptotic Results
Basic Large Sample Approximations
Pivotal Method
Asymptotic Pivotal Method
Asymptotic Approximation
Bootstrap Sampling Distributions
Introduction and Consistency
The Nonparametric Mean
Further Examples
Stepdown Multiple Testing
Higher Order Asymptotic Comparisons
Hypothesis Testing
Subsampling
The Basic Theorem in the i.i.d. Case
Comparison with the Bootstrap
Hypothesis Testing
Problems
Notes
Appendix A. Auxiliary Results:
Equivalence Relations; Groups
Convergence of Functions; Metric Spaces
Banach and Hilbert Spaces
Dominated Families of Distributions
The Weak Compactness Theorem
Author Index
Subject Index