John Wiley & Sons, Inc., 1986. — 526 p. — (Probability and Statistics). — ISBN: 0-471-73577-9.
Statistics is the art and science of extracting useful information from empirical data. An effective way of conveying this information is through parametric stochastic models. After a small number of such models had been in use for more than two centuries, R. A. Fisher greatly multiplied the number of useful models in the 1920s and derived statistical procedures based on them. His work laid the foundation for what is now the most widely used statistical approach in the sciences.
This “classical approach” is founded on stringent stochastic models, and it was soon noticed that the real world does not behave as nicely as their assumptions describe. Moreover, both the good performance and the valid application of these procedures require strict adherence to the assumptions. Consequently, nonparametric statistics emerged as a field of research, and some of its methods, such as the Wilcoxon test, became widely popular in applications. The basic principle was to make as few assumptions about the data as possible while still answering a specific question such as “Is there a difference?” While some problems of this kind found very satisfactory solutions, parametric models continued to play an outstanding role because of their capacity to describe the information contained in a data set more completely, and because they are useful in a wider range of applications, especially in more complex situations.
Robust statistics combines the virtues of both approaches. Parametric models serve as vehicles of information, and the procedures employed do not depend critically on the assumptions inherent in these models.
Introduction and motivation
One-dimensional estimators
One-dimensional tests
Multidimensional estimators
Estimation of covariance matrices and multivariate location
Linear models: robust estimation
Linear models: robust testing
Complements and outlook