Berlin: de Gruyter, 1998. — 377 p.
The central limit theorem of probability theory asserts that, under some conditions, the distribution of a sum of random variables is approximated by the normal law. This allows us, in practice, to work with the normal distribution rather than with the distribution of the sum itself. In other words, we can replace the distribution of a sum of random variables, which is usually very complicated, with the comparatively simple limiting normal distribution. The natural question then arises about the errors of this normal approximation.
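In its simplest and most common form (stated here in our own notation for orientation; the book treats far more general settings), the theorem says that for independent, identically distributed random variables X_1, X_2, ... with mean \mu and finite variance \sigma^2 > 0,
\[
\lim_{n\to\infty} \mathbf{P}\!\left(\frac{X_1 + \dots + X_n - n\mu}{\sigma\sqrt{n}} \le x\right) = \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^{2}/2}\, dt .
\]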
The first results concerning the normal approximation errors were due to the Russian mathematician, academician A. M. Lyapunov, who obtained them in his works of 1900-1901. Lyapunov's investigations inspired many scientists to begin the analysis of approximation errors in limit theorems; this theme is still far from completion even today. An impetus to the development of this field was given by the emergence of the famous Berry-Esseen theorem, proved at the beginning of the forties. In the second half of the forties, the first works on the normal approximation in multidimensional spaces came to light, whereas the investigations concerning the infinite-dimensional case began in the sixties.
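For reference, a standard formulation of that theorem (in our notation, not quoted from the book): if the summands above also have a finite third absolute moment \mathbf{E}|X_1 - \mu|^{3} = \rho < \infty, then the error of the normal approximation is bounded uniformly in x:
\[
\sup_{x} \left| \mathbf{P}\!\left(\frac{X_1 + \dots + X_n - n\mu}{\sigma\sqrt{n}} \le x\right) - \Phi(x) \right| \le \frac{C\,\rho}{\sigma^{3}\sqrt{n}},
\]
where C is an absolute constant.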
The importance of the investigations concerning the theory of limit theorems for sums of independent random variables is easy to explain. The simple operation of adding independent random variables corresponds to the very complicated operation of convolution of distributions. From the formal viewpoint, the distribution of the sum of n independent random variables distributed according to a law P can be immediately expressed as the n-fold convolution P^{*n} of the distribution P. But the calculation of multifold convolutions is very hard, and so a highly non-trivial mathematical theory dealing with them appeared: it was exactly the theory of limit theorems for sums of independent random variables. This theory goes back to the works of Bernoulli, de Moivre, and Laplace, and was pursued by Poisson, Cauchy, Gauss, Chebyshev, Markov, and Lyapunov. The related list of mathematicians of the twentieth century is much more voluminous.
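As a purely illustrative sketch (not taken from the monograph; the toy distribution, the number of summands, and the use of NumPy/SciPy are our assumptions), the following fragment computes such an n-fold convolution numerically and compares the exact distribution function of the sum with the normal approximation:

```python
import numpy as np
from scipy.stats import norm

# Toy distribution P on {0, 1, 2}; the probabilities are assumed for illustration.
support = np.arange(3)
p = np.array([0.2, 0.5, 0.3])

n = 30                      # number of independent summands
pmf = p.copy()
for _ in range(n - 1):      # build the n-fold convolution P^{*n} step by step
    pmf = np.convolve(pmf, p)

values = np.arange(len(pmf))                    # the sum takes values 0, 1, ..., 2n
mean = n * (support @ p)
var = n * (support**2 @ p - (support @ p)**2)

# Exact distribution function of the sum versus the normal approximation
cdf_exact = np.cumsum(pmf)
cdf_normal = norm.cdf(values + 0.5, loc=mean, scale=np.sqrt(var))
print("max |F_n(x) - Phi(x)| =", np.max(np.abs(cdf_exact - cdf_normal)))
```

Even in this toy setting the exact computation requires n - 1 convolutions, whereas the normal approximation is immediate; how large the resulting discrepancy can be is exactly the kind of question the theory of approximation errors addresses.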
The experience accumulated in the field of limit theorems for sums of independent random variables was summed up in the outstanding monograph 'Limit Theorems for Sums of Independent Random Variables' by B. V. Gnedenko and A. N. Kolmogorov, which appeared in 1949 and has since been translated into many languages.
The estimation of approximation errors in limit theorems is a constituent part of the theory of limit theorems. To solve this problem, various approaches were suggested; the most powerful and best established among them is the method of characteristic functions. It was introduced by Lyapunov, and up to the forties of the twentieth century it remained the basic method for analyzing multifold convolutions of distributions. Then other methods came to light; we mention here two of them: the method of compositions, suggested by Bergström at the end of the forties, and the method of metric distances, proposed by Zolotarev in the middle of the seventies. Applied to a series of problems of probability theory, the latter method proved to be no weaker than the method of characteristic functions, and in many cases even surpassed it.
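The key observation behind the method of characteristic functions is elementary (again in our notation): independence turns convolution into multiplication,
\[
\varphi_{X_1 + \dots + X_n}(t) = \mathbf{E}\, e^{it(X_1 + \dots + X_n)} = \prod_{k=1}^{n} \varphi_{X_k}(t),
\]
so the analytically awkward n-fold convolution P^{*n} corresponds simply to the n-th power of a single function.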
It is clear that the error of approximation of a function can be understood in many senses. So, a considerable part of this monograph is devoted to introducing the reader to the theory of probability metrics. After this, we consider some results based on the method of characteristic functions, then the method of compositions, and finally the method of metric distances. The last method is the basis of this monograph. Its idea consists in the use of metrics possessing certain special properties. We consider the case of real-valued random variables, the case of random variables with values in finite-dimensional Euclidean spaces, and the case where the random variables take values in infinite-dimensional real separable Hilbert spaces.
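Two typical examples of the metrics involved (given here in our notation as an illustration, not as the book's definitions): the uniform (Kolmogorov) metric
\[
\rho(X, Y) = \sup_{x} \left| \mathbf{P}(X \le x) - \mathbf{P}(Y \le x) \right|,
\]
and ideal metrics of order s in the sense of Zolotarev, i.e. metrics \zeta_s satisfying, for Z independent of X and Y and any c > 0,
\[
\zeta_s(X + Z, Y + Z) \le \zeta_s(X, Y), \qquad \zeta_s(cX, cY) = c^{s}\, \zeta_s(X, Y).
\]
It is precisely such special properties that make these metrics well suited to sums of independent random variables.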
The title of this book may well have intrigued the reader. Indeed, from the usual viewpoint, first the problem arises, then a method to solve it, and finally the result appears. But very often we face a different situation: some result (perhaps in a special case) is given, then an algorithm to validate it in the general case appears, and only after that can one formulate the problem itself in full, as well as the many new problems that arise in the course of the proof.
Probably the most natural example of such a situation is the central limit theorem itself. It was first formulated by de Moivre in 1730 for the special case of Bernoulli trials. A century later, Laplace re-discovered the central limit theorem and found the integral form of the limiting normal law (whereas de Moivre had represented the limiting distribution as a series). Then decades of intensive work by the best mathematicians were required for the central limit theorem to take its modern form. Along the way, new methods were invented (e.g. the method of characteristic functions), and new problems arose, in particular the problem of estimating the convergence rate. Some of these problems remain unsolved to this day.