Thrun S. Explanation-Based Neural Network Learning. A Lifelong Learning Approach

  • PDF file, 4.39 MB
Kluwer, 1994. — 274 p.
This book is the result of an attempt to broaden the scope of machine learning. The framework proposed here, called lifelong learning, addresses scenarios in which a learning algorithm faces a whole collection of learning tasks. Instead of having just an isolated set of data points, a lifelong learning algorithm can incrementally build on previous learning experiences in order to generalize more accurately. Consider, for example, the task of recognizing objects from color camera images, which is one of the examples studied in this book. When learning to recognize a new object, knowledge acquired in previous object recognition tasks can aid the learner with a general understanding of the invariances that apply to all object recognition tasks (e.g., invariances with respect to translation, rotation, scaling, varying illumination), hence leading to improved recognition rates from less training data. Lifelong learning addresses the question of learning to learn. The acquisition, representation, and transfer of domain knowledge are the key scientific concerns that arise in lifelong learning.
To approach the lifelong learning problem, this book describes a new algorithm, called the explanation-based neural network learning algorithm (EBNN). EBNN integrates two well-understood machine learning paradigms: artificial neural network learning and explanation-based learning. The neural network learning strategy enables EBNN to learn from noisy data in the absence of prior learning experience. It also allows it to learn domain-specific knowledge that can be transferred to later learning tasks. The explanation-based strategy employs this domain-specific knowledge to explain the data in order to guide the generalization in a knowledgeable and domain-specific way. By doing so, it reduces the need for training data, replacing it with previously learned domain-specific knowledge.
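A central mechanism behind this transfer (treated in detail in Appendix A) is fitting a learner to target values and target slopes simultaneously: the explanations produced by the domain theory yield output derivatives that constrain generalization between training points. The following toy sketch illustrates the idea only; it is not the book's implementation. A linear-in-features model stands in for the neural network, and the slopes are simply handed to the learner, whereas in EBNN they would be extracted by differentiating the domain theory's explanations.

```python
import numpy as np

def features(x):   # phi(x) = [1, x, x^2]
    return np.array([1.0, x, x * x])

def dfeatures(x):  # d phi / dx = [0, 1, 2x]
    return np.array([0.0, 1.0, 2.0 * x])

rng = np.random.default_rng(0)
xs = np.array([-1.0, 0.5, 2.0])              # only three training points
ys = xs**2 + 0.01 * rng.standard_normal(3)   # noisy values of the target f(x) = x^2
slopes = 2.0 * xs                            # slopes supplied by the "domain theory"

# Fit values and slopes jointly: each point contributes a value equation
# and a slope equation, weighted by lam, solved in closed form here.
lam = 1.0
A = np.vstack([features(x) for x in xs] +
              [np.sqrt(lam) * dfeatures(x) for x in xs])
b = np.concatenate([ys, np.sqrt(lam) * slopes])
w, *_ = np.linalg.lstsq(A, b, rcond=None)
# w ends up close to [0, 0, 1], i.e. the learner recovers f(x) ≈ x^2
```

With only three noisy points, the slope constraints pin down the curvature that the values alone would leave underdetermined, which is the sense in which explanation-derived slopes substitute for additional training data.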
To elucidate the EBNN approach in practice, empirical results derived in the context of supervised and reinforcement learning are also reported. Experimental test beds include an object recognition task, several robot navigation and manipulation tasks, and the game of chess. The main scientific result of these studies is that the transfer of previously learned knowledge decreases the need for training data. In all our experiments, EBNN generalizes significantly more accurately than traditional methods if it has previously faced other, related learning tasks. A second key result is that EBNN's transfer mechanism is both effective and robust to errors in the domain knowledge. If the learned domain knowledge is accurate, EBNN compares well to other explanation-based methods. If this knowledge is inaccurate and thus misleading, EBNN degrades gracefully to a comparable inductive neural network algorithm. Whenever possible, I have preferred real robot hardware over simulations, and high-dimensional feature spaces over the low-dimensional ones commonly used in artificial "toy" problems. The diversity of experimental test beds is intended to illustrate that EBNN is applicable under a wide variety of circumstances, and in a large class of problems.
This book is purely technical in nature. Its central aim is to advance the state-of-the-art in machine learning. In particular, it seeks to provide a learning algorithm that generalizes more correctly from less training data than conventional algorithms by exploiting domain knowledge gathered in previous learning tasks. EBNN is adequate if the learning algorithm faces multiple, related learning tasks; it will fail to improve the learning results if a single, isolated set of data points is all that is available for learning. This research demonstrates that significantly superior results can be achieved by going beyond the intrinsic limitations associated with learning single functions in isolation. Hopefully, the book opens up more questions than it provides answers, by pointing out potential research directions for future work on machine learning.
Explanation-Based Neural Network Learning
The Invariance Approach
Reinforcement Learning
Empirical Results
Discussion
A: An Algorithm for Approximating Values and Slopes with Artificial Neural Networks
B: Proofs of the Theorems
C: Example Chess Games