Hammer B., Hitzler P. (eds.) Perspectives of Neural-Symbolic Integration
Springer, 2007. — 325 p. — (Studies in Computational Intelligence 77). — ISBN: 978-3-540-73953-1.
The human brain possesses the remarkable capability of understanding, interpreting, and producing human language, relying mostly on the left hemisphere. The ability to acquire language is innate, as can be seen from disorders such as specific language impairment (SLI), which manifests itself as an impaired sense of grammaticality. Language exhibits strong compositionality and structure. Hence biological neural networks are naturally connected to the processing and generation of high-level symbolic structures.
Unlike their biological counterparts, artificial neural networks and logic do not form such a close liaison. Symbolic inference mechanisms and statistical machine learning constitute two major and very different paradigms in artificial intelligence, each with its own strengths and weaknesses: statistical methods offer flexible and highly effective tools that are ideally suited for possibly corrupted or noisy data, high uncertainty, and missing information as they occur in everyday life, such as sensor streams in robotics, medical measurements such as EEG and EKG, financial and market indices, etc. The models, however, are often reduced to black-box mechanisms, which complicates the integration of prior high-level knowledge or human inspection, and they lack the ability to cope with a rich structure of objects, classes, and relations. Symbolic mechanisms, on the other hand, are well suited for intuitive human-machine interaction, the integration of complex prior knowledge, and well-founded recursive inference. Their capability of dealing with uncertainty and noise and their efficiency when addressing corrupted, large-scale real-world data sets, however, are limited. Thus, the inherent strengths and weaknesses of these two paradigms ideally complement each other.
Neuro-symbolic integration sits at the border of these two paradigms and tries to combine their strengths while getting rid of their weaknesses, eventually aiming at artificial systems that could be competitive with human capacities for data processing and inference. Different degrees of neuro-symbolic integration exist: (1) Researchers incorporate aspects of symbolic structures into statistical learners, or enrich structural reasoning with statistical aspects, to extend the applicability of the respective paradigm. As an example, logical inference mechanisms can be extended with statistical reasoning, mainly relying on Bayesian statistics. The resulting systems are able to solve complex real-world problems, as impressively demonstrated by recent advances in statistical-relational learning. (2) Researchers try to exactly map inference mechanisms of one paradigm onto the other, such that a direct relation can be established and the paradigm best suited for the task at hand can be chosen without any limitations on the setting. Recent results on the integration of logic programs into neural networks constitute a very interesting example of this 'core method'.
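To make the second, 'strong' degree of integration concrete, here is a minimal sketch (not code from the book; names and structure are illustrative assumptions) of the classical propositional core-method idea: each clause of a logic program becomes an AND-like hidden unit and each head atom an OR-like output unit, so that a single forward pass computes the immediate consequence operator T_P. The first-order constructions treated in the book are considerably more involved.

```python
# Hypothetical sketch: translating a propositional logic program into a
# two-layer threshold network whose forward pass computes T_P.

from typing import List, Set, Tuple

# A clause is (head, positive_body_atoms, negative_body_atoms).
Clause = Tuple[str, List[str], List[str]]


def tp_network(program: List[Clause]):
    """Return a function I -> T_P(I), realised as a threshold network with
    one hidden (AND) unit per clause and one output (OR) unit per head atom."""
    def step(interpretation: Set[str]) -> Set[str]:
        # Hidden layer: a clause unit fires iff every positive body atom is
        # true and every negative body atom is false in the input.
        fired = [
            head
            for head, pos, neg in program
            if all(a in interpretation for a in pos)
            and all(a not in interpretation for a in neg)
        ]
        # Output layer: an atom unit fires iff some clause with that head fired.
        return set(fired)
    return step


if __name__ == "__main__":
    # Example program:  p <- q, not r.   q <- .   r <- s.
    P: List[Clause] = [("p", ["q"], ["r"]), ("q", [], []), ("r", ["s"], [])]
    tp = tp_network(P)
    # Iterating the network from the empty interpretation reaches the fixed
    # point {q, p}, the least model of this acyclic program.
    I: Set[str] = set()
    for _ in range(3):
        I = tp(I)
    print(I)  # {'q', 'p'}
```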
This book focuses on extensions of neural methodology towards symbolic integration. According to the possible degree of integration, it is split into two parts: ‘loose’ coupling of neural paradigms and symbolic mechanisms by means of extensions of neural networks to deal with complex structures, and ‘strong’ coupling of neural and logical paradigms by means of establishing direct equivalences of neural network models and symbolic mechanisms.
Part I. Structured Data and Neural Networks
Introduction: Structured Data and Neural Networks
Kernels for Strings and Graphs
Comparing Sequence Classification Algorithms for Protein Subcellular Localization
Mining Structure-Activity Relations in Biological Neural Networks using Neuron Rank
Adaptive Contextual Processing of Structured Data by Recursive Neural Networks: A Survey of Computational Properties
Markovian Bias of Neural-based Architectures with Feedback Connections
Time Series Prediction with the Self-Organizing Map: A Review
A Dual Interaction Perspective for Robot Cognition: Grasping as a “Rosetta Stone”
Part II. Logic and Neural Networks
Introduction: Logic and Neural Networks
SHRUTI: A Neurally Motivated Architecture for Rapid, Scalable Inference
The Core Method: Connectionist Model Generation for First-Order Logic Programs
Learning Models of Predicate Logical Theories with Neural Networks Based on Topos Theory
Advances in Neural-Symbolic Learning Systems: Modal and Temporal Reasoning
Connectionist Representation of Multi-Valued Logic Programs