Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly, by its effects on processing, rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines, first described by Jordan (1986), which involves the use of recurrent links in order to provide networks with a dynamic memory.
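As a rough illustration of what such recurrent links look like, here is a minimal NumPy sketch of an Elman-style simple recurrent network, a variant of Jordan's proposal in which context units feed a copy of the previous hidden state back into the hidden layer. All dimensions, names, and weight scales are illustrative assumptions, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the report)
n_in, n_hidden, n_out = 4, 8, 2

# Input-to-hidden, context-to-hidden (the recurrent links), hidden-to-output
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_ch = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))

def step(x, context):
    """One time step: the context units hold a copy of the previous
    hidden state, so time shows up implicitly in processing rather
    than as an explicit spatial dimension of the input."""
    h = np.tanh(W_xh @ x + W_ch @ context)
    y = W_hy @ h
    return y, h  # the new hidden state becomes the next context

# Memory of earlier inputs persists in `context` across the sequence
context = np.zeros(n_hidden)
sequence = rng.normal(size=(5, n_in))
for x in sequence:
    y, context = step(x, context)
```

In Elman's scheme the context units are simply treated as additional inputs during training, so ordinary backpropagation suffices; the copied-back hidden state is what provides the dynamic memory mentioned in the abstract.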
Cambridge University Press, 2009. — 389 p. This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of...
InTech, 2011. — 112 p. Recurrent Neural Networks (RNNs) generalize artificial neural networks by allowing connections that are not exclusively feed-forward. In RNNs, connections between units form directed cycles, providing an implicit internal memory. Such networks are well suited to problems involving signals that evolve through time. Their internal memory gives them the...
InTech, 2012. — 302 p. The first section illustrates general concepts of artificial neural networks: their properties, their modes of training (static/feedforward versus dynamic/recurrent), and the classification of training regimes into supervised, semi-supervised, and unsupervised learning. Recurrent Neural Networks (RNNs) are, like other ANNs, abstractions of biological nervous...
Springer, 2007. — 402 p. Professor A. I. Galushkin's monograph Neural Networks Theory appears at a time when the theory has achieved maturity and is the fulcrum of a vast literature. Nevertheless, Professor Galushkin's work is of particular importance because it serves a special purpose, which is explained in the following. The roots of neural networks theory go back to the pioneering...
Institute for Theoretical Computer Science, Graz University of Technology
The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. Characteristic features of this new model are (i) that it is a model for adaptive computational systems, (ii) that it provides a...
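Since the abstract stops before the model's details, the following is only a rough sketch of the general recipe usually associated with the LSM: a fixed, randomly connected recurrent circuit (the "liquid") whose state is a fading, nonlinear trace of the input history, read out by a simple trained map. For simplicity this uses a discrete-time tanh reservoir rather than the spiking circuits of the biological model, and every size, constant, and name below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy discrete-time "liquid": fixed random recurrent weights,
# rescaled so the dynamics fade rather than blow up.
n_in, n_liquid = 1, 100
W_in = rng.normal(scale=0.5, size=(n_liquid, n_in))
W = rng.normal(size=(n_liquid, n_liquid))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()

def run_liquid(inputs):
    """Drive the liquid with an input sequence and collect its state
    at every time step; the state mixes current and past inputs."""
    x = np.zeros(n_liquid)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x)
    return np.array(states)

# Only the memoryless linear readout is trained (least squares here);
# the recurrent circuit itself stays fixed and random.
inputs = rng.normal(size=(200, n_in))
targets = np.convolve(inputs[:, 0], [0.5, 0.3, 0.2])[:200]  # task needing memory
states = run_liquid(inputs)
readout, *_ = np.linalg.lstsq(states, targets, rcond=None)
predictions = states @ readout
```

The design point this sketch tries to convey is that the recurrent circuit is never trained: only the readout is adapted, which is what makes the model plausible for "found" biological networks whose internal connectivity cannot be engineered.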