Recurrent Neural Networks - Information Processing and Self-Organization at the Edge of Chaos



Applied Informatics Colloquium

We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between stable and unstable dynamical regimes, the so-called edge of chaos. The reasons for this maximized performance, however, are not completely understood. We adopt an information-theoretical framework and are, for the first time, able to directly quantify information processing between elements of these networks as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and information storage in the recurrent layer peak close to this phase transition, providing an explanation for why guiding the recurrent layer towards the edge of chaos is computationally useful. As a consequence, our work suggests self-organized ways of improving performance in recurrent neural networks, driven by both the input data and the learning goal. This is in contrast to other self-organized approaches for adapting the recurrent layer, such as intrinsic plasticity, which do not take the learning goal into account.
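The abstract does not specify the network model or the estimators used; the following is a minimal sketch, assuming an echo-state-style random reservoir whose spectral radius is tuned toward or away from the stability border, and a crude binary plug-in estimate of transfer entropy between two units. All function names (`run_reservoir`, `transfer_entropy_bits`) and parameter choices are illustrative, not taken from the talk; the actual study would use more careful information-theoretic estimators.

```python
import numpy as np

def run_reservoir(n=50, spectral_radius=0.9, steps=500, seed=0):
    """Drive a randomly connected recurrent network with i.i.d. noise input
    and record its unit activations over time (illustrative model)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n))
    # Rescale the recurrent weights; spectral_radius near 1 is roughly the
    # border between stable and unstable dynamics ("edge of chaos").
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.normal(size=n)
    x = np.zeros(n)
    states = np.empty((steps, n))
    for t in range(steps):
        x = np.tanh(W @ x + w_in * rng.normal())
        states[t] = x
    return states

def transfer_entropy_bits(src, dst):
    """Plug-in estimate of transfer entropy TE(src -> dst), in bits, after
    binarising both series at their medians (history length 1)."""
    s = (src > np.median(src)).astype(int)
    d = (dst > np.median(dst)).astype(int)
    d_next, d_past, s_past = d[1:], d[:-1], s[:-1]
    joint = np.zeros((2, 2, 2))  # counts indexed by [d_next, d_past, s_past]
    for a, b, c in zip(d_next, d_past, s_past):
        joint[a, b, c] += 1
    p = joint / joint.sum()
    p_bc = p.sum(axis=0)      # p(d_past, s_past)
    p_ab = p.sum(axis=2)      # p(d_next, d_past)
    p_b = p.sum(axis=(0, 2))  # p(d_past)
    te = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                if p[a, b, c] > 0:
                    te += p[a, b, c] * np.log2(
                        p[a, b, c] * p_b[b] / (p_bc[b, c] * p_ab[a, b]))
    return te
```

Sweeping `spectral_radius` through values around 1.0 and averaging `transfer_entropy_bits` over many unit pairs would, under these assumptions, give one qualitative way to probe the peak near the phase transition that the abstract describes.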

(Dr. Oliver Obst, Research Scientist, Autonomous Systems Lab, Australia)

Start: 4 February 2011, 14:00
End: 4 February 2011, 15:00

Building 11.40, Room 231


Organizer: Knowledge Management research group
Information: Media:Kolloquium Dr. Obst 04 02 11.pdf