Techreport 1148
Published: February 2006
Institution: AIFB, University of Karlsruhe
Archive number: 1148
Abstract
Artificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where "simple" is understood in some clearly defined and meaningful way.
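The core idea of the abstract, reading a propositional logic program off a trained network, can be illustrated with a minimal, hedged sketch. The function below treats the network as a black-box boolean classifier and enumerates all interpretations of its input atoms, emitting one clause per model (a naive "pedagogical" extraction; the atom names, the head symbol `h`, and the stand-in `net` function are illustrative assumptions, and the paper's actual contribution, producing *reduced* programs, would simplify the resulting clause set further, which is not shown here):

```python
from itertools import product

def extract_program(net, atoms):
    """Enumerate all interpretations of the given atoms and emit one
    clause 'h :- body.' for each interpretation the black-box network
    classifies as true. This is exhaustive, so it only scales to a
    handful of input atoms."""
    clauses = []
    for values in product([False, True], repeat=len(atoms)):
        interp = dict(zip(atoms, values))
        if net(interp):
            # Positive atoms appear plainly, false atoms as negation.
            body = [a if interp[a] else "not " + a for a in atoms]
            clauses.append("h :- " + ", ".join(body) + ".")
    return clauses

# Toy stand-in for a trained network: true iff a holds and b does not.
net = lambda i: i["a"] and not i["b"]
print(extract_program(net, ["a", "b"]))  # → ['h :- a, not b.']
```

The extracted program exactly reproduces the network's input/output behaviour on boolean inputs; minimizing such a program (e.g. by merging clauses that differ in one literal) is the kind of simplification the paper studies.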
Download: Media:2006_1148_Lehmann_Extracting_Redu_1.pdf
Keywords: neuro-symbolic integration, logic programming, artificial intelligence