
Inproceedings3571: Difference between revisions

(The page was newly created with: „{{Publikation Erster Autor |ErsterAutorNachname=Thoma |ErsterAutorVorname=Steffen }} {{Publikation Author |Rank=2 |Author=Achim Rettinger }} {{Publikation Author …“)
 
}}
{{Inproceedings
|Referiert=True
|Title=Towards Holistic Concept Representations: Embedding Relational Knowledge, Visual Attributes, and Distributional Word Semantics
|Year=2017
|Month=Oktober
|Booktitle=The Semantic Web – ISWC 2017
|Publisher=Springer
}}
{{Publikation Details
|Abstract=Knowledge Graphs (KGs) effectively capture explicit relational knowledge about individual entities. However, visual attributes of those entities, like their shape and color and pragmatic aspects concerning their usage in natural language are not covered. Recent approaches encode such knowledge by learning latent representations (‘embeddings’) separately: In computer vision, visual object features are learned from large image collections and in computational linguistics, word embeddings are extracted from huge text corpora which capture their distributional semantics. We investigate the potential of complementing the relational knowledge captured in KG embeddings with knowledge from text documents and images by learning a shared latent representation that integrates information across those modalities. Our empirical results show that a joined concept representation provides measurable benefits for i) semantic similarity benchmarks, since it shows a higher correlation with the human notion of similarity than uni- or bi-modal representations, and ii) entity-type prediction tasks, since it clearly outperforms plain KG embeddings. These findings encourage further research towards capturing types of knowledge that go beyond today’s KGs.
|Download=Towards Holistic Concept Representations.pdf,
|Forschungsgruppe=Web Science
}}

Revision as of 28 July 2017, 09:44


Towards Holistic Concept Representations: Embedding Relational Knowledge, Visual Attributes, and Distributional Word Semantics


Published: October 2017

Book title: The Semantic Web – ISWC 2017
Publisher: Springer

Refereed publication

BibTeX

Abstract
Knowledge Graphs (KGs) effectively capture explicit relational knowledge about individual entities. However, visual attributes of those entities, like their shape and color and pragmatic aspects concerning their usage in natural language are not covered. Recent approaches encode such knowledge by learning latent representations (‘embeddings’) separately: In computer vision, visual object features are learned from large image collections and in computational linguistics, word embeddings are extracted from huge text corpora which capture their distributional semantics. We investigate the potential of complementing the relational knowledge captured in KG embeddings with knowledge from text documents and images by learning a shared latent representation that integrates information across those modalities. Our empirical results show that a joined concept representation provides measurable benefits for i) semantic similarity benchmarks, since it shows a higher correlation with the human notion of similarity than uni- or bi-modal representations, and ii) entity-type prediction tasks, since it clearly outperforms plain KG embeddings. These findings encourage further research towards capturing types of knowledge that go beyond today’s KGs.

Download: Media:Towards Holistic Concept Representations.pdf



Research group

Web Science


Research area