


Seminar Representation Learning for Knowledge Graphs

Details of Course
Type of course seminar
Lecturer(s) Harald Sack, Mehwish Alam
Instructor(s) Russa Biswas
Subject
Credit Points 3
Control of Success
Term winter


You can find additional information, the time schedule, and room numbers in the University Course Overview.

Course Overview https://www.wiwi.kit.edu/seminare.php?details=EA8C2A04-C018-4F92-8F57-3FC6D83150CE
Student Portal https://studium.kit.edu



Research Group


Content

Data representation, or feature representation, plays a key role in the performance of machine learning algorithms. In recent years, rapid growth has been observed in Representation Learning (RL) of words and Knowledge Graphs (KG) into low-dimensional vector spaces and in its applications to many real-world scenarios. Word embeddings are low-dimensional vector representations of words that capture the context of a word in a document, its semantic similarity, and its relations to other words. Similarly, KG embeddings are low-dimensional vector representations of the entities and relations of a KG that preserve the graph's inherent structure and capture the semantic similarity between entities.

Each embedding space exhibits different semantic characteristics depending on the source of information, e.g., text or KGs, as well as on the embedding algorithm. The same algorithm, applied to different representations of the same training data, leads to different results due to the variation in the features encoded in the respective representations. The distributed representation of text in the form of word and document vectors, as well as of the entities and relations of a KG in the form of entity and relation vectors, has evolved into a key element of various natural language processing tasks such as Entity Linking and Named Entity Recognition and Disambiguation. Different embedding spaces are generated for textual documents in different languages; hence, aligning these embedding spaces has become a stepping stone for machine translation. On the other hand, in addition to multilingualism and domain-specific information, different KGs of the same domain exhibit structural differences, which makes the alignment of KG embeddings even more challenging.
In order to generate coherent embedding spaces for knowledge-driven applications such as question answering, named entity disambiguation, knowledge graph completion, etc., it is necessary to align the embedding spaces generated from different sources.
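As illustrative background (not a method prescribed by the seminar), the translational TransE model is one common way such KG embeddings are learned: a triple (head, relation, tail) is considered plausible when head + relation lies close to tail in the vector space. A minimal sketch, with a hypothetical toy vocabulary and random vectors standing in for trained embeddings:

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 8  # illustrative low dimension

# Hypothetical toy entities and relations; real embeddings would be trained.
entities = {"Berlin": rng.normal(size=dim), "Germany": rng.normal(size=dim)}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    """TransE plausibility score: lower means the triple fits better."""
    return float(np.linalg.norm(h + r - t))

score = transe_score(entities["Berlin"], relations["capital_of"], entities["Germany"])
print(score)  # a non-negative distance; small for plausible triples after training
```

Training would adjust the vectors so that observed triples receive lower scores than corrupted ones; here the score of random vectors merely shows the scoring mechanics.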

In this seminar, we will study different state-of-the-art algorithms for aligning embedding spaces. We will focus on two types of alignment algorithms: (1) Entity-Entity alignment and (2) Entity-Word alignment.
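One simple baseline in this family (a sketch under assumed conditions, not one of the seminar's specific papers) is to align two embedding spaces with a linear orthogonal map learned from anchor pairs, i.e. orthogonal Procrustes: given anchors X in space A and Y in space B, find the rotation W minimizing ||XW - Y||. The data below is synthetic, with space B constructed as an exact rotation of space A so the mapping is recoverable:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_anchors = 5, 20

# Synthetic anchor embeddings in the source space A.
X = rng.normal(size=(n_anchors, dim))

# Construct target space B as a hidden rotation of A (ideal case).
true_rot, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
Y = X @ true_rot

# Closed-form orthogonal Procrustes solution:
# SVD of X^T Y = U S V^T gives the optimal orthogonal map W = U V^T.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

alignment_error = float(np.linalg.norm(X @ W - Y))
print(alignment_error)  # near zero: the hidden rotation is recovered
```

Real entity-entity or entity-word alignment is harder because the two spaces are not exact rotations of each other, anchors are noisy, and the KGs differ structurally; the closed-form map then serves only as a starting point.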




Notes

If code is available from the authors, it will be re-implemented for small-scale experiments in Python using Google Colab.

Participation is restricted to a maximum of 10 students.