Inproceedings3929


Version of 10 November 2022, 16:06


Challenges of Applying Knowledge Graph and their Embeddings to a Real-world Use-case





Published: December 2021

Book title: Proceedings of the Workshop on Deep Learning for Knowledge Graphs (DL4KG 2021) co-located with the 20th International Semantic Web Conference (ISWC 2021)
Publisher: CEUR-WS
Organization: DL4KG

Non-refereed publication

BibTeX

Abstract
Various Knowledge Graph Embedding (KGE) models have been proposed that are trained on specific KG completion tasks, such as link prediction, and evaluated on datasets created mainly for that purpose. The embeddings learnt on link prediction tasks are rarely applied to downstream tasks in real-world use-cases, such as data held by different companies and organizations. This paper presents the challenges of enriching a KG generated from a real-world relational database (RDB) about companies with information from external sources such as Wikidata, and of learning representations for that KG. Moreover, a comparative analysis between the KGEs and various text embeddings on some downstream clustering tasks is presented. The experimental results indicate that in use-cases like the one considered in this paper, where the KG is highly skewed, it is beneficial to use text embeddings or language models instead of KGEs.
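The comparison described in the abstract can be sketched as follows: cluster each set of entity vectors and score the resulting partitions with an intrinsic metric. This is a minimal, hypothetical illustration, not the paper's actual pipeline; the random matrices stand in for real KGE and text-embedding vectors, and the cluster count is an assumed parameter.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)

# Hypothetical stand-ins for learnt embeddings: rows are company entities.
# In a real pipeline these would come from a KGE model and a text/language
# model respectively; random data is used here purely for illustration.
kge_vectors = rng.normal(size=(200, 64))
text_vectors = rng.normal(size=(200, 384))

def clustering_quality(vectors: np.ndarray, n_clusters: int = 5) -> float:
    """Cluster the vectors with k-means and return the silhouette score
    (in [-1, 1]; higher means better-separated clusters)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(vectors)
    return silhouette_score(vectors, labels)

print("KGE  silhouette:", clustering_quality(kge_vectors))
print("Text silhouette:", clustering_quality(text_vectors))
```

With genuine embeddings, comparing such scores (or external metrics against known company categories) is one way to judge which representation serves the downstream clustering task better.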

Download: Media:paper4.pdf



Research group

Information Service Engineering


Research area