
Crowdsourced DBpedia Quality Assessment

In this work we look into the use of crowdsourcing as a means to handle Linked Data quality problems that are challenging to solve automatically. We analyzed the most common errors encountered in DBpedia and classified them according to the extent to which they are likely to be amenable to a specific crowdsourcing approach. Based on this analysis, we implemented a quality assessment methodology for Linked Data that leverages the wisdom of the crowds in different ways: (i) a contest format targeting an expert crowd of researchers and Linked Data enthusiasts; and (ii) paid microtasks published on Amazon Mechanical Turk. We empirically evaluated the capacity of crowdsourcing approaches to spot quality issues in DBpedia and investigated how the contributions of the two crowds could be optimally integrated into Linked Data curation processes. The results showed that the two styles of crowdsourcing are complementary, and that crowdsourcing-enabled quality assessment is a promising and affordable way to enhance the quality of Linked Data sets.
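
As an illustration only, and not the project's actual pipeline, the following Python sketch shows one simple way contributions from the two crowds could be merged per DBpedia triple: microtask workers' votes are aggregated by majority, and expert judgments from the contest take precedence where available. All function names and example data are hypothetical.

# Hypothetical sketch: merging quality judgments from an expert contest
# and paid microtasks into one verdict per DBpedia triple.
from collections import Counter
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

def majority_verdict(judgments: List[str]) -> str:
    """Return the most frequent label ('correct' / 'incorrect') for one triple."""
    return Counter(judgments).most_common(1)[0][0]

def combine_crowds(expert: Dict[Triple, List[str]],
                   microtask: Dict[Triple, List[str]]) -> Dict[Triple, str]:
    """Aggregate microtask votes by majority; expert judgments override them."""
    verdicts = {t: majority_verdict(votes) for t, votes in microtask.items()}
    for t, votes in expert.items():
        verdicts[t] = majority_verdict(votes)  # expert crowd takes precedence
    return verdicts

if __name__ == "__main__":
    triple = ("dbr:Berlin", "dbo:populationTotal", '"3.5"^^xsd:integer')
    worker_votes = {triple: ["correct", "incorrect", "incorrect"]}
    expert_votes = {triple: ["incorrect"]}
    print(combine_crowds(expert_votes, worker_votes))
    # -> {('dbr:Berlin', 'dbo:populationTotal', '"3.5"^^xsd:integer'): 'incorrect'}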
 









Contact person: Maribel Acosta

http://people.aifb.kit.edu/mac/DBpediaQualityAssessment/

Research group: Web Science und Wissensmanagement





Involved persons


Publications

Maribel Acosta, Amrapali Zaveri, Elena Simperl, Dimitris Kontokostas, Sören Auer, Jens Lehmann
Crowdsourcing Linked Data Quality Assessment
In: The Semantic Web – ISWC 2013, pages 260-276, Springer, October 2013




Projects