
Inproceedings3177: Difference between revisions

(The page was newly created: „{{Publikation Erster Autor |ErsterAutorNachname=Blanco |ErsterAutorVorname=Roi }} {{Publikation Author |Rank=2 |Author=Harry Halpin }} {{Publikation Author |Rank=…“)

(4 intermediate revisions by the same user not shown)

Line 33:
|Year=2011
|Month=Juli
|Booktitle=Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2011)
|Publisher=ACM
|Address=Beijing, PR China
|Editor=ACM
}}
{{Publikation Details
|Abstract=The primary problem confronting any new kind of search task is how to boot-strap a reliable and repeatable evaluation campaign, and a crowd-sourcing approach provides many advantages. However, can these crowd-sourced evaluations be repeated over long periods of time in a reliable manner? To demonstrate, we investigate creating an evaluation campaign for the semantic search task of keyword-based ad-hoc object retrieval. In contrast to traditional search over web-pages, object search aims at the retrieval of information from factual assertions about real-world objects rather than searching over web-pages with textual descriptions. Using the first large-scale evaluation campaign that specifically targets the task of ad-hoc Web object retrieval over a number of deployed systems, we demonstrate that crowd-sourced evaluation campaigns can be repeated over time and still maintain reliable results. Furthermore, we show how these results are comparable to expert judges when ranking systems and that the results hold over different evaluation and relevance metrics. This work provides empirical support for scalable, reliable, and repeatable search system evaluation using crowdsourcing.
|ISBN=978-1-4503-0757-4
|Download=Sigir2011-crowd-search-evaluation.pdf,
|Projekt=IGreen
|Forschungsgruppe=Wissensmanagement
}}

Current revision as of 2 February 2012, 09:33


Repeatable and Reliable Search System Evaluation using Crowdsourcing





Published: July 2011

Book title: Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2011)
Publisher: ACM
Place of publication: Beijing, PR China

Refereed publication

BibTeX
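
For reference, the fields above can be assembled into a BibTeX entry. The sketch below uses only the metadata shown on this page; the citation key is illustrative, and "and others" stands in for the author list, which is cut off after the second author in the revision summary above.

  @inproceedings{Blanco2011Repeatable,
    author    = {Roi Blanco and Harry Halpin and others},
    title     = {Repeatable and Reliable Search System Evaluation using Crowdsourcing},
    booktitle = {Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2011)},
    publisher = {ACM},
    address   = {Beijing, PR China},
    month     = jul,
    year      = {2011},
    isbn      = {978-1-4503-0757-4}
  }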

Abstract
The primary problem confronting any new kind of search task is how to boot-strap a reliable and repeatable evaluation campaign, and a crowd-sourcing approach provides many advantages. However, can these crowd-sourced evaluations be repeated over long periods of time in a reliable manner? To demonstrate, we investigate creating an evaluation campaign for the semantic search task of keyword-based ad-hoc object retrieval. In contrast to traditional search over web-pages, object search aims at the retrieval of information from factual assertions about real-world objects rather than searching over web-pages with textual descriptions. Using the first large-scale evaluation campaign that specifically targets the task of ad-hoc Web object retrieval over a number of deployed systems, we demonstrate that crowd-sourced evaluation campaigns can be repeated over time and still maintain reliable results. Furthermore, we show how these results are comparable to expert judges when ranking systems and that the results hold over different evaluation and relevance metrics. This work provides empirical support for scalable, reliable, and repeatable search system evaluation using crowdsourcing.

ISBN: 978-1-4503-0757-4
Download: Media:Sigir2011-crowd-search-evaluation.pdf

Project

IGreen



Research group

Wissensmanagement


Research area

Information Retrieval, Semantic Search