
Annotating and Analyzing Biased Sentences in News Articles using Crowdsourcing

Published: 2020

Book title: Proceedings of the 12th Conference on Language Resources and Evaluation (LREC'20)
Publisher: European Language Resources Association (ELRA)

Refereed publication


The spread of biased news and its consumption by readers has become a considerable issue. Researchers from multiple domains, including social science and media studies, have made efforts to mitigate this media bias issue. Specifically, various techniques ranging from natural language processing to machine learning have been used to help determine news bias automatically. However, due to the lack of publicly available datasets in this field, especially ones containing labels concerning bias on a fine-grained level (e.g., on the sentence level), it is still challenging to develop methods for effectively identifying bias embedded in news articles. In this paper, we propose a novel news bias dataset which facilitates the development and evaluation of approaches for detecting subtle bias in news articles and for understanding the characteristics of biased sentences. Our dataset consists of 966 sentences from 46 English-language news articles covering 4 different events and contains labels concerning bias on the sentence level. For scalability reasons, the labels were obtained via crowdsourcing. Our dataset can be used for analyzing news bias, as well as for developing and evaluating methods for news bias detection. It can also serve as a resource for related research, including work focusing on fake news detection.

Download: Media:NewsBias-LREC2020.pdf


Web Science


Information Retrieval, Information Systems, Natural Language Processing