Socially Responsible Hate Speech Detection: Can Classifiers Reflect Social Stereotypes?

Francielle Vargas, Isabelle Carvalho, Ali Hürriyetoǧlu, Thiago A.S. Pardo, Fabrício Benevenuto

Research output: Contribution to conference › Paper › Scientific › peer review

4 Citations (Scopus)

Abstract

Recent studies have shown that hate speech technologies may propagate social stereotypes against marginalized groups. Nevertheless, there has been a lack of realistic approaches to assess and mitigate biased technologies. In this paper, we introduce a new approach to analyze the potential of hate speech classifiers to reflect social stereotypes by investigating stereotypical beliefs and contrasting them with counter-stereotypes. We empirically measure the distribution of stereotypical beliefs by analyzing how machine learning models and datasets classify tuples containing stereotypes versus counter-stereotypes. Experimental results show that hate speech classifiers attribute unreal or negligent offensiveness to social identity groups, reflecting and reinforcing stereotypical beliefs about minorities. Furthermore, we also found that models embedding expert and contextual information from offensiveness markers show promising results for mitigating social stereotype bias, towards socially responsible hate speech detection.

Original language: English
Pages: 1187-1196
Number of pages: 10
Status: Published - 2023
