
Combating Hate: How Multilingual Transformers Can Help Detect Topical Hate Speech

13 pages · Published: May 26, 2023


Automated hate speech detection is important for protecting people’s dignity, online experiences, and physical safety in Society 5.0. Transformers are sophisticated pre-trained language models that can be fine-tuned for multilingual hate speech detection. Many studies treat this application as a binary classification problem. Additionally, research on topical hate speech detection uses target-specific datasets containing assertions about a particular group. In this paper, we investigate multi-class hate speech detection using target-generic datasets. We assess the performance of mBERT and XLM-RoBERTa on high- and low-resource languages with limited sample sizes and class imbalance. We find that our fine-tuned mBERT models are performant in detecting gender-targeted hate speech. Our Urdu classifier produces a 31% lift over the baseline model. We also present a pipeline for processing multilingual datasets for multi-class hate speech detection. Our approach could be used in future work on topically focused hate speech detection for other low-resource languages, particularly African languages, which remain under-explored in this domain.
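The abstract mentions fine-tuning under limited sample sizes and class imbalance. The paper's own pipeline is not reproduced here, but a common way to counter imbalance in multi-class fine-tuning is to weight the training loss by inverse class frequency. A minimal, self-contained sketch (the label names are hypothetical, not taken from the paper's datasets):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rare classes get proportionally
    larger weights, so the loss is not dominated by the majority class."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Toy imbalanced label set (hypothetical hate-speech target classes)
labels = ["none", "none", "none", "sexism", "sexism", "racism"]
weights = class_weights(labels)
# The majority class ("none") receives the smallest weight,
# the rarest class ("racism") the largest.
```

In a transformer fine-tuning setup, such weights would typically be passed to the loss function (e.g. a weighted cross-entropy) rather than used on their own.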

Keyphrases: hate speech, machine learning, natural language processing

In: Aurona Gerber and Knut Hinkelmann (editors). Proceedings of Society 5.0 Conference 2023, vol 93, pages 203--215

BibTeX entry
@inproceedings{srikissoon2023combating,
  author    = {Trishanta Srikissoon and Vukosi Marivate},
  title     = {Combating Hate: How Multilingual Transformers Can Help Detect Topical Hate Speech},
  booktitle = {Proceedings of Society 5.0 Conference 2023},
  editor    = {Aurona Gerber and Knut Hinkelmann},
  series    = {EPiC Series in Computing},
  volume    = {93},
  pages     = {203--215},
  year      = {2023},
  publisher = {EasyChair},
  bibsource = {EasyChair},
  issn      = {2398-7340},
  url       = {},
  doi       = {10.29007/1cm6}}