Evaluating Small Language Models for Detecting Media Bias through Fine-tuning






Researcher(s):

  1. Srinath K R
  2. Balaraman Ravindran
  3. Philip Smith


Description:


Public opinion, individual behaviour, and decision-making are all strongly influenced by media bias, which can occur in many forms, such as linguistic, racial, contextual, or hate-speech bias, among others. This study aims to assess the effectiveness of various open-source small language models (SLMs) in identifying different types of media bias by fine-tuning them on MBIB, the first Media Bias Identification Benchmark, a curated collection of datasets covering different types of media bias. We also aim to explore key domain-specific factors that must be considered when deploying these models, in line with core aspects of Responsible AI.
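
As a rough illustration of the kind of fine-tuning involved, the Python sketch below trains a small encoder model for binary bias classification with Hugging Face Transformers. The model name, data files, and column names are assumptions for illustration only, not the project's actual configuration.

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed identifiers (placeholders, not the project's actual setup):
MODEL_NAME = "distilbert-base-uncased"   # any small open-source encoder could be used
DATA_FILES = {"train": "mbib_train.csv", "validation": "mbib_val.csv"}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Load a CSV export of one MBIB task; assumes "text" and "label" columns.
dataset = load_dataset("csv", data_files=DATA_FILES)
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bias-slm",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=2e-5,
    ),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)

trainer.train()
print(trainer.evaluate())  # loss on the held-out split

In practice, each MBIB bias type (e.g. linguistic, racial, or hate-speech bias) would be fine-tuned and evaluated as its own classification task, with task-appropriate metrics added via a compute_metrics callback.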




Links:


  1. Introducing MBIB - The First Media Bias Identification Benchmark Task and Dataset Collection