Fairness evaluation of distilled LLMs






Researchers:

  1. Eshan Gujarati
  2. Bavish Kulur
  3. Karthick Seshadri


Description:


Smaller, task-specific models can be created using knowledge distillation from large language models (LLMs).
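
For concreteness, below is a minimal sketch of one common distillation objective, the soft-label (response-based) loss; it assumes PyTorch, and the temperature value and function name are illustrative rather than taken from any particular paper:

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both output distributions with a temperature, then minimise
        # the KL divergence from the teacher's distribution to the student's.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2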

The goals of the project are to:

(1) Evaluate whether fairness is preserved when LLMs are distilled using the different distillation techniques proposed in the literature (see the probe sketched after this list).

(2) Explore ways in which fairness can be preserved during distillation.
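
As a first pass at goal (1), one simple probe is to compare a model's log-likelihood gap on counterfactual sentence pairs (pairs differing only in a demographic term) before and after distillation. The sketch below assumes a Hugging Face-style causal LM; the pair dataset and all names are hypothetical:

    import torch

    def log_likelihood(model, tokenizer, text):
        # Total log-probability the model assigns to `text`.
        # Assumes a Hugging Face-style causal LM and tokenizer.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        token_lp = log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
        return token_lp.sum().item()

    def counterfactual_gap(model, tokenizer, sentence_pairs):
        # Mean absolute log-likelihood gap over pairs such as
        # ("The engineer said he was late.", "The engineer said she was late.").
        # A gap near zero suggests the model scores both groups similarly.
        gaps = [abs(log_likelihood(model, tokenizer, a) - log_likelihood(model, tokenizer, b))
                for a, b in sentence_pairs]
        return sum(gaps) / len(gaps)

Fairness would then be (loosely) preserved under distillation if the student's gap does not grow relative to the teacher's.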




Links:


  1. Bias and Fairness in Large Language Models: A Survey
  2. A Survey on Knowledge Distillation of Large Language Models