Smaller, task-specific models can be created using knowledge distillation from large language models (LLMs).
The goals of this project are to:
(1) Evaluate whether fairness is preserved when LLMs are distilled with the different distillation techniques used in the literature.
(2) Explore ways in which fairness can be preserved during distillation.
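To make the two goals concrete, the following is a minimal sketch, not the project's actual methodology: a standard soft-label distillation loss (KL divergence between temperature-softened teacher and student distributions), paired with a demographic parity gap that could be computed for both the teacher and the distilled student to check whether the gap widens after distillation. All function names and parameters here are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature flattens the
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions -- the classic
    # soft-label objective; zero when the two distributions match.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def demographic_parity_gap(preds, groups):
    # Gap between the highest and lowest positive-prediction rates
    # across demographic groups; 0 means perfect demographic parity.
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return max(vals) - min(vals)
```

Fairness preservation could then be quantified as the change in the parity gap between teacher and student predictions on the same evaluation set, while `distillation_loss` drives training of the student.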