LLM-Based Fine-Tuning of Restless Multi-Armed Bandits for Public Health – Fairness in Multilingual Settings






Researcher(s):

  1. Chandrasekar Subramanian (Research Advisor)
  2. Gokul Krishnan (Research Scientist)
  3. Ambreesh Parthasarathy (Pre-doc)
  4. Kalyan Nadimpalli (Pre-doc)
  5. Prof. B. Ravindran (Professor and Head)


Collaborators:





Description:


This project focuses on designing social interventions to improve health outcomes for pregnant mothers. Existing work [1, 2, 3] proposes Restless Multi-Armed Bandit (RMAB)-based allocation algorithms, including methods [4] that shape the allocation policy using large language models (LLMs) guided by English-language commands. The research objectives of this project are to:

(1) Identify the fairness and bias impact of multilinguality (including low-resource languages) in such an LLM-based approach; and

(2) Explore techniques for debiasing and improving fairness. (A toy sketch of the LLM-shaped allocation setting is given below.)
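
To make the setting concrete, the following is a minimal, self-contained Python sketch of the kind of pipeline studied here; it is illustrative only and not the project's actual method. The function llm_priority_weights is a hypothetical stand-in for a DLM-style step [4] in which an LLM turns a natural-language command into priority weights; the synthetic arms, language groups, budget, and the greedy top-k allocation (standing in for a Whittle-index policy) are all assumptions made for illustration. The per-group selection-rate check at the end hints at the kind of disparity the multilingual fairness analysis in objective (1) would measure.

# Illustrative sketch only: a toy RMAB-style allocation in which per-group
# priority weights are assumed to come from an LLM interpreting a
# natural-language command, plus a simple per-group selection-rate check.
import random

random.seed(0)

N_ARMS, BUDGET = 20, 5

# Synthetic beneficiaries: each arm has an estimated engagement gain and a
# group label (e.g., primary language). Purely illustrative data.
arms = [
    {"id": i,
     "gain": random.random(),                       # estimated benefit of intervening
     "group": random.choice(["lang_A", "lang_B"])}  # hypothetical language group
    for i in range(N_ARMS)
]

def llm_priority_weights(command):
    """Hypothetical stand-in for an LLM-based reward-shaping step that maps a
    natural-language command to per-group priority weights. A real system
    would query an LLM; here two behaviours are hard-coded to mimic how
    differently phrased (or translated) commands might yield different weights."""
    if "low-resource" in command:
        return {"lang_A": 1.0, "lang_B": 1.3}
    return {"lang_A": 1.0, "lang_B": 1.0}

def allocate(weights):
    """Greedy top-k allocation: score = group weight * estimated gain.
    A deliberately simple stand-in for a Whittle-index policy."""
    ranked = sorted(arms, key=lambda a: weights[a["group"]] * a["gain"], reverse=True)
    return {a["id"] for a in ranked[:BUDGET]}

def selection_rate(chosen, group):
    """Fraction of a group's members that received an intervention."""
    members = [a for a in arms if a["group"] == group]
    return sum(a["id"] in chosen for a in members) / len(members) if members else 0.0

for cmd in ("Prioritise mothers who speak low-resource languages",
            "Prioritise mothers with the largest expected gain"):
    chosen = allocate(llm_priority_weights(cmd))
    print(cmd)
    for g in ("lang_A", "lang_B"):
        print(f"  {g} selection rate: {selection_rate(chosen, g):.2f}")

In the project itself, the weights would come from an actual (possibly multilingual) LLM rather than a hard-coded stub, and fairness would be assessed in the maternal-health RMAB settings of [1, 2, 3] rather than on synthetic data.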




Links:


  1. Scalable Decision-Focused Learning in Restless Multi-Armed Bandits with Application to Maternal and Child Health (arXiv:2202.00916)
  2. Field Study in Deploying Restless Multi-Armed Bandits: Assisting Non-Profits in Improving Maternal and Child Health (arXiv:2109.08075)
  3. Robust Planning over Restless Groups: Engagement Interventions for a Large-Scale Maternal Telehealth Program (Proceedings of the AAAI Conference on Artificial Intelligence)
  4. A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health (arXiv:2402.14807)