This project aims to devise methods for embedding fairness into models through model editing. The first methodology uses model poisoning in a federated learning setting (Bhagoji et al., 2019) to embed fairness into the globally aggregated model. The second methodology uses causal counterfactuals to edit learned representations.
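
As a rough illustration of the first methodology only, the sketch below repurposes the explicit-boosting model-poisoning mechanism of Bhagoji et al. (2019) inside a plain FedAvg loop: one participating client optimizes a demographic-parity penalty alongside its task loss and boosts its update so the fairness edit is not washed out by averaging. Everything here is an illustrative assumption rather than the project's actual implementation: the synthetic client data, the logistic-regression model, and the helper names (`fairness_grad`, `fairness_poison_update`, `lam`, `boost`) are all made up for the example.

```python
# Minimal sketch: model-update "poisoning" used to push a FedAvg-trained
# logistic-regression model toward demographic parity. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, X, y):
    # Gradient of binary cross-entropy for logistic regression.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def fairness_grad(w, X, a):
    # Gradient of a squared demographic-parity gap:
    #   (E[p | a=1] - E[p | a=0])^2,  with p = sigmoid(Xw).
    p = sigmoid(X @ w)
    dp = p * (1 - p)                                  # d sigmoid / d logit
    g1 = (dp[a == 1, None] * X[a == 1]).mean(axis=0)
    g0 = (dp[a == 0, None] * X[a == 0]).mean(axis=0)
    gap = p[a == 1].mean() - p[a == 0].mean()
    return 2.0 * gap * (g1 - g0)

def benign_update(w_global, X, y, lr=0.5, steps=20):
    # Ordinary client: a few steps of local gradient descent on its own data.
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * grad_logloss(w, X, y)
    return w - w_global

def fairness_poison_update(w_global, X, y, a, lr=0.5, steps=20, lam=5.0, boost=4.0):
    # "Poisoning" client: optimizes task loss + lam * fairness penalty, then
    # explicitly boosts its update (as in Bhagoji et al., 2019) so the
    # fairness edit survives averaging with the other clients.
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * (grad_logloss(w, X, y) + lam * fairness_grad(w, X, a))
    return boost * (w - w_global)

def make_client(n=200, bias=2.0):
    # Synthetic client data where the label correlates with the sensitive
    # attribute a, so a naively trained model is unfair.
    a = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2)) + bias * a[:, None]
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > bias / 2).astype(float)
    return X, y, a

clients = [make_client() for _ in range(4)]
w = np.zeros(2)
for _ in range(30):
    deltas = [benign_update(w, X, y) for X, y, _ in clients[:-1]]
    Xp, yp, ap = clients[-1]                          # fairness-editing client
    deltas.append(fairness_poison_update(w, Xp, yp, ap))
    w += np.mean(deltas, axis=0)                      # FedAvg aggregation

X0, y0, a0 = clients[0]
p = sigmoid(X0 @ w)
print("demographic-parity gap:", abs(p[a0 == 1].mean() - p[a0 == 0].mean()))
```

The design choice being sketched is that the same update-boosting lever an adversary would use to implant a backdoor can instead implant a fairness constraint; the second methodology (causal counterfactuals for representation editing) is not illustrated here.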