Enforcing Individual Fairness via Rényi Variational Inference
Abstract
In contrast to group fairness algorithms, which enforce equality of distributions, individual fairness aims to treat similar individuals similarly. In this paper, we focus on individual fairness with respect to sensitive attributes that should be excluded from comparisons between individuals. To this end, we present a new method that leverages the Variational Autoencoder (VAE) algorithm and the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation coefficient to enforce individual fairness in predictions. We also propose new metrics to assess individual fairness. We demonstrate the effectiveness of our approach in enforcing individual fairness on several machine learning tasks prone to algorithmic bias.
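To make the HGR coefficient mentioned above concrete, here is a minimal sketch of how it can be estimated for discrete variables. It relies on Witsenhausen's characterization: for discrete (X, Y), the HGR maximal correlation equals the second-largest singular value of the matrix Q[i, j] = p(x_i, y_j) / sqrt(p(x_i) p(y_j)). This plug-in estimator is an illustration only; the function name `hgr_discrete` is our own, and the paper's method (which handles continuous variables via neural estimation) is not shown here.

```python
import numpy as np

def hgr_discrete(x, y):
    """Plug-in estimate of the HGR maximal correlation for discrete samples.

    Builds the empirical joint distribution, normalizes it by the square
    roots of the marginals, and returns the second-largest singular value
    (the largest is always 1 and corresponds to the constant functions).
    """
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    # Empirical joint distribution p(x, y) from co-occurrence counts.
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1.0)
    joint /= joint.sum()
    # Marginals p(x) and p(y).
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    # Q[i, j] = p(x_i, y_j) / sqrt(p(x_i) * p(y_j)).
    Q = joint / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(Q, compute_uv=False)
    return s[1]
```

For perfectly dependent variables (y identical to x) the estimate is 1, and for independent variables it approaches 0, matching the two extremes of the HGR coefficient.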