Conference paper, 2022

Fairness without the Sensitive Attribute via Causal Variational Autoencoder

Vincent Grari
Sylvain Lamprier
Marcin Detyniecki
Abstract

In recent years, most fairness strategies in machine learning have focused on mitigating unwanted biases under the assumption that the sensitive information is available. In practice, however, this is not always the case: for privacy reasons and due to regulations such as the GDPR in the EU, many personal sensitive attributes are frequently not collected. Yet only a few prior works address the problem of mitigating bias in this difficult setting, in particular with respect to classical fairness objectives such as Demographic Parity and Equalized Odds. Leveraging recent developments in approximate inference, we propose an approach to fill this gap. To infer a proxy for the sensitive information, we introduce a new variational-autoencoder-based framework named SRCVAE that relies on knowledge of the underlying causal graph. Bias mitigation is then performed with an adversarial fairness approach. Our proposed method empirically achieves significant improvements over existing works in the field. We observe that the generated proxy's latent space correctly recovers the sensitive information, and that our approach achieves higher accuracy at the same level of fairness on two real-world datasets.
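To make the second stage concrete, below is a minimal sketch of the general adversarial-fairness idea the abstract refers to, in a Demographic Parity flavour. It is not the authors' SRCVAE implementation: the proxy `s_hat` is faked from synthetic data (in the paper it would be inferred by the causal VAE), and all names (`predictor`, `adversary`, `lam`) are illustrative assumptions.

```python
# Hedged sketch: adversarial debiasing with an inferred sensitive-attribute proxy.
# NOT the paper's SRCVAE code; the proxy below is simulated, not causally inferred.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 2000, 8
X = torch.randn(n, d)
s_true = (X[:, 0] + 0.5 * torch.randn(n) > 0).float()          # unobserved sensitive attribute
y = ((X[:, 1] + s_true + 0.3 * torch.randn(n)) > 0.5).float()  # label biased by s_true

# In SRCVAE this soft proxy would come from the causal VAE's latent space.
s_hat = torch.sigmoid(X[:, 0] + 0.3 * torch.randn(n))          # proxy in [0, 1]

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness/accuracy trade-off weight (assumed hyperparameter)

for step in range(500):
    # 1) Adversary tries to recover the proxy from the prediction logits alone
    #    (Demographic Parity: the adversary sees only the model's output).
    logits = predictor(X).detach()
    opt_a.zero_grad()
    adv_loss = bce(adversary(logits), s_hat.unsqueeze(1))
    adv_loss.backward()
    opt_a.step()

    # 2) Predictor minimises task loss while maximising the adversary's loss,
    #    i.e. making its output uninformative about the sensitive proxy.
    logits = predictor(X)
    opt_p.zero_grad()
    loss = bce(logits, y.unsqueeze(1)) - lam * bce(adversary(logits), s_hat.unsqueeze(1))
    loss.backward()
    opt_p.step()
```

An Equalized Odds variant would additionally feed the true label `y` to the adversary, so that the predictor's output may depend on the proxy only through `y`; increasing `lam` trades accuracy for fairness.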

Dates and versions

hal-03923281, version 1 (04-01-2023)

Identifiers

DOI: 10.24963/ijcai.2022/98
Cite

Vincent Grari, Sylvain Lamprier, Marcin Detyniecki. Fairness without the Sensitive Attribute via Causal Variational Autoencoder. Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), Jul 2022, Vienna, Austria. pp. 696-702, ⟨10.24963/ijcai.2022/98⟩. ⟨hal-03923281⟩