Journal article, Data Science and Engineering, 2020

Achieving Fairness with Decision Trees: An Adversarial Approach

Vincent Grari
Boris Ruf
Sylvain Lamprier
Marcin Detyniecki

Abstract

Fair classification has become an important topic in machine learning research. While most bias mitigation strategies focus on neural networks, we noticed a lack of work on fair classifiers based on decision trees even though they have proven very efficient. In an up-to-date comparison of state-of-the-art classification algorithms on tabular data, tree boosting outperforms deep learning (Zhang et al. in Expert Syst Appl 82:128–150, 2017). For this reason, we have developed a novel approach to adversarial gradient tree boosting. The objective of the algorithm is to predict the output Y with gradient tree boosting while minimizing the ability of an adversarial neural network to predict the sensitive attribute S. The approach incorporates at each iteration the gradient of the neural network directly into the gradient tree boosting. We empirically assess our approach on four popular data sets and compare it against state-of-the-art algorithms. The results show that our algorithm achieves a higher accuracy while obtaining the same level of fairness, as measured using a set of common fairness definitions.
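The abstract describes the algorithm only at a high level. The sketch below illustrates the general idea under simplifying assumptions: binary Y and S, a logistic-regression adversary standing in for the paper's adversarial neural network, and hypothetical names such as `adversarial_gtb` and the trade-off weight `lam`. It is not the authors' implementation, only a minimal example of folding an adversary's gradient into the pseudo-residuals of gradient tree boosting.

```python
# Minimal sketch of adversarial gradient tree boosting (illustrative, not the paper's code).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_gtb(X, y, s, n_rounds=100, lr=0.1, lam=1.0, max_depth=3):
    """Boost trees to predict y while an adversary tries to recover the
    sensitive attribute s from the boosted score F(x); the adversary's
    gradient is folded into the pseudo-residuals at every round."""
    F = np.zeros(len(y))            # boosted score F(x), initialised at 0
    a, b = 0.0, 0.0                 # adversary: logistic model p(s=1) = sigmoid(a*F + b)
    trees = []
    for _ in range(n_rounds):
        p_y = sigmoid(F)            # classifier probability for y
        p_s = sigmoid(a * F + b)    # adversary probability for s

        # Gradient of the log-loss on y w.r.t. F is (p_y - y).
        grad_y = p_y - y
        # Gradient of the adversary's log-loss on s w.r.t. F (chain rule through a*F + b).
        grad_s = (p_s - s) * a

        # Pseudo-residuals: negative gradient of the combined objective
        # L_y - lam * L_s, i.e. predict y well while degrading the adversary.
        residuals = -(grad_y - lam * grad_s)
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        F += lr * tree.predict(X)
        trees.append(tree)

        # A few gradient steps for the adversary on its own parameters (a, b).
        for _ in range(5):
            p_s = sigmoid(a * F + b)
            a -= 0.05 * np.mean((p_s - s) * F)
            b -= 0.05 * np.mean(p_s - s)

    def predict_proba(X_new):
        score = sum(lr * t.predict(X_new) for t in trees)
        return sigmoid(score)

    return predict_proba
```

Larger values of `lam` push the ensemble toward scores from which the adversary cannot recover S, at some cost in accuracy on Y; `lam = 0` reduces the loop to standard gradient tree boosting.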

Dates and versions

hal-03923322, version 1 (04-01-2023)


Cite

Vincent Grari, Boris Ruf, Sylvain Lamprier, Marcin Detyniecki. Achieving Fairness with Decision Trees: An Adversarial Approach. Data Science and Engineering, 2020, 5 (2), pp.99-110. ⟨10.1007/s41019-020-00124-2⟩. ⟨hal-03923322⟩