Comparison-based Inverse Classification for Interpretability in Machine Learning - Sorbonne Université
Conference paper, 2018

Comparison-based Inverse Classification for Interpretability in Machine Learning

Thibault Laugel
Marie-Jeanne Lesot
Christophe Marsala
Xavier Renard
Marcin Detyniecki

Abstract

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available, either on the classifier itself or on the processed data (neither the training nor the test data). It proposes an inverse classification approach whose principle consists in determining the minimal changes needed to alter a prediction: in an instance-based framework, given a data point whose classification must be explained, the proposed method identifies a close neighbor classified differently, where the closeness definition integrates a sparsity constraint. This principle is implemented using observation generation in the Growing Spheres algorithm. Experimental results on two datasets illustrate the relevance of the proposed approach, which can be used to gain knowledge about the classifier.
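The principle described in the abstract can be sketched as follows: sample candidate observations in spherical layers of growing radius around the instance to explain until a differently classified neighbor (an "enemy") is found, then enforce sparsity by reverting features of that neighbor back to the original values whenever the prediction stays different. This is a minimal illustrative sketch, not the authors' implementation; the function names, hyperparameters, and layer-sampling scheme are assumptions that simplify the paper's generation procedure.

```python
import numpy as np

def growing_spheres(x, predict, radius=0.1, step=0.1, n_samples=200, rng=None):
    """Search for a close enemy (differently classified point) around x by
    sampling in spherical layers of growing radius (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    y0 = predict(x[None, :])[0]          # class of the instance to explain
    lo, hi = 0.0, radius
    while True:
        # sample points in the spherical layer between radii lo and hi
        d = rng.normal(size=(n_samples, x.size))
        d /= np.linalg.norm(d, axis=1, keepdims=True)   # random directions
        r = rng.uniform(lo, hi, size=(n_samples, 1))    # random radii
        z = x + r * d
        enemies = z[predict(z) != y0]
        if len(enemies):
            # return the closest enemy found in this layer
            return enemies[np.argmin(np.linalg.norm(enemies - x, axis=1))]
        lo, hi = hi, hi + step           # grow the sphere and retry

def sparsify(x, e, predict):
    """Sparsity step: revert coordinates of the enemy e back to x, smallest
    changes first, keeping only the changes needed to flip the prediction."""
    y0 = predict(x[None, :])[0]
    e = e.copy()
    for i in np.argsort(np.abs(e - x)):  # try reverting smallest moves first
        old = e[i]
        e[i] = x[i]
        if predict(e[None, :])[0] == y0:
            e[i] = old                   # reverting flips the class back: undo
    return e
```

For instance, with a toy classifier `predict = lambda Z: (Z[:, 0] > 1).astype(int)` and `x = np.zeros(2)`, the search returns a neighbor with first coordinate above 1, and the sparsity step reverts the second coordinate to 0 since only the first feature matters.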
Main file: 180115_final.pdf (329.42 KB)
Origin: files produced by the author(s)

Dates and versions

hal-01905982, version 1 (26-10-2018)

Identifiers

Cite

Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki. Comparison-based Inverse Classification for Interpretability in Machine Learning. 17th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 2018), Jun 2018, Cádiz, Spain. pp.100-111, ⟨10.1007/978-3-319-91473-2_9⟩. ⟨hal-01905982⟩
286 views
1410 downloads

