Interpretable Random Forests via Rule Extraction - Sorbonne Université
Conference paper, 2021


Abstract

We introduce SIRUS (Stable and Interpretable RUle Set) for regression, a stable rule learning algorithm whose output takes the form of a short and simple list of rules. State-of-the-art learning algorithms are often referred to as "black boxes" because of the high number of operations involved in their prediction process. Despite their strong predictive accuracy, this lack of interpretability can be highly restrictive for applications with critical decisions at stake. On the other hand, algorithms with a simple structure (typically decision trees, rule algorithms, or sparse linear models) are well known for their instability. This undesirable feature makes the conclusions of the data analysis unreliable and turns out to be a strong operational limitation. This motivates the design of SIRUS, which combines a simple structure with remarkably stable behavior when the data are perturbed. The algorithm is based on random forests, whose predictive accuracy is preserved. We demonstrate the efficiency of the method both empirically (through experiments) and theoretically (with a proof of its asymptotic stability). Our R/C++ software implementation sirus is available from CRAN.
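To make the "short and simple list of rules" concrete, here is a minimal, hypothetical sketch (not the SIRUS implementation or its R API) of how a regression prediction can be formed from a small rule list: each rule outputs one of two values depending on a single threshold test, and the rule outputs are averaged. The feature names, thresholds, and output values below are made up for illustration.

```python
# Illustrative sketch of prediction with a short if-then rule list.
# NOT the SIRUS algorithm itself; rules and values are invented for the example.

def rule_predict(x, rules):
    """Average the outputs of all rules evaluated on sample x (a dict of features).

    Each rule is a tuple (feature, threshold, value_if_below, value_otherwise).
    """
    outputs = [
        if_true if x[feature] < threshold else if_false
        for feature, threshold, if_true, if_false in rules
    ]
    return sum(outputs) / len(outputs)

# A hypothetical 3-rule list for a regression target.
rules = [
    ("temperature", 20.0, 1.5, 3.0),
    ("humidity",     0.6, 2.0, 1.0),
    ("wind",        10.0, 2.5, 0.5),
]

sample = {"temperature": 25.0, "humidity": 0.5, "wind": 8.0}
print(rule_predict(sample, rules))  # (3.0 + 2.0 + 2.5) / 3 = 2.5
```

A list like this is easy to read and audit rule by rule, which is the interpretability property the abstract emphasizes; SIRUS additionally ensures the selected rules stay stable under data perturbation.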
Main file: sirus_reg_final.pdf (648.95 KB)
Origin: files produced by the author(s)

Dates and versions

hal-02557113 , version 1 (28-04-2020)
hal-02557113 , version 2 (08-06-2020)
hal-02557113 , version 3 (06-10-2020)
hal-02557113 , version 4 (08-02-2021)

Cite

Clément Bénard, Gérard Biau, Sébastien da Veiga, Erwan Scornet. Interpretable Random Forests via Rule Extraction. 24th International Conference on Artificial Intelligence and Statistics, Apr 2021, Online, France. pp.937-945, ⟨10.48550/arXiv.2004.14841⟩. ⟨hal-02557113v4⟩