Supervised Feature Compression based on Counterfactual Analysis

Veronica Piccialli, Dolores Romero Morales, Cecilia Salvatore*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Counterfactual Explanations are becoming a de-facto standard in post-hoc interpretable machine learning. For a given classifier and an instance classified in an undesired class, its counterfactual explanation corresponds to small perturbations of that instance that allow changing the classification outcome. This work aims to leverage Counterfactual Explanations to detect the important decision boundaries of a pre-trained black-box model. This information is used to build a supervised discretization of the features in the dataset with a tunable granularity. Using the discretized dataset, an optimal Decision Tree can be trained that resembles the black-box model but is more interpretable and compact. Numerical results on real-world datasets show the effectiveness of the approach in terms of accuracy and sparsity.
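The pipeline described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' method: it replaces a proper counterfactual explanation generator with a crude per-feature search for prediction-flipping values, and the optimal Decision Tree with scikit-learn's CART; the dataset (`load_breast_cancer`), the bin count `n_bins`, and the helper `feature_boundary_values` are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)


def feature_boundary_values(model, X, feature, n_steps=50):
    """For one feature, collect values at which changing that feature alone
    flips the black-box prediction of some instance -- a crude stand-in for
    a proper counterfactual explanation method."""
    base_pred = model.predict(X)
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), n_steps)
    crossed = np.zeros(len(X), dtype=bool)
    values = []
    for v in grid:
        X_pert = X.copy()
        X_pert[:, feature] = v
        flipped = (model.predict(X_pert) != base_pred) & ~crossed
        values.extend([v] * int(flipped.sum()))
        crossed |= flipped
    return values


# Discretize each feature at a few boundary values; the number of bins per
# feature plays the role of the tunable granularity.
n_bins = 3
X_disc = np.zeros_like(X)
for j in range(X.shape[1]):
    vals = feature_boundary_values(black_box, X, j)
    if vals:
        cuts = np.unique(np.quantile(vals, np.linspace(0, 1, n_bins + 1)[1:-1]))
        X_disc[:, j] = np.digitize(X[:, j], cuts)
    # features with no detected boundary collapse to a single bin (all zeros)

# Fit a small, interpretable tree on the discretized data to mimic the
# black-box labels (the paper trains an optimal tree; CART is used here for brevity).
y_bb = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_disc, y_bb)
print("agreement with black box:", surrogate.score(X_disc, y_bb))
```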
Original language: English
Journal: European Journal of Operational Research
Number of pages: 30
ISSN: 0377-2217
DOIs
Publication status: Published - 15 Nov 2023

Bibliographical note

Epub ahead of print. Published online: 15 November 2023.

Keywords

  • Counterfactual analysis
  • Feature compression
  • Interpretability
  • Supervised classification
  • Machine learning
