Sparsity in Optimal Randomized Classification Trees

Rafael Blanquero, Emilio Carrizosa, Cristina Molero-Río, Dolores Romero Morales

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Decision trees are popular classification and regression tools and, when small-sized, easy to interpret. Traditionally, a greedy approach has been used to build the trees, yielding a very fast training process; however, controlling sparsity (a proxy for interpretability) is challenging. In recent studies, optimal decision trees, where all decisions are optimized simultaneously, have shown better learning performance, especially when oblique cuts are implemented. In this paper, we propose a continuous optimization approach to build sparse optimal classification trees, based on oblique cuts, with the aim of using fewer predictor variables in each cut as well as along the whole tree. Both types of sparsity, namely local and global, are modeled by means of regularizations with polyhedral norms. The computational experience reported supports the usefulness of our methodology. In all our data sets, local and global sparsity can be improved without harming classification accuracy. We also show that, unlike greedy approaches, our method can easily trade some classification accuracy for a gain in global sparsity.
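The two polyhedral-norm regularizers mentioned in the abstract can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration, not the paper's exact formulation: the function name sparsity_regularizers, the coefficient-matrix layout, and the specific choice of the l1 norm (per cut, for local sparsity) and the l-infinity norm (per predictor across cuts, for global sparsity) are our assumptions; both are polyhedral norms consistent with the abstract's description.

    # A minimal sketch of two polyhedral-norm sparsity penalties for oblique
    # cuts (illustrative; not the authors' exact model). Assumes the cut at
    # each branch node t has coefficient vector a_t, stored row-wise in a
    # matrix A of shape (n_branch_nodes, n_predictors).

    import numpy as np

    def sparsity_regularizers(A):
        """Return (local, global) sparsity penalties for cut coefficients A.

        local:  sum over nodes of the l1 norm of each node's coefficients,
                pushing individual cuts to use few predictor variables.
        global: sum over predictors of the l-infinity norm across nodes,
                pushing whole variables out of every cut in the tree.
        """
        local_penalty = np.abs(A).sum()               # sum_t ||a_t||_1
        global_penalty = np.abs(A).max(axis=0).sum()  # sum_j max_t |a_{jt}|
        return local_penalty, global_penalty

    # Hypothetical usage: these penalties would be added, with weights
    # lambda_local and lambda_global, to a smooth classification loss
    # minimized over the cut coefficients and intercepts.
    A = np.array([[0.9, 0.0, 0.1],
                  [0.0, 0.0, 0.7]])  # 2 branch nodes, 3 predictors
    loc, glo = sparsity_regularizers(A)
    print(loc, glo)  # 1.7 and 1.6 for this A

Note that the global penalty uses the maximum across nodes, so a predictor escapes it only when its coefficient is zero in every cut of the tree, which is exactly what removing the variable globally means.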
Original language: English
Journal: European Journal of Operational Research
Volume: 284
Issue number: 1
Pages (from-to): 255-272
Number of pages: 18
ISSN: 0377-2217
DOIs
Publication status: Published - Jul 2020

Bibliographical note

Published online: 16 December 2019

Keywords

  • Data mining
  • Optimal classification trees
  • Global and local sparsity
  • Nonlinear Programming
