Abstract
Decision trees are popular classification and regression tools and, when small-sized, easy to interpret. Traditionally, a greedy approach has been used to build the trees, yielding a very fast training process; however, controlling sparsity (a proxy for interpretability) is challenging. In recent studies, optimal decision trees, where all decisions are optimized simultaneously, have shown a better learning performance, especially when oblique cuts are implemented. In this paper, we propose a continuous optimization approach to build sparse optimal classification trees, based on oblique cuts, with the aim of using fewer predictor variables in each cut as well as along the whole tree. Both types of sparsity, namely local and global, are modeled by means of regularizations with polyhedral norms. The reported computational experience supports the usefulness of our methodology: in all our data sets, local and global sparsity can be improved without harming classification accuracy. Moreover, unlike greedy approaches, our method can easily trade some classification accuracy for a gain in global sparsity.
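As an illustrative sketch of how the two sparsity types can be regularized with polyhedral norms (the notation $a_{jt}$ for the coefficient of predictor $j$ in the oblique cut at branch node $t \in \tau_B$, and the weights $\lambda^{L}$, $\lambda^{G}$, is assumed here for illustration and is not necessarily the paper's exact formulation), one may penalize each cut with an $\ell_1$ term for local sparsity and each predictor's coefficients across the whole tree with an $\ell_\infty$ term for global sparsity:

```latex
% Illustrative sketch only (assumed notation, not the paper's exact objective):
% the l1 term promotes local sparsity (few predictors per cut), while the
% l-infinity term promotes global sparsity (a predictor counts as "used"
% once it appears in any cut of the tree).
\[
\min_{a,\,\mu}\;\; \mathrm{Loss}(a,\mu)
\;+\; \lambda^{L} \sum_{t \in \tau_B} \sum_{j=1}^{p} \lvert a_{jt} \rvert
\;+\; \lambda^{G} \sum_{j=1}^{p} \max_{t \in \tau_B} \lvert a_{jt} \rvert
\]
```

Both regularizers are polyhedral norms, so they fit within the continuous optimization framework described in the abstract; tuning $\lambda^{L}$ and $\lambda^{G}$ trades classification accuracy against local and global sparsity, respectively.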
| Original language | English |
| --- | --- |
| Journal | European Journal of Operational Research |
| Volume | 284 |
| Issue number | 1 |
| Pages (from-to) | 255-272 |
| Number of pages | 18 |
| ISSN | 0377-2217 |
| DOIs | |
| Publication status | Published - Jul 2020 |
Bibliographical note
Published online: 16 December 2019

Keywords
- Data mining
- Optimal classification trees
- Global and local sparsity
- Nonlinear Programming