Wasserstein Support Vector Machine: Support Vector Machines Made Fair

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

In this paper, a novel model combining Support Vector Machines (SVM) and equity is introduced. Assuming that a group of individuals needs to be protected against discrimination, we address the problem of training the classifier by jointly maximizing classification performance (the SVM margin) and equity (closeness between the distribution of the predictions in the protected group and in the remaining individuals). Training makes efficient use of the available information, since the margin is evaluated on individuals for which the class label is known, whereas equity is measured on individuals for whom we know whether they belong to the protected group or not, so their class label is not required. We modify the dual SVM formulation with a penalization of the Wasserstein distance between the empirical distributions of the SVM scores of the two groups. In our approach, predictions are made by reweighting the records, and we show that these weights can be found by training an SVM with a modified kernel. Numerical results are presented on classic benchmark datasets from the Fair Machine Learning literature, where we investigate the trade-off between accuracy and unfairness for different values of the decision threshold. With a mild penalization of the Wasserstein distance, we can dramatically reduce the unfairness while keeping a similar level of accuracy.
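The unfairness notion penalized in the paper can be illustrated with a minimal sketch: train an ordinary SVM and measure the one-dimensional Wasserstein distance between the empirical distributions of its decision scores for the protected group and the remaining individuals. This is only an audit of the quantity the authors penalize during training, not the paper's modified-kernel method; the synthetic data and group assignment below are hypothetical.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.svm import SVC

# Hypothetical synthetic data: 200 individuals, 2 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # class labels (known for training set)
protected = rng.random(200) < 0.5               # protected-group membership (no label needed)

# Standard SVM; the paper instead trains with a Wasserstein penalty in the dual.
clf = SVC(kernel="rbf").fit(X, y)
scores = clf.decision_function(X)

# Empirical 1-D Wasserstein distance between the two score distributions.
# Closeness of these distributions is the equity criterion in the abstract.
unfairness = wasserstein_distance(scores[protected], scores[~protected])
print(f"Wasserstein unfairness of scores: {unfairness:.4f}")
```

A fairness-penalized model would drive this distance toward zero while trying to preserve the margin, which is the trade-off the numerical experiments explore across decision thresholds.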
Original language: English
Journal: European Journal of Operational Research
Volume: 329
Issue number: 2
Pages (from-to): 641-652
Number of pages: 12
ISSN: 0377-2217
DOIs
Publication status: Published - Mar 2026

Bibliographical note

Published online: 27 October 2025.

Keywords

  • Support vector machines
  • Algorithmic fairness
  • Wasserstein distance
  • Multi-objective optimization