
Adversarial Robustness in Two-Stage Learning-to-Defer: Algorithms and Guarantees

Authors:
Montreuil, Yannis
Carlier, Axel
Ng, Lai Xing
Ooi, Wei Tsang
Publication Year:
2025

Abstract

Learning-to-Defer (L2D) facilitates optimal task allocation between AI systems and human decision-makers. Despite its potential, we show that current two-stage L2D frameworks are highly vulnerable to adversarial attacks, which can misdirect queries or overwhelm decision agents, significantly degrading system performance. This paper presents the first comprehensive analysis of adversarial robustness in two-stage L2D frameworks. We introduce two novel attack strategies -- untargeted and targeted -- that exploit inherent structural vulnerabilities in these systems. To mitigate these threats, we propose SARD, a robust, convex deferral algorithm rooted in Bayes and $(\mathcal{R},\mathcal{G})$-consistency. Our approach guarantees optimal task allocation under adversarial perturbations for all surrogates in the cross-entropy family. Extensive experiments on classification, regression, and multi-task benchmarks validate the robustness of SARD.
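For readers unfamiliar with the setting, the allocation step in a generic two-stage L2D system can be sketched as follows. This is an illustrative, simplified sketch of the standard L2D deferral rule (route each query to the agent with the highest rejector score), not the paper's SARD algorithm; the function `allocate` and the example scores are hypothetical.

```python
def allocate(rejector_scores):
    """Route each query to the agent with the highest rejector score.

    rejector_scores: a list with one list of per-agent scores per query.
    Index 0 is conventionally the AI model; indices 1.. are human experts.
    Returns the chosen agent index for each query.
    """
    return [max(range(len(scores)), key=scores.__getitem__)
            for scores in rejector_scores]

# Hypothetical example: two queries, three agents (model + two experts).
scores = [
    [0.9, 0.3, 0.2],  # model is confident -> keep with the model (agent 0)
    [0.1, 0.7, 0.5],  # model is uncertain -> defer to expert 1
]
print(allocate(scores))  # [0, 1]
```

The attacks described in the abstract target exactly this routing step: a small perturbation of the input can flip the argmax, misdirecting queries away from the best-suited agent or flooding a single expert.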

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2502.01027
Document Type:
Working Paper