
Verifiable Learning for Robust Tree Ensembles

Authors:
Calzavara, Stefano
Cazzaro, Lorenzo
Pibiri, Giulio Ermanno
Prezza, Nicola
Publication Year: 2023

Abstract

Verifying the robustness of machine learning models against evasion attacks at test time is an important research problem. Unfortunately, prior work established that this problem is NP-hard for decision tree ensembles, hence intractable in the general case. In this paper, we identify a restricted class of decision tree ensembles, called large-spread ensembles, which admit a security verification algorithm running in polynomial time. We then propose a new approach called verifiable learning, which advocates training models from restricted classes that are amenable to efficient verification. We show the benefits of this idea by designing a new training algorithm that automatically learns a large-spread decision tree ensemble from labelled data, thus enabling its security verification in polynomial time. Experimental results on public datasets confirm that large-spread ensembles trained with our algorithm can be verified in a matter of seconds on standard commercial hardware. Moreover, large-spread ensembles are more robust than traditional ensembles against evasion attacks, at the cost of an acceptable loss of accuracy in the non-adversarial setting.

Comment: 19 pages, 5 figures; full version of the revised paper accepted at ACM CCS 2023 with corrected typo in footnote 1
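The intuition behind the abstract's central claim is that the large-spread property makes robustness verification decompose tree by tree: when the thresholds that different trees place on the same feature are far apart relative to the perturbation budget, the attacker can drive each tree to its worst reachable leaf independently, so the worst-case ensemble score is just the sum of per-tree worst cases. Below is a minimal sketch of that per-tree decomposition, not the paper's actual algorithm or API: it assumes binary classification with additively aggregated scores, L-infinity perturbations of budget eps, and that each feature is tested at most once along any root-to-leaf path; all names (Node, reachable_leaves, is_robust) are illustrative.

```python
# Illustrative sketch of per-tree robustness verification for a
# large-spread ensemble. Assumptions (not from the paper's code):
# binary classification, scores aggregated by sum, prediction 1 iff
# the total score is positive, L-inf perturbation budget eps, and
# each feature tested at most once per root-to-leaf path.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Node:
    feature: Optional[int] = None   # None marks a leaf
    threshold: float = 0.0
    left: Optional["Node"] = None   # taken when x[feature] <= threshold
    right: Optional["Node"] = None
    leaf_value: float = 0.0


def reachable_leaves(node: Node, x: List[float], eps: float) -> List[float]:
    """Collect every leaf value an attacker can reach by perturbing
    each feature of x by at most eps."""
    if node.feature is None:
        return [node.leaf_value]
    v = x[node.feature]
    leaves: List[float] = []
    if v - eps <= node.threshold:    # some perturbation reaches the left child
        leaves += reachable_leaves(node.left, x, eps)
    if v + eps > node.threshold:     # some perturbation reaches the right child
        leaves += reachable_leaves(node.right, x, eps)
    return leaves


def is_robust(trees: List[Node], x: List[float], y: int, eps: float) -> bool:
    """In a large-spread ensemble, thresholds of different trees on the
    same feature are far apart relative to eps, so the attacker can push
    each tree to its worst reachable leaf independently. The worst-case
    ensemble score is then the sum of per-tree worst cases, computable
    in time polynomial in the ensemble size."""
    worst = 0.0
    for tree in trees:
        vals = reachable_leaves(tree, x, eps)
        worst += min(vals) if y == 1 else max(vals)
    return worst > 0 if y == 1 else worst <= 0
```

Note that for an arbitrary ensemble this decomposition only over-approximates the attacker's power, because the same perturbed input must be fed to every tree; the verdict would be sound but incomplete. The large-spread restriction is precisely what makes the per-tree analysis exact, which is why the paper advocates training such models in the first place.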

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2305.03626
Document Type: Working Paper