
POPQORN: Quantifying Robustness of Recurrent Neural Networks

Authors:
Ko, Ching-Yun
Lyu, Zhaoyang
Weng, Tsui-Wei
Daniel, Luca
Wong, Ngai
Lin, Dahua
Publication Year: 2019

Abstract

Vulnerability to adversarial attacks is a critical issue for deep neural networks. Addressing it requires a reliable way to evaluate a network's robustness. Recently, several methods have been developed to compute $\textit{robustness quantification}$ for neural networks, namely, certified lower bounds on the minimum adversarial perturbation. Such methods, however, were devised for feed-forward networks, e.g., multi-layer perceptrons or convolutional networks. It remains an open problem to quantify the robustness of recurrent networks, especially LSTMs and GRUs. For such networks, computing robustness quantification poses additional challenges, such as handling inputs across multiple time steps and the interaction between gates and states. In this work, we propose $\textit{POPQORN}$ ($\textbf{P}$ropagated-$\textbf{o}$ut$\textbf{p}$ut $\textbf{Q}$uantified R$\textbf{o}$bustness for $\textbf{RN}$Ns), a general algorithm for quantifying the robustness of RNNs, including vanilla RNNs, LSTMs, and GRUs. We demonstrate its effectiveness on different network architectures and show that robustness quantification at individual time steps can lead to new insights.

Comment: 10 pages; Ching-Yun Ko and Zhaoyang Lyu contributed equally; accepted to ICML 2019. See the arXiv source for the appendix (available under [Other formats]).
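The abstract's notion of a certified lower bound can be made concrete with a small sketch. The code below is not POPQORN itself, only a minimal interval-bound-propagation pass through a vanilla tanh RNN under an assumed l-infinity perturbation of radius eps at every input step; the function name, shapes, and setup are illustrative assumptions, not from the paper.

```python
import numpy as np

def ibp_rnn_bounds(x_seq, eps, W, U, b):
    """Propagate l_inf interval bounds through a vanilla tanh RNN (sketch).

    x_seq: (T, d) clean input sequence; each step may be perturbed
           within an l_inf ball of radius eps (hypothetical setup).
    W:     (h, d) input-to-hidden weights.
    U:     (h, h) hidden-to-hidden weights.
    b:     (h,)   bias.
    Returns elementwise lower/upper bounds on the final hidden state.
    """
    # Split the weight matrices into positive and negative parts once,
    # so each affine map can be bounded by standard interval arithmetic.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    Up, Un = np.maximum(U, 0.0), np.minimum(U, 0.0)

    h_lo = np.zeros(U.shape[0])
    h_hi = np.zeros(U.shape[0])
    for x in x_seq:
        x_lo, x_hi = x - eps, x + eps
        # Bounds on the pre-activation z = W x + U h + b.
        z_lo = Wp @ x_lo + Wn @ x_hi + Up @ h_lo + Un @ h_hi + b
        z_hi = Wp @ x_hi + Wn @ x_lo + Up @ h_hi + Un @ h_lo + b
        # tanh is monotonically increasing, so bounds pass through it.
        h_lo, h_hi = np.tanh(z_lo), np.tanh(z_hi)
    return h_lo, h_hi
```

Given such bounds on the final scores, one can certify a radius eps by checking that the true class's lower bound exceeds every other class's upper bound, and binary-search for the largest certifiable eps. Per the abstract, POPQORN instead derives tighter linear bounds that handle the interaction between gates and states in LSTMs and GRUs, rather than the plain intervals used in this sketch.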

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.1905.07387
Document Type: Working Paper