
Noise-adding Methods of Saliency Map as Series of Higher Order Partial Derivative

Authors :
Seo, Junghoon
Choe, Jeongyeol
Koo, Jamyoung
Jeon, Seunghyeon
Kim, Beomsu
Jeon, Taegyun
Publication Year :
2018

Abstract

SmoothGrad and VarGrad are techniques that enhance the empirical quality of standard saliency maps by adding noise to the input. However, few works have provided a rigorous theoretical interpretation of these methods. We analytically formalize the result of these noise-adding methods and make two interesting observations. First, SmoothGrad does not make the gradient of the score function smooth. Second, VarGrad is independent of the gradient of the score function. We believe that our findings provide a clue to revealing the relationship between local explanation methods of deep neural networks and higher-order partial derivatives of the score function.

Comment : presented at 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden
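
For readers unfamiliar with the two methods analyzed in the abstract, the following is a minimal sketch of SmoothGrad (mean of gradients over noise-perturbed inputs) and VarGrad (element-wise variance of those gradients), not the paper's own code. The names model, x, target, n_samples, and sigma are assumptions introduced here for illustration, and the sketch assumes a PyTorch classifier that maps an input tensor to class scores.

    import torch

    def noisy_gradients(model, x, target, n_samples=50, sigma=0.15):
        # Gradients of the target class score w.r.t. noise-perturbed copies of x
        # (all names here are illustrative assumptions, not from the paper).
        grads = []
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
            score = model(noisy)[0, target]        # scalar class score
            grad, = torch.autograd.grad(score, noisy)
            grads.append(grad.detach())
        return torch.stack(grads)                  # shape: (n_samples, *x.shape)

    def smoothgrad(model, x, target, **kw):
        # SmoothGrad: average the gradients over the noisy inputs
        return noisy_gradients(model, x, target, **kw).mean(dim=0)

    def vargrad(model, x, target, **kw):
        # VarGrad: element-wise variance of the same noisy gradients
        return noisy_gradients(model, x, target, **kw).var(dim=0)

The paper's analysis concerns what these averaged and variance-based quantities correspond to analytically, namely series of higher-order partial derivatives of the score function rather than a smoothed gradient.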

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1806.03000
Document Type :
Working Paper