On Network Science and Mutual Information for Explaining Deep Neural Networks
- Authors
Brian Davis, Jose M. F. Moura, Kartikeya Bhardwaj, Umang Bhatt, and Radu Marculescu
- Subjects
Computer Science - Machine Learning (cs.LG), Statistics - Machine Learning (stat.ML), deep learning, feedforward networks, network science, mutual information, information theory, information flow, feature attribution, interpretability
In this paper, we present a new approach to interpreting deep learning models. By coupling mutual information with network science, we explore how information flows through feedforward networks. We show that efficiently approximating mutual information allows us to create an information measure that quantifies how much information flows between any two neurons of a deep learning model. To that end, we propose NIF, Neural Information Flow, a technique for codifying information flow that exposes deep learning model internals and provides feature attributions.
- Comment
ICASSP 2020 (a shorter version appeared at the AAAI-19 Workshop on Network Interpretability for Deep Learning)
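The abstract describes quantifying information flow between pairs of neurons via approximate mutual information. As a rough illustration of the underlying idea (not the paper's specific estimator, which is only described here as an efficient approximation), a generic histogram-based mutual information estimate between two neurons' activation vectors might look like:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based MI estimate (in nats) between two 1-D activation
    vectors. A generic binning estimator, used only to illustrate the
    concept; the paper's own approximation may differ."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                    # empirical joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0                                 # avoid log(0)
    # MI = KL divergence between joint and product of marginals
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Hypothetical activations: neuron B depends on A, neuron C does not.
rng = np.random.default_rng(0)
a = rng.normal(size=5000)
b = a + 0.1 * rng.normal(size=5000)   # strongly dependent on a
c = rng.normal(size=5000)             # independent of a

mi_ab = mutual_information(a, b)
mi_ac = mutual_information(a, c)
```

With these inputs, `mi_ab` comes out much larger than `mi_ac`, matching the intuition that more information "flows" between dependent neurons; such pairwise scores could then be arranged as edge weights in a network-science-style graph over the model's neurons.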
- Published
- 2020