Evaluating Tree Explanation Methods for Anomaly Reasoning: A Case Study of SHAP TreeExplainer and TreeInterpreter
- Author
- Sharma, Pulkit; Mirzan, Shezan Rohinton; Bhandari, Apurva; Pimpley, Anish; Eswaran, Abhiram; Srinivasan, Soundar; Shao, Liqun
- Subjects
- FOS: Computer and information sciences; Artificial Intelligence (cs.AI); Computer Science and Game Theory (cs.GT)
- Abstract
Understanding predictions made by Machine Learning models is critical in many applications. In this work, we investigate the performance of two methods for explaining tree-based models: Tree Interpreter (TI) and SHapley Additive exPlanations TreeExplainer (SHAP-TE). Using a case study on detecting anomalies in job runtimes of applications that utilize cloud-computing platforms, we compare these approaches using a variety of metrics, including computation time, significance of attribution value, and explanation accuracy. We find that, although SHAP-TE offers consistency guarantees over TI at the cost of increased computation, consistency does not necessarily improve explanation performance in our case study.
- Comments
- 10 pages, 2 figures, 4 tables, CMAI workshop 2020
- Published
- 2020
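
The consistency guarantee the abstract attributes to SHAP-TE comes from Shapley values: a feature's attribution is its marginal contribution averaged over all coalitions of the other features. As a minimal illustration (not the paper's implementation, and using a toy set function rather than a real tree model), exact Shapley values can be computed by enumerating coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, n):
    """Exact Shapley attributions for a set function f over n features.

    f maps a frozenset of feature indices to a model output. Feature i's
    attribution weights each marginal contribution f(S ∪ {i}) - f(S) by
    |S|! * (n - |S| - 1)! / n!, i.e. its average over all orderings.
    Cost is exponential in n; SHAP-TE exists precisely to avoid this
    enumeration for tree models.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = (factorial(len(S)) * factorial(n - len(S) - 1)
                          / factorial(n))
                phi[i] += weight * (f(S | {i}) - f(S))
    return phi

# Toy "model": outputs 3 if feature 0 is present, plus 1 more if
# features 0 and 1 are both present (a hypothetical example).
def toy_model(S):
    return 3.0 * (0 in S) + 1.0 * (0 in S and 1 in S)

vals = shapley_values(toy_model, 2)
# Efficiency property: attributions sum to f(all) - f(empty) = 4.0
```

Exhaustive enumeration like this is only feasible for a handful of features; the point of the comparison in the paper is how tree-specific approximations (TI's path-based attribution vs. SHAP-TE's polynomial-time exact Shapley computation) trade computation for consistency.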