
TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation.

Authors:
Li, Jiawei
Li, Yiming
Xiang, Xingchun
Xia, Shu-Tao
Dong, Siyi
Cai, Yun
Source:
Entropy; Nov 2020, Vol. 22, Issue 11, p1203, 1p
Publication Year:
2020

Abstract

Deep Neural Networks (DNNs) usually work in an end-to-end manner. This makes trained DNNs easy to use, but their decision process remains opaque for every test case. Unfortunately, the interpretability of decisions is crucial in some scenarios, such as medical or financial data mining and decision-making. In this paper, we propose a Tree-Network-Tree (TNT) learning framework for explainable decision-making, in which knowledge is alternately transferred between tree models and DNNs. Specifically, the proposed TNT learning framework exploits the advantages of different models at different stages: (1) a novel James–Stein Decision Tree (JSDT) is proposed to generate better knowledge representations for DNNs, especially when the input data are low-frequency or low-quality; (2) the DNN produces high-performing predictions from the knowledge-embedding inputs and acts as a teacher model for the subsequent tree model; and (3) a novel distillable Gradient Boosted Decision Tree (dGBDT) is proposed to learn interpretable trees from the soft labels and make predictions comparable to those of the DNN. Extensive experiments on various machine learning tasks demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
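As an illustrative note on the three stages above: the James–Stein estimator underlying JSDT is the classic shrinkage estimator that, for a p-dimensional mean with p >= 3, scales the sample mean by a factor of (1 - (p - 2)*sigma^2 / ||x_bar||^2), which stabilizes estimates when samples are scarce. The minimal Python sketch below walks through the tree -> network -> tree pipeline using off-the-shelf scikit-learn models in place of the paper's JSDT and dGBDT; every model choice and hyperparameter here is an assumption for illustration, not the authors' implementation.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Stage 1: a tree model supplies knowledge representations for the DNN.
# A plain CART tree stands in for JSDT; the leaf each sample falls into
# is one-hot encoded and appended as a "knowledge embedding".
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
leaf_ids = tree.apply(X)
onehot = np.zeros((len(X), leaf_ids.max() + 1))
onehot[np.arange(len(X)), leaf_ids] = 1.0
X_embed = np.hstack([X, onehot])

# Stage 2: the DNN is trained on the embedded inputs and acts as teacher.
dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
dnn.fit(X_embed, y)
soft_labels = dnn.predict_proba(X_embed)[:, 1]   # teacher's soft targets

# Stage 3: an interpretable GBDT student (stand-in for dGBDT) regresses
# the teacher's soft labels instead of the hard 0/1 targets.
student = GradientBoostingRegressor(n_estimators=100, max_depth=3,
                                    random_state=0)
student.fit(X, soft_labels)

hard_pred = (student.predict(X) > 0.5).astype(int)
print("student/teacher agreement:",
      (hard_pred == dnn.predict(X_embed)).mean())

The design point is that the student is fit to the teacher's probabilities rather than the hard labels, so it inherits the DNN's decision boundary while remaining a tree ensemble whose splits can be inspected.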

Details

Language:
English
ISSN:
1099-4300
Volume:
22
Issue:
11
Database:
Complementary Index
Journal:
Entropy
Publication Type:
Academic Journal
Accession Number:
147333500
Full Text:
https://doi.org/10.3390/e22111203