
Deep Logic Networks: Inserting and Extracting Knowledge From Deep Belief Networks.

Authors :
Tran, Son N.
d'Avila Garcez, Artur S.
Source :
IEEE Transactions on Neural Networks & Learning Systems. Feb 2018, Vol. 29, Issue 2, p246-258. 13p.
Publication Year :
2018

Abstract

Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a more modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful for the insertion of background knowledge into deep networks, and whether it can improve learning performance when such knowledge is available; we also investigate the extraction of knowledge from trained deep networks, and whether extraction can offer a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language—a set of logical rules that we call confidence rules—and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can produce an improvement in the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and training of deep networks. With the use of this method, a deep neural–symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance. [ABSTRACT FROM AUTHOR]
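The abstract's "confidence rules" pair a logical rule with a numeric confidence, which can be mapped onto the weights of a restricted Boltzmann machine so that each hidden unit realizes one rule. A minimal sketch of that idea, assuming a simple encoding in which positive literals receive weight +c and negated literals -c (the function names and the exact encoding here are illustrative assumptions, not the paper's definitions):

```python
import numpy as np

def rule_to_weights(n_visible, positives, negatives, confidence):
    """Build one hidden unit's weight row from a confidence rule.

    Illustrative encoding: a rule  c : h <-> x_i AND NOT x_j  gives the
    hidden unit weight +c on each positive literal and -c on each
    negated one; all other visible units get weight 0.
    """
    w = np.zeros(n_visible)
    w[list(positives)] = confidence
    w[list(negatives)] = -confidence
    return w

def hidden_activation(v, W):
    """Probability that each hidden unit (i.e., each rule) fires given v."""
    return 1.0 / (1.0 + np.exp(-(v @ W.T)))

# Two rules over 3 visible units:
#   h0 <-> x0 AND NOT x1   (confidence 5.0)
#   h1 <-> x2              (confidence 5.0)
W = np.stack([rule_to_weights(3, [0], [1], 5.0),
              rule_to_weights(3, [2], [], 5.0)])

v = np.array([1.0, 0.0, 1.0])      # x0 true, x1 false, x2 true
p = hidden_activation(v, W)        # both rules should fire strongly
```

Under this encoding, knowledge insertion amounts to initializing (part of) an RBM's weight matrix from rules before training, and extraction to reading rules back off the learned weights layer by layer.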

Subjects

Subjects :
*DEEP learning
*BOLTZMANN machine

Details

Language :
English
ISSN :
2162-237X
Volume :
29
Issue :
2
Database :
Academic Search Index
Journal :
IEEE Transactions on Neural Networks & Learning Systems
Publication Type :
Periodical
Accession number :
127490693
Full Text :
https://doi.org/10.1109/TNNLS.2016.2603784