NSL: Hybrid Interpretable Learning From Noisy Raw Data

Authors :
Cunnington, Daniel
Russo, Alessandra
Law, Mark
Lobo, Jorge
Kaplan, Lance
Publication Year :
2020

Abstract

Inductive Logic Programming (ILP) systems learn generalised, interpretable rules in a data-efficient manner utilising existing background knowledge. However, current ILP systems require training examples to be specified in a structured logical format. Neural networks learn from unstructured data, although their learned models may be difficult to interpret and are vulnerable to data perturbations at run-time. This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data. NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art ILP system for rule learning under the answer set semantics. Features extracted by the neural components define the structured context of labelled examples and the confidence of the neural predictions determines the level of noise of the examples. Using the scoring function of FastLAS, NSL searches for short, interpretable rules that generalise over such noisy examples. We evaluate our framework on propositional and first-order classification tasks using the MNIST dataset as raw data. Specifically, we demonstrate that NSL is able to learn robust rules from perturbed MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines whilst being more general and interpretable.

Comment: This article has been replaced with arXiv:2106.13103
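The abstract describes mapping neural feature extraction onto noisy ILP examples: the predicted features form an example's structured context, and the prediction confidence sets how noisy (and so how cheaply ignorable) the example is. The sketch below illustrates one way such a mapping could look; it is not the authors' code, and the network, predicate names, penalty scaling, and the ILASP/FastLAS-style `#pos(id@penalty, {incl}, {excl}, {ctx})` example format are assumptions for illustration only.

```python
# Minimal sketch (not the NSL implementation): convert a neural digit
# prediction over a raw MNIST image into a weighted, context-dependent
# ILP example. Predicate names and penalty scaling are assumptions.

import torch
import torch.nn.functional as F


def to_ilp_example(model, image, label, example_id, max_penalty=100):
    """Classify a raw MNIST image and emit one noisy ILP example string.

    The predicted digit becomes the structured context of the example,
    and the softmax confidence is scaled into the example's penalty,
    so low-confidence (possibly perturbed) inputs cost less to leave
    uncovered during rule search.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(image.unsqueeze(0)), dim=1).squeeze(0)
    digit = int(probs.argmax())
    confidence = float(probs[digit])
    penalty = max(1, int(round(confidence * max_penalty)))

    context = f"digit({digit})."    # neural feature as structured context
    inclusion = f"label({label})"   # task label the learned rules must cover
    return f"#pos(ex{example_id}@{penalty}, {{{inclusion}}}, {{}}, {{{context}}})."


# Hypothetical usage, assuming `cnn` is a pre-trained MNIST classifier and
# `img`, `y` come from a torchvision MNIST loader:
#   print(to_ilp_example(cnn, img, y, example_id=0))
#   -> #pos(ex0@97, {label(7)}, {}, {digit(7)}).
```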

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1228451302
Document Type :
Electronic Resource