Knowledge Representation and Reasoning Methods to Explain Errors in Machine Learning

Authors :
Alirezaie, Marjan
Längkvist, Martin
Loutfi, Amy
Publication Year :
2020

Abstract

In this chapter we focus on the use of knowledge representation and reasoning (KRR) methods as a guide to machine learning algorithms, whereby relevant contextual knowledge can be leveraged. In this way, the learning methods improve performance by taking into account the causal relationships behind errors. Performance improvement can be obtained by focusing the learning task on aspects that are particularly challenging (or prone to error), and then using added knowledge inferred by the reasoner as further input to the learning algorithms. Said differently, the KRR algorithms guide the learning algorithms, feeding them labels and data in order to iteratively reduce the errors calculated by a given cost function. This closed-loop system comes with the added benefit that errors also become more understandable to humans, as it is the task of the KRR system to contextualize the errors from the ML algorithm in accordance with its knowledge model. This represents a type of explainable AI focused on interpretability. This chapter discusses the benefits of combining KRR methods with ML methods in this way, and demonstrates an approach applied to satellite data for the purpose of improving classification and segmentation tasks.
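The closed loop the abstract describes can be sketched in miniature: a learner makes predictions, a KRR component contextualizes the cases the learner gets wrong, and the resulting labels are fed back as new training input. The sketch below is illustrative only, not the authors' implementation; the `Learner` class, the `reasoner` rule table, and feature names such as `high_ndvi` are all hypothetical stand-ins for the chapter's satellite-imagery setting.

```python
# Hedged sketch of the KRR-guided learning loop from the abstract.
# All names (Learner, reasoner, high_ndvi, ...) are illustrative assumptions.

class Learner:
    """Trivial lookup-table 'classifier' standing in for an ML model."""
    def __init__(self):
        self.label_for = {}              # feature -> learned label

    def fit(self, samples):              # samples: list of (feature, label)
        for feat, lab in samples:
            self.label_for[feat] = lab

    def predict(self, feat):
        return self.label_for.get(feat, "unknown")


def reasoner(feat):
    """Toy KRR rule base contextualizing a feature, e.g. 'regions with
    high NDVI are vegetation' in a satellite-imagery knowledge model."""
    rules = {"high_ndvi": "vegetation", "flat_spectrum": "water"}
    return rules.get(feat)


def closed_loop(learner, samples, max_iters=3):
    """Iteratively relabel the learner's errors with the reasoner's output."""
    for _ in range(max_iters):
        errors = [f for f in samples if learner.predict(f) == "unknown"]
        if not errors:
            break
        # The reasoner explains the errors and supplies corrective labels,
        # which are fed back to the learner as new training data.
        new_labels = [(f, reasoner(f)) for f in errors if reasoner(f)]
        learner.fit(new_labels)
    return learner


learner = Learner()
learner.fit([("high_ndvi", "vegetation")])
closed_loop(learner, ["high_ndvi", "flat_spectrum"])
print(learner.predict("flat_spectrum"))  # → water
```

The error set here plays the role of the cost function in the abstract: each iteration shrinks it by converting reasoner-explained errors into labeled data, until no explainable errors remain.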

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1234143086
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.3233/SSW200017