
Explaining Relation Classification Models with Semantic Extents

Authors :
Klöser, Lars
Büsgen, Andre
Kohl, Philipp
Kraft, Bodo
Zündorf, Albert
Publication Year :
2023

Abstract

In recent years, the development of large pretrained language models, such as BERT and GPT, significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks. A lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions. We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can increase the reliability and security of natural language processing systems. Semantic extents are an essential step in enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.

Comment: Accepted at DeLTA 2023: Deep Learning Theory and Applications conference

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.02193
Document Type :
Working Paper
Full Text :
https://doi.org/10.1007/978-3-031-39059-3_13