A novel application of XAI in squinting models: A position paper
- Author
Kenneth Wenger, Katayoun Hossein Abadi, Damian Fozard, Kayvan Tirdad, Alex Dela Cruz, and Alireza Sadeghian
- Subjects
Artificial Intelligence, Deep learning, Pathology, Explainable AI, XAI, Safety critical AI, Cybernetics, Q300-390, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Artificial Intelligence, and Machine Learning in particular, is becoming increasingly foundational to our collective future. Recent developments around generative models such as ChatGPT and DALL-E represent just the tip of the iceberg of new tools that will change the way we live our lives. Convolutional Neural Networks (CNNs) and Transformer models are also at the heart of advancements in the autonomous vehicle and health care industries. Yet these models, as impressive as they are, still make plenty of mistakes without justifying or explaining which aspects of the input or internal state were responsible for the error. Often, the goal of automation is to increase throughput, processing as many tasks as possible in as short a period of time as possible. For some use cases the cost of mistakes might be acceptable as long as production increases above some set margin. However, in health care, autonomous vehicles, and financial applications, a single mistake can have catastrophic consequences. For this reason, industries where individual mistakes can be costly are less enthusiastic about early AI adoption. The field of eXplainable AI (XAI) has attracted significant attention in recent years with the goal of producing algorithms that shed light on the decision-making process of neural networks. In this paper we show how robust vision pipelines can be built using XAI algorithms, producing automated watchdogs that actively monitor the decision-making process of neural networks for signs of mistakes or ambiguous data. We call these robust vision pipelines squinting pipelines.
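The abstract does not specify how the watchdog is realized; the following is a minimal sketch of the general idea of an XAI-based watchdog wrapped around a CNN classifier, not the authors' implementation. The backbone (ResNet-18), the vanilla-gradient saliency map, and the confidence and dispersion thresholds are all illustrative assumptions.

```python
# Sketch of a "squinting" prediction step: the watchdog inspects a saliency
# map alongside the model's confidence and flags ambiguous-looking inputs.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # illustrative stand-in for the task model
model.eval()

def squinting_predict(image, conf_threshold=0.7, dispersion_threshold=0.5):
    """Return (predicted class, flagged); flagged=True means 'route to a human'."""
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)

    # Vanilla gradient saliency: how strongly each input pixel drives the decision.
    logits[0, pred.item()].backward()
    saliency = image.grad.abs().amax(dim=0)        # (H, W) attribution map
    saliency = saliency / (saliency.max() + 1e-8)  # normalize to [0, 1]

    # Watchdog heuristic (assumed): a decision resting on a compact set of
    # salient pixels is treated as focused; widely dispersed attributions
    # are treated as a sign of ambiguity.
    dispersion = (saliency > 0.1).float().mean().item()

    flagged = conf.item() < conf_threshold or dispersion > dispersion_threshold
    return pred.item(), flagged

# Usage: flagged samples are escalated for review rather than trusted blindly.
x = torch.rand(3, 224, 224)
label, needs_review = squinting_predict(x)
print(label, needs_review)
```

In this sketch the XAI signal is only consumed as a quality check; any attribution method (e.g., Grad-CAM or SHAP) could play the same role in the pipeline.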
- Published
2023