An AI Architecture with the Capability to Explain Recognition Results
- Source :
- 2024 IEEE 3rd International Conference on Computing and Machine Intelligence (ICMI), 2024, pp. 1-6
- Publication Year :
- 2024
Abstract
- Explainability is needed to establish confidence in machine learning results. Some explainable methods take a post hoc approach to explaining the weights of machine learning models; others highlight the areas of the input that contribute to a decision. Neither adequately explains decisions in plain terms. Explainable property-based systems have been shown to provide explanations in plain terms; however, they have not performed as well as leading unexplainable machine learning methods. This research focuses on the importance of metrics to explainability and contributes two methods yielding performance gains. The first introduces a combination of explainable and unexplainable flows, proposing a metric to characterize the explainability of a decision. The second compares classic metrics for estimating the effectiveness of the neural networks in the system, identifying a new metric as the leading performer. Results from the new methods and examples from handwriting datasets are presented.
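- The abstract names a metric that characterizes the explainability of a decision produced by combined explainable and unexplainable flows, but does not state its form. The sketch below is purely illustrative: the convex blend, the score definition, and all names are assumptions, not the paper's method.

```python
# Hypothetical sketch: blend per-class scores from an explainable flow
# (e.g., property-based rules) and an unexplainable flow (e.g., a neural
# network), then report how much of the winning class's score came from
# the explainable flow. The blend weight `alpha` and this definition of
# decision-level explainability are assumptions for illustration only.
import numpy as np

def combined_decision(explainable_scores: np.ndarray,
                      unexplainable_scores: np.ndarray,
                      alpha: float = 0.5):
    """Return (predicted label, explainability of that decision).

    explainable_scores, unexplainable_scores: per-class scores in [0, 1].
    alpha: assumed weight given to the explainable flow.
    """
    combined = alpha * explainable_scores + (1.0 - alpha) * unexplainable_scores
    label = int(np.argmax(combined))
    # Explainability score: fraction of the winning class's combined
    # score attributable to the explainable flow (assumed definition).
    if combined[label] > 0:
        explainability = alpha * explainable_scores[label] / combined[label]
    else:
        explainability = 0.0
    return label, explainability

# Toy usage on a 3-class problem (e.g., digits restricted to {0, 1, 2}):
rule_scores = np.array([0.7, 0.2, 0.1])  # explainable, property-based flow
net_scores = np.array([0.4, 0.5, 0.1])   # unexplainable flow
label, score = combined_decision(rule_scores, net_scores, alpha=0.6)
print(label, round(score, 3))            # 0 0.724: mostly rule-explained
```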
- Subjects :
- Computer Science - Machine Learning
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2406.08740
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1109/ICMI60790.2024.10586116