1. Enhancing Feature Selection and Interpretability in AI Regression Tasks Through Feature Attribution
- Authors
Hinterleitner, Alexander, Bartz-Beielstein, Thomas, Schulz, Richard, Spengler, Sebastian, Winter, Thomas, and Leitenmeier, Christoph
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence, 68, I.2.0
- Abstract
Research in Explainable Artificial Intelligence (XAI) is increasing, aiming to make deep learning models more transparent. Most XAI methods focus on justifying the decisions made by Artificial Intelligence (AI) systems in security-relevant applications. However, relatively little attention has been given to using these methods to improve the performance and robustness of deep learning algorithms. Additionally, much of the existing XAI work primarily addresses classification problems. In this study, we investigate the potential of feature attribution methods to filter out uninformative features in input data for regression problems, thereby improving the accuracy and stability of predictions. We introduce a feature selection pipeline that combines Integrated Gradients with k-means clustering to select an optimal set of variables from the initial data space. To validate the effectiveness of this approach, we apply it to a real-world industrial problem: blade vibration analysis in the development process of turbomachinery.
- Published
- 2024
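
The abstract only sketches the pipeline, so the following is a minimal, hypothetical illustration of how Integrated Gradients attributions might be combined with k-means clustering for feature selection in a regression setting. It assumes a PyTorch model, Captum's IntegratedGradients, and scikit-learn's KMeans with two clusters; the model, data, and cluster count below are placeholder assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: rank tabular input features for a regression model
# via Integrated Gradients, then split them into "informative" vs.
# "uninformative" groups with k-means. Model, data, and cluster count are
# illustrative placeholders, not the pipeline described in the paper.
import numpy as np
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients
from sklearn.cluster import KMeans

n_features = 20
model = nn.Sequential(              # stand-in regression network
    nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1)
)
X = torch.randn(512, n_features)    # stand-in input data

# Attribute each prediction to the input features (zero baseline assumed).
ig = IntegratedGradients(model)
attributions = ig.attribute(X, baselines=torch.zeros_like(X))

# Aggregate to one importance score per feature.
scores = attributions.abs().mean(dim=0).detach().numpy().reshape(-1, 1)

# Cluster the per-feature scores and keep the cluster with the
# higher mean importance as the selected feature subset.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
informative_cluster = np.argmax(
    [scores[kmeans.labels_ == c].mean() for c in range(2)]
)
selected = np.where(kmeans.labels_ == informative_cluster)[0]
print("Selected feature indices:", selected)
```

Clustering the attribution scores, rather than hand-picking a cutoff, is one plausible way to separate a high-importance group of variables from a low-importance one; whether two clusters are appropriate would depend on the data and is an assumption here.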