
Enhancing Self-Supervised Learning through Explainable Artificial Intelligence Mechanisms: A Computational Analysis.

Authors :
Neghawi, Elie
Liu, Yan
Source :
Big Data & Cognitive Computing; Jun2024, Vol. 8 Issue 6, p58, 23p
Publication Year :
2024

Abstract

Self-supervised learning continues to drive advancements in machine learning. However, the absence of unified computational processes for benchmarking and evaluation remains a challenge. This study conducts a comprehensive analysis of state-of-the-art self-supervised learning algorithms, emphasizing their underlying mechanisms and computational intricacies. Building upon this analysis, we introduce a unified model-agnostic computation (UMAC) process, tailored to complement modern self-supervised learning algorithms. UMAC serves as a model-agnostic and global explainable artificial intelligence (XAI) methodology that is capable of systematically integrating and enhancing state-of-the-art algorithms. Through UMAC, we identify key computational mechanisms and craft a unified framework for self-supervised learning evaluation. Leveraging UMAC, we integrate an XAI methodology to enhance transparency and interpretability. Our systematic approach yields a 17.12% improvement in training time complexity and a 13.1% improvement in testing time complexity. Notably, improvements are observed in augmentation, encoder architecture, and auxiliary components within the network classifier. These findings underscore the importance of structured computational processes in enhancing model efficiency and fortifying algorithmic transparency in self-supervised learning, paving the way for more interpretable and efficient AI models. [ABSTRACT FROM AUTHOR]
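The abstract highlights three computational components of self-supervised pipelines: augmentation, the encoder, and the contrastive objective. As a rough illustration of how these pieces fit together, the following is a minimal SimCLR-style contrastive training step in NumPy. This is a generic sketch, not the paper's UMAC process; the `augment`, `encoder`, and `nt_xent` functions and all shapes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    # Illustrative "augmentation": random scaling plus small Gaussian noise.
    return x * rng.uniform(0.9, 1.1) + rng.normal(0.0, 0.05, size=x.shape)

def encoder(x, W):
    # Toy linear encoder with a tanh nonlinearity (stand-in for a deep network).
    return np.tanh(x @ W)

def nt_xent(z1, z2, tau=0.5):
    # Normalized-temperature cross-entropy (NT-Xent) over the 2N embeddings.
    z = np.concatenate([z1, z2])
    z = z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)  # exclude each embedding's self-similarity
    n = len(z1)
    # Row i's positive pair is the other augmented view of the same example.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Two independently augmented "views" of the same mini-batch.
x = rng.normal(size=(8, 16))     # 8 examples, 16 features
W = rng.normal(size=(16, 4))     # encoder weights
loss = nt_xent(encoder(augment(x), W), encoder(augment(x), W))
print(f"contrastive loss: {loss:.3f}")
```

In a real system each of these three stages (augmentation policy, encoder architecture, loss/auxiliary head) is a tunable component, which is why the abstract reports efficiency gains attributable to each of them separately.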

Details

Language :
English
ISSN :
2504-2289
Volume :
8
Issue :
6
Database :
Complementary Index
Journal :
Big Data & Cognitive Computing
Publication Type :
Academic Journal
Accession number :
178156385
Full Text :
https://doi.org/10.3390/bdcc8060058