A Survey of the Self Supervised Learning Mechanisms for Vision Transformers
- Author
Khan, Asifullah, Sohail, Anabia, Fiaz, Mustansar, Hassan, Mehdi, Afridi, Tariq Habib, Marwat, Sibghat Ullah, Munir, Farzeen, Ali, Safdar, Naseem, Hannan, Zaheer, Muhammad Zaigham, Ali, Kamran, Sultana, Tangina, Tanoli, Ziaurrehman, and Akhter, Naeem
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
Deep supervised learning models require a high volume of labeled data to attain sufficiently good results. However, the practice of gathering and annotating such big data is costly and laborious. Recently, the application of self-supervised learning (SSL) in vision tasks has gained significant attention. The intuition behind SSL is to exploit the synchronous relationships within the data as a form of self-supervision, which can be versatile. In the current big data era, most of the data is unlabeled, and the success of SSL thus relies on finding ways to utilize this vast amount of unlabeled data. It is therefore preferable for deep learning algorithms to reduce their reliance on human supervision and instead derive supervision from the inherent relationships within the data. With the advent of Vision Transformers (ViTs), which have achieved remarkable results in computer vision, it is crucial to explore and understand the various SSL mechanisms employed for training these models, specifically in scenarios where limited labeled data is available. In this survey, we develop a comprehensive taxonomy that systematically classifies SSL techniques based on their representations and the pre-training tasks applied. Additionally, we discuss the motivations behind SSL, review popular pre-training tasks, and highlight the challenges and advancements in this field. Furthermore, we present a comparative analysis of different SSL methods, evaluate their strengths and limitations, and identify potential avenues for future research.
- Comment
34 Pages, 5 Figures, 7 Tables
- Published
2024
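
To make the abstract's notion of deriving supervision "from the inherent relationships within the data" concrete, below is a minimal, illustrative sketch of one popular SSL pre-training task for ViTs: masked image modeling, in the spirit of MAE/SimMIM-style objectives. This is not the survey authors' implementation; the `TinyMIM` class, its hyperparameters, and the 60% mask ratio are all hypothetical choices made for brevity.

```python
# Minimal masked-image-modeling sketch (MAE/SimMIM-style); names are
# hypothetical and sizes are toy-scale, chosen only for illustration.
import torch
import torch.nn as nn

class TinyMIM(nn.Module):
    def __init__(self, img_size=32, patch=4, dim=64, depth=2, heads=4):
        super().__init__()
        self.patch = patch
        self.num_patches = (img_size // patch) ** 2
        self.patch_dim = 3 * patch * patch
        self.to_tokens = nn.Linear(self.patch_dim, dim)       # patch embedding
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.to_pixels = nn.Linear(dim, self.patch_dim)       # reconstruction head

    def patchify(self, imgs):
        # (B, 3, H, W) -> (B, N, patch_dim): cut the image into flat patches
        B, C, H, W = imgs.shape
        p = self.patch
        x = imgs.unfold(2, p, p).unfold(3, p, p)              # (B, C, H/p, W/p, p, p)
        return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

    def forward(self, imgs, mask_ratio=0.6):
        patches = self.patchify(imgs)                         # ground-truth pixels
        tokens = self.to_tokens(patches) + self.pos
        # Randomly replace a fraction of tokens with the learnable mask token.
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        decoded = self.to_pixels(self.encoder(tokens))
        # Loss only on masked positions: the model must infer hidden content
        # from visible context -- the self-supervision the abstract refers to,
        # requiring no human labels at all.
        return ((decoded - patches) ** 2)[mask].mean()

model = TinyMIM()
loss = model(torch.randn(2, 3, 32, 32))   # random images stand in for real data
loss.backward()
```

The key design point is that the target (the original pixels) comes for free from the unlabeled image itself, which is why objectives of this family scale to the vast unlabeled corpora the abstract describes.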