Offloading the computational complexity of transfer learning with generic features.
- Source :
- PeerJ Computer Science; Mar 2024, p1-33, 33p
- Publication Year :
- 2024
Abstract
- Deep learning approaches are generally complex, requiring extensive computational resources and exhibiting high time complexity. Transfer learning is a state-of-the-art approach that reduces these requirements by reusing pre-trained models without compromising accuracy and performance. In conventional studies, pre-trained models are trained on datasets from different but similar domains and therefore contain many domain-specific features. The computational requirements of transfer learning depend directly on the number of features, which comprise both domain-specific and generic features. This article investigates the prospects of reducing the computational requirements of transfer learning models by discarding domain-specific features from a pre-trained model. The approach is applied to breast cancer detection using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) dataset and is evaluated with performance metrics such as precision, accuracy, recall, and F1-score, as well as computational requirements. Discarding domain-specific features up to a specific limit yields significant performance improvements while also reducing computational requirements in terms of training time (reduced by approx. 12%), processor utilization (reduced by approx. 25%), and memory usage (reduced by approx. 22%). The proposed transfer learning strategy increases accuracy (by approx. 7%) while offloading computational complexity. [ABSTRACT FROM AUTHOR]
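- The abstract describes truncating a pre-trained model so that only generic early-layer features are retained. The sketch below illustrates that general idea in Keras: cutting a pre-trained CNN at an intermediate layer and attaching a small task-specific head. The choice of VGG16, the "block3_pool" cut point, and the head architecture are illustrative assumptions, not the authors' exact method or chosen limit.

```python
# Hedged sketch: keep only generic early layers of a pre-trained CNN and
# attach a small classification head. Layer names and the cut point are
# hypothetical; the paper determines its own limit empirically.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load a model pre-trained on ImageNet, without its classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Truncate at an intermediate block: early convolutional layers capture
# generic features (edges, textures), while later blocks tend to encode
# domain-specific features that this approach discards.
cut = base.get_layer("block3_pool").output
feature_extractor = models.Model(inputs=base.input, outputs=cut)
feature_extractor.trainable = False  # freeze the retained generic features

# Small task-specific head for binary classification (e.g., benign vs. malignant).
model = models.Sequential([
    feature_extractor,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)
model.summary()  # the truncated extractor is far smaller than the full backbone
```

- Because the frozen extractor stops at an early block, both the parameter count and the per-step compute drop relative to fine-tuning the full backbone, which is consistent with the training-time, processor, and memory reductions the abstract reports.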
Details
- Language :
- English
- ISSN :
- 2376-5992
- Database :
- Complementary Index
- Journal :
- PeerJ Computer Science
- Publication Type :
- Academic Journal
- Accession number :
- 176567918
- Full Text :
- https://doi.org/10.7717/peerj-cs.1938