Computationally efficient LC-SCS deep learning model for breast cancer classification using thermal imaging.
- Source :
- Neural Computing & Applications; Sep 2024, Vol. 36 Issue 26, p16233-16250, 18p
- Publication Year :
- 2024
Abstract
- Deep learning applications have witnessed significant advancements across diverse domains, revolutionizing tasks such as image recognition, disease classification, cancer detection, natural language processing, and autonomous decision-making. However, striking a balance between performance and cost remains a fundamental challenge. Performance, which encompasses accuracy, efficiency, and effectiveness, is crucial for the real-world applicability of deep learning models. Attaining high performance often involves resource-intensive models with intricate architectures, extensive datasets, and hyperparameter tuning. While these factors lead to improved performance, they also result in increased costs, including computational resources, time, and energy consumption. This research addresses this challenge by proposing a novel low-cost deep learning model based on Sharpened Cosine Similarity (LC-SCS) for breast cancer classification using thermal images. On the DMR-IR dataset, our study also benchmarked pre-trained models: ResNet-101, VGG-16, Inception-V3, ResNet-50, VGG-19, and Xception. The proposed LC-SCS model achieved an impressive accuracy of 94%, trailing just 4% behind the leading VGG-19 model, while maintaining a low computational cost. It achieved recall and precision scores of 0.95 each, with an F1-score of 0.94. The LC-SCS model excels in computational efficiency, demanding only 1.85 GFLOPs and 1.2 million parameters. The memory requirement is minimal at 865.35 MB, and it exhibits a latency of just 0.003 s. Additionally, the CPU execution time for prediction is 3.59 s. Comparatively, the best-performing pre-trained VGG-19 model achieved 98% accuracy but incurred significantly higher costs. The proposed LC-SCS model showcases a remarkable 298.63 times reduction in GFLOPs, a 116.31 times reduction in total parameters, a 4.57 times reduction in memory requirements for model weights, a 5 times reduction in latency, and a 3.33 times reduction in CPU execution time for prediction, making it an exceptionally resource-efficient model, particularly in scenarios with limited computational resources. [ABSTRACT FROM AUTHOR]
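- For readers unfamiliar with the Sharpened Cosine Similarity operation that the LC-SCS model is built on, the sketch below shows a minimal SCS layer in PyTorch, following the general SCS formulation: the cosine similarity between each input patch and a learned kernel, with a small additive term q to stabilize small norms and a learnable exponent p that sharpens the response. The layer name, hyperparameter values, and test shape are illustrative assumptions, not the authors' exact LC-SCS architecture.

```python
# Minimal sketch of a Sharpened Cosine Similarity (SCS) layer.
# Assumption: this follows the generic SCS formulation, not the paper's exact LC-SCS design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharpenedCosineSimilarity2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, q_init=0.1, p_init=2.0):
        super().__init__()
        self.stride = stride
        # Learned kernels, analogous to convolution weights.
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.1
        )
        # q keeps norms away from zero; p controls how sharply the similarity is peaked.
        self.q = nn.Parameter(torch.tensor(q_init))
        self.p = nn.Parameter(torch.full((out_channels,), p_init))

    def forward(self, x):
        # Dot product between each input patch and each kernel (a plain convolution).
        dot = F.conv2d(x, self.weight, stride=self.stride)
        # Norm of each input patch, via convolving x**2 with a ones kernel.
        ones = torch.ones_like(self.weight[:1])
        patch_norm = torch.sqrt(F.conv2d(x * x, ones, stride=self.stride).clamp_min(1e-12))
        # Norm of each kernel.
        kernel_norm = self.weight.flatten(1).norm(dim=1).view(1, -1, 1, 1)
        # Cosine similarity with the stabilizing q term.
        cos = dot / ((patch_norm + self.q.abs()) * (kernel_norm + self.q.abs()))
        # Sharpen: keep the sign, raise the magnitude to the power p.
        p = self.p.abs().view(1, -1, 1, 1)
        return torch.sign(cos) * cos.abs().pow(p)


if __name__ == "__main__":
    layer = SharpenedCosineSimilarity2d(in_channels=3, out_channels=16, kernel_size=3)
    x = torch.randn(1, 3, 224, 224)  # e.g. a thermal image resized to 224x224 (assumed input size)
    print(layer(x).shape)            # torch.Size([1, 16, 222, 222])
```

- Because SCS layers replace large stacks of convolutions with normalized, sharpened matching against a small set of kernels, they tend to need far fewer parameters and FLOPs, which is consistent with the efficiency figures reported in the abstract.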
Details
- Language :
- English
- ISSN :
- 0941-0643
- Volume :
- 36
- Issue :
- 26
- Database :
- Complementary Index
- Journal :
- Neural Computing & Applications
- Publication Type :
- Academic Journal
- Accession number :
- 179234250
- Full Text :
- https://doi.org/10.1007/s00521-024-09968-5