1. Improving Diabetic Retinopathy grading using Feature Fusion for limited data samples.
- Author
- Ashwini, K. and Dash, Ratnakar
- Subjects
- *CONVOLUTIONAL neural networks, *DIABETIC retinopathy, *PERFORMANCE standards, *BLOOD vessels, *RESEARCH personnel
- Abstract
Early detection of Diabetic Retinopathy (DR) and its grading have been a growing demand among researchers in this community. Computer-aided diagnostic (CAD) systems have the potential to enhance the sensitivity and effectiveness of early diagnoses, benefiting ophthalmic specialists by offering additional insights for more efficient treatment options. The proposed study addresses the challenges of improving detection of the mild stage and of training with a limited number of samples using fewer parameters. Fundus images are first pre-processed using resizing, augmentation, and oversampling; oversampling guarantees the balanced inclusion of images from every grade category throughout the training stage. The proposed approach uses a Convolutional Neural Network (CNN) to extract texture and vessel features separately from the fundus images. The methodology exploits Local Binary Pattern (LBP) to improve texture features before applying the CNN. Similarly, Contrast Limited Adaptive Histogram Equalization (CLAHE) enhances the blood vessels of the fundus images, enabling the extraction of the relevant vessel features with the CNN. The extracted features are fused and classified using fully connected layers. The approach is validated on standard datasets with limited samples: IDRiD, APTOS, DDR, and EyePACS. The experimental results demonstrate that the proposed model outperforms state-of-the-art models across all standard performance metrics, with classification accuracies of 92.46%, 98.08%, 95.66%, and 88.84% on IDRiD, APTOS, DDR, and EyePACS, respectively.
• Tackles early-stage DR detection challenges with limited sample sizes.
• Uses Local Binary Pattern for texture enhancement, Contrast Limited Adaptive Histogram Equalization for better vessel visibility, and fully connected layers for classification.
• Achieves accuracies of 92.46%, 98.08%, 95.66%, and 88.84% on the IDRiD, APTOS, DDR, and EyePACS datasets, respectively. [ABSTRACT FROM AUTHOR]
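The texture branch described in the abstract relies on Local Binary Pattern encoding before the CNN. A minimal pure-NumPy sketch of the classic 8-neighbour LBP code (an illustration of the general technique, not the authors' implementation; the function name and neighbour ordering are assumptions) could look like this:

```python
import numpy as np

def lbp_8neigh(img):
    """Basic 8-neighbour Local Binary Pattern (hypothetical helper,
    not the paper's exact variant). Each pixel gets an 8-bit code:
    one bit per neighbour that is >= the centre pixel."""
    padded = np.pad(img, 1, mode="edge")
    center = padded[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    # clockwise neighbour offsets, starting at the top-left (bit ordering is a choice)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        h, w = padded.shape
        neigh = padded[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

# toy 3x3 grey-level patch
patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]], dtype=np.uint8)
print(lbp_8neigh(patch))
```

The resulting code map (or a histogram over it) is what would feed the texture CNN branch; real DR pipelines typically use a rotation-invariant or uniform LBP variant from a library such as scikit-image.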
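The fusion step in the abstract combines the texture-branch and vessel-branch features before fully connected classification. A toy NumPy sketch of concatenation fusion followed by a single fully connected layer over the five DR grades (all dimensions and weights here are invented for illustration) might be:

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed 128-dim outputs from the two CNN branches (illustrative only)
texture_feat = rng.standard_normal(128)  # features from the LBP/texture branch
vessel_feat = rng.standard_normal(128)   # features from the CLAHE/vessel branch

# fusion by concatenation, then a toy fully connected layer over 5 DR grades
fused = np.concatenate([texture_feat, vessel_feat])  # shape (256,)
W = rng.standard_normal((5, fused.size)) * 0.01      # hypothetical FC weights
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                 # softmax over the 5 grades
print(probs.shape, probs.sum())
```

Concatenation is the simplest fusion choice; in a trained network the fully connected layers learn how to weight the two feature streams against each other.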
- Published
- 2024