19 results on '"CBIS-DDSM"'
Search Results
2. MFAN: Multi-Feature Attention Network for Breast Cancer Classification.
- Author
-
Nasir, Inzamam Mashood, Alrasheedi, Masad A., and Alreshidi, Nasser Aedh
- Subjects
- *
CANCER diagnosis , *BREAST cancer , *TUMOR classification , *DEEP learning , *ARTIFICIAL intelligence , *IMAGE fusion - Abstract
Cancer-related diseases are among the major health hazards affecting individuals globally, and breast cancer is especially prominent. Cases of breast cancer among women persist, and the early indicators of the disease go unnoticed in many cases. Breast cancer can therefore be treated effectively if detection is conducted correctly and the cancer is classified at a preliminary stage. Yet direct diagnosis from mammogram and ultrasound images is an intricate, time-consuming process that is best accomplished with the help of a professional. Despite various AI-based strategies in the literature, similarity between cancerous and non-cancerous regions, irrelevant feature extraction, and poorly trained models remain persistent problems. This paper presents a new Multi-Feature Attention Network (MFAN) for breast cancer classification that works well for small lesions and similar contexts. MFAN has two important modules: the McSCAM and the GLAM for feature fusion. During channel fusion, McSCAM preserves the spatial characteristics and extracts high-order statistical information, while the GLAM helps reduce the scale differences among the fused features. The global and local attention branches also help the network effectively identify small lesion regions by obtaining both global and local information. Based on the experimental results on two public datasets, the proposed MFAN is a powerful classification model that can classify breast cancer subtypes while addressing the current problems in breast cancer diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
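The abstract above describes attention modules that reweight fused feature channels. As a rough illustration of the general idea (not MFAN's actual McSCAM/GLAM modules, whose designs are not given in the abstract), a minimal squeeze-and-excitation-style channel attention can be sketched in NumPy; all weight shapes here are arbitrary choices for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feats, w1, w2):
    """Generic channel attention: global-average-pool each channel to a
    descriptor, pass it through a small two-layer bottleneck, and rescale
    the channels by the resulting weights. Illustrative only; MFAN's
    modules are more elaborate than this."""
    # feats: (C, H, W)
    squeeze = feats.mean(axis=(1, 2))                    # (C,) per-channel descriptor
    weights = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))  # (C,) attention weights in (0, 1)
    return feats * weights[:, None, None]                # rescale channels

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16, 16))     # toy feature map: 8 channels of 16x16
w1 = rng.normal(size=(4, 8)) * 0.1       # bottleneck down to 4 units
w2 = rng.normal(size=(8, 4)) * 0.1       # back up to 8 channel weights
out = channel_attention(feats, w1, w2)
```

The output has the same shape as the input; only the relative magnitude of each channel changes, which is what lets such a module emphasize channels carrying small-lesion evidence.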
3. Towards Robust Supervised Pectoral Muscle Segmentation in Mammography Images.
- Author
-
Aliniya, Parvaneh, Nicolescu, Mircea, Nicolescu, Monica, and Bebis, George
- Subjects
BREAST cancer ,MACHINE learning ,EARLY detection of cancer ,MAMMOGRAMS ,IMAGE segmentation ,DEEP learning - Abstract
Mammography images are the most commonly used tool for breast cancer screening. The presence of pectoral muscle in images for the mediolateral oblique view makes designing a robust automated breast cancer detection system more challenging. Most of the current methods for removing the pectoral muscle are based on traditional machine learning approaches. This is partly due to the lack of segmentation masks of pectoral muscle in available datasets. In this paper, we provide the segmentation masks of the pectoral muscle for the INbreast, MIAS, and CBIS-DDSM datasets, which will enable the development of supervised methods and the utilization of deep learning. Training deep learning-based models using segmentation masks will also be a powerful tool for removing pectoral muscle for unseen data. To test the validity of this idea, we trained AU-Net separately on the INbreast and CBIS-DDSM for the segmentation of the pectoral muscle. We used cross-dataset testing to evaluate the performance of the models on an unseen dataset. In addition, the models were tested on all of the images in the MIAS dataset. The experimental results show that cross-dataset testing achieves a comparable performance to the same-dataset experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Integrative hybrid deep learning for enhanced breast cancer diagnosis: leveraging the Wisconsin Breast Cancer Database and the CBIS-DDSM dataset
- Author
-
Patnala S. R. Chandra Murty, Chinta Anuradha, P. Appala Naidu, Deenababu Mandru, Maram Ashok, Athiraja Atheeswaran, Nagalingam Rajeswaran, and V. Saravanan
- Subjects
Wisconsin Breast Cancer Database ,CBIS-DDSM ,CNN ,AUC-ROC ,Medicine ,Science - Abstract
The objective of this investigation was to improve the diagnosis of breast cancer by combining two significant datasets: the Wisconsin Breast Cancer Database and the DDSM Curated Breast Imaging Subset (CBIS-DDSM). The Wisconsin Breast Cancer Database provides a detailed examination of the characteristics of cell nuclei, including radius, texture, and concavity, for 569 patients, of which 212 had malignant tumors. In addition, the CBIS-DDSM dataset—a revised variant of the Digital Database for Screening Mammography (DDSM)—offers a standardized collection of 2,620 scanned film mammography studies, including cases that are normal, benign, or malignant and that include verified pathology data. To identify complex patterns and trait diagnoses of breast cancer, this investigation used a hybrid deep learning methodology that combines Convolutional Neural Networks (CNNs) with the stochastic gradient method. The Wisconsin Breast Cancer Database is used for CNN training, while the CBIS-DDSM dataset is used for fine-tuning to maximize adaptability across a variety of mammography investigations. Data integration, feature extraction, model development, and thorough performance evaluation are the main objectives. The diagnostic effectiveness of the algorithm was evaluated by the area under the Receiver Operating Characteristic Curve (AUC-ROC), sensitivity, specificity, and accuracy. The generalizability of the model will be validated by independent validation on additional datasets. This research provides an accurate, comprehensible, and therapeutically applicable breast cancer detection method that will advance the field. These predicted results might greatly increase early diagnosis, which could promote improvements in breast cancer research and eventually lead to improved patient outcomes.
- Published
- 2024
- Full Text
- View/download PDF
5. Integrative hybrid deep learning for enhanced breast cancer diagnosis: leveraging the Wisconsin Breast Cancer Database and the CBIS-DDSM dataset.
- Author
-
Murty, Patnala S. R. Chandra, Anuradha, Chinta, Naidu, P. Appala, Mandru, Deenababu, Ashok, Maram, Atheeswaran, Athiraja, Rajeswaran, Nagalingam, and Saravanan, V.
- Subjects
RECEIVER operating characteristic curves ,CANCER diagnosis ,DATABASES ,CONVOLUTIONAL neural networks ,BREAST cancer research - Abstract
The objective of this investigation was to improve the diagnosis of breast cancer by combining two significant datasets: the Wisconsin Breast Cancer Database and the DDSM Curated Breast Imaging Subset (CBIS-DDSM). The Wisconsin Breast Cancer Database provides a detailed examination of the characteristics of cell nuclei, including radius, texture, and concavity, for 569 patients, of which 212 had malignant tumors. In addition, the CBIS-DDSM dataset—a revised variant of the Digital Database for Screening Mammography (DDSM)—offers a standardized collection of 2,620 scanned film mammography studies, including cases that are normal, benign, or malignant and that include verified pathology data. To identify complex patterns and trait diagnoses of breast cancer, this investigation used a hybrid deep learning methodology that combines Convolutional Neural Networks (CNNs) with the stochastic gradient method. The Wisconsin Breast Cancer Database is used for CNN training, while the CBIS-DDSM dataset is used for fine-tuning to maximize adaptability across a variety of mammography investigations. Data integration, feature extraction, model development, and thorough performance evaluation are the main objectives. The diagnostic effectiveness of the algorithm was evaluated by the area under the Receiver Operating Characteristic Curve (AUC-ROC), sensitivity, specificity, and accuracy. The generalizability of the model will be validated by independent validation on additional datasets. This research provides an accurate, comprehensible, and therapeutically applicable breast cancer detection method that will advance the field. These predicted results might greatly increase early diagnosis, which could promote improvements in breast cancer research and eventually lead to improved patient outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
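Records 4 and 5 evaluate their model with AUC-ROC. The metric has a simple probabilistic reading: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal NumPy computation of that statistic (the toy scores and labels below are invented for the example):

```python
import numpy as np

def auc_roc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive is scored higher,
    with ties counted as half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]                 # hypothetical malignant / benign labels
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]     # hypothetical classifier scores
auc = auc_roc(scores, labels)               # 8 of 9 pairs correctly ranked -> 8/9
```

Production code would normally use `sklearn.metrics.roc_auc_score`, which computes the same quantity via sorted ranks rather than this O(n²) pairwise comparison.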
6. Towards Robust Supervised Pectoral Muscle Segmentation in Mammography Images
- Author
-
Parvaneh Aliniya, Mircea Nicolescu, Monica Nicolescu, and George Bebis
- Subjects
breast cancer mammography ,pectoral muscle ,INbreast ,CBIS-DDSM ,MIAS ,deep learning ,Photography ,TR1-1050 ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Mammography images are the most commonly used tool for breast cancer screening. The presence of pectoral muscle in images for the mediolateral oblique view makes designing a robust automated breast cancer detection system more challenging. Most of the current methods for removing the pectoral muscle are based on traditional machine learning approaches. This is partly due to the lack of segmentation masks of pectoral muscle in available datasets. In this paper, we provide the segmentation masks of the pectoral muscle for the INbreast, MIAS, and CBIS-DDSM datasets, which will enable the development of supervised methods and the utilization of deep learning. Training deep learning-based models using segmentation masks will also be a powerful tool for removing pectoral muscle for unseen data. To test the validity of this idea, we trained AU-Net separately on the INbreast and CBIS-DDSM for the segmentation of the pectoral muscle. We used cross-dataset testing to evaluate the performance of the models on an unseen dataset. In addition, the models were tested on all of the images in the MIAS dataset. The experimental results show that cross-dataset testing achieves a comparable performance to the same-dataset experiments.
- Published
- 2024
- Full Text
- View/download PDF
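The cross-dataset evaluation in records 3 and 6 hinges on scoring predicted pectoral-muscle masks against ground-truth masks on an unseen dataset. A standard metric for that comparison is the Dice similarity coefficient, sketched here with toy masks invented for the example (the abstract does not state which overlap metric the authors report):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |target|), with eps guarding
    against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 8x8 masks standing in for predicted / ground-truth muscle regions.
gt = np.zeros((8, 8), dtype=bool); gt[:4, :4] = True     # 16 ground-truth pixels
pred = np.zeros((8, 8), dtype=bool); pred[:4, :2] = True  # 8 predicted, all inside gt
score = dice_score(pred, gt)  # 2*8 / (8 + 16) = 2/3
```

Computing this per image on a held-out dataset (e.g. a model trained on INbreast, tested on CBIS-DDSM) is exactly the kind of cross-dataset comparison the paper performs.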
7. Morphological and Textural Data Fusion for Breast Cancer Classification Based on Inter and Intra group Variances.
- Author
-
Gurudas V. R., Shaila S. G., and Vadivel A.
- Subjects
BREAST cancer ,TUMOR classification ,EARLY detection of cancer ,MAMMOGRAMS ,BREAST imaging ,FEATURE selection ,MULTISENSOR data fusion - Abstract
Breast cancer is nowadays among the most predominant cancers, carries a high death rate, and affects women most of all. Detecting breast cancer at an early stage is challenging, however, because malignant growth at this stage occurs in the ducts and often goes undetected as symptoms are minimal. This paper addresses the challenge of early detection of breast cancer cells by proposing a fusion scheme that combines morphological and texture features of the cells for analysis. Morphological features such as the shape and marginal characteristics of the mass are considered as per the Breast Imaging Reporting and Data System (BI-RADS) standard. Texture features of the mass are also extracted to understand the characteristics of pixel variation in the masses. These features are combined, and their dimension is normalized using Exhaustive Feature Selection (EFS). The accuracy of the proposed feature on the INbreast dataset is 94.75% on average. The accuracy for the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) Calc RoI dataset is 95%, and for the CBIS-DDSM Mass RoI dataset it is 94.5%. The results are further compared with contemporary methods, and the fused feature is found to perform well. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. CBGAT: an efficient breast cancer prediction model using deep learning methods.
- Author
-
Sarathkumar, M. and Dhanalakshmi, K. S.
- Abstract
In recent years, breast cancer has been observed to be more prevalent among women. Timely and accurate prediction of the disease enables healthcare professionals to adopt a strong decision-making system for treatment optimization. This study therefore employs a deep learning-based strategy for efficient breast cancer diagnosis. Feature extraction is performed with a Deep CNN and VGG-16, and classification is carried out by the newly introduced CBGAT (CNN-Bi-LSTM-GRU-AM) technique, which enhances the accuracy of image recognition and predicts breast cancer without human intervention. The complete approach uses the MIAS and CBIS-DDSM datasets for training and evaluation, and aims to minimize the need for human intervention in breast cancer diagnosis. The DL architectures comprise both the Deep CNN and VGG-16 for feature extraction. Further, the approach incorporates PCA to enhance the feature extraction procedure: PCA serves as a fusion and dimensionality reduction technique that captures the most informative content of the features. The combined CNN-Bi-LSTM-GRU-AM classification approach analyses sequential information, captures long-term dependencies, and focuses on salient features. The Deep CNN and VGG-16 are incorporated to leverage the strengths of each network and to capture a wide range of features; this integration enhances the representation and discriminative power of the extracted features. The use of an attention mechanism (AM) in the classification process is a further advantage of the proposed approach, as it allows the model to focus on the vital regions of the features within the input images. An experimental implementation and performance analysis of the proposed system is undertaken, and the accuracy of each proposed classification technique is discussed with respect to its dataset. The analytical outcomes show that the proposed model is more effective than traditional methods, achieving a higher accuracy rate, F1 score, sensitivity, specificity, and area under the curve (AUC). The work also underscores the importance of breast cancer prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
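Record 8 fuses Deep CNN and VGG-16 features and then applies PCA for dimensionality reduction. The concatenate-then-project pattern can be sketched in a few lines of NumPy; the feature matrices and dimensions below are invented stand-ins, not the paper's actual feature sizes:

```python
import numpy as np

def pca_project(X, n_components):
    """Project rows of X onto their top principal components (SVD-based).
    Centering first, so components capture variance, not the mean."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = components
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(100, 32))   # stand-in for Deep CNN features
vgg_feats = rng.normal(size=(100, 64))    # stand-in for VGG-16 features

# Fuse by concatenation, then reduce to a compact joint representation.
fused = np.hstack([deep_feats, vgg_feats])      # (100, 96)
reduced = pca_project(fused, n_components=16)   # (100, 16)
```

The reduced matrix would then feed the downstream CNN-Bi-LSTM-GRU-AM classifier; in practice one would pick `n_components` from the explained-variance spectrum (the singular values `S`) rather than fixing it arbitrarily as here.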
9. Breast Cancer Diagnosis Using YOLO-Based Multiscale Parallel CNN and Flattened Threshold Swish.
- Author
-
Mohammed, Ahmed Dhahi and Ekmekci, Dursun
- Subjects
CANCER diagnosis ,BREAST ,COMPUTER-aided diagnosis ,MEDICAL personnel ,FEATURE extraction ,CONVOLUTIONAL neural networks - Abstract
In the field of biomedical imaging, the use of Convolutional Neural Networks (CNNs) has achieved impressive success. However, the detection and pathological classification of breast masses create significant challenges. Traditional mammogram screening, conducted by healthcare professionals, is often exhausting, costly, and prone to errors. To address these issues, this research proposes an end-to-end Computer-Aided Diagnosis (CAD) system utilizing the 'You Only Look Once' (YOLO) architecture. The proposed framework begins by enhancing digital mammograms using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique. Features are then extracted using the proposed CNN, leveraging multiscale parallel feature extraction capabilities while incorporating DenseNet and InceptionNet architectures. To combat the 'dead neuron' problem, the CNN architecture utilizes the 'Flatten Threshold Swish' (FTS) activation function. Additionally, the YOLO loss function has been enhanced to effectively handle lesion scale variation in mammograms. The proposed framework was thoroughly tested on two publicly available benchmarks, INbreast and CBIS-DDSM. It achieved an accuracy of 98.72% for breast cancer classification on the INbreast dataset and a mean Average Precision (mAP) of 91.15% for breast cancer detection on CBIS-DDSM, while using only 11.33 million parameters for training. These results highlight the proposed framework's ability to revolutionize vision-based breast cancer diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
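Record 9 (and record 11's title) lean on CLAHE for mammogram enhancement. Its core idea is histogram equalization with the histogram clipped first, so contrast is stretched without over-amplifying noise. The sketch below applies only the clip-and-equalize step globally in pure NumPy; real CLAHE (e.g. OpenCV's `cv2.createCLAHE`) additionally works on local tiles with bilinear blending. The synthetic low-contrast image is a stand-in for a mammogram:

```python
import numpy as np

def clahe_like_global(img, clip_limit=0.02, n_bins=256):
    """Simplified, global contrast-limited histogram equalization for
    uint8 images. Clips the normalized histogram at clip_limit,
    redistributes the excess uniformly, then equalizes via the CDF."""
    hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, 255))
    hist = hist.astype(np.float64) / hist.sum()
    # Clip tall bins and spread the removed mass over all bins.
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / n_bins
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # intensity lookup table
    bin_idx = np.clip((img.astype(int) * n_bins) // 256, 0, n_bins - 1)
    return lut[bin_idx]

rng = np.random.default_rng(0)
img = rng.integers(60, 120, (64, 64), dtype=np.uint8)  # low-contrast stand-in
out = clahe_like_global(img)
```

The input occupies only the 60–119 intensity band; after equalization the output spans most of the 0–255 range, which is the contrast stretch CLAHE provides before lesion detection.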
10. Automatic Breast Cancer Detection with Mammography Approach Using Deep Learning Algorithm
- Author
-
Satapathy, Santosh Kumar, Parmar, Drashti, Kondaveeti, Hari Kishan, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Chaki, Nabendu, editor, Roy, Nilanjana Dutta, editor, Debnath, Papiya, editor, and Saeed, Khalid, editor
- Published
- 2023
- Full Text
- View/download PDF
11. Adaptive Sailfish Optimization-Contrast Limited Adaptive Histogram Equalization (ASFO-CLAHE) for Hyperparameter Tuning in Image Enhancement
- Author
-
Surya, S., Muthukumaravel, A., Chlamtac, Imrich, Series Editor, Joseph, Ferdin Joe John, editor, Balas, Valentina Emilia, editor, Rajest, S. Suman, editor, and Regin, R., editor
- Published
- 2023
- Full Text
- View/download PDF
12. Impact of multi-source data augmentation on performance of convolutional neural networks for abnormality classification in mammography
- Author
-
InChan Hwang, Hari Trivedi, Beatrice Brown-Mulry, Linglin Zhang, Vineela Nalla, Aimilia Gastounioti, Judy Gichoya, Laleh Seyyed-Kalantari, Imon Banerjee, and MinJae Woo
- Subjects
mammography ,CBIS-DDSM ,EMBED ,breast cancer ,FFDM—full field digital mammography ,cancer screening (MeSH) ,Medical physics. Medical radiology. Nuclear medicine ,R895-920 - Abstract
Introduction: To date, most mammography-related AI models have been trained using either film or digital mammogram datasets with little overlap. We investigated whether or not combining film and digital mammography during training will help or hinder modern models designed for use on digital mammograms. Methods: To this end, a total of six binary classifiers were trained for comparison. The first three classifiers were trained using images only from the Emory Breast Imaging Dataset (EMBED) using ResNet50, ResNet101, and ResNet152 architectures. The next three classifiers were trained using images from the EMBED, Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM), and Digital Database for Screening Mammography (DDSM) datasets. All six models were tested only on digital mammograms from EMBED. Results: The results showed that performance degradation to the customized ResNet models was statistically significant overall when the EMBED dataset was augmented with CBIS-DDSM/DDSM. While the performance degradation was observed in all racial subgroups, some races are subject to a more severe performance drop than others. Discussion: The degradation may potentially be due to (1) a mismatch in features between film-based and digital mammograms or (2) a mismatch in pathologic and radiological information. In conclusion, use of both film and digital mammography during training may hinder modern models designed for breast cancer screening. Caution is required when combining film-based and digital mammograms or when utilizing pathologic and radiological information simultaneously.
- Published
- 2023
- Full Text
- View/download PDF
13. Machine learning applications in breast cancer prediction using mammography.
- Author
-
Harshvardhan, G.M., Mori, Kei, Verma, Sarika, and Athanasiou, Lambros
- Subjects
- *
CONVOLUTIONAL neural networks , *DEEP learning , *MACHINE learning , *DATABASE design , *BREAST cancer - Abstract
Breast cancer is the second leading cause of cancer-related deaths among women. Early detection of lumps and subsequent risk assessment significantly improves prognosis. In screening mammography, radiologist interpretation of mammograms is prone to high error rates and requires extensive manual effort. To this end, several computer-aided diagnosis methods using machine learning have been proposed for automatic detection of breast cancer in mammography. In this paper, we provide a comprehensive review and analysis of these methods and discuss practical issues associated with their reproducibility. We aim to aid readers in choosing the appropriate method to implement, and we guide them towards this purpose. Moreover, an effort is made to re-implement a sample of the presented methods in order to highlight the importance of providing the technical details associated with those methods. Advancing the domain of breast cancer pathology classification using machine learning involves the availability of public databases and the development of innovative methods. Although there is significant progress in both areas, more transparency in the latter would boost the domain's progress. • ML methods for breast cancer prediction using mammography lack critical details. • Re-implementation of the CNN methods is crucial for advancing the field. • Reproducibility of current methods is challenging. • Code-sharing and implementation details can further advance the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Multi-Task Fusion for Improving Mammography Screening Data Classification.
- Author
-
Wimmer, Maria, Sluiter, Gert, Major, David, Lenis, Dimitrios, Berg, Astrid, Neubauer, Theresa, and Buhler, Katja
- Subjects
- *
DEEP learning , *MAMMOGRAMS , *DATA integrity , *MACHINE learning , *CLASSIFICATION , *BLENDED learning - Abstract
Machine learning and deep learning methods have become essential for computer-assisted prediction in medicine, with a growing number of applications also in the field of mammography. Typically these algorithms are trained for a specific task, e.g., the classification of lesions or the prediction of a mammogram’s pathology status. To obtain a comprehensive view of a patient, models which were all trained for the same task(s) are subsequently ensembled or combined. In this work, we propose a pipeline approach, where we first train a set of individual, task-specific models and subsequently investigate the fusion thereof, which is in contrast to the standard model ensembling strategy. We fuse model predictions and high-level features from deep learning models with hybrid patient models to build stronger predictors on patient level. To this end, we propose a multi-branch deep learning model which efficiently fuses features across different tasks and mammograms to obtain a comprehensive patient-level prediction. We train and evaluate our full pipeline on public mammography data, i.e., DDSM and its curated version CBIS-DDSM, and report an AUC score of 0.962 for predicting the presence of any lesion and 0.791 for predicting the presence of malignant lesions on patient level. Overall, our fusion approaches improve AUC scores significantly by up to 0.04 compared to standard model ensembling. Moreover, by providing not only global patient-level predictions but also task-specific model results that are related to radiological features, our pipeline aims to closely support the reading workflow of radiologists. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. An Effective Ensemble Machine Learning Approach to Classify Breast Cancer Based on Feature Selection and Lesion Segmentation Using Preprocessed Mammograms
- Author
-
A. K. M. Rakibul Haque Rafid, Sami Azam, Sidratul Montaha, Asif Karim, Kayes Uddin Fahim, and Md. Zahid Hasan
- Subjects
General Immunology and Microbiology ,General Agricultural and Biological Sciences ,breast cancer ,classification ,mammogram ,segmentation ,image processing ,machine learning ,feature extraction ,ensemble model ,feature selection ,CBIS-DDSM ,General Biochemistry, Genetics and Molecular Biology - Abstract
Background: Breast cancer, behind skin cancer, is the second most frequent malignancy among women, initiated by unregulated cell division in breast tissues. Although early mammogram screening and treatment result in decreased mortality, differentiating cancer cells from surrounding tissues is often fallible, resulting in fallacious diagnosis. Method: The mammography dataset is used to categorize breast cancer into four classes with low computational complexity, introducing a feature extraction-based approach with machine learning (ML) algorithms. After artefact removal and the preprocessing of the mammograms, the dataset is augmented with seven augmentation techniques. The region of interest (ROI) is extracted by employing several algorithms, including a dynamic thresholding method. Sixteen geometrical features are extracted from the ROI, and eleven ML algorithms are investigated with these features. Three ensemble models are generated from these ML models using the stacking method, where the first ensemble model is built by stacking ML models with an accuracy of over 90%, and the accuracy thresholds for generating the remaining ensemble models are >95% and >96%. Five feature selection methods with fourteen configurations are applied to improve the performance. Results: The Random Forest Importance algorithm, with a threshold of 0.045, produces 10 features that achieved the highest performance, 98.05% test accuracy, by stacking the Random Forest and XGB classifiers, each having an accuracy above 96%. Furthermore, with K-fold cross-validation, consistent performance is observed across all K values ranging from 3 to 30. Moreover, the proposed strategy combining image processing, feature extraction, and ML shows high accuracy in classifying breast cancer.
- Published
- 2022
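Record 15 builds its best ensemble by stacking base classifiers whose individual accuracies clear a threshold. The pattern can be sketched with scikit-learn on synthetic data standing in for the paper's sixteen geometrical ROI features; note the paper stacks Random Forest with an XGBoost classifier, for which a GradientBoosting model is substituted here to avoid an extra dependency:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the paper's 16 geometrical features per ROI.
X, y = make_classification(n_samples=300, n_features=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stack two strong base learners; a logistic regression meta-learner
# combines their cross-validated predictions.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

In the paper's setup, only base models exceeding the chosen accuracy threshold (>90%, >95%, >96%) would be admitted into each successive `estimators` list.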
16. A novel breast cancer detection architecture based on a CNN-CBR system for mammogram classification.
- Author
-
Bouzar-Benlabiod L, Harrar K, Yamoun L, Khodja MY, and Akhloufi MA
- Subjects
- Humans, Female, Mammography methods, Machine Learning, Image Enhancement, Breast Neoplasms diagnostic imaging
- Abstract
This paper presents a novel framework for breast cancer detection using mammogram images. The proposed solution aims to output an explainable classification from a mammogram image. The classification approach uses a Case-Based Reasoning (CBR) system. CBR accuracy strongly depends on the quality of the extracted features. To achieve relevant classification, we propose a pipeline that includes image enhancement and data augmentation to improve the quality of the extracted features and provide a final diagnosis. An efficient segmentation method based on a U-Net architecture is used to extract regions of interest (RoI) from mammograms. The purpose is to combine deep learning (DL) with CBR to improve classification accuracy. DL provides accurate mammogram segmentation, while CBR gives an explainable and accurate classification. The proposed approach was tested on the CBIS-DDSM dataset and achieved high performance with an accuracy (Acc) of 86.71% and a recall of 91.34%, outperforming some well-known machine learning (ML) and DL approaches.
- Published
- 2023
- Full Text
- View/download PDF
17. BreastNet18: A High Accuracy Fine-Tuned VGG16 Model Evaluated Using Ablation Study for Diagnosing Breast Cancer from Enhanced Mammography Images
- Author
-
Sidratul Montaha, Sami Azam, Abul Kalam Muhammad Rakibul Haque Rafid, Pronab Ghosh, Md. Zahid Hasan, Mirjam Jonkman, and Friso De Boer
- Subjects
image preprocessing ,mammograms ,fine-tuned VGG16 ,deep learning ,breast cancer classification ,data augmentation ,ablation study ,transfer learning models ,feature map analysis ,CBIS-DDSM ,General Immunology and Microbiology ,QH301-705.5 ,Article ,General Biochemistry, Genetics and Molecular Biology ,Biology (General) ,General Agricultural and Biological Sciences - Abstract
Simple Summary: Breast cancer diagnosis at an early stage using mammography is important, as it assists clinical specialists in treatment planning to increase survival rates. The aim of this study is to construct an effective method to classify breast images into four classes with a low error rate. Initially, unwanted regions of mammograms are removed, the quality is enhanced, and the cancerous lesions are highlighted with different artifact removal, noise reduction, and enhancement techniques. The number of mammograms is increased using seven augmentation techniques to deal with over-fitting and under-fitting problems. Afterwards, six fine-tuned convolutional neural networks (CNNs), originally developed for other purposes, are evaluated, and VGG16 yields the highest performance. We propose a BreastNet18 model based on the fine-tuned VGG16, changing different hyperparameters and layer structures after experimentation with our dataset. Performing an ablation study on the proposed model and selecting suitable parameter values for the preprocessing algorithms increases the accuracy of our model to 98.02%, outperforming some existing state-of-the-art approaches. To analyze the performance, several performance metrics are generated and evaluated for every model and for BreastNet18. The results suggest that accuracy improvement can be obtained through image pre-processing techniques, augmentation, and the ablation study. To investigate possible overfitting issues, a k-fold cross-validation is carried out. To assert the robustness of the network, the model is tested on a dataset containing noisy mammograms. This may help medical specialists in efficient and accurate diagnosis and early treatment planning. Background: Identification and treatment of breast cancer at an early stage can reduce mortality. Currently, mammography is the most widely used effective imaging technique in breast cancer detection. However, an erroneous mammogram-based interpretation may result in a false diagnosis, as distinguishing cancerous masses from adjacent tissue is often complex and error-prone. Methods: Six pre-trained and fine-tuned deep CNN architectures, VGG16, VGG19, MobileNetV2, ResNet50, DenseNet201, and InceptionV3, are evaluated to determine which model yields the best performance. We propose a BreastNet18 model using VGG16 as the foundational base, since VGG16 performs with the highest accuracy. An ablation study is performed on BreastNet18 to evaluate its robustness and achieve the highest possible accuracy. Various image processing techniques with suitable parameter values are employed to remove artefacts and increase the image quality. A total dataset of 1442 preprocessed mammograms was augmented using seven augmentation techniques, resulting in a dataset of 11,536 images. To investigate possible overfitting issues, a k-fold cross-validation is carried out. The model was then tested on noisy mammograms to evaluate its robustness, and the results were compared with previous studies. Results: The proposed BreastNet18 model performed best with a training accuracy of 96.72%, a validation accuracy of 97.91%, and a test accuracy of 98.02%. In contrast, VGG19 yielded a test accuracy of 96.24%, MobileNetV2 77.84%, ResNet50 79.98%, DenseNet201 86.92%, and InceptionV3 76.87%. Conclusions: Our proposed approach based on image processing, transfer learning, fine-tuning, and an ablation study has demonstrated highly accurate breast cancer classification while dealing with a limited number of complex medical images.
- Published
- 2021
- Full Text
- View/download PDF
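Several of the records above (15, 17, 18, 19) grow their training sets with seven augmentation techniques, though the abstracts do not enumerate them. A minimal sketch of the general idea, using only lossless geometric transforms in NumPy (a subset chosen for the example, not the papers' actual seven):

```python
import numpy as np

def augment(img):
    """Return the original image plus five simple geometric variants:
    horizontal flip, vertical flip, and 90/180/270-degree rotations.
    Lossless transforms like these preserve lesion appearance while
    multiplying the effective dataset size."""
    return [img,
            np.fliplr(img),        # horizontal flip
            np.flipud(img),        # vertical flip
            np.rot90(img, 1),      # 90 degrees
            np.rot90(img, 2),      # 180 degrees
            np.rot90(img, 3)]      # 270 degrees

img = np.arange(16).reshape(4, 4)  # toy 4x4 stand-in for a mammogram
aug = augment(img)
```

Papers in this area typically add intensity-based transforms (brightness, contrast, noise) on top of geometric ones; the 1442 → 11,536 expansion in records 17 and 19 (a factor of 8: the original plus seven variants) is consistent with seven such transforms per image.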
18. An Effective Ensemble Machine Learning Approach to Classify Breast Cancer Based on Feature Selection and Lesion Segmentation Using Preprocessed Mammograms.
- Author
-
Rafid AKMRH, Azam S, Montaha S, Karim A, Fahim KU, and Hasan MZ
- Abstract
Background: Breast cancer, behind skin cancer, is the second most frequent malignancy among women, initiated by unregulated cell division in breast tissues. Although early mammogram screening and treatment result in decreased mortality, differentiating cancer cells from surrounding tissues is often fallible, resulting in fallacious diagnosis. Method: The mammography dataset is used to categorize breast cancer into four classes with low computational complexity, introducing a feature extraction-based approach with machine learning (ML) algorithms. After artefact removal and the preprocessing of the mammograms, the dataset is augmented with seven augmentation techniques. The region of interest (ROI) is extracted by employing several algorithms, including a dynamic thresholding method. Sixteen geometrical features are extracted from the ROI, and eleven ML algorithms are investigated with these features. Three ensemble models are generated from these ML models using the stacking method, where the first ensemble model is built by stacking ML models with an accuracy of over 90%, and the accuracy thresholds for generating the remaining ensemble models are >95% and >96%. Five feature selection methods with fourteen configurations are applied to improve the performance. Results: The Random Forest Importance algorithm, with a threshold of 0.045, produces 10 features that achieved the highest performance, 98.05% test accuracy, by stacking the Random Forest and XGB classifiers, each having an accuracy above 96%. Furthermore, with K-fold cross-validation, consistent performance is observed across all K values ranging from 3 to 30. Moreover, the proposed strategy combining image processing, feature extraction, and ML shows high accuracy in classifying breast cancer.
- Published
- 2022
- Full Text
- View/download PDF
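The feature-selection-plus-stacking pipeline described in entry 18 can be sketched as follows. This is a minimal illustration, not the authors' code: scikit-learn's built-in breast-cancer feature set stands in for the paper's sixteen geometrical ROI features, `GradientBoostingClassifier` stands in for XGBoost, and only the 0.045 importance threshold is taken from the abstract; all other parameters are assumptions.

```python
# Sketch of entry 18's pipeline: Random Forest Importance feature selection
# (threshold 0.045) feeding a stacked Random Forest + gradient-boosting ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Keep only features whose Random Forest importance exceeds 0.045,
# mirroring the paper's Random Forest Importance selection step.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0),
    threshold=0.045,
)

# Stack the two strong base learners behind a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

model = make_pipeline(selector, stack)
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")
```

The same pipeline object can be passed to `cross_val_score` to reproduce the abstract's K-fold consistency check for any K.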
19. BreastNet18: A High Accuracy Fine-Tuned VGG16 Model Evaluated Using Ablation Study for Diagnosing Breast Cancer from Enhanced Mammography Images.
- Author
-
Montaha, Sidratul, Azam, Sami, Rafid, Abul Kalam Muhammad Rakibul Haque, Ghosh, Pronab, Hasan, Md. Zahid, Jonkman, Mirjam, and De Boer, Friso
- Subjects
BREAST, CANCER diagnosis, CONVOLUTIONAL neural networks, MAMMOGRAMS, BREAST cancer, MEDICAL specialties & specialists - Abstract
Simple Summary: Breast cancer diagnosis at an early stage using mammography is important, as it assists clinical specialists in treatment planning to increase survival rates. The aim of this study is to construct an effective method to classify breast images into four classes with a low error rate. Initially, unwanted regions of mammograms are removed, the quality is enhanced, and the cancerous lesions are highlighted with various artifact removal, noise reduction, and enhancement techniques. The number of mammograms is increased using seven augmentation techniques to deal with over-fitting and under-fitting problems. Afterwards, six fine-tuned convolutional neural networks (CNNs), originally developed for other purposes, are evaluated, and VGG16 yields the highest performance. We propose a BreastNet18 model based on the fine-tuned VGG16, changing hyperparameters and layer structures after experimentation with our dataset. Performing an ablation study on the proposed model and selecting suitable parameter values for the preprocessing algorithms increases the accuracy of our model to 98.02%, outperforming some existing state-of-the-art approaches. To analyze the performance, several performance metrics are generated and evaluated for every model and for BreastNet18. Results suggest that accuracy improvement can be obtained through image preprocessing, augmentation, and the ablation study. To investigate possible overfitting issues, a k-fold cross-validation is carried out. To assert the robustness of the network, the model is tested on a dataset containing noisy mammograms. This may help medical specialists in efficient and accurate diagnosis and early treatment planning. Background: Identification and treatment of breast cancer at an early stage can reduce mortality. Currently, mammography is the most widely used effective imaging technique in breast cancer detection.
However, an erroneous mammogram-based interpretation may result in a false diagnosis, as distinguishing cancerous masses from adjacent tissue is often complex and error-prone. Methods: Six pre-trained and fine-tuned deep CNN architectures, VGG16, VGG19, MobileNetV2, ResNet50, DenseNet201, and InceptionV3, are evaluated to determine which model yields the best performance. We propose a BreastNet18 model using VGG16 as the foundational base, since VGG16 performs with the highest accuracy. An ablation study is performed on BreastNet18 to evaluate its robustness and achieve the highest possible accuracy. Various image processing techniques with suitable parameter values are employed to remove artefacts and increase the image quality. A total dataset of 1442 preprocessed mammograms was augmented using seven augmentation techniques, resulting in a dataset of 11,536 images. To investigate possible overfitting issues, a k-fold cross-validation is carried out. The model was then tested on noisy mammograms to evaluate its robustness. Results were compared with previous studies. Results: The proposed BreastNet18 model performed best, with a training accuracy of 96.72%, a validation accuracy of 97.91%, and a test accuracy of 98.02%. In contrast, VGG19 yielded a test accuracy of 96.24%, MobileNetV2 77.84%, ResNet50 79.98%, DenseNet201 86.92%, and InceptionV3 76.87%. Conclusions: Our proposed approach based on image processing, transfer learning, fine-tuning, and an ablation study has demonstrated highly accurate breast cancer classification while dealing with a limited number of complex medical images. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF