19 results
Search Results
2. Compressed lightweight deep learning models for resource‐constrained Internet of things devices in the healthcare sector.
- Author
-
Habib, Gousia and Qureshi, Shaima
- Subjects
- *
IMAGE recognition (Computer vision) , *OBJECT recognition (Computer vision) , *CONVOLUTIONAL neural networks , *IMAGE analysis , *BRAIN tumors - Abstract
The performance of convolutional neural networks (CNNs) in image classification and object detection has been remarkable, even though they contain millions or even billions of parameters. This over-parameterization makes CNNs both memory-intensive and computationally expensive, which greatly hinders their application in resource-constrained environments such as Internet of things (IoT) and edge devices and poses a critical challenge to deploying these powerful computer vision tools on mobile devices. In this study, we propose a novel technique based on non-convex optimization: max-norm regularization. The max-norm structurally prunes the number of parameters without compromising the model's performance. A proximal gradient descent algorithm is used for network optimization under this non-convex regularizer, and the max-norm is combined with channel pruning to achieve sparser CNN networks. The pruned network can then be easily deployed in resource-constrained application environments. The proposed technique is validated on several benchmark datasets. In addition, the sparsified CNNs are applied to biomedical image analysis using a brain MRI dataset. The sparsely trained CNN model can serve as a lightweight model for the IoT healthcare sector, detecting and classifying three types of brain tumours, among the most life-threatening diseases, whose early detection can save lives. This is the first paper to propose the novel max-norm regularizer to enforce sparse learning in CNNs. The paper provides a detailed analysis of convex and non-convex regularizers before presenting the proposed max-norm regularizer, and finally compares it with existing regularization methods using state-of-the-art CNN models. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
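The abstract above combines a proximal-gradient update with channel-level structural pruning. A minimal NumPy sketch of that idea, assuming a per-channel norm cap as the proximal operator and a simple magnitude threshold for pruning (the paper's exact max-norm regularizer and pruning schedule are not reproduced here):

```python
import numpy as np

def proximal_step(W, grad, lr, max_norm, prune_tau):
    """One proximal-gradient update: plain gradient step, then project
    each output channel (row of W) onto the max-norm ball, then zero
    out channels whose norm falls below a pruning threshold.
    Illustrative only -- not the paper's exact operator.
    """
    W = W - lr * grad                      # gradient step
    norms = np.linalg.norm(W, axis=1)      # per-channel norms
    scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
    W = W * scale[:, None]                 # cap each channel at max_norm
    W[norms * scale < prune_tau] = 0.0     # structural channel pruning
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))               # 8 channels, 16 weights each
W_new = proximal_step(W, grad=np.zeros_like(W), lr=0.1,
                      max_norm=1.0, prune_tau=0.5)
```

After the step, every surviving channel's norm is at most `max_norm`, which is what makes the sparsity structural rather than element-wise.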
3. A review of the Segment Anything Model (SAM) for medical image analysis: Accomplishments and perspectives.
- Author
-
Ali, Mudassar, Wu, Tong, Hu, Haoji, Luo, Qiong, Xu, Dong, Zheng, Weizeng, Jin, Neng, Yang, Chen, and Yao, Jincao
- Subjects
- *
IMAGE analysis , *DIAGNOSTIC imaging , *ENGINEERING mathematics , *CLINICAL medicine , *ARTIFICIAL intelligence - Abstract
This paper provides an overview of the developments in the Segment Anything Model (SAM) for medical image segmentation over the past year. Although direct application to medical datasets has shown mixed results, SAM has demonstrated notable achievements in adapting to medical image segmentation tasks through fine-tuning on medical datasets, transitioning from 2D to 3D data, and optimizing prompt engineering. Despite the difficulties, the paper emphasizes the significant potential that SAM holds for medical segmentation. Suggested directions for future work include constructing large-scale datasets, addressing multi-modal and multi-scale information, integrating SAM with semi-supervised learning frameworks, and extending its application in clinical settings, in addition to making further contributions to the field of medical segmentation. • SAM excels in medical image segmentation tasks. • Fine-tuning SAM with medical datasets improves performance. • SAM handles both 2D and 3D datasets, boosting segmentation accuracy. • Optimizing prompts further enhances SAM's segmentation capabilities. • SAM shows promise for clinical applications and large-scale dataset integration. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
4. Enhancing the performance of CNN models for pneumonia and skin cancer detection using novel fractional activation function.
- Author
-
Kumar, Meshach and Mehta, Utkal
- Subjects
CONVOLUTIONAL neural networks ,IMAGE recognition (Computer vision) ,IMAGE analysis ,DIAGNOSTIC imaging ,DIAGNOSIS - Abstract
This paper introduces a novel Riemann–Liouville (RL) conformable fractional derivative based Adaptable-Shifted-Fractional-Rectified-Linear-Unit, briefly called RL-ASFReLU, and evaluates its efficacy in enhancing the performance of convolutional neural network (CNN) models for pneumonia and skin cancer detection. The study conducts a comprehensive comparative analysis against traditional activation functions and state-of-the-art CNN architectures. The results show that RL-ASFReLU consistently outperforms other functions, achieving higher accuracy. Comparative evaluations with various neural network architectures reveal that the model equipped with RL-ASFReLU exhibits superior performance despite its simplicity and fewer trainable parameters, highlighting its efficiency and effectiveness. The findings suggest that RL-ASFReLU holds promise for improving diagnostic accuracy and efficiency in medical imaging applications, contributing to advancements in healthcare technology and facilitating better patient care. The proposed fractional nonlinear transformation offers high performance with reduced computational cost, making it practical for deployment in healthcare settings. • Enhanced Local Feature Extraction: The proposed activation function improves the CNN's ability to extract local features while preserving global context, enhancing the accuracy and generalization of medical image analysis across diverse patient images. • Novel Fractional CNN Model: A new CNN model is developed, integrating the RL-ASFReLU function, which significantly improves the model's accuracy and prediction capabilities for medical images. • Comprehensive Evaluation: The paper conducts a thorough comparative evaluation of the proposed activation function against traditional activation functions and state-of-the-art CNN architectures, demonstrating its superior performance. • Efficient Medical Image Analysis Framework: The integration of the proposed activation function into CNN architectures offers a robust, lightweight, and computationally efficient framework for medical image analysis, balancing the efficiency of CNNs with enhanced feature extraction capabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
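The exact closed form of RL-ASFReLU is not given in the abstract, so the sketch below uses a hypothetical stand-in, f(x) = max(0, x + shift)^alpha, purely to illustrate the shape of a shifted, fractionally powered ReLU; `alpha` and `shift` are assumed parameters, and alpha = 1 with shift = 0 recovers the ordinary ReLU:

```python
import numpy as np

def shifted_fractional_relu(x, alpha=0.9, shift=0.1):
    """Hypothetical shifted fractional ReLU: f(x) = max(0, x + shift)**alpha.
    NOT the published RL-ASFReLU, whose closed form (derived from the
    Riemann-Liouville conformable fractional derivative) is not
    reproduced in the abstract."""
    z = np.maximum(0.0, np.asarray(x, dtype=float) + shift)
    return z ** alpha

y = shifted_fractional_relu(np.linspace(-2.0, 2.0, 9))
```

A fractional exponent below 1 compresses large activations while keeping the function monotone and zero for sufficiently negative inputs, which is the qualitative behaviour such activations aim for.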
5. The Role of the Generative Adversarial Network in Medical Image Reconstruction: A Systematic Review.
- Author
-
Rahmanian, Laleh and Shamsaei, Mojtaba
- Subjects
GENERATIVE adversarial networks ,ARTIFICIAL intelligence ,COMPUTER-assisted image analysis (Medicine) ,IMAGE analysis ,IMAGE processing ,DEEP learning - Abstract
Background: In the realm of medical imaging, obtaining clear, high-resolution images is challenging due to a multitude of factors encompassing the intricacies of imaging systems, diverse imaging environments, and the potential impact of human-related variables. The imperative initial step in the assessment of medical images involves medical image processing, a field that leverages the power of machine learning and deep learning models to cultivate intelligent systems, thereby imbuing these images with heightened interpretability and enhancing diagnostic efficiency. The advent of Generative Adversarial Networks (GANs) represents a transformative technological breakthrough, ushering in a new era in medical image analysis. GANs have introduced a robust framework for the manifold applications of medical images, ranging from image enhancement to precise segmentation, accurate classification, meticulous reconstruction, and even synthesis. This study aimed to give a general insight into the role of GANs in medical image reconstruction. This comprehensive background provides the necessary context for understanding the pivotal role of GANs in revolutionizing the domain of medical imaging and underscores their impact on the development of sophisticated and intuitive systems for the advancement of medical diagnostics. Materials and Methods: PubMed, ScienceDirect, Web of Science databases, and Google Scholar were explored using different combinations of keywords: "Generative Adversarial Networks (GANs)", "Deep Learning", "Image Reconstruction", "Medical Imaging", and "Artificial Intelligence". An additional search was performed on Semantic Scholar. Finally, the 20 most relevant and recent papers were included in the study.
Results: Generative Adversarial Networks (GANs), consisting of a generator and a discriminator neural network in a competitive framework, have demonstrated their effectiveness in medical image reconstruction. They excel in generating high-fidelity images from incomplete medical data by training on complete image datasets and leveraging this knowledge to fill in the gaps. GANs also play a pivotal role in generating multimodal datasets from a single modality source, thereby expanding the diversity of training data for improved accuracy in medical image analysis. This versatility of GANs finds practical application in various algorithms designed for medical image reconstruction, such as Medical Image Reconstruction using Generative Adversarial Networks (MirGAN) and GAN-Based Medical Image Super-Resolution via High-Resolution Representation Learning (Med-SRNet). These techniques are tailored to tasks like medical image reconstruction and super-resolution, enhancing the quality of medical images. As a result, they simplify the process of image analysis and diagnosis in the field of medicine. In this context, GANs have emerged as a transformative technology, significantly contributing to the improvement of medical imaging quality and the facilitation of more accurate analysis and diagnosis of medical conditions. Conclusion: In summary, although GANs have exhibited substantial promise in the realm of medical image reconstruction, they also face challenges. These limitations encompass restricted data accessibility, intricate computational demands, interpretability issues, susceptibility to overfitting, and quality control concerns. [ABSTRACT FROM AUTHOR]
- Published
- 2025
6. Effect of Polymer Mortar Modification Using Eco-friendly Biochar on Microstructure
- Author
-
Załęgowski, Kamil, Kępniak, Maja, Ghosh, Arindam, Series Editor, Chua, Daniel, Series Editor, de Souza, Flavio Leandro, Series Editor, Aktas, Oral Cenk, Series Editor, Han, Yafang, Series Editor, Gong, Jianghong, Series Editor, Jawaid, Mohammad, Series Editor, Czarnecki, Lech, editor, Garbacz, Andrzej, editor, Wang, Ru, editor, Frigione, Mariaenrica, editor, and Aguiar, Jose B., editor
- Published
- 2025
- Full Text
- View/download PDF
7. Hybrid multiple instance learning network for weakly supervised medical image classification and localization.
- Author
-
Lai, Qi, Vong, Chi-Man, Yan, Tao, Wong, Pak-Kin, and Liang, Xiaokun
- Subjects
- *
COMPUTER-aided diagnosis , *COMPUTER-assisted image analysis (Medicine) , *IMAGE recognition (Computer vision) , *IMAGE analysis , *CONVOLUTIONAL neural networks , *LOCALIZATION (Mathematics) , *SUPERVISED learning - Abstract
Weakly supervised medical image analysis is of great significance for computer-aided diagnosis due to the difficulty in obtaining accurately labeled medical data. In this paper, we propose a new Multi-instance Learning (MIL) framework called HybridMIL, integrating Convolutional Neural Networks (CNNs) and Broad Learning Systems (BLS). HybridMIL overcomes several challenging issues in existing MIL methods based on either CNNs or BLS alone: (i) multiple levels (i.e., different resolutions) of feature information can be simultaneously extracted through a newly proposed instance-level feature enhancement (IFE) module; (ii) global-level semantic information contained in the deep layers can be better represented under the global-level semantic enhancement (GSE) module; (iii) a hybrid feature fusion (HFF) module is newly designed to effectively fuse and align the multi-level outputs of IFE and the global-level semantic information of GSE for subsequent classification and localization tasks. The proposed HybridMIL is evaluated on various public medical and MIL benchmark datasets. The results indicate that HybridMIL surpasses other recent MIL models in classification and localization performance by up to 8.5% and 9.0%, respectively. Lastly, we demonstrate the highly competitive performance of HybridMIL on general MIL problems, going beyond weakly supervised medical image analysis. • A novel hybrid MIL network is proposed that combines CNNs with the BLS to capture multiple-level feature information and global-level semantic information, and estimates the inter-correlation between them, forming a single framework. • The proposed HybridMIL includes not only the cognitive process of visual appearance but also the enhanced representation process of instance- and semantic-level correlations.
• Without any additional mechanism, the proposed HybridMIL framework easily and effectively achieves competitive classification and localization performance on public medical datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
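HybridMIL's IFE, GSE, and HFF modules build on the basic multi-instance assumption that a bag's label is determined by pooling its instance scores. A minimal sketch of that core assumption (the pooling choice and the 0.5 threshold are illustrative assumptions, not the paper's design):

```python
import numpy as np

def bag_predict(instance_scores, pooling="max"):
    """Classic MIL aggregation: pool per-instance scores into one bag
    score, then threshold. A bag is positive if any instance fires
    (max pooling) or if instances fire on average (mean pooling)."""
    s = np.asarray(instance_scores, dtype=float)
    pooled = s.max() if pooling == "max" else s.mean()
    return pooled, int(pooled > 0.5)

pooled, label = bag_predict([0.1, 0.2, 0.9])   # one instance fires -> positive bag
```

Under max pooling the bag-level decision also localizes the responsible instance, which is why MIL supports weakly supervised localization.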
8. Rice leaf disease identification and classification using machine learning techniques: A comprehensive review.
- Author
-
Mukherjee, Rashmi, Ghosh, Anushri, Chakraborty, Chandan, De, Jayanta Narayan, and Mishra, Debi Prasad
- Subjects
- *
RICE diseases & pests , *COMPUTER-assisted image analysis (Medicine) , *MACHINE learning , *ARTIFICIAL intelligence , *LEAF area - Abstract
In recent times, various researchers have attempted to develop artificial intelligence (AI) assisted techniques in the field of agriculture for early detection, surveillance, and treatment of plant leaf, seed, root, and stem diseases. Rice leaf disease detection is one such important area, where the crop is frequently affected by various diseases. Farmers usually inspect crops at a later stage, after enormous damage has already occurred, and such manual inspection is subjective, time-consuming, and error-prone. In this situation, AI-enabled tools and techniques play a crucial role in earlier and more precise prediction of rice diseases. This paper presents a comprehensive review of AI-assisted rice leaf disease detection over the last two decades. Research studies were searched using relevant keywords through the online databases [PubMed: 246; Science Direct: 100; Scopus: 56; Web of Science: 8; Wiley Online Library: 16; Cochrane: 0; cross references: 20]. A total of 446 titles and abstracts were identified as suitable for this study and, finally, the 48 most appropriate state-of-the-art articles were considered. Furthermore, this study summarizes the visual characteristics of rice leaf diseases, imaging modalities, and image acquisition techniques. Various image processing techniques for infected leaf area segmentation and feature extraction were also summarized. Finally, the reported machine learning (ML) algorithms were discussed and compared with respect to their advantages and limitations. In addition, AI-enabled mobile applications for rice disease detection are discussed. • Demonstrated a comprehensive review of machine learning algorithms published between 1999 and 2022 for rice leaf disease detection. • Summarized visual characteristics of rice leaf diseases, imaging modalities and image acquisition techniques. • Comparative study of the literature with respect to infected-area segmentation and feature extraction.
• Explored ML algorithms for rice leaf disease detection and compared them with respect to their advantages and limitations. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
9. Adaptive fusion of dual-view for grading prostate cancer.
- Author
-
He, Yaolin, Li, Bowen, He, Ruimin, Fu, Guangming, Sun, Dan, Shan, Dongyong, and Zhang, Zijian
- Subjects
- *
MAGNETIC resonance imaging , *PROSTATE cancer , *IMAGE analysis , *CANCER diagnosis , *DIFFUSION coefficients , *DEEP learning - Abstract
Accurate preoperative grading of prostate cancer is crucial for assisted diagnosis. Multi-parametric magnetic resonance imaging (MRI) is a commonly used non-invasive approach; however, the interpretation of MRI images is still subject to significant subjectivity due to variations in physicians' expertise and experience. To achieve accurate, non-invasive, and efficient grading of prostate cancer, this paper proposes a deep learning method that adaptively fuses dual-view MRI images. Specifically, a dual-view adaptive fusion model is designed. The model employs encoders to extract embedded features from two MRI sequences: T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC). The model reconstructs the original input images from the embedded features and adopts a cross-embedding fusion module to adaptively fuse the embedded features from the two views. Adaptive fusion refers to dynamically adjusting the fusion weights of the features from the two views according to different input samples, thereby fully utilizing complementary information. Furthermore, the model adaptively weights the prediction results from the two views based on uncertainty estimation, further enhancing the grading performance. To verify the importance of effective multi-view fusion for prostate cancer grading, extensive experiments are designed. The experiments evaluate the performance of single-view models, dual-view models, and state-of-the-art multi-view fusion algorithms. The results demonstrate that the proposed dual-view adaptive fusion method achieves the best grading performance, confirming its effectiveness for assisted grading diagnosis of prostate cancer. This study provides a novel deep learning solution for preoperative grading of prostate cancer, which has the potential to assist clinical physicians in making more accurate diagnostic decisions and has significant clinical application value. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
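One common way to realize the uncertainty-weighted combination of per-view predictions described above is inverse-variance weighting. The paper's actual uncertainty estimator is not specified in the abstract, so this is only a sketch under that assumption; the probabilities and variances below are synthetic:

```python
import numpy as np

def fuse_views(p_t2wi, var_t2wi, p_adc, var_adc):
    """Inverse-variance weighting of per-view class probabilities:
    the view with lower predictive variance (higher certainty)
    dominates the fused prediction."""
    w1, w2 = 1.0 / var_t2wi, 1.0 / var_adc
    return (w1 * np.asarray(p_t2wi) + w2 * np.asarray(p_adc)) / (w1 + w2)

# ADC view is more certain (var 0.02 < 0.10), so it pulls the result
fused = fuse_views([0.6, 0.4], 0.10, [0.9, 0.1], 0.02)
```

Because the weights are normalized, the fused vector stays a valid probability distribution whenever both inputs are.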
10. A Multi-task learning U-Net model for end-to-end HEp-2 cell image analysis.
- Author
-
Percannella, Gennaro, Petruzzello, Umberto, Tortorella, Francesco, and Vento, Mario
- Subjects
- *
ARTIFICIAL neural networks , *CELL segmentation , *IMAGE recognition (Computer vision) , *IMAGE segmentation , *IMAGE analysis - Abstract
Antinuclear Antibody (ANA) testing is pivotal to help diagnose patients with a suspected autoimmune disease. The Indirect Immunofluorescence (IIF) microscopy performed with human epithelial type 2 (HEp-2) cells as the substrate is the reference method for ANA screening. It allows for the detection of antibodies binding to specific intracellular targets, resulting in various staining patterns that should be identified for diagnosis purposes. In recent years, there has been an increasing interest in devising deep learning methods for automated cell segmentation and classification of staining patterns, as well as for other tasks related to this diagnostic technique (such as intensity classification). However, little attention has been devoted to architectures aimed at simultaneously managing multiple interrelated tasks via a shared representation. In this paper, we propose a deep neural network model that extends U-Net in a Multi-Task Learning (MTL) fashion, thus offering an end-to-end approach to tackle three fundamental tasks of the diagnostic procedure, i.e., HEp-2 cell specimen intensity classification, specimen segmentation, and pattern classification. The experiments were conducted on one of the largest publicly available datasets of HEp-2 images. The results showed that the proposed approach significantly outperformed the competing state-of-the-art methods for all the considered tasks. • IIF on HEp-2 cells is crucial for diagnosing autoimmune diseases. • It involves one segmentation task and two classification tasks. • A Multi-Task Learning approach could help but had not been explored until now. • We propose a Multi-Task, end-to-end U-Net architecture for performing all three tasks. • We achieve significant improvements over methods designed for individual tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
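A shared-representation multi-task model like the one described is typically trained on a weighted sum of per-task losses. The sketch below combines a Dice segmentation loss with two cross-entropy classification losses; the loss choices and unit weights are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 0 for a perfect overlap, up to 1 for none."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cross_entropy(probs, label):
    """Negative log-probability of the true class."""
    return -np.log(probs[label] + 1e-12)

def multitask_loss(seg_pred, seg_gt, int_probs, int_y, pat_probs, pat_y,
                   w=(1.0, 1.0, 1.0)):
    """Weighted sum over the three tasks: segmentation, intensity
    classification, pattern classification."""
    return (w[0] * dice_loss(seg_pred, seg_gt)
            + w[1] * cross_entropy(int_probs, int_y)
            + w[2] * cross_entropy(pat_probs, pat_y))

seg = np.ones((4, 4)); gt = np.ones((4, 4))            # perfect segmentation
loss = multitask_loss(seg, gt, np.array([0.8, 0.2]), 0,
                      np.array([0.1, 0.7, 0.2]), 1)
```

With a perfect segmentation the Dice term vanishes, so the total reduces to the two classification cross-entropies.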
11. Porosity prediction of cold sprayed titanium parts using machine learning.
- Author
-
Eberle, Martin, Pinches, Samuel, Kean Wah Tai, Wesley, Guzman, Pablo, King, Hannah, Zhou, Hailing, and Ang, Andrew
- Subjects
- *
ARTIFICIAL neural networks , *FEATURE selection , *RANDOM forest algorithms , *PRINCIPAL components analysis , *IMAGE analysis - Abstract
• 60 samples were manufactured, and the porosity was measured via image analysis. • Comparison of data preparation methods such as filter-based feature selection and principal component analysis. • Training of machine learning algorithms with the prepared datasets. • A prediction error of less than 0.7% porosity was achieved. The desired porosity level of cold-sprayed titanium parts varies depending on the application and therefore requires precise control. To achieve the desired porosity, selecting the correct spray parameters is essential. This study investigates how the cold spraying process affects porosity levels through the application of machine learning techniques. Fourteen parameters are recorded during the cold spraying of titanium parts, with the porosity level of each run measured manually through the analysis of microscope images. Due to the high cost of generating data, the dataset size was limited. To allow machine learning models to be trained properly despite this, the study carefully enhances the firsthand dataset using feature engineering, feature selection, and dimension-reduction techniques. The study implemented random forest, gradient boosting, and neural network algorithms, with the neural network model demonstrating the best performance. This model achieved an RMSE of 0.7% on unseen data. For the spray parameter ranges of the available dataset, Shapley value analysis identified the spray angle as the most influential feature for predicting porosity. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
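Principal component analysis, one of the data-preparation methods the study compares, can be written compactly with an SVD. The 60 x 14 matrix below is a synthetic stand-in for the study's 60 samples and 14 recorded spray parameters:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project features onto their leading principal components via
    SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T   # scores in the reduced space

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 14))   # 60 samples x 14 spray parameters (synthetic)
Z = pca_reduce(X, 3)
```

For a small, expensive-to-collect dataset like this one, reducing 14 correlated parameters to a few components is a standard way to curb overfitting.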
12. Advancing histopathology in Health 4.0: Enhanced cell nuclei detection using deep learning and analytic classifiers.
- Author
-
Pons, S., Dura, E., Domingo, J., and Martin, S.
- Subjects
- *
CELL nuclei , *DIGITAL technology , *IMAGE analysis , *EVIDENCE gaps , *LOGISTIC regression analysis , *DEEP learning - Abstract
This study contributes to the Health 4.0 paradigm by enhancing the precision of cell nuclei detection in histopathological images, a critical step in digital pathology. The presented approach is characterized by the combination of deep learning with traditional analytic classifiers. Traditional methods in histopathology rely heavily on manual inspection by expert histopathologists. While deep learning has revolutionized this process by offering rapid and accurate detections, its black-box nature often results in a lack of interpretability. This can be a significant hindrance in clinical settings where understanding the rationale behind predictions is crucial for decision-making and quality assurance. Our research addresses this gap by employing the YOLOv5 framework for initial nuclei detection, followed by an analysis phase where poorly performing cases are isolated and retrained to enhance model robustness. Furthermore, we introduce a logistic regression classifier that uses a combination of color and textural features to discriminate between satisfactorily and unsatisfactorily analyzed images. This dual approach not only improves detection accuracy but also provides insights into model performance variations, fostering a layer of interpretability absent in most deep learning applications. By integrating these advanced analytical techniques, our work aligns with the Health 4.0 initiative's goals of leveraging digital innovations to elevate healthcare quality. This study paves the way for more transparent, efficient, and reliable digital pathology practices, underscoring the potential of smart technologies in enhancing diagnostic processes within the Health 4.0 framework. • This paper addresses analysis of histopathological images for cell nuclei detection. • An ensemble of two deep learning models is proposed to improve the performance. • Initial step involves training a deep learning model using all available images.
• Results are assessed and categorized based on the model's performance. • Images with worse results are identified, augmented and used to train a new model. • A logistic regression classifier reproduces the data division. • The input features of this classifier offer valuable insights for pathologists. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
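The satisfactory/unsatisfactory discrimination step described above is a plain logistic regression over image features. A self-contained sketch with a single synthetic feature (the paper's actual color and textural features are not specified here):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=500):
    """Minimal logistic regression fit by batch gradient descent on
    the cross-entropy loss."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        g = p - y                                 # CE gradient signal
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# toy "satisfactory (1) vs unsatisfactory (0)" split on one feature
X = np.array([[0.1], [0.2], [0.8], [0.9]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = fit_logistic(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

The appeal noted in the abstract is that the fitted coefficients over interpretable features explain *why* an image was flagged, unlike the upstream detector.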
13. Anomaly detection and segmentation in industrial images using multi-scale reverse distillation.
- Author
-
Liu, Chien-Liang and Chung, Chia-Chen
- Subjects
ANOMALY detection (Computer security) ,IMAGE reconstruction ,IMAGE analysis ,AUTOENCODER ,DISTILLATION - Abstract
Anomaly detection and segmentation in industrial images are critical tasks requiring robust and precise methodologies. This paper presents the Multi-Scale Reverse Distillation (MSRD) methodology, an innovative improvement of the foundational reverse distillation approach. MSRD leverages autoencoder-based techniques integrated with information at different levels to significantly enhance reconstruction capabilities. A novel module incorporated at the decoder's end facilitates precise sample reconstruction. The proposed loss function incorporates the reconstruction loss L_Recon, calculated using the structural similarity index measure (SSIM) between the original and reconstructed images, in addition to the knowledge distillation loss L_KD. Additionally, the integration of a feature pyramid network improves the spatial coherence of anomaly maps across varying scales, enabling detailed anomaly segmentation. The MSRD method undergoes rigorous evaluation on three public datasets, demonstrating superior performance in both anomaly detection and segmentation. The results highlight MSRD's adaptability and effectiveness in one-class learning-based applications. This study underscores MSRD's potential as a powerful tool for industrial anomaly detection, offering significant advancements in AI-driven image analysis. • Introduce MSRD, which enhances anomaly detection in images. • MSRD boosts RD with information at different levels. • Novel decoder module for precise image reconstruction. • The refined loss function improves the detection precision. • Proven effectiveness on MVTec, VisA, and BTAD datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
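The loss described above adds an SSIM-based reconstruction term to the knowledge-distillation term. A sketch using single-window SSIM computed over the whole image (real implementations typically use a sliding Gaussian window, and the weighting `lam` is an assumption):

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """SSIM evaluated over the whole image as a single window
    (the standard formula without local windowing)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

def msrd_style_loss(l_kd, img, recon, lam=1.0):
    """L = L_KD + lam * (1 - SSIM(img, recon)): the distillation term
    plus a structural reconstruction penalty."""
    return l_kd + lam * (1.0 - ssim_global(img, recon))

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
perfect = msrd_style_loss(0.2, img, img.copy())   # SSIM = 1, no penalty
```

A perfect reconstruction leaves only the distillation term, so the SSIM term penalizes exactly the structural deviations an anomaly introduces.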
14. A three stage framework for abnormality detection in sperm cell images using CNN.
- Author
-
Prabaharan, L. and Saravanan, N.
- Subjects
CONVOLUTIONAL neural networks ,CELL morphology ,CELL analysis ,DEEP learning ,IMAGE analysis - Abstract
• This study addresses infertility, an important social problem. • A convolutional neural network was used to increase detection accuracy in sperm cell images. • A novel framework enables a streamlined implementation flow. Image analysis is crucial for microscopic medical images, particularly for imaging sperm cells. Sperm morphology analysis, a crucial step in assisted fertilization techniques, can be used to evaluate male infertility, which significantly impacts couples' quality of life. This paper proposes a technique that combines convolutional neural networks (CNNs) with modified Havrda-Charvat entropic segmentation to identify normal sperm cells in pre-processed image samples. Initially, a noise removal algorithm is applied to the sperm cell images, followed by segmentation using the modified Havrda-Charvat entropy method to isolate individual sperm cells. High detection accuracy is then achieved through a combination of deep learning and feature extraction. This research optimizes three stages: image pre-processing with a Wiener filter, segmentation using the Havrda-Charvat entropy technique, and abnormality detection with a CNN. The proposed method achieves 98.99% accuracy in identifying normal sperm cells based on their morphology, outperforming state-of-the-art techniques. By enhancing sperm cell analysis methods, this research facilitates more precise and automated segmentation, processing, and detection. The proposed approach has the potential to revolutionize reproductive medicine by improving the accuracy of fertility diagnoses and the effectiveness of treatments. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
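Havrda-Charvat entropy is the same functional as Tsallis entropy, and entropic thresholding in this family picks the gray level that maximizes the summed class entropies. The sketch below implements the plain (unmodified) version on a toy histogram; the paper's specific modification is not reproduced:

```python
import numpy as np

def havrda_charvat_threshold(hist, q=0.8):
    """Pick the gray level t maximizing H_q(background) + H_q(foreground),
    where H_q(p) = (1 - sum(p_i**q)) / (q - 1) is the Havrda-Charvat
    (Tsallis) entropy of the class-conditional distribution."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        h0 = (1.0 - np.sum((p[:t] / w0) ** q)) / (q - 1.0)
        h1 = (1.0 - np.sum((p[t:] / w1) ** q)) / (q - 1.0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# bimodal toy histogram: dark background near bin 2, bright cells near bin 7
hist = np.array([5, 30, 60, 30, 5, 5, 30, 60, 30, 5], dtype=float)
t = havrda_charvat_threshold(hist)
```

As q approaches 1 this criterion reduces to classical Shannon-entropy (Kapur) thresholding, which is why the q parameter is the knob such methods tune.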
15. A hyperspectral stealth material design method based on the composition and mixing spectral feature of desert soil.
- Author
-
Ma, Xiaodong, Wei, Biao, Qing, Xiaolong, Wang, Yaqin, Qi, Lun, Wu, Xueyu, Yuan, Le, and Weng, Xiaolong
- Subjects
ENVIRONMENTAL soil science ,DESERT soils ,SOIL science ,IMAGE analysis ,MULTISPECTRAL imaging - Abstract
In this study, we used desert soil from Gansu, China, as a sample to propose a method for designing hyperspectral stealth coatings against desert soil backgrounds within the spectral range of 400–2500 nm, and the corresponding coating was prepared. First, the correlation between the composition and the typical detected spectral characteristics of the desert soil was systematically analyzed. It was found that the color and the spectrum of the desert soil in the range of 400–1000 nm were influenced by different types of iron oxides, while the main spectral characteristics and reflection intensity at 1000–2500 nm were governed by quartz and montmorillonite. Subsequently, the design method for hyperspectral stealth coatings was developed by analyzing the differences in spectral and structural characteristics between the coatings and the soil. The prepared coating exhibited a similar color and spectral shape to the soil in the range of 400–1000 nm, with comparable spectral features around 1414 nm, 1915 nm, 2212 nm, 2250 nm, and 2346 nm. The correlation coefficient and the spectral cosine angle between the reflectance spectra of the coating and the soil within the 400–2500 nm range were calculated to be 0.989 and only 0.05 radians, respectively. The effectiveness of the coating in achieving excellent camouflage against the desert soil background was confirmed through the analysis of multispectral images and thermal infrared temperature. This study holds significant importance for the application of hyperspectral stealth techniques in desert soil scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
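The two similarity figures reported above, the correlation coefficient (0.989) and the spectral cosine angle (0.05 rad), can be computed directly from two reflectance spectra. The spectra below are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

def spectral_similarity(r_coating, r_soil):
    """Pearson correlation and spectral angle (radians) between two
    reflectance spectra sampled on the same wavelength grid."""
    corr = np.corrcoef(r_coating, r_soil)[0, 1]
    cosang = np.dot(r_coating, r_soil) / (
        np.linalg.norm(r_coating) * np.linalg.norm(r_soil))
    return corr, np.arccos(np.clip(cosang, -1.0, 1.0))

wl = np.linspace(400.0, 2500.0, 211)                 # 400-2500 nm grid
soil = 0.2 + 0.3 * (wl - 400.0) / 2100.0             # synthetic soil spectrum
coating = soil + 0.01 * np.sin(wl / 200.0)           # coating: small deviation
corr, angle = spectral_similarity(coating, soil)
```

The spectral angle is insensitive to overall brightness scaling while the correlation tracks spectral shape, which is why the study reports both.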
16. Reproducibility and repeatability of 18F-(2S, 4R)-4-fluoroglutamine PET imaging in preclinical oncology models.
- Author
-
Ayers, Gregory D., Cohen, Allison S., Bae, Seong-Woo, Wen, Xiaoxia, Pollard, Alyssa, Sharma, Shilpa, Claus, Trey, Payne, Adria, Geng, Ling, Zhao, Ping, Tantawy, Mohammed Noor, Gammon, Seth T., and Manning, H. Charles
- Subjects
POSITRON emission tomography ,MEASUREMENT errors ,IMAGE analysis ,STATISTICAL correlation ,NULL hypothesis - Abstract
Introduction: Measurement of repeatability and reproducibility (R&R) is necessary to realize the full potential of positron emission tomography (PET). Several studies have evaluated the reproducibility of PET using 18F-FDG, the most common PET tracer used in oncology, but similar studies using other PET tracers are scarce. Even fewer assess agreement and R&R with statistical methods designed explicitly for the task. 18F-(2S,4R)-4-fluoroglutamine (18F-Gln) is a PET tracer designed for imaging glutamine uptake and metabolism. This study illustrates high reproducibility and repeatability with 18F-Gln for in vivo research.
Methods: Twenty mice bearing colorectal cancer cell line xenografts were injected with ~9 MBq of 18F-Gln and imaged in an Inveon microPET. Three individuals analyzed the tumor uptake of 18F-Gln using the same set of images, the same image analysis software, and the same analysis method. Scans were randomly re-ordered for a second repeatability measurement 6 months later. Statistical analyses were performed using the methods of Bland and Altman (B&A), Gauge Reproducibility and Repeatability (Gauge R&R), and Lin's Concordance Correlation Coefficient. A comprehensive equivalency test, designed to reject a null hypothesis of non-equivalence, was also conducted.
Results: In a two-way random effects Gauge R&R model, the variance among mice and the measurement variance were 0.5717 and 0.024, respectively. Reproducibility and repeatability accounted for 31% and 69% of the total measurement error, respectively. B&A repeatability coefficients for analysts 1, 2, and 3 were 0.16, 0.35, and 0.49. One-half B&A agreement limits between analysts 1 and 2, 1 and 3, and 2 and 3 were 0.27, 0.47, and 0.47, respectively. The mean square deviation and total deviation index were lowest for analysts 1 and 2, while coverage probabilities and coefficients of individual agreement were highest. Finally, the definitive agreement inference hypothesis test for equivalency demonstrated that all three confidence intervals for the average difference of means from repeated measures lie within our a priori limits of equivalence (i.e., ±0.5 %ID/g).
Conclusions: Our data indicate high individual analyst and laboratory-level reproducibility and repeatability. The assessment of R&R using the appropriate methods is critical and should be adopted by the broader imaging community. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
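The Bland–Altman statistics named in the abstract above can be sketched with the standard textbook formulas: the bias and 95% limits of agreement from paired differences, and the repeatability coefficient from repeated measurements. This is a generic illustration, not the study's code, and the paired uptake values below are hypothetical.

```python
import numpy as np

def bland_altman(x, y):
    """Bland & Altman agreement between two sets of paired measurements.

    Returns the mean difference (bias) and the 95% limits of agreement,
    bias +/- 1.96 * SD of the paired differences.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def repeatability_coefficient(x, y):
    """Repeatability coefficient from one analyst's repeated measurements:
    1.96 * sqrt(2) * within-subject SD, estimated from the paired repeats."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    sw = np.sqrt(np.mean(d ** 2) / 2.0)  # within-subject SD from pairs
    return 1.96 * np.sqrt(2.0) * sw

# Hypothetical paired tumor-uptake readings (%ID/g) from two analysts
a1 = np.array([1.02, 0.87, 1.15, 0.93, 1.08])
a2 = np.array([1.00, 0.90, 1.12, 0.95, 1.05])
bias, (lo, hi) = bland_altman(a1, a2)
rc = repeatability_coefficient(a1, a2)
```

Narrow limits of agreement and a small repeatability coefficient relative to the measured quantity are what the study's equivalence bounds (±0.5 %ID/g) formalize.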
17. Skin image analysis for detection and quantitative assessment of dermatitis, vitiligo and alopecia areata lesions: a systematic literature review.
- Author
-
Kallipolitis, Athanasios, Moutselos, Konstantinos, Zafeiriou, Argyrios, Andreadis, Stelios, Matonaki, Anastasia, Stavropoulos, Thanos G., and Maglogiannis, Ilias
- Subjects
COMPUTER vision ,IMAGE analysis ,SKIN imaging ,IMAGE processing ,ARTIFICIAL intelligence ,DEEP learning - Abstract
Vitiligo, alopecia areata, atopic, and stasis dermatitis are common skin conditions that pose diagnostic and assessment challenges. Skin image analysis is a promising noninvasive approach for objective and automated detection as well as quantitative assessment of skin diseases. This review provides a systematic literature search regarding the analysis of computer vision techniques applied to these benign skin conditions, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The review examines deep learning architectures and image processing algorithms for segmentation, feature extraction, and classification tasks employed for disease detection. It also focuses on practical applications, emphasizing quantitative disease assessment and the performance of various computer vision approaches for each condition, while highlighting their strengths and limitations. Finally, the review highlights the need for disease-specific datasets with curated annotations and suggests future directions toward unsupervised or self-supervised approaches. Additionally, the findings underscore the importance of developing accurate, automated tools for disease severity score calculation to improve ML-based monitoring and diagnosis in dermatology. Trial registration: Not applicable. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
18. Label credibility correction based on cell morphological differences for cervical cells classification.
- Author
-
Pang, Wenbo, Qiu, Yue, Jin, Shu, Jiang, Huiyan, and Ma, Yi
- Subjects
IMAGE recognition (Computer vision) ,IMAGE analysis ,CELL imaging ,SUPERVISED learning ,ARTIFICIAL intelligence - Abstract
Cervical cancer is one of the deadliest cancers posing a significant threat to women's health. Early detection and treatment are commonly used methods to prevent cervical cancer. The use of pathological image analysis techniques for the automatic interpretation of cervical cells in pathological slides is a prominent area of research in the field of digital medicine. According to The Bethesda System, cervical cytology necessitates further classification of precancerous lesions based on positive interpretations. However, clinical definitions among different categories of lesions are complex and often characterized by fuzzy boundaries. In addition, pathologists can derive different criteria for judgment from The Bethesda System, leading to potential confusion during data labeling. The resulting noisy labels are a great challenge for supervised learning. To address this problem, we propose a label credibility correction method for a cervical cell image classification network. Firstly, a contrastive learning network is used to extract discriminative features from cell images to obtain more similar intra-class sample features. Subsequently, these features are fed into an unsupervised clustering method, yielding unsupervised class labels. The unsupervised labels are then matched against the true labels to separate confusable and typical samples. Through a similarity comparison between the cluster samples and the statistical feature centers of each class, a label credibility analysis is carried out to group the labels. Finally, a multi-class cervical cell image classification network is trained using a synergistic grouping method. To enhance the stability of the classification model, momentum is incorporated into the synergistic grouping loss. Experimental validation is conducted on a dataset comprising approximately 60,000 cells from multiple hospitals, showcasing the effectiveness of our proposed approach.
The method achieves a 2-class task accuracy of 0.9241 and a 5-class task accuracy of 0.8598. Our proposed method achieves better performance than existing classification networks on cervical cell classification. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
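The credibility analysis step described in the abstract above, comparing each sample's features to the statistical feature center of its labelled class, can be sketched with cosine similarity. This is a minimal stand-in for the paper's pipeline: the feature vectors, the threshold value, and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def label_credibility(features, labels, threshold=0.8):
    """Flag samples whose features sit far from their class feature center.

    Computes the cosine similarity between each sample and the mean
    feature vector of its labelled class; samples below `threshold`
    are treated as confusable (possibly mislabelled), the rest as typical.
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    centers = {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}
    sims = np.empty(len(labels))
    for i, (f, c) in enumerate(zip(features, labels)):
        center = centers[c]
        sims[i] = f @ center / (np.linalg.norm(f) * np.linalg.norm(center))
    return sims, sims >= threshold
```

In the full method these similarity groups then drive the synergistic grouping loss, so that typical samples anchor training while confusable ones are down-weighted or relabelled.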
19. Reports Outline Food Research Study Results from Sao Paulo State University (UNESP) (Alternative Non-Destructive Approach for Estimating Morphometric Measurements of Chicken Eggs from Tomographic Images with Computer Vision).
- Subjects
COMPUTED tomography ,TOMOGRAPHY ,IMAGE analysis ,EGGS ,COMPUTER vision ,DEEP learning - Abstract
Researchers at Sao Paulo State University (UNESP) have developed a non-destructive method using computer vision to estimate morphometric measurements of chicken eggs from tomographic images. This approach, utilizing deep learning architectures, achieved an accuracy of up to 98.69% in extracting important measurements such as height, width, shell thickness, and volume. The study suggests that this alternative method could replace traditional invasive techniques, offering greater efficiency and accuracy in assessing egg quality in industrial and research settings. The research was supported by the Sao Paulo State Research Support Foundation and the Pernambuco State Science And Technology Support Foundation. [Extracted from the article]
- Published
- 2025
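The morphometric quantities named in the report above (height, width, volume) can be illustrated on a binary mask derived from tomographic slices. This is a simplified, non-learned stand-in for the deep-learning pipeline the researchers describe: axis-aligned bounding-box extents plus voxel-count volume, with the function name and voxel size as illustrative assumptions.

```python
import numpy as np

def egg_dimensions(mask, voxel_mm=1.0):
    """Estimate egg height, width, and volume from a 3D binary mask.

    `mask` is a boolean (z, y, x) array segmented from a CT stack.
    Height is the extent along z, width the larger in-plane extent,
    and volume the voxel count scaled by the voxel size.
    """
    zs, ys, xs = np.nonzero(mask)
    height = (zs.max() - zs.min() + 1) * voxel_mm
    width = (max(ys.max() - ys.min(), xs.max() - xs.min()) + 1) * voxel_mm
    volume = mask.sum() * voxel_mm ** 3
    return height, width, volume
```

Shell thickness, also measured in the study, would need a per-slice boundary analysis rather than these whole-object extents.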