3,474 results for "THRESHOLDING algorithms"
Search Results
2. Novel approaches for target parameter extraction with eigenvalue thresholding and Dolph–Chebyshev windowing in multiple‐input multiple‐output (MIMO) radar system.
- Author
- Jagtap, Sheetal G. and Kunte, Ashwini S.
- Subjects
- *MARINE electronics, *MULTIPLE Signal Classification, *RADIO waves, *MIMO systems, *THRESHOLDING algorithms
- Abstract
Summary: Multiple‐input multiple‐output (MIMO) radar, employing multiple transmitters and receivers, enhances radar capabilities. It detects and tracks objects like aircraft and ships using radio waves. Compared with traditional phased‐array radar, MIMO systems offer greater flexibility, improving angular resolution and target detection. Researchers focus on direction of arrival (DoA) estimation for closely spaced targets. Effective beamforming and accurate DoA estimation are crucial for MIMO radar performance. This study explores two methods: Capon beamforming with Dolph–Chebyshev windowing and the MUSIC algorithm with eigenvalue thresholding. Tested under low signal‐to‐noise ratio (SNR) and with few snapshots, these techniques notably reduce side lobes and enhance angular resolution, as validated by experiments. Additionally, the suppression of side lobes significantly improves the clarity and accuracy of target detection, minimizing potential interference and false targets. This enhancement in side lobe suppression facilitates more precise spatial differentiation between multiple targets, thus contributing to the overall effectiveness and reliability of MIMO radar systems. [ABSTRACT FROM AUTHOR] (An illustrative sketch of the eigenvalue-thresholding step follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
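The eigenvalue-thresholding step named above separates the signal subspace from the noise subspace before MUSIC's pseudospectrum is formed. The abstract does not state the exact decision rule, so the minimal sketch below counts sources with an assumed relative-magnitude cutoff (`threshold_ratio` is hypothetical, not the authors' value):

```python
import numpy as np

def estimate_num_sources(snapshots, threshold_ratio=0.1):
    """Estimate the number of targets by thresholding the eigenvalues of
    the sample covariance matrix; the eigenvectors beyond that count span
    the noise subspace that MUSIC then scans against."""
    # snapshots: (num_antennas, num_snapshots) complex array
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # real, descending
    # Signal eigenvalues stand well above the noise floor; keep those
    # exceeding an assumed fraction of the largest eigenvalue.
    return int(np.sum(eigvals > threshold_ratio * eigvals[0]))
```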
3. GSISTA-Net: generalized structure ISTA networks for image compressed sensing based on optimized unrolling algorithm.
- Author
- Zeng, Chunyan, Yu, Yan, Wang, Zhifeng, Xia, Shiyan, Cui, Hao, and Wan, Xiangkui
- Subjects
- THRESHOLDING algorithms, IMAGE reconstruction, BUILDING additions, DEEP learning, SIGNAL-to-noise ratio, COMPRESSED sensing
- Abstract
Image compressed sensing technology, particularly algorithm unrolling networks, has garnered significant attention in the field of compressed sensing due to their interpretability and high performance. However, similar to traditional compressed sensing methods, algorithm unrolling networks update and transmit pixel-level image data through specific instances of algorithms, often failing to fully exploit the rich information encoded in image features. This limitation results in information loss and incomplete feature fusion. In this paper, we introduce a novel approach, the Generalized Structure Iterative Shrinkage Threshold Algorithm (GSISTA), and present an algorithm unrolling network built upon GSISTA, referred to as GSISTA-Net. GSISTA-Net facilitates the efficient transfer of image feature information during the deep reconstruction stage through a skip-connection structure. Additionally, it incorporates a dual-scale denoising module within the deep reconstruction stage to enhance denoising effectiveness. Our experimental results demonstrate that the proposed method surpasses five prominent state-of-the-art algorithms, specifically ReconNet, CSNet, ISTA-Net+, AMPNet, and OPINE-Net+, in terms of both Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). [ABSTRACT FROM AUTHOR] (A reference ISTA sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
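GSISTA-Net unrolls the iterative shrinkage-thresholding algorithm (ISTA) into trainable network stages. For reference, here is a minimal plain ISTA for the l1-regularized least-squares problem that such networks learn to parameterize; the trained model replaces the fixed transforms and thresholds with learned modules, so this is a baseline sketch, not the paper's method:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, step=None, n_iter=200):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data term, then shrinkage on the sparsity term
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x
```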
4. Diagnosis and multiclass classification of diabetic retinopathy using enhanced multi thresholding optimization algorithms and improved Naive Bayes classifier.
- Author
- Bhimavarapu, Usharani
- Subjects
- NAIVE Bayes classification, OPTIMIZATION algorithms, DIABETIC retinopathy, METAHEURISTIC algorithms, FEATURE extraction, THRESHOLDING algorithms
- Abstract
Early diagnosis is crucial to prevent a diabetic patient from being affected by blindness, so automatic and accurate detection of diabetic retinopathy (DR) is essential. A methodology for the detection and classification of diabetic retinopathy is presented in this paper. Data preprocessing methods are used to highlight subtle information so that DR anomalies can be classified accurately, and image-enhancing techniques are used to boost image quality. Following the preprocessing stage, three main procedures are performed: segmentation, feature extraction, and classification. In contrast to brute-force methods, metaheuristic algorithms can explore the solution space more quickly and provide precise, ideal solutions. Due to a lack of detailed image data, it is impossible to determine precise boundaries from image segmentation features alone. Threshold segmentation is the most effective choice for segmenting fundus images since it offers simple implementation, low computational complexity, and improved performance. A new multi-thresholding variant of grasshopper optimization is proposed. Segmentation using the proposed model achieves high accuracy, even for tiny lesions. A total of 41 features were extracted from the segmented fundus images. Finally, the improved Naïve Bayes classifier classifies the various classes of diabetic retinopathy. The proposed methodology was trained and tested on the DIARETDB0, Messidor-2, EyePACS-1, and APTOS datasets. The improved Naïve Bayes classifier achieved a classification accuracy of 99.98% on the APTOS dataset, better than previously existing techniques. The results proved that the improved Naïve Bayes classifier adequately diagnoses diabetic retinopathy from retinal fundus images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Adaptive thresholding pattern for fingerprint forgery detection.
- Author
- Farzadpour, Zahra and Azghani, Masoumeh
- Subjects
- SUPPORT vector machines, WAVELET transforms, FORGERY, HUMAN fingerprints, FORGERS, PIXELS, THRESHOLDING algorithms
- Abstract
Fingerprint liveness detection systems are affected by spoofing, a severe threat to fingerprint-based biometric systems. It is therefore crucial to develop techniques that distinguish fake fingerprints from real ones. Software-based techniques can detect fingerprint forgery automatically. Such a scheme should also be resistant to distortions such as noise contamination, missing pixels, and missing blocks, so that forgers cannot deceive the detector by adding distortions to a faked fingerprint. In this paper, we propose a fingerprint forgery detection algorithm based on a suggested adaptive thresholding pattern. The anisotropic diffusion of the input image is passed through three levels of the wavelet transform. The coefficients of the different layers are adaptively thresholded and concatenated to produce the feature vector, which is classified using an SVM classifier. Another contribution of the paper is to investigate the effect of various distortions such as missing pixels, missing blocks, and noise contamination. Our approach includes a novel method with improved resistance against a range of distortions caused by environmental phenomena or manipulation by malicious users. In quantitative comparisons, our proposed method outperforms its counterparts by approximately 8% and 5% in accuracy for 90% missing-pixel scenarios and 70 × 70 missing-block scenarios, respectively, highlighting the novelty of the approach in addressing such challenges. [ABSTRACT FROM AUTHOR] (A sketch of the thresholded-feature idea follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
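A minimal sketch of the feature-extraction stage described above, assuming PyWavelets for the three-level decomposition; the per-subband rule `k * median(|c|)` is an assumed stand-in for the paper's adaptive thresholding pattern, which is not fully specified in the abstract (the anisotropic-diffusion preprocessing and SVM training are omitted):

```python
import numpy as np
import pywt  # PyWavelets

def threshold_feature_vector(image, wavelet="db2", levels=3, k=1.5):
    """Build a feature vector by adaptively thresholding wavelet
    detail coefficients and concatenating the surviving values."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    features = []
    for level in coeffs[1:]:                 # detail subbands per level
        for band in level:                   # horizontal, vertical, diagonal
            t = k * np.median(np.abs(band))  # data-driven, per-subband threshold
            kept = band * (np.abs(band) > t)
            features.append(kept.ravel())
    return np.concatenate(features)          # input to an SVM classifier
```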
6. A New Neural-network-based Model for Localizing Synthetic Aperture Radar Images.
- Author
- Guoshi Liu, Keyu Li, Xin Liu, Yingfei Gao, and Hui Li
- Subjects
- OPTICAL remote sensing, SYNTHETIC aperture radar, ARTIFICIAL neural networks, ARCHITECTURAL engineering, BLIND source separation, SPACE-based radar, PIXELS, AZIMUTH, THRESHOLDING algorithms
- Abstract
This article presents a new neural network-based model called SARCoorP-RBFNet for localizing synthetic aperture radar (SAR) images. The model addresses the limitations of traditional models in complex scenarios and has been tested on SAR images in China. It utilizes pairs of geodetic and image space coordinate points as input and output, respectively, and incorporates Gaussian functions as radial basis functions (RBFs). The model is trained using the generalized inverse matrix method and evaluated using the root mean square error function. The study found that the model performs well on single-scene imagery but has reduced accuracy with different-resolution images of the same area. However, it still achieves high fitting and generalization capabilities when trained with mixed samples from different regions. The study suggests that the proposed model can be a viable alternative to traditional geometric processing models for SAR images. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
7. Existence of Almost Greedy Bases in Mixed-Norm Sequence and Matrix Spaces, Including Besov Spaces.
- Author
- Albiac, Fernando, Ansorena, José L., Bello, Glenier, and Wojtaszczyk, Przemysław
- Subjects
- *BESOV spaces, *BANACH spaces, *SEQUENCE spaces, *THRESHOLDING algorithms, *GREEDY algorithms
- Abstract
We prove that the sequence spaces ℓ_p ⊕ ℓ_q and the spaces of infinite matrices ℓ_p(ℓ_q), ℓ_q(ℓ_p), and (⨁_{n=1}^∞ ℓ_p^n)_{ℓ_q}, which are isomorphic to certain Besov spaces, have an almost greedy basis whenever 0 < p < 1 < q < ∞. More precisely, we custom-build almost greedy bases in such a way that the Lebesgue parameters grow in a prescribed manner. Our arguments critically depend on the extension of the Dilworth–Kalton–Kutzarova method from Dilworth et al. (Stud Math 159(1):67–101, 2003), which was originally designed for constructing almost greedy bases in Banach spaces, to make it valid for direct sums of mixed-normed spaces with nonlocally convex components. Additionally, we prove that the fundamental functions of all almost greedy bases of these spaces grow as (m^{1/q})_{m=1}^∞. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. An acoustic signal denoising algorithm based on improved wavelet thresholding, with simulation.
- Author
- 师雪玮, 徐大林, 刘志成, and 徐志彦
- Subjects
- *ACOUSTIC localization, *SIGNAL-to-noise ratio, *NOISE, *RESEARCH methodology, *SIMULATED annealing, *THRESHOLDING algorithms
- Abstract
In fiber-optic acoustic sensing, the low signal-to-noise ratio of the raw data severely affects data reliability and the accuracy of acoustic source localization. To address this issue, this study optimizes the wavelet thresholding method. First, a novel thresholding function is proposed that denoises while preserving key information through shape adjustment factors; it combines the advantages of both hard and soft threshold functions and offers high flexibility and controllability. Second, an adaptive threshold calculation method is introduced, utilizing an improved simulated annealing algorithm to optimize threshold selection and reduce the algorithm's dependence on threshold parameter selection. Simulation experiments verify that the proposed method effectively suppresses noise in the signal and improves data availability. Compared with the original methods, this approach significantly improves the signal-to-noise ratio and demonstrates robustness in simulated tests of real signals. [ABSTRACT FROM AUTHOR] (An assumed form of such a threshold function follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
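The improved threshold function above interpolates between hard and soft thresholding via a shape adjustment factor. The paper's exact function is not given in the abstract, so the following is one assumed form with that behavior (`alpha` near 0 approaches hard thresholding; large `alpha` approaches soft):

```python
import numpy as np

def improved_threshold(w, t, alpha=2.0):
    """Shape-adjustable wavelet threshold function (assumed form):
    coefficients below t are zeroed; retained coefficients are shrunk
    by an amount between 0 (hard) and t (soft), controlled by alpha."""
    w = np.asarray(w, dtype=float)
    out = np.zeros_like(w)
    mask = np.abs(w) > t
    # Exponential shape factor: shrink -> 0 as alpha -> 0 (hard limit),
    # shrink -> t as alpha grows (soft limit).
    shrink = t * (1.0 - np.exp(-alpha * (np.abs(w[mask]) - t) / t))
    out[mask] = np.sign(w[mask]) * (np.abs(w[mask]) - shrink)
    return out
```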
9. Combining CBAM and Iterative Shrinkage-Thresholding Algorithm for Compressive Sensing of Bird Images.
- Author
- Lv, Dan, Zhang, Yan, Lv, Danjv, Lu, Jing, Fu, Yixing, and Li, Zhun
- Subjects
- ARTIFICIAL neural networks, THRESHOLDING algorithms, WAVELET transforms, IMAGE reconstruction, SPECIES diversity
- Abstract
Bird research contributes to understanding species diversity, ecosystem functions, and the maintenance of biodiversity. By analyzing bird images and audio, we can monitor bird distribution, abundance, and behavior to better understand the health of ecosystems. However, bird images and audio involve a vast amount of data. Compressive sensing can overcome this challenge, improving transmission and storage efficiency and saving bandwidth: it uses the sparsity of signals to recover original data from a small number of linear measurements. This paper introduces a deep neural network based on the Iterative Shrinkage Thresholding Algorithm (ISTA) and a Convolutional Block Attention Module (CBAM), CBAM_ISTA-Net+, for the compressive reconstruction of bird images, audio Mel spectrograms, and wavelet transform spectrograms. Using 45 bird species as research subjects, including 20 bird images, 15 audio-generated Mel spectrograms, and 10 audio wavelet transform (WT) spectrograms, the experimental results show that CBAM_ISTA-Net+ achieves a higher peak signal-to-noise ratio (PSNR) at different compression ratios. At a compression ratio of 50%, the average PSNR of the three datasets reaches 33.62 dB, 55.76 dB, and 38.59 dB, while both the Mel spectrogram and wavelet transform spectrogram achieve more than 30 dB at compression ratios of 25–50%. These results highlight the effectiveness of CBAM_ISTA-Net+ in maintaining high reconstruction quality even under significant compression, demonstrating its potential as a valuable tool for efficient data management in ecological research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. On consecutive greedy and other greedy-like type of bases.
- Author
- Berasategui, Miguel, Berná, Pablo M., and Chu, Hùng Việt
- Subjects
- THRESHOLDING algorithms, SCHAUDER bases, GREEDY algorithms
- Abstract
We continue our study of the thresholding greedy algorithm when we restrict the vectors involved in our approximations so that they either are supported on intervals of ℕ or have constant coefficients. We introduce and characterize what we call consecutive greedy bases and provide new characterizations of almost greedy and squeeze symmetric Schauder bases. Moreover, we investigate some cases involving greedy-like properties with constant 1 and study the related notion of Property (A, τ). [ABSTRACT FROM AUTHOR] (A finite-dimensional sketch of the thresholding greedy algorithm follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
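For context, the thresholding greedy algorithm studied above builds the m-th greedy approximation of a vector by keeping its m largest basis coefficients in modulus and discarding the rest. A finite-dimensional sketch (for a coefficient vector rather than a general Schauder basis expansion):

```python
import numpy as np

def thresholding_greedy_approximation(coeffs, m):
    """Return the m-th greedy approximation of a finite coefficient
    vector: the m largest coefficients in modulus are kept, the rest
    are set to zero."""
    coeffs = np.asarray(coeffs, dtype=float)
    keep = np.argsort(np.abs(coeffs))[::-1][:m]  # indices of m largest |c_i|
    approx = np.zeros_like(coeffs)
    approx[keep] = coeffs[keep]
    return approx
```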
11. Auto focusing of in-Line Holography based on Stacked Auto Encoder with Sparse Bayesian Regression and Compressive Sensing.
- Author
- Vimala, C and Ajeena, A
- Subjects
- THRESHOLDING algorithms, ALGORITHMS, NOISE, DIGITAL holographic microscopy, HOLOGRAPHY
- Abstract
In recent years, digital holography has emerged as an exceptional imaging technology for tracking high-contrast object particles and analyzing 3D object data in real time. The best-quality images can be obtained effectively using an auto-focusing algorithm. In this paper, the focus location of the object is traced with a deep learning-based auto-focusing algorithm. The proposed model constructs a large feature pool by considering different focus measures to reconstruct objects from two out-of-focus images. The preferred features are selected through the proposed Support Vector Machine-based Recursive Feature Elimination (SVM-RFE) method; inappropriate features are thereby eliminated, and the reconstruction distance is obtained by training the suggested stacked autoencoder with sparse Bayesian regression (SAE-SBR) model. A twin image commonly appears in the reconstructed image, and such noise interference is minimized with the presented high-speed iterative shrinkage/thresholding (HS-IST) based compressive sensing (CS) algorithm. Reconstruction distances are predicted by the proposed method with a standard deviation of about 0.036 μm. The proposed SAE-SBR predicts the correct reconstruction distance of a single hologram and is 600 times faster than traditional autofocusing techniques like Dubois and Tamura of Gradient (ToG). Also, the computation time of the proposed model is 33.3% less than that of the well-known FocusNET model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. FusionOpt-Net: A Transformer-Based Compressive Sensing Reconstruction Algorithm.
- Author
- Zhang, Honghao, Chen, Bi, Gao, Xianwei, Yao, Xiang, and Hou, Linyu
- Subjects
- *DEEP learning, *TRANSFORMER models, *SIGNAL processing, *IMAGE reconstruction, *FEATURE extraction, *IMAGE reconstruction algorithms, *THRESHOLDING algorithms
- Abstract
Compressive sensing (CS) is a notable technique in signal processing, especially in multimedia, as it allows for simultaneous signal acquisition and dimensionality reduction. Recent advancements in deep learning (DL) have led to the creation of deep unfolding architectures, which overcome the inefficiency and subpar quality of traditional CS reconstruction methods. In this paper, we introduce a novel CS image reconstruction algorithm that leverages the strengths of the fast iterative shrinkage-thresholding algorithm (FISTA) and modern Transformer networks. To enhance computational efficiency, we employ a block-based sampling approach in the sampling module. By mapping FISTA's iterative process onto neural networks in the reconstruction module, we address the hyperparameter challenges of traditional algorithms, thereby improving reconstruction efficiency. Moreover, the robust feature extraction capabilities of Transformer networks significantly enhance image reconstruction quality. Experimental results show that the FusionOpt-Net model surpasses other advanced methods on various public benchmark datasets. [ABSTRACT FROM AUTHOR] (A reference FISTA sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
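FusionOpt-Net maps FISTA iterations onto network stages. As a reference point, here is the classical FISTA for l1-regularized least squares, i.e., ISTA plus a Nesterov-style momentum step; the deep model replaces the fixed soft-thresholding proximal step with learned Transformer modules, so this is a baseline sketch only:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam=0.1, n_iter=200):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x_prev = np.zeros(A.shape[1])
    z, t_prev = x_prev.copy(), 1.0
    for _ in range(n_iter):
        x = soft_threshold(z - (A.T @ (A @ z - y)) / L, lam / L)
        t = (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
        z = x + ((t_prev - 1.0) / t) * (x - x_prev)  # momentum extrapolation
        x_prev, t_prev = x, t
    return x_prev
```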
13. An efficient adaptive multilevel Renyi entropy thresholding method based on the energy curve with dynamic programming.
- Author
- Lei, Bo, He, Luhang, and Yang, Zhen
- Subjects
- *TIME complexity, *SWARM intelligence, *DYNAMIC programming, *IMAGE segmentation, *COMPARATIVE method, *THRESHOLDING algorithms
- Abstract
Renyi entropy-based thresholding is a popular image segmentation method. In this work, to improve its performance, an efficient adaptive multilevel Renyi entropy thresholding method based on the energy curve with dynamic programming (DP + ARET) is presented. First, the histogram is replaced by the energy curve in the Renyi entropy thresholding to take advantage of the spatial context information of pixels. Second, an adaptive entropy-index selection strategy is proposed based on the image histogram. Finally, to decrease the computational complexity of multilevel Renyi entropy thresholding, an efficient solution is computed with dynamic programming. The proposed DP + ARET method obtains the globally optimal thresholds with time complexity linear in the number of thresholds. Comparative experiments between the proposed method and the histogram-based method verify the effectiveness of the energy curve. Segmentation results on COVID-19 Computed Tomography (CT) images with the same objective function by the proposed DP + ARET and swarm intelligence optimization methods testify that DP + ARET quickly obtains the globally optimal thresholds. Finally, the performance of DP + ARET is compared with several image segmentation methods quantitatively and qualitatively; the average segmentation accuracy (SA) is improved by 7% over the comparative methods. The proposed DP + ARET method can be used to quickly segment images with no other prior knowledge. [ABSTRACT FROM AUTHOR] (A brute-force single-threshold Renyi criterion follows this entry for intuition.)
- Published
- 2024
- Full Text
- View/download PDF
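The paper replaces brute-force multilevel search with dynamic programming over the energy curve. For intuition, a brute-force single-threshold Renyi entropy criterion over a 256-bin histogram looks like the sketch below; the entropy order `alpha` is an example value (the paper selects the index adaptively), and the histogram could equally be the energy curve:

```python
import numpy as np

def renyi_entropy_threshold(hist, alpha=0.5):
    """Exhaustive single-level Renyi entropy thresholding: return the
    threshold maximizing the sum of the background and foreground
    Renyi entropies, H_a = log(sum((p_i/w)^a)) / (1 - a)."""
    p = hist.astype(float) / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = np.log(((p[:t] / w0) ** alpha).sum()) / (1.0 - alpha)
        h1 = np.log(((p[t:] / w1) ** alpha).sum()) / (1.0 - alpha)
        if h0 + h1 > best_score:
            best_score, best_t = h0 + h1, t
    return best_t
```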
14. Reduction Accelerated Adaptive Step‐Size FISTA Based Smooth‐Lasso Regularization for Fluorescence Molecular Tomography Reconstruction.
- Author
- Luo, Xiaoli, Jiao, Renhao, Ma, Tao, Liu, Yunjie, Gao, Zhu, Shen, Xiuhong, Ren, Qianqian, Zhang, Heng, and He, Xiaowei
- Subjects
- *OPTICAL tomography, *THRESHOLDING algorithms, *TOMOGRAPHY, *FLUORESCENCE, *ALGORITHMS
- Abstract
In this paper, a reduced accelerated adaptive fast iterative shrinkage threshold algorithm based on Smooth‐Lasso regularization (SL‐RAFISTA‐BB) is proposed for fluorescence molecular tomography (FMT) 3D reconstruction. This method uses the Smooth‐Lasso regularization to fuse the group sparse prior information which can balance the relationship between the sparsity and smoothness of the solution, simplifying the process of calculation. In particular, the convergence speed of the FISTA is improved by introducing a reduction strategy and Barzilai‐Borwein variable step size factor, and constructing a continuation strategy to reduce computing costs and the number of iterations. The experimental results show that the proposed algorithm not only accelerates the convergence speed of the iterative algorithm, but also improves the positioning accuracy of the tumor target, alleviates the over‐sparse or over‐smooth phenomenon of the reconstructed target, and clearly outlines the boundary information of the tumor target. We hope that this method can promote the development of optical molecular tomography. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Prognostic Value of Somatostatin Receptor-Derived Volumetric Parameters from a Hybrid Standardized Uptake Value Thresholding Method in Patients with 68Ga-DOTATATE-Avid Stage IV Neuroendocrine Neoplasms: A Preliminary Study.
- Author
- Cheng, Zhaoting, Zou, Sijuan, Zhou, Jianyuan, Song, Shuang, Zhu, Yuankai, Zhao, Jun, and Zhu, Xiaohua
- Subjects
- *NEUROENDOCRINE tumors, *SOMATOSTATIN receptors, *COMPUTED tomography, *THRESHOLDING algorithms, *BONE metastasis
- Abstract
Introduction: The ability of PET/CT imaging to delineate neuroendocrine neoplasms (NENs) and predict prognosis in affected patients is often compromised by substantial uptake heterogeneity. We hereby propose a hybrid standardized uptake value (SUV) thresholding algorithm to extract volumetric parameters from somatostatin receptor (SSTR) PET/CT imaging and investigate their prognostic performance in patients with 68Ga-DOTATATE-avid stage IV NENs. Methods: For 38 retrospectively enrolled patients, we used either fixed SUV thresholding of normal liver parenchyma (method A), 41% of the SUVmax for each lesion (method B), or a hybrid method (method A for liver metastases; fixed SUV threshold of normal bone for bone metastases; method B for primary tumors and other metastases) to quantify the whole-body SSTR-expressing tumor volume (SRETVwb) and total lesion SSTR expression (TLSREwb). Patient survival was also recorded and analyzed. Results: PET/CT images revealed heterogeneous uptake of 68Ga-DOTATATE at primary and metastatic sites. Progression-free survival (PFS) and overall survival (OS) were negatively correlated with the extent of liver or bone metastases (p < 0.05), but not significantly correlated with tumor grade or 18F-FDG PET/CT positivity. By the hybrid method, PFS was significantly shorter in patients with high SRETVwb, and OS was significantly shorter in those with high SRETVwb and TLSREwb (p < 0.05). However, when derived from method A or method B, neither SRETVwb nor TLSREwb could predict patient outcomes. Conclusion: Compared with other methods used in 68Ga-DOTATATE-avid stage IV NENs, our hybrid SUV thresholding method demonstrated robustness, with greater precision, reliability, and prognostic power. [ABSTRACT FROM AUTHOR] (A schematic of the hybrid rule follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
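A schematic of the hybrid rule described in the Methods: the threshold applied to each lesion depends on its site, and the volumetric parameters follow from the resulting segmentation. The function names, the site-to-threshold mapping, and the liver/bone cutoff variables are paraphrased assumptions from the abstract, not the authors' implementation:

```python
import numpy as np

def hybrid_threshold(lesion_site, suv_liver, suv_bone, suv_max):
    """Hybrid rule sketched from the abstract: liver metastases use a fixed
    normal-liver threshold, bone metastases a fixed normal-bone threshold,
    and primary tumors / other metastases 41% of their own SUVmax."""
    if lesion_site == "liver":
        return suv_liver
    if lesion_site == "bone":
        return suv_bone
    return 0.41 * suv_max

def lesion_volume_and_tlsre(suv, lesion_mask, threshold):
    """Segment one lesion's SUV voxels with the chosen threshold and return
    (volume in voxels, SSTR expression = volume * mean SUV of segmented voxels).
    Summing over lesions gives SRETVwb and TLSREwb."""
    seg = lesion_mask & (suv > threshold)
    vol = int(seg.sum())
    return vol, vol * (float(suv[seg].mean()) if vol else 0.0)
```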
16. Segmentation and deep learning to digitalize the kinematics of flow-type landslides.
- Author
- Choi, Clarence E. and Liang, Zhengyu
- Subjects
- *CONVOLUTIONAL neural networks, *PARTICLE image velocimetry, *DEBRIS avalanches, *THRESHOLDING algorithms, *DEEP learning, *LANDSLIDES
- Abstract
Flow-type landslides, including subaerial and submarine debris flows, have poor spatiotemporal predictability. Therefore, researchers rely heavily on experimental evidence in revealing complex flow mechanisms and evaluating theoretical models. To measure the velocity field of experimental flows, conventional image analysis tools for measuring soil deformation and hydraulics have been borrowed. However, these tools were not developed for capturing the kinematics of fast-moving soil–water mixtures over complex terrain under non-uniform lighting conditions. In this study, a new framework based on deep learning was used to automatically digitalize the kinematics of experimental flow-type landslides. Captured images were broken into sequences and binarized using a fully convolutional neural network (FCNN). The proposed framework was demonstrated to outperform classic image processing algorithms (e.g., particle image velocimetry, trainable Weka segmentation, and thresholding algorithms) over a wide range of experimental conditions. The FCNN model was even able to process images from consumer-grade cameras under complex shadow, light, and boundary conditions. This feature is most useful for field-scale experimentation. With fewer than 15 annotated training images, the FCNN digitalized experimental flows with an accuracy of 97% in semantic segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Enhancing Surface Water Monitoring through Multi-Satellite Data-Fusion of Landsat-8/9, Sentinel-2, and Sentinel-1 SAR.
- Author
- Declaro, Alexis and Kanae, Shinjiro
- Subjects
- *THRESHOLDING algorithms, *ARTIFICIAL satellites, *SYNTHETIC apertures, *LANDSAT satellites, *CLOUDINESS
- Abstract
Long revisit intervals and cloud susceptibility have restricted the applicability of earth observation satellites in surface water studies. Integrating multiple satellites offers potential for more frequent observations, yet combining different satellite sources, particularly optical and SAR satellites, presents complexities. This research explores the data-fusion potential and limitations of the Landsat-8/9 Operational Land Imager (OLI), Sentinel-2 Multispectral Instrument (MSI), and Sentinel-1 Synthetic Aperture Radar (SAR) satellites to enhance surface water monitoring. By focusing on segmented surface water images, we demonstrate that combining optical and SAR data is generally effective and straightforward using a simple statistical thresholding algorithm. Kappa coefficients (κ) ranging from 0.80 to 0.95 indicate very strong harmony for integration across reservoir, lake, and river environments. In vegetative environments, integration with S1SAR shows weak harmony, with κ values ranging from 0.27 to 0.45, indicating the need for further studies. Global revisit interval maps reveal significant improvement in median revisit intervals from 15.87–22.81 days using L8/9 alone, to 4.51–7.77 days after incorporating S2, and further to 3.48–4.62 days after adding S1SAR. Even during wet-season months, multi-satellite fusion kept median revisit intervals under a week. Maximizing all available open-source earth observation satellites is integral for advancing studies requiring more frequent surface water observations, such as flood, inundation, and hydrological modeling. [ABSTRACT FROM AUTHOR] (An Otsu-style thresholding sketch follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
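The "simple statistical thresholding algorithm" used to binarize surface water is not specified in detail in the abstract; Otsu's method is a standard histogram-based choice for splitting water from non-water pixels in an index image (e.g., a water index or SAR backscatter) and serves here as a plausible stand-in:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing the between-class
    variance of a 1-D distribution of pixel values."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(sigma_b)]
```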
18. A Novel Fingerprint Segmentation Method by Introducing Efficient Features and Robust Clustering Assignment Technique.
- Author
- Zaimen, Abderraouf and Bouguezel, Saad
- Subjects
- *HUMAN fingerprints, *THRESHOLDING algorithms, *GENETIC algorithms, *ACCESS control, *CRIMINAL investigation, *ERROR rates, *IMAGE segmentation
- Abstract
Fingerprint recognition systems hold a pivotal role across various modern applications, such as criminal investigation, civil identification, and access control. Fingerprint segmentation is commonly the first stage of fingerprint recognition systems, focusing on extracting the foreground from captured fingerprint images. This helps to reduce the extraction of false minutiae, speeds up the extraction process, and hence improves overall system performance. In this paper, we introduce a novel fingerprint segmentation method: we first propose new frequency and intensity features, employing fuzzy c-means and a genetic algorithm, accompanied by a new cluster assignment strategy that involves feature weighting and cluster assignment probability thresholding, which improves accuracy compared to the standard cluster assignment method. Finally, we introduce a new morphological post-processing technique to minimize misclassified pixels and obtain the final mask. Furthermore, we carry out a comprehensive performance evaluation against leading fingerprint segmentation methods on various known databases. The proposed method achieves an average error rate of 2.52%, much lower than the corresponding errors reported in the literature. These experimental results show that the proposed method outperforms the best existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Robust Botnet Detection Approach for Known and Unknown Attacks in IoT Networks Using Stacked Multi-classifier and Adaptive Thresholding.
- Author
- Krishnan, Deepa and Shrinath, Pravin
- Subjects
- *INTERNET of things, *VECTOR quantization, *THRESHOLDING algorithms
- Abstract
The detection of security attacks holds significant importance in IoT networks, primarily due to the escalating number of interconnected devices and the sensitive nature of the transmitted data. This paper introduces a novel methodology designed to identify both known and unknown attacks within IoT networks. For the identification of known attacks, our proposed approach employs a stacked multi-classifier trained with classwise features. To address the challenge of highly imbalanced classes without resorting to resampling, we utilize the Localized Generalized Matrix Learning Vector Quantization (LGMLVQ) approach to select the most relevant features for each class. The efficacy of this model is evaluated using the widely recognized NF-BoT-IoT dataset, demonstrating an impressive accuracy score of 99.9952%. The proposed study also focuses on detecting unseen attacks by leveraging a shallow autoencoder and the technique of reconstruction-error thresholding. The efficiency of this approach is evaluated using benchmark datasets, namely NF-ToN-IoT and NF-CSE-CIC-IDS 2018. The model's performance on previously unseen samples is noteworthy, with an average accuracy, precision, recall, and F1-score of 93.715%, 99.955%, 90.865%, and 95.145%, respectively. The proposed work makes a significant contribution to IoT security, offering a comprehensive solution with demonstrated performance in detecting both known and unknown attacks in the context of imbalanced data. [ABSTRACT FROM AUTHOR] (A sketch of reconstruction-error thresholding follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
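A sketch of reconstruction-error thresholding for the unknown-attack detector: an autoencoder trained on benign traffic reconstructs normal samples well, so samples with large reconstruction error are flagged as unseen attacks. The mean + k·std rule and the `autoencoder.predict` interface (e.g., a Keras model) are assumptions; the paper describes its thresholding as adaptive without giving the formula here:

```python
import numpy as np

def fit_error_threshold(benign_errors, k=3.0):
    """Set an anomaly threshold from reconstruction errors measured on
    benign training traffic (assumed mean + k*std rule)."""
    return benign_errors.mean() + k * benign_errors.std()

def detect_unknown_attacks(autoencoder, X, threshold):
    """Flag samples whose autoencoder reconstruction error exceeds the
    threshold as unseen attacks."""
    recon = autoencoder.predict(X)               # reconstructions of X
    errors = np.mean((X - recon) ** 2, axis=1)   # per-sample MSE
    return errors > threshold
```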
20. Strategies for enhancing automatic fixation detection in head-mounted eye tracking.
- Author
- Drews, Michael and Dierkes, Kai
- Subjects
- *EYE tracking, *EYE movements, *THRESHOLDING algorithms, *OPTICAL information processing, *GAZE
- Abstract
Moving through a dynamic world, humans need to intermittently stabilize gaze targets on their retina to process visual information. Overt attention being thus split into discrete intervals, the automatic detection of such fixation events is paramount to downstream analysis in many eye-tracking studies. Standard algorithms tackle this challenge in the limiting case of little to no head motion. In this static scenario, which is approximately realized for most remote eye-tracking systems, it amounts to detecting periods of relative eye stillness. In contrast, head-mounted eye trackers allow for experiments with subjects moving naturally in everyday environments. Detecting fixations in these dynamic scenarios is more challenging, since gaze-stabilizing eye movements need to be reliably distinguished from non-fixational gaze shifts. Here, we propose several strategies for enhancing existing algorithms developed for fixation detection in the static case to allow for robust fixation detection in dynamic real-world scenarios recorded with head-mounted eye trackers. Specifically, we consider (i) an optic-flow-based compensation stage explicitly accounting for stabilizing eye movements during head motion, (ii) an adaptive adjustment of algorithm sensitivity according to head-motion intensity, and (iii) a coherent tuning of all algorithm parameters. Introducing a new hand-labeled dataset, recorded with the Pupil Invisible glasses by Pupil Labs, we investigate their individual contributions. The dataset comprises both static and dynamic scenarios and is made publicly available. We show that a combination of all proposed strategies improves standard thresholding algorithms and outperforms previous approaches to fixation detection in head-mounted eye tracking. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Improvement Underwater Acoustic Signal De-Noising Based on Dual-Tree Complex Wavelet Transform.
- Author
- Khalid, Ausama, Al-Aboosi, Yasin, and Mohd Shah, Nor Shahida
- Subjects
- DISCRETE wavelet transforms, STANDARD deviations, NOISE, SIGNAL denoising, UNDERWATER noise, THRESHOLDING algorithms
- Abstract
Underwater acoustic signal denoising is in high demand due to the extensive use of acoustics in many underwater applications. Underwater acoustic noise (UWAN) strongly degrades the quality of the acoustic signal, so a de-noising filter is usually applied to remove it. In this paper, we propose a filter that utilizes the complex wavelet transform (CWT) to remove UWAN and improve the signal-to-noise ratio (SNR) of the detected acoustic signal. The CWT is nearly shift-invariant and offers good directionality in contrast to the ordinary discrete wavelet transform (DWT). The proposed method was tested using real UWAN recorded at three depths in the Tigris River and compared with the more commonly used DWT. The test used two signals: a fixed-frequency signal and a linearly modulated signal. De-noising was performed using a soft thresholding technique based on level-dependent threshold estimation. The proposed method showed superior performance in terms of SNR and root mean square error (RMSE): for the fixed-frequency case, an input SNR of 5.9 dB and RMSE of -13.2 dB improved to an output SNR of 10.9 dB and RMSE of -15.7 dB. [ABSTRACT FROM AUTHOR] (A sketch of level-dependent soft thresholding follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
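The de-noising step above is soft thresholding with level-dependent threshold estimation. A common recipe is sketched below with an ordinary DWT using PyWavelets; the paper's dual-tree complex wavelet transform is not provided by PyWavelets, so this shows the thresholding logic only, with a standard per-level noise estimate as an assumed stand-in for the paper's estimator:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db8", level=4):
    """Level-dependent soft thresholding: estimate sigma per detail level
    from the median absolute deviation, apply the universal threshold,
    and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    n = len(signal)
    denoised = [coeffs[0]]                        # keep approximation band
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745     # robust noise estimate
        t = sigma * np.sqrt(2.0 * np.log(n))      # universal threshold
        denoised.append(pywt.threshold(d, t, mode="soft"))
    return pywt.waverec(denoised, wavelet)
```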
22. Deep learning‐based structure segmentation and intramuscular fat annotation on lumbar magnetic resonance imaging.
- Author
- Xu, Yefu, Zheng, Shijie, Tian, Qingyi, Kou, Zhuoyan, Li, Wenqing, Xie, Xinhui, and Wu, Xiaotao
- Subjects
- LUMBAR pain, MAGNETIC resonance imaging, MEASUREMENT errors, THRESHOLDING algorithms, MUSCULAR atrophy
- Abstract
Background: Lumbar disc herniation (LDH) is a prevalent cause of low back pain. LDH patients commonly experience paraspinal muscle atrophy and fatty infiltration (FI), which further exacerbates the symptoms of low back pain. Magnetic resonance imaging (MRI) is crucial for assessing paraspinal muscle condition. Our study aims to develop a dual model for automated muscle segmentation and FI annotation on MRI, assisting clinicians in evaluating LDH conditions comprehensively. Methods: The study retrospectively collected data from patients diagnosed with LDH from December 2020 to May 2022. The dataset was split into a 7:3 ratio for training and testing, with an external test set prepared to validate model generalizability. The model's performance was evaluated using average precision (AP), recall, and F1 score. Consistency was assessed using the Dice similarity coefficient (DSC) and Cohen's Kappa. The mean absolute percentage error (MAPE) was calculated to assess the error of the model's measurements of relative cross-sectional area (rCSA) and FI, and the MAPE of FI measured by thresholding algorithms was calculated for comparison with the model. Results: A total of 417 patients were evaluated, comprising 216 males and 201 females, with a mean age of 49 ± 15 years. In the internal test set, the muscle segmentation model achieved an overall DSC of 0.92 ± 0.10, recall of 92.60%, and AP of 0.98. The fat annotation model attained a recall of 91.30%, F1 score of 0.82, and Cohen's Kappa of 0.76; performance decreased on the external test set. For rCSA measurements, except for the longissimus (10.89%), the MAPE of all muscles was less than 10%. When comparing the FI errors for each paraspinal muscle, the MAPE of the model was lower than that of the threshold algorithm. Conclusion: The models demonstrate outstanding performance, with lower error in FI measurement compared to thresholding algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. An Integrated Method Using a Convolutional Autoencoder, Thresholding Techniques, and a Residual Network for Anomaly Detection on Heritage Roof Surfaces.
- Author
- Zhang, Yongcheng, Kong, Liulin, Antwi-Afari, Maxwell Fordjour, and Zhang, Qingzhi
- Subjects
- COMPUTER vision, ARTIFICIAL intelligence, WATER leakage, CONSERVATION & restoration, PRESERVATION of architecture, DEEP learning, THRESHOLDING algorithms
- Abstract
The roofs of heritage buildings are subject to long-term degradation, resulting in poor heat insulation, heat regulation, and water leakage prevention. Researchers have predominantly employed feature-based traditional machine learning methods or individual deep learning techniques for the detection of natural deterioration and human-made damage on the surfaces of heritage building roofs for preservation. Despite their success, balancing accuracy, efficiency, timeliness, and cost remains a challenge, hindering practical application. The paper proposes an integrated method that employs a convolutional autoencoder, thresholding techniques, and a residual network to automatically detect anomalies on heritage roof surfaces. Firstly, unmanned aerial vehicles (UAVs) were employed to collect the image data of the heritage building roofs. Subsequently, an artificial intelligence (AI)-based system was developed to detect, extract, and classify anomalies on heritage roof surfaces by integrating a convolutional autoencoder, threshold techniques, and residual networks (ResNets). A heritage building project was selected as a case study. The experiments demonstrate that the proposed approach improved the detection accuracy and efficiency when compared with a single detection method. The proposed method addresses certain limitations of existing approaches, especially the reliance on extensive data labeling. It is anticipated that this approach will provide a basis for the formulation of repair schemes and timely maintenance for preventive conservation, enhancing the actual benefits of heritage building restoration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Improving the accuracy of segmentation of white blood cells (WBCs) in microscopic images using watershed algorithm in comparison with adaptive thresholding.
- Author
- Ramanjaneyulu, R., Rajmohan, V., Thiruchelvam, V., and Susiapan, Y.
- Subjects
- *LEUCOCYTES, *IMAGE segmentation, *WATERSHEDS, *CONFIDENCE intervals, *SAMPLE size (Statistics), *THRESHOLDING algorithms
- Abstract
This research aims to assess and compare the efficacy of two distinct image segmentation methodologies: the Novel Watershed algorithm and the Adaptive Thresholding approach. The objective is to enhance the precision of white blood cell (WBC) segmentation within microscopic images. The segmentation process employs the Novel Watershed algorithm, designated as group 1, encompassing a sample size of N=10. Simultaneously, the Adaptive Thresholding technique constitutes group 2, also with N=10 samples. The study maintains a pre-test power of 80%, alongside alpha and beta values of 0.05 and 0.2, and a 95% confidence interval. The entire process is executed using MATLAB software. The segmentation accuracy achieved by the Novel Watershed algorithm is 86%, mirroring the accuracy of the Adaptive Thresholding approach, which also yields an 86% accuracy rate. The disparity between these segmentation results is statistically significant, denoted by a significance value of p=0.002 (p<0.05), indicating an error-free outcome. The Novel Watershed algorithm distinctly outperforms the Adaptive Thresholding algorithm in augmenting the accuracy of white blood cell segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. An improved honey badger algorithm for global optimization and multilevel thresholding segmentation: real case with brain tumor images.
- Author
- Houssein, Essam H., Emam, Marwa M., Singh, Narinder, Samee, Nagwan Abdel, Alabdulhafith, Maali, and Çelik, Emre
- Subjects
- *OPTIMIZATION algorithms, *METAHEURISTIC algorithms, *IMAGE segmentation, *GLOBAL optimization, *MAGNETIC resonance imaging, *THRESHOLDING algorithms
- Abstract
Global optimization and biomedical image segmentation are crucial in diverse scientific and medical fields. The Honey Badger Algorithm (HBA) is a newly developed metaheuristic that draws inspiration from the foraging behavior of honey badgers. Like other metaheuristic algorithms, HBA encounters difficulties with exploitation, being trapped in local optima, and its convergence rate. This study improves the performance of the original HBA by implementing the Enhanced Solution Quality (ESQ) method, which helps prevent the search from becoming stuck in local optima and speeds up convergence. We assessed the enhanced algorithm, mHBA, on a comprehensive collection of benchmark functions from IEEE CEC'2020, comparing it with well-established metaheuristic algorithms; mHBA demonstrates exceptional performance in both qualitative and quantitative assessments. Our study not only focuses on global optimization but also investigates biomedical image segmentation, a crucial process in numerous applications involving digital image analysis and comprehension. We specifically focus on the problem of multi-level thresholding (MT) for medical image segmentation, a difficult process that becomes more challenging as the number of required thresholds increases. To tackle this issue, we apply the revised edition of the standard HBA, mHBA, to the segmentation of Magnetic Resonance Images (MRI). The evaluation of mHBA utilizes existing metrics to gauge the quality and performance of its segmentation, and showcases the resilience of mHBA in comparison to many established optimization algorithms, emphasizing the effectiveness of the suggested technique. [ABSTRACT FROM AUTHOR] (An illustrative multilevel-thresholding objective follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
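In multilevel thresholding, a metaheuristic such as mHBA searches for the threshold vector maximizing a segmentation criterion. The abstract does not name the criterion used, so Kapur's entropy, a common choice in MT work, serves here as an illustrative objective that a candidate solution would be scored against:

```python
import numpy as np

def kapur_objective(hist, thresholds):
    """Kapur's entropy for a candidate multilevel threshold vector: the sum
    of the Shannon entropies of the classes the thresholds induce on the
    image histogram. A metaheuristic maximizes this over threshold vectors."""
    p = hist.astype(float) / hist.sum()
    cuts = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return -np.inf                 # empty class: invalid candidate
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -(q * np.log(q)).sum()    # class entropy
    return total
```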
26. A novel chaotic weighted EHO-based methodology for retinal vessel segmentation.
- Author
- Ashanand and Kaur, Manpreet
- Subjects
- RETINAL blood vessels, THRESHOLDING algorithms, IMAGE segmentation, RETINAL imaging, STATISTICAL correlation, MATHEMATICAL morphology
- Abstract
Retinal image segmentation deals with problems like spurious vascularization and thin vessel detection. In this paper, a three-step methodology is proposed for retinal vessel segmentation. In the first step, RGB to YIQ conversion is performed. In the second step, the Y component is enhanced: a novel Chaotic Weighted Elephant Herding Optimization (CWEHO) is proposed to optimize the clip limit and block size of Contrast Limited Adaptive Histogram Equalization (CLAHE), and CWEHO-based CLAHE, along with morphological operations, a non-local means filter, and a median filter, is applied to enhance the retinal images. In the third step, thin and thick vessel segmentation is performed: top-hat transformation, the Otsu thresholding algorithm, and vessel point selection are applied for thick vessel extraction, while the first-order Gaussian derivative in conjunction with the match filter is used to extract thin vessels. The DRIVE and HRF datasets are used to assess the effectiveness of the proposed methodology. The average values of segmentation accuracy, specificity, sensitivity, and Matthews Correlation Coefficient (MCC) are 0.9650, 0.9895, 0.7007, and 0.7650, respectively, for observer 1 and 0.9696, 0.9912, 0.7390, and 0.7901 for observer 2 on the DRIVE dataset; the corresponding metrics are 0.9592, 0.9839, 0.6850, and 0.7116 for the HRF dataset. Compared to state-of-the-art methods, the proposed segmentation methodology provides better results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Noise Reduction Using Sparsity Constrained and Regularized Iterative Thresholding Algorithm and Dictionary.
- Author
- Kumar, Raj, Tripathy, Manoj, Anand, R. S., and Kumar, Niraj
- Subjects
- *THRESHOLDING algorithms, *NOISE control, *SPEECH enhancement, *SIGNAL-to-noise ratio, *SPEECH
- Abstract
Dictionary-based methods are recognized for their ability to estimate clean speech from speech contaminated with noise. However, these techniques face a limitation in distinguishing between speech and noise components. The presented algorithm addresses this challenge through an iterative thresholding method in which the threshold is based on noise characteristics and remains independent of variations in noise power. To determine the stopping criterion for thresholding, the algorithm leverages the structure of the speech signal in the time–frequency domain using the Gini index. Remarkably, this technique adeptly extracts reliable signal components from a noisy time–frequency magnitude spectrum, performing well under both non-varying and varying Signal-to-Noise Ratio (SNR) conditions. Importantly, it achieves this without the mask functions, voice activity detection techniques, noise or mixture dictionaries, or SNR information that are essential components of other dictionary-based methods. Following the thresholding process, the clean speech is assessed using a dictionary-based approach to restore perceptual loss. The algorithm is compared with traditional enhancement techniques in terms of perceptual evaluation of speech quality and short-time objective intelligibility, under various background noise conditions including babble, white, factory, and Volvo noises. The proposed algorithm exhibits superior performance, particularly in low and varying SNR scenarios. [ABSTRACT FROM AUTHOR] (A sketch of the Gini sparsity measure follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
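The stopping criterion above tracks the sparsity of the thresholded time–frequency magnitude spectrum via the Gini index. A minimal implementation of that sparsity measure is shown below (formula as in Hurley and Rickard's sparsity-measures survey; its use as a per-iteration score here is an illustration, not the paper's exact procedure):

```python
import numpy as np

def gini_index(x):
    """Gini index as a sparsity measure: near 0 for a flat (non-sparse)
    magnitude vector, approaching 1 for a very sparse one."""
    x = np.sort(np.abs(np.ravel(x)))      # sort magnitudes ascending
    n = x.size
    total = x.sum()
    if total == 0:
        return 0.0
    k = np.arange(1, n + 1)
    # Hurley & Rickard: G = 1 - 2 * sum_k (x_k/||x||_1) * (n - k + 1/2)/n
    return 1.0 - 2.0 * np.sum((x / total) * (n - k + 0.5) / n)
```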
28. A Novel Approach for Farmland Size Estimation in Small-Scale Agriculture Using Edge Counting and Remote Sensing.
- Author
- Du, Jingnan, Xu, Sucheng, Li, Jinshan, Duan, Jiakun, and Xiao, Wu
- Subjects
- *REMOTE-sensing images, *THRESHOLDING algorithms, *REMOTE sensing, *FARM size, *AGRICULTURE
- Abstract
Accurate and timely information on farmland size is crucial for agricultural development, resource management, and other related fields. However, there is currently no mature method for estimating farmland size in smallholder farming areas: farmland plots in these areas are small, with boundaries that are unclear in medium- and high-resolution satellite imagery, and irregular shapes that make it difficult to extract complete boundaries using morphological rules. Automatic farmland mapping algorithms using remote sensing data also perform poorly in small-scale farming areas. To address this issue, this study proposes a farmland size evaluation index based on edge frequency (ECR). The algorithm utilizes the high temporal resolution of Sentinel-2 satellite imagery to compensate for its spatial resolution limitations. First, all Sentinel-2 images from one year are used to calculate edge frequencies, which divide farmland areas into low-value farmland interior regions, medium-value non-permanent edges, and high-value permanent edges (PE). Next, Otsu's thresholding algorithm is applied twice to the edge frequencies, first to extract edges and then permanent edges. The ratio of PE to cropland (ECR) is then calculated. Using the North China Plain and Northeast China Plain as study areas, and comparing with existing farmland size datasets, the appropriate estimation radius for ECR was determined to be 1600 m. The study found that the peak ECR value for the Northeast China Plain was 0.085, and the peak value for the North China Plain was 0.105. The overall distribution was consistent with the reference dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Robust Speech Enhancement Using Daubechies Wavelet Based Adaptive Wavelet Thresholding for the Development of Robust Automatic Speech Recognition: A Comprehensive Review.
- Author
- Shanthamallappa, Mahadevaswamy
- Subjects
- ARTIFICIAL neural networks, SPEECH enhancement, SPEECH, SIGNAL processing, NATIVE language, SPEECH perception, INTELLIGIBILITY of speech, THRESHOLDING algorithms, AUTOMATIC speech recognition
- Abstract
Developing a robust Automatic Speech Recognition (ASR) system is a major challenge in speech signal processing research. These systems perform exceedingly well in clean environments; however, their performance is not acceptable when the spoken signal is corrupted by environmental and other artificial noises. The efficiency of any ASR system depends on several factors, such as vocabulary size, native language influences, transmission channel, emotional and health state of the speaker, age of the speaker, the designed speech corpus, dataset size, training and testing strategy, preprocessing, and other challenges. It is a well-known fact that the presence of noise in a speech signal degrades its perceptual quality and intelligibility, and hence ASR system performance is also affected. In this paper, a Daubechies wavelet-based time-adaptive Bayes thresholding algorithm is proposed with a custom Wavelet Packet Decomposition and Reconstruction Tree. The proposed system's performance is evaluated on a private Kannada dataset and the TIMIT dataset. The results reveal the effectiveness of the proposed system at various SNR levels: −10, −5, 0, 5, 10, 15, 20, 25, and 30 dB. The article begins with introductory insights on ASR, the physiological process of speech production and perception in humans, ASR jargon, the architecture of ASR, and barriers associated with ASR design. The work also focuses on dataset design and baseline speech enhancement methods, providing a comprehensive review of wavelet-based speech enhancement approaches for research scholars pursuing research in speech signal processing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Image denoising method based on improved wavelet threshold algorithm.
- Author
- Zhu, Guowu, Liu, Bingyou, Yang, Pan, and Fan, Xuan
- Subjects
- IMAGE denoising, SIGNAL-to-noise ratio, THRESHOLDING algorithms, WAVELET transforms, ALGORITHMS, JUDGMENT (Psychology)
- Abstract
To achieve image denoising in high-noise environments, this paper proposes an image denoising algorithm based on an improved wavelet thresholding algorithm. The algorithm first addresses the limitations of the fixed threshold. Second, an improved wavelet threshold function is proposed in place of the traditional hard and soft threshold functions. Finally, the combination of the improved threshold function and threshold improves the accuracy of the wavelet threshold decision and achieves effective separation of image and noise. Experimental results show that the proposed algorithm not only effectively removes image noise in noisy environments but also obtains a higher peak signal-to-noise ratio (PSNR) and a smaller mean squared error (MSE). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. An Optimization Numerical Spiking Neural Membrane System with Adaptive Multi-Mutation Operators for Brain Tumor Segmentation.
- Author
- Dong, Jianping, Zhang, Gexiang, Hu, Yangheng, Wu, Yijin, and Rong, Haina
- Subjects
- *BRAIN tumors, *OPTIMIZATION algorithms, *MAGNETIC resonance imaging, *THRESHOLDING algorithms, *DIFFERENTIAL evolution
- Abstract
Magnetic Resonance Imaging (MRI) is an important diagnostic technique for brain tumors due to its ability to generate images without tissue damage or skull artifacts. Therefore, MRI images are widely used for the segmentation of brain tumors. This paper is the first attempt to discuss the use of optimization spiking neural P systems to improve the threshold segmentation of brain tumor images. Specifically, a threshold segmentation approach based on an optimization numerical spiking neural P system with adaptive multi-mutation operators (ONSNPSamo) is proposed to segment brain tumor images. More specifically, the ONSNPSamo uses a multi-mutation strategy to balance exploration and exploitation abilities. At the same time, an approach combining the ONSNPSamo and connectivity algorithms is proposed to address the brain tumor segmentation problem. Our experimental results from the CEC 2017 benchmarks (basic, shifted and rotated, hybrid, and composition function optimization problems) demonstrate that the ONSNPSamo is better than or close to 12 optimization algorithms. Furthermore, case studies on BraTS 2019 show that the approach combining the ONSNPSamo and connectivity algorithms can segment brain tumor images more effectively than most of the algorithms involved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. A deep learning algorithm to accelerate algebraic multigrid methods in finite element solvers of 3D elliptic PDEs.
- Author
- Caldana, Matteo, Antonietti, Paola F., and Dedè, Luca
- Subjects
- *ALGEBRAIC multigrid methods, *MACHINE learning, *ARTIFICIAL neural networks, *FINITE element method, *GRAYSCALE model, *DEEP learning, *THRESHOLDING algorithms, *ELLIPTIC differential equations
- Abstract
Algebraic multigrid (AMG) methods are among the most efficient solvers for linear systems of equations and they are widely used for the solution of problems stemming from the discretization of Partial Differential Equations (PDEs). A severe limitation of AMG methods is the dependence on parameters that require to be fine-tuned. In particular, the strong threshold parameter is the most relevant since it stands at the basis of the construction of successively coarser grids needed by the AMG methods. We introduce a novel deep learning algorithm that minimizes the computational cost of the AMG method when used as a finite element solver. We show that our algorithm requires minimal changes to any existing code. The proposed Artificial Neural Network (ANN) tunes the value of the strong threshold parameter by interpreting the sparse matrix of the linear system as a gray scale image and exploiting a pooling operator to transform it into a small multi-channel image. We experimentally prove that the pooling successfully reduces the computational cost of processing a large sparse matrix and preserves the features needed for the regression task at hand. We train the proposed algorithm on a large dataset containing problems with a strongly heterogeneous diffusion coefficient defined in different three-dimensional geometries and discretized with unstructured grids and linear elasticity problems with a strongly heterogeneous Young's modulus. When tested on problems with coefficients or geometries not present in the training dataset, our approach reduces the computational time by up to 30%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Efficient image restoration via non-convex total variation regularization and ADMM optimization.
- Author
- Kumar, Narendra, Sonkar, Munnu, and Bhatnagar, Gaurav
- Subjects
- *IMAGE reconstruction, *MATHEMATICAL regularization, *THRESHOLDING algorithms
- Abstract
This article presents a novel approach to image restoration utilizing a non-convex ℓ_{1/2}-TV regularization model. This model integrates the ℓ_{1/2} quasi-norm as a regularization function, introducing non-convexity to promote sparsity and unevenly penalize elements, thereby enhancing restoration outcomes. To tackle this model, an efficient algorithm based on the Alternating Direction Method of Multipliers and the Lagrangian multiplier is introduced. This effectively prevents the penalty parameter from reaching infinity and ensures excellent convergence behavior. The proposed algorithm decomposes the optimization problem into subproblems for which closed-form solutions are derived, particularly addressing the challenging ℓ_{1/2} regularization problem. To validate its effectiveness, a comprehensive set of experiments compares its performance with existing methods; the experimental results demonstrate that the proposed model performs well in both qualitative and quantitative evaluations. Consequently, the proposed model is not only efficient and stable but also exhibits excellent convergence behavior. (A schematic ADMM splitting for this type of model follows this entry.) • A novel regularization model for image restoration, based on non-convex total variation, is proposed in this work. • We propose an effective solution approach to solve the model using the Alternating Direction Method of Multipliers (ADMM). • The proposed solution provides a closed-form thresholding formula for the regularization model. • Extensive numerical experiments demonstrate the superiority of the proposed method over compared methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Multi-Level Image Segmentation Combining Chaotic Initialized Chimp Optimization Algorithm and Cauchy Mutation.
- Author
-
Li, Shujing, Li, Zhangfei, Cheng, Wenhui, Qi, Chenyang, and Li, Linguo
- Subjects
OPTIMIZATION algorithms ,THRESHOLDING algorithms ,DIAGNOSTIC imaging ,ALGORITHMS ,UNIFORMITY ,IMAGE segmentation - Abstract
To enhance the diversity and distribution uniformity of the initial population, as well as to avoid the local extrema of the Chimp Optimization Algorithm (CHOA), this paper improves the CHOA using chaos initialization and Cauchy mutation. First, Sine chaos is introduced to improve the random population initialization scheme of the CHOA, which not only guarantees the diversity of the population but also enhances the distribution uniformity of the initial population. Next, Cauchy mutation is added during position (threshold) updating to strengthen the global search ability of the CHOA and keep it from falling into local optima. Finally, an improved CHOA, termed CICMCHOA, is formed by combining chaos initialization with Cauchy mutation. Taking fuzzy Kapur entropy as the objective function, this paper applies CICMCHOA to natural and medical image segmentation and compares it with four algorithms, including the improved Satin Bowerbird optimizer (ISBO) and improved Cuckoo Search (ICS). Experimental results, based on both visual inspection and specific indicators, demonstrate that CICMCHOA delivers superior segmentation effects in image segmentation. [ABSTRACT FROM AUTHOR]
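The two building blocks named above are standard and easy to sketch. Below is a minimal illustration of sine-map chaotic initialization and a Cauchy perturbation step; the map parameter, mutation scale, and function names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def sine_chaos_init(pop_size, dim, lb, ub, a=4.0):
    """Initialize a population with the sine chaotic map
    x_{k+1} = (a/4) * sin(pi * x_k), then map [0, 1] to [lb, ub].
    Chaotic sequences cover the search space more uniformly than
    plain uniform sampling. Illustrative sketch only."""
    x = np.random.rand(dim)
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = (a / 4.0) * np.sin(np.pi * x)
        pop[i] = lb + x * (ub - lb)
    return pop

def cauchy_mutate(position, best, scale=1.0):
    """Perturb a candidate with heavy-tailed Cauchy noise so that
    occasional long jumps can escape local optima (one common variant)."""
    step = scale * np.random.standard_cauchy(size=position.shape)
    return position + step * (best - position)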
- Published
- 2024
- Full Text
- View/download PDF
35. A Sensitive SERS Sensor Combined with Intelligent Variable Selection Models for Detecting Chlorpyrifos Residue in Tea.
- Author
-
Yang, Hanhua, Qian, Hao, Xu, Yi, Zhai, Xiaodong, and Zhu, Jiaji
- Subjects
SERS spectroscopy ,THRESHOLDING algorithms ,INTELLIGENT sensors ,CHLORPYRIFOS ,DETECTION limit - Abstract
Chlorpyrifos is one of the most widely used broad-spectrum insecticides in agriculture. Given its potential toxicity and residue in food (e.g., tea), establishing a rapid and reliable method for the determination of chlorpyrifos residue is crucial. In this study, a strategy combining surface-enhanced Raman spectroscopy (SERS) and intelligent variable selection models for detecting chlorpyrifos residue in tea was established. First, gold nanostars were fabricated as a SERS sensor for measuring the SERS spectra. Second, the raw SERS spectra were preprocessed to facilitate the quantitative analysis. Third, a partial least squares model and four outstanding intelligent variable selection models, Monte Carlo-based uninformative variable elimination, competitive adaptive reweighted sampling, iteratively retaining informative variables, and variable iterative space shrinkage approach, were developed for detecting chlorpyrifos residue in a comparative study. The repeatability and reproducibility tests demonstrated the excellent stability of the proposed strategy. Furthermore, the sensitivity of the proposed strategy was assessed by estimating limit of detection values of the various models. Finally, two-tailed paired t-tests confirmed that the accuracy of the proposed strategy was equivalent to that of gas chromatography–mass spectrometry. Hence, the proposed method provides a promising strategy for detecting chlorpyrifos residue in tea. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Enhanced Chaos Game Optimization for Multilevel Image Thresholding through Fitness Distance Balance Mechanism.
- Author
-
Miled, Achraf Ben, Elhossiny, Mohammed Ahmed, Ibrahim Elghazawy, Marwa Anwar, Mahmoud, Ashraf F. A., and Abdalla, Faroug A.
- Subjects
OPTIMIZATION algorithms ,COMPUTER vision ,IMAGE segmentation ,METAHEURISTIC algorithms ,ALGORITHMS ,DIGITAL image processing ,THRESHOLDING algorithms - Abstract
This study proposes a method to enhance the Chaos Game Optimization (CGO) algorithm for efficient multilevel image thresholding by incorporating a fitness distance balance mechanism. Multilevel thresholding is essential for detailed image segmentation in digital image processing, particularly in environments with complex image characteristics. This improved CGO algorithm adopts a hybrid metaheuristic framework that effectively addresses the challenges of premature convergence and the exploration-exploitation balance, typical of traditional thresholding methods. By integrating mechanisms that balance fitness and spatial diversity, the proposed algorithm achieves improved segmentation accuracy and computational efficiency. This approach was validated through extensive experiments on benchmark datasets, comparing favorably against existing state-of-the-art methods. [ABSTRACT FROM AUTHOR]
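For readers unfamiliar with the fitness-distance balance (FDB) idea, the following is a sketch of one common formulation (in the style of Kahraman et al.): each candidate's selection score blends its normalized fitness quality with its normalized distance to the current best, so selection favors solutions that are both good and spatially diverse. The weight and normalization here are assumptions.

```python
import numpy as np

def fdb_scores(population, fitness, w=0.5):
    """Fitness-Distance Balance scores for a minimization problem:
    blend normalized fitness quality with normalized Euclidean distance
    to the current best solution. Sketch of one common formulation."""
    f = np.asarray(fitness, dtype=float)
    best = population[np.argmin(f)]
    dist = np.linalg.norm(population - best, axis=1)
    # Better (smaller) fitness -> larger normalized quality score.
    norm_f = (f.max() - f) / (f.max() - f.min() + 1e-12)
    norm_d = dist / (dist.max() + 1e-12)
    return w * norm_f + (1.0 - w) * norm_d
```

Candidates with high FDB scores are then preferred as reference points in the position update, which is what counteracts premature convergence.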
- Published
- 2024
- Full Text
- View/download PDF
37. Thresholding optimization of global navigation satellite system acquisition with constant false alarm rate detection using metaheuristic techniques.
- Author
-
Hassani, Mohamed Fouad, Toumi, Abida, Benkrinah, Sabra, and Sbaa, Salim
- Subjects
- *
GLOBAL Positioning System , *FALSE alarms , *GLOBAL optimization , *DETECTION alarms , *METAHEURISTIC algorithms , *RAYLEIGH fading channels , *THRESHOLDING algorithms - Abstract
This article explores the optimization of global navigation satellite system (GNSS) acquisition using metaheuristic techniques. It focuses on improving the detection performance of the GNSS acquisition system by optimizing the cell averaging constant false alarm rate (CA-CFAR) thresholding in Rayleigh fading channels. The study suggests the use of metaheuristic optimization algorithms such as particle swarm optimization (PSO), biogeography-based optimization (BBO), firefly algorithm (FA), and simulated annealing (SA). The results demonstrate that the optimized thresholds have a significant impact on the system's performance. The article also discusses the use of the constant false alarm rate (CFAR) technique, which utilizes two CA-CFAR detectors to estimate the noise power level and adaptively set a threshold for detection. The detection performance can be further improved by combining the outputs of the two detectors using fusion rules. The article presents and compares the results of simulations and theoretical analysis for different optimization algorithms, showing that the CA-CFAR detector with optimization outperforms the fixed threshold detector and the CA-CFAR detector without optimization. The best results are achieved when two different detectors are used with the CA-CFAR detector and the fusion rule is applied. The study suggests further research in non-homogeneous environments and the exploration of other types of detectors with new optimization methods. [Extracted from the article]
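As background for the thresholding being optimized, here is a sketch of the textbook one-dimensional CA-CFAR detector: the noise level is estimated by averaging reference cells around the cell under test (excluding guard cells) and scaled to hold the false alarm rate constant. The window sizes and Pfa are illustrative; the paper tunes such parameters with metaheuristics rather than fixing them.

```python
import numpy as np

def ca_cfar(x, num_ref=16, num_guard=2, pfa=1e-4):
    """Cell-averaging CFAR: for each cell, average the noise power in
    the reference window and scale it by alpha = N * (Pfa^(-1/N) - 1)
    to obtain an adaptive detection threshold. Textbook sketch."""
    n = len(x)
    half = num_ref // 2
    alpha = num_ref * (pfa ** (-1.0 / num_ref) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(half + num_guard, n - half - num_guard):
        lead = x[i - num_guard - half : i - num_guard]
        lag = x[i + num_guard + 1 : i + num_guard + 1 + half]
        noise = np.mean(np.concatenate([lead, lag]))
        detections[i] = x[i] > alpha * noise
    return detections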
- Published
- 2024
- Full Text
- View/download PDF
38. Simultaneous seismic data de‐aliasing and denoising with a fast adaptive method based on hybrid wavelet transform.
- Author
-
Zhang, Peng, Han, Xiaoying, Chen, Changle, and Liu, Xinming
- Subjects
- *
DISCRETE wavelet transforms , *WAVELET transforms , *THRESHOLDING algorithms , *CONVEX sets , *MISSING data (Statistics) , *SIGNAL-to-noise ratio - Abstract
Missing data and random noise are prevalent issues encountered during the processing of acquired seismic data. Interpolation and denoising represent economical solutions to these limitations. Recovering regularly missing traces is challenging because of spatial aliasing, and the difficulty is compounded by the presence of noise. Hence, developing an effective approach that achieves both denoising and anti-aliasing is important. Projection onto convex sets is an effective method for recovering missing seismic data, but it is typically applied to data with a good signal-to-noise ratio, and the computational attractiveness of the approach is compromised by its slow convergence rate. In this study, we aimed to efficiently implement simultaneous seismic data de-aliasing and denoising. We combined a discrete wavelet transform with a seislet transform to construct a hybrid wavelet transform, and we proposed a new fast adaptive method, based on the fast projection onto convex sets method, to recover the missing data and remove random noise. This approach adjusts the projection operator and the iterative shrinkage threshold operator. Because the result depends on the threshold value, we enhanced the processing accuracy by adopting an optimal threshold strategy. Synthetic and field data tests indicate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
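The generic POCS loop underlying this family of methods is easy to state: transform, threshold, inverse-transform, then reinsert the known traces. The sketch below uses placeholder `fwd`/`inv` callables standing in for the hybrid wavelet/seislet transform pair and a simple exponentially decaying threshold; the paper's fast adaptive variant modifies both operators.

```python
import numpy as np

def pocs_interpolate(d_obs, mask, fwd, inv, n_iter=50, t0=0.5, t1=1e-3):
    """Projection-onto-convex-sets reconstruction with an iterative
    shrinkage threshold decaying from t0 to t1. `mask` is 1 at observed
    traces and 0 at missing ones. Generic sketch, not the paper's
    adaptive method."""
    x = d_obs.copy()
    for k in range(n_iter):
        tau = t0 * (t1 / t0) ** (k / (n_iter - 1))  # decaying threshold
        c = fwd(x)
        c = c * (np.abs(c) > tau)                   # hard shrinkage
        x = inv(c)
        x = d_obs * mask + x * (1 - mask)           # reinsert known traces
    return x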
- Published
- 2024
- Full Text
- View/download PDF
39. A New Approach for Super Resolution Object Detection Using an Image Slicing Algorithm and the Segment Anything Model.
- Author
-
Telçeken, Muhammed, Akgun, Devrim, Kacar, Sezgin, and Bingol, Bunyamin
- Subjects
- *
OBJECT recognition (Computer vision) , *ALGORITHMS , *REMOTE-sensing images , *THRESHOLDING algorithms , *HOUGH transforms , *DETECTORS - Abstract
Object detection in high resolution enables the identification and localization of objects for monitoring critical areas with precision. Although object detection at high resolution has improved, the variety of object scales and the diversity of backgrounds and textures in high-resolution images make it challenging for detectors to generalize successfully. This study introduces a new method for object detection in high-resolution images. The pre-processing stage includes an Image Slicing Algorithm (ISA), which slices the input image, and the Segment Anything Model (SAM), which segments the objects into bounding boxes. To improve the resolution within the slices, the first layer of YOLO is designed as an SRGAN; thus, before YOLO detection is applied, the resolution of the sliced images is increased to strengthen the features. The proposed system is evaluated on the xView and VisDrone datasets, covering object detection in satellite and aerial imagery contexts. The performance of the algorithm is reported for four different YOLO architectures integrated with SRGAN. According to comparative evaluations, the proposed system with YOLOv5 and YOLOv8 produces the best results on the xView and VisDrone datasets, respectively, and outperforms results previously reported in the literature. [ABSTRACT FROM AUTHOR]
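The slicing pre-processing step amounts to tiling the large image with overlap and remembering each tile's offset so detections can be mapped back to full-image coordinates. A minimal sketch follows; the tile size and overlap ratio are assumed values, not the paper's ISA settings.

```python
import numpy as np

def slice_image(img, tile=640, overlap=0.2):
    """Split a large image into overlapping tiles, returning each
    tile with its top-left (x, y) offset. Edge tiles are clamped so
    the whole image is covered. Illustrative sketch of the slicing
    idea; the paper's ISA details may differ."""
    step = max(int(tile * (1.0 - overlap)), 1)
    h, w = img.shape[:2]
    ys = sorted(set(list(range(0, max(h - tile, 0) + 1, step)) + [max(h - tile, 0)]))
    xs = sorted(set(list(range(0, max(w - tile, 0) + 1, step)) + [max(w - tile, 0)]))
    return [((x, y), img[y:y + tile, x:x + tile]) for y in ys for x in xs]
```

After per-tile detection, boxes are shifted by their tile offsets and merged (typically with non-maximum suppression) to produce full-image detections.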
- Published
- 2024
- Full Text
- View/download PDF
40. A new histogram equalization technique for contrast enhancement of grayscale images using the differential evolution algorithm.
- Author
-
Rivera-Aguilar, Beatriz A., Cuevas, Erik, Pérez, Marco, Camarena, Octavio, and Rodríguez, Alma
- Subjects
- *
DIFFERENTIAL evolution , *IMAGE intensifiers , *COMPUTER vision , *HISTOGRAMS , *ALGORITHMS , *GRAYSCALE model , *THRESHOLDING algorithms - Abstract
Image contrast enhancement is a crucial computer vision step that aims to improve the quality of the visual information in processed images. Many of the methods proposed in the literature are Histogram Equalization-based (HE) techniques that use a single transformation function and optimize its parameters for mapping the pixels to new gray-intensity values. However, using only one transformation function leaves other enhancement options unexplored. The proposed approach therefore generates several transformation functions and selects the one that best improves the image's contrast. This method is based on the Differential Evolution (DE) algorithm, which produces multiple candidate solutions representing transformation functions. The transformation functions map the input pixel values to their enhanced versions to equalize the histogram and improve the image's contrast. Furthermore, a new objective function is formulated based on the number of edge pixels, the intensity of the pixels, image entropy, and the number of gray intensity levels. The performance of this approach has been tested on low-contrast dataset images and compared to similar HE techniques, such as AVHEQ, BBHE, RSESIHE, MMBEBHE, and ESIHE. The results demonstrate the proposed algorithm's robustness and high performance in improving the contrast of grayscale images. [ABSTRACT FROM AUTHOR]
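An objective of the kind described, rewarding edge content, edge strength, entropy, and occupied gray levels, might look like the sketch below. The Sobel edge detector, the multiplicative combination, and the edge-pixel criterion are assumptions in the spirit of classic contrast-enhancement fitness functions, not the paper's exact formulation.

```python
import numpy as np
from scipy import ndimage

def contrast_fitness(img):
    """Score an enhanced 8-bit grayscale image by edge strength, edge
    count, entropy, and number of occupied gray levels. Sketch of a
    plausible objective; the paper's weights and terms may differ."""
    gx = ndimage.sobel(img.astype(float), axis=0)
    gy = ndimage.sobel(img.astype(float), axis=1)
    mag = np.hypot(gx, gy)                       # edge magnitude map
    n_edges = np.count_nonzero(mag > mag.mean()) # crude edge-pixel count
    hist, _ = np.histogram(img, bins=256, range=(0, 255), density=True)
    p = hist[hist > 0]
    entropy = -np.sum(p * np.log2(p))            # image entropy (bits)
    n_levels = np.count_nonzero(hist)            # occupied gray levels
    return (np.log(np.log(mag.sum() + np.e)) * n_edges / img.size
            * entropy * (n_levels / 256.0))
```

DE then evolves transformation-function parameters to maximize this score, and the best candidate is applied to the image.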
- Published
- 2024
- Full Text
- View/download PDF
41. A new method for crop image segmentation based on 2D histogram using multi-strategy shuffled frog leaping algorithm.
- Author
-
Kumar, Arun, Kumar, A., Vishwakarma, Amit, and Singh, Himanshu
- Subjects
- *
PARTICLE swarm optimization , *COLOR variation (Biology) , *IMAGE segmentation , *DIFFERENTIAL evolution , *REMOTE sensing , *THRESHOLDING algorithms - Abstract
The economy of a country is directly impacted by agricultural productivity. If proper precautions are not taken in this area, plants suffer serious consequences, which affect the quality, quantity, and productivity of the corresponding products. In this context, farmers need an agriculture expert to examine plant diseases, which takes considerable time and requires continuous monitoring of the plants. Multilevel thresholding is a possible solution, useful for identifying crop diseases through the color variation of a segmented image; it also has a wide range of applications in domains such as remote sensing, medicine, and biometrics. In this paper, an improved technique using a horizontal and vertical crossover shuffled frog leaping algorithm (HVSFLA) is proposed for multilevel thresholding of crop images based on a 2D histogram. The 2D histogram uses the grayscale value together with the non-local mean (NLM), whereas the one-dimensional (1D) histogram uses only the grayscale value. Therefore, in the proposed method, 2D Kapur's entropy integrated with the non-local mean 2D histogram is exploited for multilevel thresholding of the crop images. To investigate its efficacy, the proposed method is compared with well-known optimization techniques such as beta differential evolution, artificial bee colony, bacterial foraging optimization, and particle swarm optimization. The experimental results show that the proposed technique yields better results than the 1D histogram technique in terms of root-mean-square error, structural similarity index, peak signal-to-noise ratio, feature similarity index, coefficient of variation, and fitness function. [ABSTRACT FROM AUTHOR]
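The 2D histogram at the core of the method counts joint occurrences of each pixel's gray level and its smoothed (NLM) level. A minimal construction sketch follows; a uniform filter stands in here for the non-local mean image, which is an assumption for brevity.

```python
import numpy as np
from scipy import ndimage

def histogram_2d(img, mean_img=None, bins=256):
    """Build the normalized 2D histogram over (gray value, smoothed
    value) pairs. A simple uniform filter stands in for the non-local
    mean (NLM) image used by the paper; each pixel contributes one
    count at (its gray level, its smoothed level)."""
    if mean_img is None:
        mean_img = ndimage.uniform_filter(img.astype(float), size=3)
    g = np.clip(img.astype(int), 0, bins - 1).ravel()
    m = np.clip(mean_img.astype(int), 0, bins - 1).ravel()
    hist = np.zeros((bins, bins))
    np.add.at(hist, (g, m), 1.0)
    return hist / hist.sum()
```

2D Kapur's entropy is then evaluated over rectangular regions of this histogram, and HVSFLA searches for the threshold vector that maximizes it.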
- Published
- 2024
- Full Text
- View/download PDF
42. Robust treatment planning for small animal radio‐neuromodulation using focused kV x‐ray beams.
- Author
-
Qiu, Chenhui, Gu, Wenbo, Yan, Huagang, Sun, Weiyuan, Wang, Yuanyuan, Wen, Qiang, Sheng, Ke, and Liu, Wu
- Subjects
- *
THRESHOLDING algorithms , *ANIMAL immobilization , *STOCHASTIC programming , *VISUAL cortex , *ANIMAL experimentation - Abstract
Background: In preclinical radio‐neuromodulation research, small animal experiments are pivotal for unraveling radiobiological mechanisms, investigating prescription and planning techniques, and assessing treatment effects and toxicities. However, target sizes inside a rat brain are typically on the order of sub‐millimeters. This work focuses on a small target, around 1 mm in diameter, inside the visual cortex region of the rat brain, in order to observe the physiological changes of this region. Delivering uniform doses to such a small target while sparing healthy tissues is challenging. The focused kV x‐ray technique, based on modern polycapillary x‐ray focusing lenses, is a promising modality for small animal radio‐neuromodulation. Purpose: The current manual planning method can lead to sub‐optimal plans, and it does not account for positioning uncertainties due to mechanical accuracy limitations, animal immobilization, and robotic arm motion. This work aims to design a robust inverse planning method that optimizes the intensities of focused kV x‐ray beams along beam trajectories to irradiate small, mm‐sized targets in rat brains for radio‐neuromodulation. Methods: Focused kV x‐ray beams were generated through polycapillary x‐ray focusing lenses, achieving a small (≤0.3 mm) focus perpendicular to the beam. The beam trajectories were manually designed in 3D space in a scanning‐while‐rotating mode. Geant4 Monte Carlo (MC) simulation generated a dose calculation matrix for each focused kV x‐ray beam along the beam trajectories. In the proposed robust inverse planning method, an objective function combining a voxel‐wise stochastic programming approach with L1‐norm regularization was established to overcome the positioning uncertainties and obtain a high‐quality plan. The fast iterative shrinkage thresholding algorithm (FISTA) was utilized to solve the objective function and obtain the optimal intensities. Four cases were employed to validate the feasibility and effectiveness of the proposed method, and the manual and non‐robust inverse planning methods were implemented for comparison. Results: The proposed robust inverse planning method achieved superior dose homogeneity and higher robustness against positioning uncertainties. On average, when positioning uncertainties were present, the clinical target volume (CTV) homogeneity index (HI) improved to 13.3 with the robust inverse plan, from 22.9 with the non‐robust inverse plan and 53.8 with the manual plan. The average bandwidth at D90 was reduced by 6.5 Gy in the robust inverse plan, compared to 9.6 Gy in the non‐robust inverse plan and 12.5 Gy in the manual plan. The average bandwidth at D80 was reduced by 3.4 Gy in the robust inverse plan, compared to 5.5 Gy in the non‐robust inverse plan and 8.5 Gy in the manual plan. Moreover, the dose delivery time of the manual plan was reduced by an average of 54.7% with the robust inverse plan and 29.0% with the non‐robust inverse plan. Conclusion: Compared to manual and non‐robust inverse planning methods, the robust inverse planning method improved dose homogeneity and delivery efficiency and was resistant to the uncertainties, which is crucial for radio‐neuromodulation utilizing focused kV x‐rays. [ABSTRACT FROM AUTHOR]
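The optimizer named here, FISTA, is a standard accelerated proximal-gradient method. The sketch below shows the generic loop; in the planning problem, `grad_f` would come from the dose-fidelity objective and `prox` from the L1 regularizer (plus any non-negativity constraint on beam intensities). This is the textbook algorithm, not the paper's implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(grad_f, prox, x0, lipschitz, n_iter=200):
    """Generic FISTA: a gradient step on the smooth term followed by a
    proximal (shrinkage) step, with Nesterov momentum. Sketch only."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = prox(y - grad_f(y) / lipschitz, 1.0 / lipschitz)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x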
- Published
- 2024
- Full Text
- View/download PDF
43. HGSNet: A hypergraph network for subtle lesions segmentation in medical imaging.
- Author
-
Wang, Junze, Zhang, Wenjun, Li, Dandan, Li, Chao, and Jing, Weipeng
- Subjects
- *
COMPUTER-assisted image analysis (Medicine) , *DIAGNOSTIC imaging , *IMAGE segmentation , *IMAGE processing , *CONVOLUTIONAL neural networks , *HYPERGRAPHS , *THRESHOLDING algorithms - Abstract
Lesion segmentation is a fundamental task in medical image processing, and subtle lesions pose a particular challenge: detecting them matters, even though they can be difficult to identify. Convolutional neural networks, an effective tool in medical image processing, often ignore the relationships between lesions, leading to topological errors during training. To tackle these topological errors, the representation is moved from the pixel level to hypergraphs. Hypergraphs can model lesions as vertices connected by hyperedges, capturing the topology between lesions. This paper introduces a novel dynamic hypergraph learning strategy called DHLS, which allows hypergraphs to be constructed dynamically, contingent upon variations in the input vertices. A hypergraph global‐aware segmentation network, termed HGSNet, is further proposed. HGSNet can capture key high‐order structural information, enhancing the expression of global topology. Additionally, a composite loss function is introduced that emphasizes the global aspect and the boundaries of segmentation regions. The experiments compared HGSNet with other advanced models on medical image datasets from various organs. The results demonstrate that HGSNet outperforms other models and achieves state‐of‐the‐art performance on three public datasets. [ABSTRACT FROM AUTHOR]
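To illustrate what "constructing hypergraphs dynamically from input vertices" can mean in practice, here is a standard k-nearest-neighbor construction of a hypergraph incidence matrix from vertex features. It is shown only to make the representation concrete; DHLS's actual construction rule is not specified by the abstract.

```python
import numpy as np

def knn_hypergraph(features, k=4):
    """Build an incidence matrix H (rows: vertices, cols: hyperedges)
    where each vertex spawns one hyperedge containing itself and its
    k nearest neighbors in feature space. Standard construction used
    in hypergraph learning; illustrative only."""
    n = features.shape[0]
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    H = np.zeros((n, n))
    for e in range(n):
        members = np.argsort(d[e])[:k + 1]  # vertex e plus k neighbors
        H[members, e] = 1.0
    return H
```

Because the features change every forward pass, rebuilding H from them makes the hypergraph adapt to the input, which is the essence of a dynamic hypergraph strategy.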
- Published
- 2024
- Full Text
- View/download PDF
44. A hybrid denoising approach for PPG signals utilizing variational mode decomposition and improved wavelet thresholding.
- Author
-
Hu, Qinghua, Li, Min, Jiang, Linwen, and Liu, Mei
- Subjects
- *
STANDARD deviations , *THRESHOLDING algorithms , *SIGNAL-to-noise ratio , *FEATURE extraction , *DATABASES - Abstract
BACKGROUND: Photoplethysmography (PPG) signals are sensitive to motion-induced interference, leading to motion artifacts (MA) and baseline drift, which significantly affect the accuracy of PPG measurements. OBJECTIVE: The objective of our study is to effectively eliminate baseline drift and high-frequency noise from PPG signals, ensuring that the signal's critical frequency components remain within the range of 1-10 Hz. METHODS: This paper introduces a novel hybrid denoising method for PPG signals, integrating Variational Mode Decomposition (VMD) with an improved wavelet threshold function. The method initially employs VMD to decompose PPG signals into a set of narrowband intrinsic mode function (IMF) components, effectively removing low-frequency baseline drift. Subsequently, an improved wavelet thresholding algorithm is applied to eliminate high-frequency noise, yielding denoised PPG signals. The effectiveness of the denoising method was assessed on real-world PPG measurements, on PPG signals generated by the Fluke ProSim™ 8 Vital Signs Simulator with synthesized noise, and on the MIMIC-III waveform database. RESULTS: The improved threshold function led to an 11.47% increase in signal-to-noise ratio (SNR) and a 26.75% reduction in root mean square error (RMSE) compared to the soft threshold function. Furthermore, the hybrid denoising method improved SNR by 15.54% and reduced RMSE by 37.43% compared to the improved threshold function alone. CONCLUSION: This study proposes an effective PPG denoising algorithm based on VMD and an improved wavelet threshold function, capable of simultaneously eliminating low-frequency baseline drift and high-frequency noise in PPG signals while faithfully preserving their morphological characteristics. This advancement lays the foundation for time-domain feature extraction and model development in PPG signal analysis. [ABSTRACT FROM AUTHOR]
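"Improved" wavelet threshold functions typically interpolate between soft thresholding (continuous but biased for large coefficients) and hard thresholding (unbiased but discontinuous). The sketch below shows one common family of this kind; the paper's exact function and its parameters may differ.

```python
import numpy as np

def improved_threshold(w, lam, beta=2.0):
    """One common 'improved' wavelet threshold family: coefficients
    below lam are zeroed; above lam the shrinkage amount decays
    exponentially, so the rule approaches soft thresholding as
    beta -> 0 and hard thresholding as beta grows. Sketch only."""
    w = np.asarray(w, dtype=float)
    out = np.zeros_like(w)
    mask = np.abs(w) >= lam
    shrink = lam / np.exp(beta * (np.abs(w[mask]) - lam))
    out[mask] = np.sign(w[mask]) * (np.abs(w[mask]) - shrink)
    return out
```

The function is continuous at |w| = lam (the output is zero there) while large coefficients are left nearly untouched, reducing the bias that plain soft thresholding introduces.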
- Published
- 2024
- Full Text
- View/download PDF
45. Precise and parallel segmentation model (PPSM) via MCET using hybrid distributions.
- Author
-
Rawas, Soha and El-Zaart, Ali
- Subjects
LOGNORMAL distribution ,GAMMA distributions ,IMAGE processing ,THRESHOLDING algorithms ,BOOSTING algorithms ,SKIN imaging - Abstract
Purpose: Image segmentation is one of the most essential tasks in image processing applications and a valuable tool in many oriented applications such as health-care systems, pattern recognition, traffic control, and surveillance systems. However, accurate segmentation is a critical task, since finding a model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques, based on the Gaussian, lognormal, and gamma distributions, to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkley Segmentation Benchmark Data set (BSDS), and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmark data sets. Design/methodology/approach: The proposed PPSM combines the three benchmark distribution thresholding techniques, Gaussian, lognormal, and gamma, to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Findings: On the basis of the achieved results, the proposed PPSM-minimum cross-entropy thresholding (PPSM-MCET)-based segmentation model is a robust, accurate, and highly consistent method with high performance. Originality/value: A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma, and lognormal distributions using MCET. Moreover, to provide accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of MCET computing. The proposed model might be used as a valuable tool in many oriented applications such as health-care systems, pattern recognition, traffic control, and surveillance systems. [ABSTRACT FROM AUTHOR]
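The MCET criterion that PPSM builds on is the classic Li-Lee minimum cross-entropy threshold. The baseline version is sketched below; PPSM replaces its implicit class model with Gaussian, lognormal, and gamma distributions and parallelizes the search, neither of which is shown here.

```python
import numpy as np

def mcet_threshold(img):
    """Classic minimum cross-entropy thresholding (Li & Lee): choose
    the threshold whose two class means minimize the cross entropy
    between the image and its thresholded version. Baseline sketch."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    h = hist.astype(float)
    i = np.arange(1, 257, dtype=float)  # 1-based levels avoid log(0)
    best_t, best_eta = 1, np.inf
    for t in range(2, 255):
        lo, hi = h[:t], h[t:]
        if lo.sum() == 0 or hi.sum() == 0:
            continue
        m1 = (i[:t] * lo).sum() / lo.sum()   # lower-class mean
        m2 = (i[t:] * hi).sum() / hi.sum()   # upper-class mean
        # Constant term dropped; minimize the remaining cross entropy.
        eta = -(i[:t] * lo).sum() * np.log(m1) - (i[t:] * hi).sum() * np.log(m2)
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t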
- Published
- 2024
- Full Text
- View/download PDF
46. An efficient hybrid differential evolution-golden jackal optimization algorithm for multilevel thresholding image segmentation.
- Author
-
Meng, Xianmeng, Tan, Linglong, and Wang, Yueqin
- Subjects
OPTIMIZATION algorithms ,METAHEURISTIC algorithms ,IMAGE segmentation ,IMAGE processing ,SIGNAL-to-noise ratio ,THRESHOLDING algorithms ,DIFFERENTIAL evolution - Abstract
Image segmentation is a crucial process in the field of image processing. Multilevel threshold segmentation is an effective image segmentation method, where an image is segmented into different regions based on multilevel thresholds for information analysis. However, the complexity of multilevel thresholding increases dramatically as the number of thresholds increases. To address this challenge, this article proposes a novel hybrid algorithm, termed differential evolution-golden jackal optimizer (DEGJO), for multilevel thresholding image segmentation using the minimum cross-entropy (MCE) as a fitness function. The DE algorithm is combined with the GJO algorithm for iterative updating of position, which enhances the search capacity of the GJO algorithm. The performance of the DEGJO algorithm is assessed on the CEC2021 benchmark function and compared with state-of-the-art optimization algorithms. Additionally, the efficacy of the proposed algorithm is evaluated by performing multilevel segmentation experiments on benchmark images. The experimental results demonstrate that the DEGJO algorithm achieves superior performance in terms of fitness values compared to other metaheuristic algorithms. Moreover, it also yields good results in quantitative performance metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and feature similarity index (FSIM) measurements. [ABSTRACT FROM AUTHOR]
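The DE half of the hybrid contributes the mutation-and-crossover update sketched below (the standard DE/rand/1/bin step). How DEGJO interleaves this with the golden jackal position update is not shown here and is the paper's contribution.

```python
import numpy as np

def de_step(pop, f=0.5, cr=0.9):
    """One DE/rand/1/bin generation: for each member, mutate using
    three distinct random members, then apply binomial crossover.
    Standard sketch of the DE component only."""
    n, dim = pop.shape
    trials = np.empty_like(pop)
    for i in range(n):
        a, b, c = np.random.choice(
            [j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + f * (pop[b] - pop[c])
        cross = np.random.rand(dim) < cr
        cross[np.random.randint(dim)] = True  # ensure one gene crosses
        trials[i] = np.where(cross, mutant, pop[i])
    return trials
```

Each trial vector is then evaluated with the MCE fitness, and the better of trial and parent survives, which is the selection rule that gives DE its robustness.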
- Published
- 2024
- Full Text
- View/download PDF
47. Detection of Lung Cancer Using Multi-Stage Image Processing and Advanced Deep Learning InceptiMultiLayer-Net Model.
- Author
-
Ahammed, Syed Zaheer, Baskar, Radhika, and Nalinipriya, G.
- Subjects
LUNG cancer ,IMAGE processing ,DEEP learning ,COMPUTER-aided diagnosis ,EARLY detection of cancer ,THRESHOLDING algorithms - Abstract
This study aims to improve early lung cancer detection by creating a sophisticated Computer-Aided Diagnosis (CAD) system. The system employs advanced image processing techniques such as adaptive dynamic histogram equalization (ADHE), Local Binary Pattern (LBP), and Tsallis thresholding to effectively reduce noise, analyze textures, and segment regions. It also includes InceptiMultiLayer-Net (IML-Net), an advanced version of the Inception V3 architecture designed to capture complex features in medical images. IML-Net includes a multiclass Error-Correcting Output Codes (ECOC) Support Vector Machine (SVM) classifier, which improves the system's ability to handle complex classification tasks. The system also employs statistical features such as mean, variance, energy, entropy, and correlation to fully describe the characteristics of segmented regions. With 99.573% accuracy in identifying lung cancer-affected regions, a sensitivity of 99.46%, and a specificity of 99.24%, this CAD system shows significant potential as an early lung cancer detection tool. These findings highlight the system's ability to assist clinicians in making accurate diagnoses, ultimately improving patient outcomes. [ABSTRACT FROM AUTHOR]
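The Tsallis thresholding step mentioned above has a standard single-threshold form: pick the threshold maximizing the pseudo-additive combination of the two class Tsallis entropies. A sketch follows; the entropic index q is an assumed value.

```python
import numpy as np

def tsallis_threshold(img, q=0.8):
    """Tsallis-entropy thresholding: maximize S_A + S_B + (1-q)*S_A*S_B
    over candidate thresholds, where S_A and S_B are the Tsallis
    entropies of the two normalized class distributions. Standard
    single-threshold sketch."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_s = 1, -np.inf
    for t in range(1, 255):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1.0 - ((p[:t] / pa) ** q).sum()) / (q - 1.0)
        sb = (1.0 - ((p[t:] / pb) ** q).sum()) / (q - 1.0)
        s = sa + sb + (1.0 - q) * sa * sb
        if s > best_s:
            best_t, best_s = t, s
    return best_t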
- Published
- 2024
- Full Text
- View/download PDF
48. Improvement of Tradition Dance Classification Process Using Video Vision Transformer based on Tubelet Embedding.
- Author
-
Mulyanto, Edy, Yuniarno, Eko Mulyanto, Putra, Oddy Virgantara, Hafidz, Isa, Priyadi, Ardyono, and Purnomo, Mauridhi H.
- Subjects
TRANSFORMER models ,HISTORY of dance ,ARTIFICIAL neural networks ,OBJECT recognition (Computer vision) ,VIDEO processing ,THRESHOLDING algorithms - Abstract
Image processing has extensively addressed object detection, classification, clustering, and segmentation challenges. At the same time, the availability of complex video datasets has spurred various strategies for classifying videos automatically, particularly for detecting traditional dances. This research advances the classification of traditional dances by implementing a Video Vision Transformer (ViViT) that relies on tubelet embedding. The authors utilized IDEEH-10, a dataset of videos showcasing traditional dances, and the ViViT artificial neural network model for video classification. The video representation is generated by projecting spatiotemporal tokens onto the transformer layer, and an embedding strategy is used to improve the classification accuracy of traditional dance videos. The proposed concept treats video as a sequence of tubelets mapped into tubelet embeddings. Tubelet management adds a tubelet attention (TA) layer, a cross attention (CA) layer, and management of tubelet duration and scale. Test results show that the proposed approach classifies traditional dance videos better than the LSTM, GRU, and RNN methods, with or without data balancing. Experiments with 5 folds showed losses between 0.003 and 0.011, with an average loss of 0.0058, and accuracy rates between 98.68 and 100 percent, for an average accuracy of 99.216 percent, the best among the compared methods. ViViT with tubelet embedding thus achieves good accuracy with low losses and can be used for dance video classification. [ABSTRACT FROM AUTHOR]
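Tubelet embedding itself is the standard ViViT front end: a 3D convolution whose kernel and stride equal the tubelet size, so each non-overlapping spatiotemporal cell becomes one token. A minimal sketch follows; the embedding dimension and tubelet shape are assumed values, and the paper's added attention layers are not shown.

```python
import torch
import torch.nn as nn

class TubeletEmbedding(nn.Module):
    """Project each non-overlapping (t, h, w) video cell to one token
    via a 3D convolution with kernel_size == stride == tubelet size.
    Standard ViViT front-end sketch."""
    def __init__(self, embed_dim=768, tubelet=(2, 16, 16), in_ch=3):
        super().__init__()
        self.proj = nn.Conv3d(in_ch, embed_dim,
                              kernel_size=tubelet, stride=tubelet)

    def forward(self, video):                # video: (B, C, T, H, W)
        x = self.proj(video)                 # (B, D, T', H', W')
        return x.flatten(2).transpose(1, 2)  # (B, num_tokens, D)
```

Changing the tubelet's temporal extent trades off how much motion each token captures against sequence length, which is presumably what "tubelet duration and scale management" controls.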
- Published
- 2024
- Full Text
- View/download PDF
49. Active-set based block coordinate descent algorithm in group LASSO for self-exciting threshold autoregressive model.
- Author
-
Nasir, Muhammad Jaffri Mohd, Khan, Ramzan Nazim, Nair, Gopalan, and Nur, Darfiana
- Subjects
ALGORITHMS ,AUTOREGRESSIVE models ,THRESHOLDING algorithms - Abstract
The Group LASSO (gLASSO) estimator has recently been proposed to estimate thresholds for the self-exciting threshold autoregressive model, and a group least angle regression (gLAR) algorithm has been applied to obtain an approximate solution to the optimization problem. Although the gLAR algorithm is computationally fast, it has been reported to estimate too many irrelevant thresholds along with the relevant ones. This paper develops an active-set based block coordinate descent (aBCD) algorithm as an exact optimization method for gLASSO to improve the performance of estimating relevant thresholds. Methods and strategies for choosing appropriate values of the shrinkage parameter for gLASSO are also discussed. To consistently estimate the relevant thresholds from the threshold set obtained by gLASSO, the backward elimination algorithm (BEA) is utilized. We evaluate the numerical efficiency of the proposed algorithms, along with the Single-Line-Search (SLS) and gLAR algorithms, on simulated and real data sets. Simulation studies show that the SLS and aBCD algorithms have similar performance in estimating thresholds, although the latter is much faster. In addition, aBCD-BEA can sometimes outperform gLAR-BEA in estimating the correct number of thresholds under certain conditions. The case studies also show that aBCD-BEA performs better in identifying important thresholds. [ABSTRACT FROM AUTHOR]
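The core update inside each block coordinate descent step of gLASSO is the group soft-thresholding operator, which shrinks an entire coefficient group toward zero and drops it when its norm falls below the shrinkage parameter. A sketch of the generic operator follows; the active-set bookkeeping that makes aBCD fast is not shown.

```python
import numpy as np

def group_soft_threshold(z, lam):
    """Group soft-thresholding: return zero when ||z|| <= lam
    (the group leaves the active set), otherwise shrink the whole
    group radially by lam / ||z||. Generic sketch."""
    norm = np.linalg.norm(z)
    if norm <= lam:
        return np.zeros_like(z)
    return (1.0 - lam / norm) * z
```

Because whole groups are zeroed at once, sweeping this operator over the blocks naturally produces the sparse threshold set that gLASSO is after.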
- Published
- 2024
- Full Text
- View/download PDF
50. Asphalt pavement patch identification with image features based on statistical properties using machine learning.
- Author
-
Alfwzan, Wafa F., Alballa, Tmader, Al-Dayel, Ibrahim A., and Selim, Mahmoud M.
- Subjects
- *
ASPHALT pavements , *MACHINE learning , *IMAGE recognition (Computer vision) , *THRESHOLDING algorithms , *SUPPORT vector machines , *FEATURE extraction , *IMAGE processing - Abstract
Finding patches is a crucial step in a pavement performance survey. This study develops an automatic method for identifying asphalt pavement patches based on machine learning and image processing algorithms. The Gray-Level Co-Occurrence Matrix (GLCM) and image texture features derived from color channel statistics are used as input parameters to describe the condition of the pavement. For feature extraction, image processing methods such as the projective integral of images, steerable filters, and an improved image thresholding method were used. A Support Vector Machine is employed to differentiate patched regions from non-patched ones. The suggested combination of image texture analysis methods was trained on an IA data set created from 400 image samples. The feature set, which combines the characteristics of cracked objects with those generated from the projective integral, can produce the desired result. To simplify the model's execution, a patch recognition program was created and implemented in MATLAB. The newly developed method therefore has the potential to serve traffic management organizations as an instrument for assessing pavement performance. The experimental results indicate that it achieves excellent prediction accuracy, with a classifier performance rate of about 96%, which could enable transportation authorities to assess the state of asphalt pavement. [ABSTRACT FROM AUTHOR]
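Although the paper's pipeline is in MATLAB, the GLCM texture descriptors it relies on are standard and can be sketched in a few lines. In the sketch below, the offsets, properties, and 256-level quantization are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch):
    """Texture descriptors from the gray-level co-occurrence matrix of
    an 8-bit grayscale patch: contrast, homogeneity, energy, and
    correlation at two offsets. Illustrative feature-extraction sketch
    for an SVM classifier."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```

Concatenating these values with color-channel statistics yields the kind of fixed-length feature vector that an SVM can separate into patched and non-patched classes.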
- Published
- 2024
- Full Text
- View/download PDF