7,012 results for "denoising"
Search Results
2. Efficient MRI image enhancement by improved denoising techniques for better skull stripping using attention module-based convolution neural network.
- Author
- Jeme V, Jesline and Jerome S, Albert
- Abstract
Anatomical structure preservation throughout the denoising process is a challenge in the domain of medical imaging. The Rician noise introduced during the acquisition procedure by the Magnetic Resonance Imaging (MRI) scanner distorts the images. In this study, denoising using a Wavelet-based Non-Local Median Filter (WBNLMF) and a novel contrast-enhancement method termed Improved Minimum Intensity Error Intuitionistic Fuzzy Contrast Enhancement (IMIEIFCET) is suggested. This methodology gives superior results while maintaining the edges and the brightness of the original image. An Attention Module-based Convolution Neural Network (AM-CNN) is suggested in the research as a methodology for skull stripping from MRI data. With a mean Dice coefficient of 0.998, a Sensitivity of 0.9975, and a Specificity of 0.9985, the proposed network exhibits results that are comparable to those of the specified Deep Learning (DL)-based techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
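The skull-stripping entry above reports its results as Dice, Sensitivity, and Specificity. For readers unfamiliar with these overlap metrics, a minimal NumPy sketch of how they are computed from binary masks (the `mask_metrics` helper and the toy masks are illustrative, not from the paper):

```python
import numpy as np

def mask_metrics(pred, truth):
    """Dice, sensitivity, and specificity for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

# Toy example: a predicted brain mask versus a ground-truth mask
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:5] = True
dice, sens, spec = mask_metrics(pred, truth)
```

A perfect mask yields 1.0 for all three; values near 1, as reported in the abstract, indicate near-perfect overlap with the reference mask.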
3. A machine learning approach combined with wavelet analysis for automatic detection of Pc5 geomagnetic pulsations observed at geostationary orbits.
- Author
- Pappoe, Justice Allotey, Yoshikawa, Akimasa, Kandil, Ali, and Mahrous, Ayman
- Abstract
Pc5 geomagnetic pulsations are ultra-low frequency (ULF) waves whose periods lie within 150–600 s. Their detection is crucial for accurate space weather modelling, monitoring, and analysis. Identification of these pulsations using manual approaches is difficult and time-consuming because of their small magnitudes and the limited durations within which they occur. To overcome this challenge, we propose a robust Cascade Forward Neural Network (CFNN) model combined with the wavelet technique for automatically detecting Pc5 events from satellite datasets observed at geostationary orbits. The dataset used in this study is the magnetic field vector measurements retrieved from the Geostationary Operational Environmental Satellite-10 (GOES-10) from 2000 to 2009. Pc5 geomagnetic pulsations were extracted from the toroidal component of the field perturbation using a bandpass Butterworth filter. Continuous wavelet transform (CWT) analysis using the Morlet mother wavelet was utilized to validate the integrity of the extracted signal in the time–frequency domain. The extracted Pc5 signal was decomposed into details and approximations using the Daubechies wavelet transform to separate the intelligible signal from the incoherent noise that left its trace on the magnetic field time series. The detail signal was denoised using the heuristic Stein Unbiased Risk Estimate (SURE) approach with soft thresholding to obtain the denoised Pc5 events. The denoised Pc5 events were used as the target in the machine learning model, whereas the inputs were obtained from a 5-dimensional space transformation of the toroidal component of the time series. The developed algorithm performed well, demonstrating a detection accuracy of 80% and a Root Mean Square Error (RMSE) of 0.13 nT, indicating high performance in detecting Pc5 events.
For effective validation of the model, Pc5 events detected by our model were correlated with the Kp index and with the amplitude of the Pc5 events observed at geostationary orbit. A good correlation was obtained in both cases, making our model a practical choice for Pc5 pulsation detection in contrast to conventional frequency-analysis tools, which require considerable processing time on large data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
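The denoising step in the entry above, heuristic SURE with soft thresholding, can be sketched for a single vector of wavelet detail coefficients. This is a generic illustration assuming coefficients normalized to unit noise variance, not the authors' implementation:

```python
import numpy as np

def sure_soft_threshold(coeffs):
    """Pick a soft threshold by minimizing Stein's Unbiased Risk Estimate
    (SURE), assuming unit noise variance, then apply soft thresholding."""
    x = np.abs(coeffs)
    candidates = np.sort(x)
    n = x.size
    # SURE risk of soft thresholding at threshold t:
    #   n - 2 * #{|x_i| <= t} + sum_i min(|x_i|, t)^2
    risks = [n - 2 * np.sum(x <= t) + np.sum(np.minimum(x, t) ** 2)
             for t in candidates]
    t_best = candidates[int(np.argmin(risks))]
    # Soft thresholding: shrink all coefficients toward zero by t_best
    return np.sign(coeffs) * np.maximum(x - t_best, 0.0), t_best

rng = np.random.default_rng(0)
signal = np.zeros(256); signal[:8] = 10.0      # sparse "detail" coefficients
noisy = signal + rng.normal(size=256)
denoised, t = sure_soft_threshold(noisy)
```

In a full pipeline this would be applied to the detail bands of a Daubechies decomposition before reconstruction.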
4. Noise-tolerant NMF-based parallel algorithm for respiratory rate estimation.
- Author
- Revuelta-Sanz, Pablo, Muñoz-Montoro, Antonio J., Torre-Cruz, Juan, Canadas-Quesada, Francisco J., and Ranilla, José
- Subjects
- MATRIX decomposition, NONNEGATIVE matrices, PARALLEL algorithms, PARALLEL programming, RESPIRATORY organs
- Abstract
The accurate estimation of respiratory rate (RR) is crucial for assessing the respiratory system's health in humans, particularly during auscultation processes. Despite the numerous automated RR estimation approaches proposed in the literature, challenges persist in accurately estimating RR in noisy environments, typical of real-life situations. This becomes especially critical when periodic noise patterns interfere with the target signal. In this study, we present a parallel driver designed to address the challenges of RR estimation in real-world environments, combining multi-core architectures with parallel and high-performance techniques. The proposed system employs a nonnegative matrix factorization (NMF) approach to mitigate the impact of noise interference in the input signal. This NMF approach is guided by pre-trained bases of respiratory sounds and incorporates an orthogonal constraint to enhance accuracy. The proposed solution is tailored for real-time processing on low-power hardware. Experimental results across various scenarios demonstrate promising outcomes in terms of accuracy and computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
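The NMF core of the entry above, factorization against pre-trained bases, reduces at its simplest to estimating nonnegative activations with the basis held fixed. A hedged NumPy sketch using plain Frobenius-norm multiplicative updates (the paper additionally imposes an orthogonality constraint, omitted here; all names and the toy data are illustrative):

```python
import numpy as np

def nmf_activations(V, W, n_iter=500, eps=1e-9):
    """Estimate nonnegative activations H with V ≈ W @ H, keeping the
    (pre-trained) basis W fixed — multiplicative updates for the
    Frobenius-norm objective."""
    rng = np.random.default_rng(1)
    H = rng.random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        # Multiplicative update: keeps H nonnegative by construction
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# Toy magnitude spectrogram built from two known "respiratory" bases
W = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
H_true = np.array([[1.0, 2.0, 0.5], [0.5, 0.0, 1.5]])
V = W @ H_true
H = nmf_activations(V, W)
```

The activations of the respiratory bases over time would then carry the periodicity from which a respiratory rate is read off.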
5. SMART-PET: a Self-SiMilARiTy-aware generative adversarial framework for reconstructing low-count [18F]-FDG-PET brain imaging.
- Author
- Raymond, Confidence, Zhang, Dong, Cabello, Jorge, Liu, Linshan, Moyaert, Paulien, Burneo, Jorge G., Dada, Michael O., Hicks, Justin W., Finger, Elizabeth, Soddu, Andrea, Andrade, Andrea, Jurkiewicz, Michael T., and Anazodo, Udunna C.
- Abstract
Introduction: In Positron Emission Tomography (PET) imaging, the use of tracers increases radioactive exposure for longitudinal evaluations and in radiosensitive populations such as pediatrics. However, reducing injected PET activity potentially leads to an unfavorable compromise between radiation exposure and image quality, causing lower signal-to-noise ratios and degraded images. Deep learning-based denoising approaches can be employed to recover low-count PET image signals; nonetheless, most of these methods rely on structural or anatomic guidance from magnetic resonance imaging (MRI) and fail to effectively preserve global spatial features in denoised PET images without impacting signal-to-noise ratios. Methods: In this study, we developed a novel PET-only deep learning framework, the Self-SiMilARiTy-Aware Generative Adversarial Framework (SMART), which leverages Generative Adversarial Networks (GANs) and a self-similarity-aware attention mechanism for denoising [18F]-fluorodeoxyglucose (18F-FDG) PET images. This study employs a combination of prospective and retrospective datasets in its design. In total, 114 subjects were included in the study: 34 patients who underwent 18F-FDG PET imaging for drug-resistant epilepsy, 10 patients for frontotemporal dementia indications, and 70 healthy volunteers. To effectively denoise PET images without anatomical details from MRI, a self-similarity attention block (SSAB) was devised, which learns the distinctive structural and pathological features. These SSAB-enhanced features were subsequently applied to the SMART GAN algorithm, which was trained to denoise the low-count PET images using the standard-dose PET image acquired from each individual participant as reference.
The trained GAN algorithm was evaluated using image quality measures including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE), Fréchet inception distance (FID), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Results: In comparison to the standard dose, SMART-PET achieved on average an SSIM of 0.984 ± 0.007, a PSNR of 38.126 ± 2.631 dB, an NRMSE of 0.091 ± 0.028, an FID of 0.455 ± 0.065, an SNR of 0.002 ± 0.001, and a CNR of 0.011 ± 0.011. Region-of-interest measurements obtained with datasets decimated to 10% of the original counts showed a deviation of less than 1.4% when compared to the ground-truth values. Discussion: In general, SMART-PET shows promise in reducing noise in PET images and can synthesize diagnostic-quality images with a 90% reduction in standard-of-care injected activity. These results make it a potential candidate for clinical applications in radiosensitive populations and for longitudinal neurological studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Irregular feature enhancer for low-dose CT denoising.
- Author
- Deng, Jiehang, Hu, Zihang, He, Jinwen, Liu, Jiaxin, Qiao, Guoqing, Gu, Guosheng, and Weng, Shaowei
- Abstract
So far, deep learning-based networks have been widely applied in Low-Dose Computed Tomography (LDCT) image denoising. However, they usually adopt symmetric convolution to achieve regular feature extraction, and cannot effectively extract irregular features. Therefore, in this paper, an Irregular Feature Enhancer (IFE) focusing on effectively extracting irregular features is proposed by combining a Symmetric-Asymmetric-Synergy Convolution Module (SASCM) with a hybrid loss module. The shape, size, and aspect ratio of human tissues and lesions are irregular, and their features are difficult for symmetric square convolution to extract. Rather than simply stacking the symmetric convolution layers used in traditional deep learning-based networks, the SASCM is devised with a specific ordering of symmetric and asymmetric convolutional layers to extract the irregular features. To the best of our knowledge, the IFE is the first work to propose a hybrid loss combining MSE, multi-scale perceptual loss, and gradient loss, and to apply asymmetric convolution in the field of LDCT denoising. The ablation experiments demonstrate the effectiveness and feasibility of SASCM and the hybrid loss. The quantitative experimental results also show that, in comparison with several related LDCT denoising methods, the proposed IFE performs best in terms of PSNR and SSIM. Furthermore, the qualitative visualization shows that the proposed IFE recovers image detail and structure best among the compared methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Improved Diffusion-Weighted Hyperpolarized 129Xe Lung MRI with Patch-Based Higher-Order Singular Value Decomposition Denoising.
- Author
- Soderlund, Stephanie A., Bdaiwi, Abdullah S., Plummer, Joseph W., Woods, Jason C., Walkup, Laura L., and Cleveland, Zackary I.
- Abstract
Hyperpolarized xenon (129Xe) MRI is a noninvasive method to assess pulmonary structure and function. To measure lung microstructure, diffusion-weighted imaging—commonly the apparent diffusion coefficient (ADC)—can be employed to map changes in alveolar-airspace size resulting from normal aging and pulmonary disease. However, low signal-to-noise ratio (SNR) decreases ADC measurement certainty and biases ADC to spuriously low values. Further, these challenges are most severe in regions of the lung where alveolar simplification or emphysematous remodeling generate abnormally high ADCs. Here, we apply Global Local Higher Order Singular Value Decomposition (GLHOSVD) denoising to enhance image SNR, thereby reducing uncertainty and bias in diffusion measurements. GLHOSVD denoising was employed in simulated images and gas phantoms with known diffusion coefficients to validate its effectiveness and optimize parameters for analysis of diffusion-weighted 129Xe MRI. GLHOSVD was applied to data from 120 subjects (34 control, 39 cystic fibrosis (CF), 27 lymphangioleiomyomatosis (LAM), and 20 asthma). Image SNR, ADC, and distributed diffusivity coefficient (DDC) were compared before and after denoising using Wilcoxon signed-rank analysis for all images. Denoising significantly increased SNR in simulated, phantom, and in-vivo images, showing a greater than 2-fold increase (p < 0.001) across diffusion-weighted images. Although mean ADC and DDC remained unchanged (p > 0.05), ADC and DDC standard deviation decreased significantly in denoised images (p < 0.001). When applied to diffusion-weighted 129Xe images, GLHOSVD improved image quality and allowed airspace size to be quantified in high-diffusion regions of the lungs that were previously inaccessible to measurement due to prohibitively low SNR, thus providing insights into disease pathology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
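The operation underlying the patch-based SVD denoising in the entry above is low-rank truncation of a matrix of (similar) patches. A minimal sketch on a toy rank-1 matrix (GLHOSVD itself applies a higher-order SVD to patch tensors; this shows only the plain matrix case, and the toy data are illustrative):

```python
import numpy as np

def svd_denoise(M, rank):
    """Low-rank approximation by truncating singular values — the core
    step behind patch-based SVD/HOSVD denoising."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0                     # discard noise-dominated components
    return (U * s) @ Vt

rng = np.random.default_rng(2)
# A smooth rank-1 "patch matrix" corrupted by additive noise
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 2, 32)))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = svd_denoise(noisy, rank=1)
```

In practice the truncation rank (or singular-value threshold) is chosen from an estimate of the noise level rather than fixed by hand.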
8. Reducing thermal noise in high‐resolution quantitative magnetic resonance imaging rotating frame relaxation mapping of the human brain at 3 T.
- Author
- Ponticorvo, Sara, Canna, Antonietta, Moeller, Steen, Akcakaya, Mehmet, Metzger, Gregory J., Filip, Pavel, Eberly, Lynn E., Michaeli, Shalom, and Mangia, Silvia
- Subjects
- MAGNETIC resonance imaging, THERMAL noise, PRINCIPAL components analysis, NOISE control, PARAMETER estimation
- Abstract
Quantitative maps of rotating frame relaxation (RFR) time constants are sensitive and useful magnetic resonance imaging tools with which to evaluate tissue integrity in vivo. However, to date, only moderate image resolutions of 1.6 × 1.6 × 3.6 mm³ have been used for whole‐brain coverage RFR mapping in humans at 3 T. For more precise morphometrical examinations, higher spatial resolutions are desirable. Towards achieving the long‐term goal of increasing the spatial resolution of RFR mapping without increasing scan times, we explore the use of the recently introduced Transform domain NOise Reduction with DIstribution Corrected principal component analysis (T‐NORDIC) algorithm for thermal noise reduction. RFR acquisitions at 3 T were obtained from eight healthy participants (seven males and one female) aged 52 ± 20 years, including adiabatic T1ρ, T2ρ, and nonadiabatic Relaxation Along a Fictitious Field (RAFF) in the rotating frame of rank n = 4 (RAFF4), with both 1.6 × 1.6 × 3.6 mm³ and 1.25 × 1.25 × 2 mm³ image resolutions. We compared RFR values and their confidence intervals (CIs) obtained from fitting the denoised versus nondenoised images, at both voxel and regional levels, separately for each resolution and RFR metric. The comparison of metrics obtained from denoised versus nondenoised images was performed with a two‐sample paired t‐test, and statistical significance was set at p < 0.05 after Bonferroni correction for multiple comparisons. The use of T‐NORDIC on the RFR images prior to the fitting procedure decreases the uncertainty of parameter estimation (lower CIs) at both spatial resolutions. The effect was particularly prominent at high spatial resolution for RAFF4. Moreover, T‐NORDIC did not degrade map quality, and it had minimal impact on the RFR values.
Denoising RFR images with T‐NORDIC improves parameter estimation while preserving the image quality and accuracy of all RFR maps, ultimately enabling high‐resolution RFR mapping in scan times that are suitable for clinical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Noise reduction in Hyperion high dynamic range hyperspectral data using machine learning and statistical techniques.
- Author
- Nair, Priyanka, Srivastava, Devesh Kumar, and Bhatnagar, Roheet
- Subjects
- MACHINE learning, STATISTICAL learning, NOISE control, SIGNAL-to-noise ratio, PRINCIPAL components analysis
- Abstract
Numerous remote sensing applications rely heavily on hyperspectral imagery, but it is frequently plagued by noise, which degrades the data quality and hinders subsequent analysis. In this research paper, we present an in-depth analysis of noise removal techniques for hyperspectral imagery, specifically for data acquired from the Hyperion EO-1 sensor. The paper first describes the acquisition and pre-processing of Hyperion data, and then its denoising. The hyperspectral data considered are in the high dynamic range (HDR) format, which preserves the original imagery's complete dynamic range. The study explores various noise reduction methods, such as minimum noise fraction (MNF), principal component analysis (PCA), wavelet denoising, non-local means (NLM), and denoising autoencoders, aimed at enhancing the signal-to-noise ratio. The effectiveness of these techniques is evaluated through visual quality, mean square error (MSE), and peak signal-to-noise ratio (PSNR), alongside their impact on mineral exploration. Furthermore, the paper investigates the application of machine learning algorithms to the denoised data for mineral identification, highlighting the potential of integrating denoising techniques with machine learning for improved mineral exploration. This comparative analysis aims to identify the most efficient noise removal methods for hyperspectral imagery, facilitating higher-quality data for subsequent analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
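Two of the ingredients in the entry above, PCA-based denoising and the PSNR metric, are compact enough to sketch on a toy hyperspectral cube. The function names and the synthetic two-endmember data are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def pca_band_denoise(cube, k):
    """Project each pixel's spectrum onto the top-k principal components
    of the band (spectral) covariance and reconstruct."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    P = vecs[:, -k:]                   # top-k spectral components
    return ((Xc @ P) @ P.T + mu).reshape(h, w, b)

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

# Toy cube: two endmember spectra mixed per pixel, plus noise
rng = np.random.default_rng(3)
endmembers = rng.random((2, 16))       # 2 spectra over 16 bands
abund = rng.random((8, 8, 2))          # per-pixel abundances
clean = abund @ endmembers             # (8, 8, 16) cube
noisy = clean + 0.05 * rng.normal(size=clean.shape)
denoised = pca_band_denoise(noisy, k=2)
```

Because the clean spectra lie in a low-dimensional subspace, keeping only the top components discards mostly noise.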
10. Wavelet-based Denoising of Magnetic Resonance Images Using Optimized Exponential Function Thresholding and Wiener Filter.
- Author
- Moshfegh, M., Nikpour, M., and Mobini, M.
- Subjects
- WIENER filters (Signal processing), MEDICAL sciences, GENETIC algorithms, SIGNAL denoising, EXPONENTIAL functions
- Abstract
Copyright of International Journal of Engineering Transactions C: Aspects is the property of International Journal of Engineering (IJE) and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
11. Kernel ℓ1-norm principal component analysis for denoising.
- Author
- Ling, Xiao, Bui, Anh, and Brooks, Paul
- Abstract
In this paper we describe a method for denoising data using kernel principal component analysis (KPCA) that recovers preimages of the intrinsic variables in the feature space using a single line search along the gradient-descent direction of the squared projection error. The method combines a projection-free preimage estimation algorithm with an ℓ1-norm KPCA. These two stages provide distinct advantages over other KPCA preimage methods: they are insensitive to outliers and computationally efficient. The method can improve the results of a range of unsupervised learning tasks, such as denoising and clustering. Numerical experiments on the Amsterdam Library of Object Images and on synthetic data demonstrate that the proposed method performs better in terms of mean squared error than the ℓ2-norm analogue. The proposed method is applied to several further datasets and the results are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Removing Instrumental Noise in Distributed Acoustic Sensing Data: A Comparison Between Two Deep Learning Approaches.
- Author
- Gu, Xihao, Collet, Olivia, Tertyshnikov, Konstantin, and Pevzner, Roman
- Abstract
Over the last decade, distributed acoustic sensing (DAS) has received growing attention in the field of seismic acquisition and monitoring due to its potentially high spatial sampling rate, low maintenance cost, and high resistance to temperature and pressure. Despite its undeniable advantages, DAS faces some challenges, including a low signal-to-noise ratio, which partly results from the instrument-specific noise generated by DAS interrogators. We present a comparison between two deep learning approaches for addressing DAS hardware noise and enhancing the quality of DAS data. These approaches have the advantage of including real instrumental noise in the neural network training dataset. For the supervised learning (SL) approach, real DAS instrumental noise measured on an acoustically isolated coil is added to synthetic data to generate training pairs of clean/noisy data. For the second method, the Noise2Noise (N2N) approach, the training is performed on noisy/noisy data pairs recorded simultaneously on the downgoing and upgoing parts of a downhole fiber-optic cable. Both approaches allow for the removal of unwanted noise that lies within the same frequency band as the useful signal, a result that cannot be achieved by conventional denoising techniques employing frequency filtering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Power-line interference and baseline wander elimination in ECG using VMD and EWT.
- Author
- Mir, Haroon Yousuf and Singh, Omkar
- Subjects
- SIGNAL-to-noise ratio, ROOT-mean-squares, FILTER banks, WAVELET transforms, ADAPTIVE filters
- Abstract
Electrocardiogram (ECG) is a critical biomedical signal and plays an imperative role in diagnosing cardiovascular disorders. During ECG data acquisition in the clinical environment, noise is frequently present. Various noises such as powerline interference (PLI) and baseline wander (BLW) distort the ECG signal, which may lead to incorrect interpretation. Consequently, substantial emphasis has been dedicated to ECG denoising for reliable diagnosis and analysis. In this study, a novel hybrid ECG denoising method based on variational mode decomposition (VMD) and the empirical wavelet transform (EWT) is presented. For effective denoising using the VMD and EWT approach, the noisy ECG signal is decomposed into narrow-band variational mode functions (VMFs). The aim is to remove noise from these narrow-band VMFs. In the current approach, the centre frequency of each VMF was computed and utilized to design an adaptive wavelet filter bank using EWT. This leads to effective removal of noise components from the signal. The proposed approach was applied to ECG signals obtained from the MIT-BIH Arrhythmia database. To evaluate the denoising performance, noise sources from the MIT-BIH Noise Stress Test Database (NSTDB) are used for simulation. The assessment of denoising performance is based on two key metrics: the percentage-root-mean-square difference (PRD) and the signal-to-noise ratio (SNR). The findings of the simulation experiment demonstrate that the suggested method has a lower percentage root-mean-square difference and a higher signal-to-noise ratio than existing state-of-the-art denoising methods. An average output SNR of 24.03 was achieved, along with a 5% reduction in PRD. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
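The two evaluation metrics used in the ECG entry above, output SNR and PRD, are simple to state in code, and are linked by the identity PRD = 100·10^(−SNR/20). A minimal sketch (the synthetic waveform is illustrative):

```python
import numpy as np

def output_snr_db(clean, denoised):
    """Output SNR in dB: clean-signal power over residual-error power."""
    err = clean - denoised
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))

def prd_percent(clean, denoised):
    """Percentage root-mean-square difference (PRD)."""
    err = clean - denoised
    return 100 * np.sqrt(np.sum(err ** 2) / np.sum(clean ** 2))

# Synthetic example: a clean waveform and an imperfect reconstruction
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
denoised = clean + 0.02 * np.cos(2 * np.pi * 50 * t)  # small residual error
snr = output_snr_db(clean, denoised)
prd = prd_percent(clean, denoised)
```

Lower PRD and higher SNR both indicate a reconstruction closer to the clean reference, which is exactly the comparison the abstract reports.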
14. Improving Brain Metabolite Detection with a Combined Low-Rank Approximation and Denoising Diffusion Probabilistic Model Approach.
- Author
- Jeon, Yeong-Jae, Nam, Kyung Min, Park, Shin-Eui, and Baek, Hyeon-Man
- Abstract
In vivo proton magnetic resonance spectroscopy (MRS) is a noninvasive technique for monitoring brain metabolites. However, it is challenged by a low signal-to-noise ratio (SNR), often necessitating extended scan times to compensate. One of the conventional techniques for noise reduction is signal averaging, which is inherently time-consuming and can lead to participant discomfort, thus posing limitations in clinical settings. This study aimed to develop a hybrid denoising strategy that integrates low-rank approximation and denoising diffusion probabilistic model (DDPM) to enhance MRS data quality and shorten scan times. Using publicly available 1H MRS datasets from 15 subjects, we applied the Casorati SVD and DDPM to obtain baseline and functional data during a pain stimulation task. This method significantly improved SNR, resulting in outcomes comparable to or better than averaging over 32 signals. It also provided the most consistent metabolite measurements and adequately tracked temporal changes in glutamate levels, correlating with pain intensity ratings after heating. These findings demonstrate that our approach enhances MRS data quality, offering a more efficient alternative to conventional methods and expanding the potential for the real-time monitoring of neurochemical changes. This contribution has the potential to advance MRS techniques by integrating advanced denoising methods to increase the acquisition speed and enhance the precision of brain metabolite analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. An ISFO-KELM-based model for inverting the concentration of the fault decomposition component CO2 in SF6 electrical equipment.
- Author
- 黄杰, 张英, 张靖, and 王明伟
- Subjects
- MACHINE learning, LEAST squares, BACK propagation, TUNABLE lasers, LASER spectroscopy, DIFFERENTIAL evolution
- Abstract
The decomposition components inside SF6 electrical equipment can be detected by tunable laser absorption spectroscopy, in which the concentration of CO2 reflects insulation defects inside the equipment. Therefore, potential insulation faults can be found in time by measuring the CO2 concentration accurately. To overcome the poor stability of the traditional least-squares concentration inversion model, an ISFO-KELM gas concentration inversion model based on the ISFO (Improved Sailfish Optimizer) and KELM (Kernel-based Extreme Learning Machine) is established in this study. The optimization ability of the ISFO and its ability to escape local optima are improved by using a multi-strategy initialization method, Lévy random step lengths, Cauchy mutation, and adaptive t-distribution mutation. The experimental results show that this model has high accuracy and robustness, and is superior to traditional methods such as the least-squares method, the extreme learning machine, and the BP (Back Propagation) neural network in stability and generalization ability, which is of practical significance for evaluating the operating state of SF6 electrical equipment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
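The KELM at the heart of the entry above is a closed-form kernel ridge solution. A minimal sketch with an RBF kernel; the ISFO hyperparameter search is omitted, and the class name, (C, gamma) values, and the toy "absorbance to concentration" data are assumptions for illustration:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel Extreme Learning Machine: closed-form ridge solution
    beta = (K + I/C)^(-1) y in a kernel feature space."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
        return self

    def predict(self, X):
        return rbf_kernel(X, self.X, self.gamma) @ self.beta

# Toy regression standing in for spectral inversion
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
model = KELM(C=1e4, gamma=10.0).fit(X, y)
pred = model.predict(X)
```

In the paper's setting, ISFO would search over (C, gamma) to stabilize this inversion; here they are simply hand-set.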
16. mbDriver: identifying driver microbes in microbial communities based on time-series microbiome data.
- Author
- Tan, Xiaoxiu, Xue, Feng, Zhang, Chenhong, and Wang, Tao
- Subjects
- NEGATIVE binomial distribution, SHORT-chain fatty acids, ULCERATIVE colitis, DIETARY fiber, STATISTICAL smoothing, BINOMIAL distribution, REGULARIZATION parameter
- Abstract
Alterations in human microbial communities are intricately linked to the onset and progression of diseases. Identifying the key microbes driving these community changes is crucial, as they may serve as valuable biomarkers for disease prevention, diagnosis, and treatment. However, there remains a need for further research to develop effective methods for addressing this critical task. This is primarily because defining the driver microbe requires consideration not only of each microbe's individual contributions but also their interactions. This paper introduces a novel framework, called mbDriver, for identifying driver microbes based on microbiome abundance data collected at discrete time points. mbDriver comprises three main components: (i) data preprocessing of time-series abundance data using smoothing splines based on the negative binomial distribution, (ii) parameter estimation for the generalized Lotka-Volterra (gLV) model using regularized least squares, and (iii) quantification of each microbe's contribution to the community's steady state by manipulating the causal graph implied by gLV equations. The performance of nonparametric spline-based denoising and regularized least squares estimation is comprehensively evaluated on simulated datasets, demonstrating superiority over existing methods. Furthermore, the practical applicability and effectiveness of mbDriver are showcased using a dietary fiber intervention dataset and an ulcerative colitis dataset. Notably, driver microbes identified in the dietary fiber intervention dataset exhibit significant effects on the abundances of short-chain fatty acids, while those identified in the ulcerative colitis dataset show a significant correlation with metabolism-related pathways. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
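The parameter-estimation step of the entry above, regularized least squares for the generalized Lotka-Volterra (gLV) model, can be sketched by regressing finite-difference log-growth rates on abundances. A toy two-taxon example under stated assumptions (noise-free Euler-simulated data; the spline smoothing and causal-graph steps of mbDriver are omitted):

```python
import numpy as np

def fit_glv_ridge(trajs, dt, lam=1e-6):
    """Ridge estimation of gLV growth rates r and interactions A from
    abundance trajectories, using d(log x_i)/dt = r_i + sum_j A_ij x_j."""
    Y, Phi = [], []
    for X in trajs:                                   # X: (time, taxa)
        Y.append(np.diff(np.log(X), axis=0) / dt)     # log-growth rates
        Phi.append(np.hstack([np.ones((X.shape[0] - 1, 1)), X[:-1]]))
    Y, Phi = np.vstack(Y), np.vstack(Phi)
    theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]),
                            Phi.T @ Y)
    return theta[0], theta[1:].T                      # r (n,), A (n, n)

# Simulate a 2-taxon gLV community from several starting points
r_true = np.array([0.8, 0.5])
A_true = np.array([[-1.0, -0.3], [-0.2, -0.8]])
dt, T = 0.01, 2000
trajs = []
for x0 in ([0.2, 0.3], [1.0, 0.2], [0.4, 1.0]):
    X = np.empty((T, 2))
    X[0] = x0
    for t in range(T - 1):                            # forward-Euler step
        X[t + 1] = X[t] + dt * X[t] * (r_true + A_true @ X[t])
    trajs.append(X)
r_est, A_est = fit_glv_ridge(trajs, dt)
```

Multiple starting points are used because a single trajectory gives nearly collinear regressors once the community settles at its steady state.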
17. scTCA: a hybrid Transformer-CNN architecture for imputation and denoising of scDNA-seq data.
- Author
- Yu, Zhenhua, Liu, Furui, and Li, Yang
- Subjects
- DEEP learning, TRANSFORMER models, SIGNAL-to-noise ratio, DNA sequencing, HETEROGENEITY
- Abstract
Single-cell DNA sequencing (scDNA-seq) has been widely used to unmask tumor copy number alterations (CNAs) at single-cell resolution. Although arm-level CNAs can be accurately detected from single-cell read counts, it is difficult to precisely identify focal CNAs because the read counts exhibit high dimensionality, high sparsity, and a low signal-to-noise ratio. This gives rise to a pressing demand for reconstructing high-quality scDNA-seq data. We develop a new method called scTCA for imputation and denoising of single-cell read counts, thus aiding in downstream analysis of both arm-level and focal CNAs. scTCA employs hybrid Transformer-CNN architectures to identify local and non-local correlations between genes for precise recovery of the read counts. Unlike conventional Transformers, the Transformer block in scTCA is a two-stage attention module containing a stepwise self-attention layer and a window Transformer, and can efficiently deal with the high-dimensional read-count data. We showcase the superior performance of scTCA through comparison with state-of-the-art methods on both synthetic and real datasets. The results indicate it is highly effective in imputation and denoising of scDNA-seq data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Characterization and Mitigation of a Simultaneous Multi‐Slice fMRI Artifact: Multiband Artifact Regression in Simultaneous Slices.
- Author
- Tubiolo, Philip N., Williams, John C., and Van Snellenberg, Jared X.
- Subjects
- NEURAL development, GRAY matter (Nerve tissue), COGNITIVE development, FUNCTIONAL magnetic resonance imaging, SHORT-term memory
- Abstract
Simultaneous multi‐slice (multiband) acceleration in fMRI has become widespread, but may be affected by novel forms of signal artifact. Here, we demonstrate a previously unreported artifact manifesting as a shared signal between simultaneously acquired slices in all resting‐state and task‐based multiband fMRI datasets we investigated, including publicly available consortium data from the Human Connectome Project (HCP) and Adolescent Brain Cognitive Development (ABCD) Study. We propose Multiband Artifact Regression in Simultaneous Slices (MARSS), a regression‐based detection and correction technique that successfully mitigates this shared signal in unprocessed data. We demonstrate that the signal isolated by MARSS correction is likely nonneural, appearing stronger in neurovasculature than gray matter. Additionally, we evaluate MARSS both against and in tandem with sICA+FIX denoising, which is implemented in HCP resting‐state data, to show that MARSS mitigates residual artifact signal that is not modeled by sICA+FIX. MARSS correction leads to study‐wide increases in signal‐to‐noise ratio, decreases in cortical coefficient of variation, and mitigation of systematic artefactual spatial patterns in participant‐level task betas. Finally, MARSS correction has substantive effects on second‐level t‐statistics in analyses of task‐evoked activation. We recommend that investigators apply MARSS to multiband fMRI datasets with moderate or higher acceleration factors, in combination with established denoising methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
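The shared-signal problem in the multiband entry above can be illustrated with a much-simplified regression correction: estimate a shared time course from the other simultaneously acquired slices and regress it out of every voxel. This is a toy MARSS-inspired sketch under stated assumptions (synthetic data, mean-based regressor), not the published MARSS procedure:

```python
import numpy as np

def regress_out_shared(slices):
    """For each slice, regress the mean time series of the *other*
    simultaneously acquired slices out of every voxel."""
    slices = np.asarray(slices, dtype=float)     # (n_slices, time, voxels)
    cleaned = np.empty_like(slices)
    for i in range(slices.shape[0]):
        others = np.delete(slices, i, axis=0)
        reg = others.mean(axis=(0, 2))           # candidate shared signal
        reg = reg - reg.mean()
        X = np.column_stack([np.ones(reg.size), reg])
        # Per-voxel least squares against [intercept, shared regressor]
        beta, *_ = np.linalg.lstsq(X, slices[i], rcond=None)
        cleaned[i] = slices[i] - np.outer(reg, beta[1])   # keep the mean
    return cleaned

# Toy data: 3 simultaneous slices sharing one artifact time course
rng = np.random.default_rng(4)
n_t, n_vox = 200, 50
shared = np.sin(np.linspace(0, 8 * np.pi, n_t))
slices = rng.normal(size=(3, n_t, n_vox)) + shared[None, :, None]
cleaned = regress_out_shared(slices)

def corr_with_shared(data):
    """|correlation| of a slice's mean time series with the artifact."""
    return abs(np.corrcoef(data.mean(axis=1), shared)[0, 1])
```

Leaving slice i out of its own regressor avoids removing genuine slice-specific signal along with the shared component.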
19. Advanced Imaging Integration: Multi-Modal Raman Light Sheet Microscopy Combined with Zero-Shot Learning for Denoising and Super-Resolution.
- Author
- Kumari, Pooja, Keck, Shaun, Sohn, Emma, Kern, Johann, and Raedle, Matthias
- Subjects
- *
MICROSCOPY , *DRUG discovery , *RAMAN scattering , *RAMAN microscopy , *CELL imaging , *RAYLEIGH scattering , *CELL culture - Abstract
This study presents an advanced integration of Multi-modal Raman Light Sheet Microscopy with zero-shot learning-based computational methods to significantly enhance the resolution and analysis of complex three-dimensional biological structures, such as 3D cell cultures and spheroids. The Multi-modal Raman Light Sheet Microscopy system incorporates Rayleigh scattering, Raman scattering, and fluorescence detection, enabling comprehensive, marker-free imaging of cellular architecture. These diverse modalities offer detailed spatial and molecular insights into cellular organization and interactions, critical for applications in biomedical research, drug discovery, and histological studies. To improve image quality without altering or introducing new biological information, we apply Zero-Shot Deconvolution Networks (ZS-DeconvNet), a deep-learning-based method that enhances resolution in an unsupervised manner. ZS-DeconvNet significantly refines image clarity and sharpness across multiple microscopy modalities without requiring large labeled datasets or introducing artifacts. By combining the strengths of multi-modal light sheet microscopy and ZS-DeconvNet, we achieve improved visualization of subcellular structures, offering clearer and more detailed representations of existing data. This approach holds significant potential for advancing high-resolution imaging in biomedical research and other related fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Enhanced Wavelet-Based Medical Image Denoising with Bayesian-Optimized Bilateral Filtering.
- Author
-
Taassori, Mehdi
- Subjects
- *
IMAGE denoising , *NOISE control , *DIAGNOSTIC imaging , *NOISE - Abstract
Medical image denoising is essential for improving the clarity and accuracy of diagnostic images. In this paper, we present an enhanced wavelet-based method for medical image denoising, aiming to effectively remove noise while preserving critical image details. After applying wavelet denoising, a bilateral filter is utilized as a post-processing step to further enhance image quality by reducing noise while maintaining edge sharpness. The bilateral filter's effectiveness heavily depends on its parameters, which must be carefully optimized. To achieve this, we employ Bayesian optimization, a powerful technique that efficiently identifies the optimal filter parameters, ensuring the best balance between noise reduction and detail preservation. The experimental results demonstrate a significant improvement in image denoising performance, validating the effectiveness of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
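The post-processing step described above, a bilateral filter whose parameters are tuned against an image-quality objective, can be sketched as follows. The bilateral filter is a straightforward NumPy implementation, and plain random search over a clean reference stands in for the paper's Bayesian optimization; the wavelet stage, the parameter ranges, and the MSE objective are assumptions, not the author's code.

```python
import numpy as np

def bilateral_filter(img, sigma_spatial, sigma_color, radius=3):
    """Edge-preserving smoothing: each pixel is a weighted mean of its
    neighbourhood, weighted by spatial and intensity closeness."""
    H, W = img.shape
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_spatial**2))
    padded = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            color = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_color**2))
            w = spatial * color
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def tune_bilateral(noisy, reference, n_trials=8, rng=None):
    """Random search over (sigma_spatial, sigma_color); a Bayesian
    optimizer would propose candidates from a surrogate model instead."""
    rng = rng or np.random.default_rng(0)
    best_params, best_mse = None, np.inf
    for _ in range(n_trials):
        ss = rng.uniform(0.5, 4.0)
        sc = rng.uniform(0.05, 0.5)
        mse = np.mean((bilateral_filter(noisy, ss, sc) - reference)**2)
        if mse < best_mse:
            best_params, best_mse = (ss, sc), mse
    return best_params, best_mse
```

In practice the reference image is unavailable at test time, so the optimized parameters would be transferred from a validation set with known ground truth.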
21. A comparison of machine learning methods for recovering noisy and missing 4D flow MRI data.
- Author
-
Csala, Hunor, Amili, Omid, D'Souza, Roshan M., and Arzani, Amirhossein
- Subjects
- *
COMPUTATIONAL fluid dynamics , *SINGULAR value decomposition , *DEEP learning , *BLOOD flow measurement , *MAGNETIC resonance imaging - Abstract
Experimental blood flow measurement techniques are invaluable for a better understanding of cardiovascular disease formation, progression, and treatment. One of the emerging methods is time‐resolved three‐dimensional phase‐contrast magnetic resonance imaging (4D flow MRI), which enables noninvasive time‐dependent velocity measurements within large vessels. However, several limitations hinder the usability of 4D flow MRI and other experimental methods for quantitative hemodynamics analysis. These mainly include measurement noise, corrupt or missing data, low spatiotemporal resolution, and other artifacts. Traditional filtering is routinely applied for denoising experimental blood flow data without any detailed discussion on why it is preferred over other methods. In this study, filtering is compared to different singular value decomposition (SVD)‐based machine learning and autoencoder‐type deep learning methods for denoising and filling in missing data (imputation). An artificially corrupted and voxelized computational fluid dynamics (CFD) simulation as well as in vitro 4D flow MRI data are used to test the methods. SVD‐based algorithms achieve excellent results for the idealized case but severely struggle when applied to in vitro data. The autoencoders are shown to be versatile and applicable to all investigated cases. For denoising the in vitro 4D flow MRI data, the denoising autoencoder (DAE) and the Noise2Noise (N2N) autoencoder produced better reconstructions than filtering, both qualitatively and quantitatively. Deep learning methods such as N2N can result in noise‐free velocity fields even though they did not use clean data during training. This work presents one of the first comprehensive assessments and comparisons of various classical and modern machine‐learning methods for enhancing corrupt cardiovascular flow data in diseased arteries for both synthetic and experimental test cases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
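As a minimal sketch of the SVD-based family of methods compared in the study, truncated SVD denoises a space-time data matrix by keeping only its dominant modes, on the assumption that the underlying flow is approximately low-rank; the rank choice and data layout below are illustrative, not the authors' pipeline.

```python
import numpy as np

def svd_denoise(X, rank):
    """Project a (space x time) data matrix onto its top `rank` singular
    modes; noise spread thinly across all modes is largely discarded."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

The same low-rank projection also serves as a simple imputation scheme when missing voxels are first filled with an initial guess and the decomposition is iterated.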
22. Inheriting Bayer's Legacy: Joint Remosaicing and Denoising for Quad Bayer Image Sensor.
- Author
-
Zeng, Haijin, Feng, Kai, Cao, Jiezhang, Huang, Shaoguang, Zhao, Yongqiang, Luong, Hiep, Aelterman, Jan, and Philips, Wilfried
- Subjects
- *
TRANSFORMER models , *IMAGE denoising , *IMAGE sensors , *SPATIAL resolution , *ZIPPERS - Abstract
Pixel binning-based Quad sensors (mega-pixel resolution camera sensor) offer a promising solution to address the hardware limitations of compact cameras for low-light imaging. However, the binning process leads to reduced spatial resolution and introduces non-Bayer CFA artifacts. In this paper, we propose a Quad CFA-driven remosaicing model that effectively converts noisy Quad Bayer patterns into standard Bayer patterns compatible with existing Image Signal Processors (ISPs) without any loss in resolution. To enhance the practicality of the remosaicing model for real-world images affected by mixed noise, we introduce a novel dual-head joint remosaicing and denoising network (DJRD), which addresses the order of denoising and remosaicing by performing them in parallel. In DJRD, we customize two denoising branches for Quad Bayer and Bayer inputs. These branches model non-local and local dependencies, CFA location, and frequency information using residual convolutional layers, Swin Transformer, and wavelet transform-based CNN. Furthermore, to improve the model's performance on challenging cases, we fine-tune DJRD on problematic patches identified through Moire and zipper detection metrics. This post-training phase allows the model to focus on resolving complex image regions. Extensive experiments conducted on simulated and real images in both Bayer and sRGB domains demonstrate that DJRD outperforms competing models by approximately 3 dB, while maintaining the simplicity of implementation without adding any hardware. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. A novel coupled p(x) and fractional PDE denoising model with theoretical results.
- Author
-
Zaabouli, Z., El Hakoume, A., Afraites, L., and Laghrib, A.
- Subjects
- *
IMAGE denoising , *EQUATIONS - Abstract
In this paper, we formulate a new coupled PDE-based configuration for image denoising. We elaborate a new class of coupled PDEs that involves a $p(x)$-Laplace operator and a controlled fractional-type operator, which accounts for the texture and smooth components during the recovery process. We give essential theoretical results and establish the well-posedness of the suggested coupled equation based on a Galerkin approximation. Finally, we present a full discretization of the system and illustrate various numerical realizations, demonstrating the efficiency of our coupled PDE through comparison experiments against state-of-the-art PDE denoising models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
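A crude explicit finite-difference sketch of variable-exponent diffusion, du/dt = div(|grad u|^(p(x)-2) grad u), conveys the idea that p(x) near 2 smooths aggressively while p(x) near 1 behaves TV-like near edges. The discretization, the regularization `eps`, and the step size are naive assumptions for illustration only, not the authors' well-posed coupled scheme (the fractional operator is omitted entirely).

```python
import numpy as np

def px_laplace_diffusion(u0, p_map, n_iter=40, dt=0.05, eps=1e-2):
    """Explicit scheme for du/dt = div(|grad u|^(p(x)-2) grad u)."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(u)
        mag = np.sqrt(gx**2 + gy**2 + eps)  # eps regularizes the exponent for p(x) < 2
        coef = mag ** (p_map - 2.0)         # diffusivity shrinks near strong edges
        div = np.gradient(coef * gy, axis=0) + np.gradient(coef * gx, axis=1)
        u += dt * div
    return u
```

The small time step keeps the explicit update stable; implicit or semi-implicit schemes, as typically used in the PDE denoising literature, allow much larger steps.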
24. Denoising Phase-Unwrapped Images in Laser Imaging via Statistical Analysis and DnCNN.
- Author
-
Xie, Yibo, Cheng, Jin, Zhou, Shun, Fan, Qing, Jia, Yue, Xiao, Jingjin, and Liu, Weiguo
- Abstract
Three-dimensional imaging plays a crucial role at the micro-scale in fields such as precision manufacturing and materials science. However, image noise significantly impacts the accuracy of point cloud reconstruction, making image denoising techniques a widely discussed topic. Statistical analysis of laser imaging noise has led to the conclusion that logarithmically transformed noise follows a Gumbel distribution. A corresponding neural network training set was developed to address the challenges of difficult data collection and the scarcity of phase-unwrapped image datasets. Building on this foundation, a phase-unwrapped image denoising method based on the Denoising Convolutional Neural Network (DnCNN) is proposed. This method aims to achieve three-dimensional filtering by performing two-dimensional image denoising. Experimental results show a significant reduction in the Cloud-to-Mesh Distance (C2M) statistics of the corresponding point clouds before and after planar filtering. Specifically, the statistic at 97.5% of the 2σ principle decreases from 0.8782 mm to 0.3384 mm, highlighting the effectiveness of the filtering algorithm in improving the planar fit. Moreover, the DnCNN method exhibits exceptional denoising performance when applied to real-world target data, such as plaster statues with complex depth variations and PCBs made from different materials, thereby enhancing accuracy and reliability in point cloud reconstruction. This study provides valuable insights into phase-unwrapped image noise suppression in laser imaging, particularly in micro-scale applications where precision is critical. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
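The statistical finding above, that log-transformed laser imaging noise follows a Gumbel distribution, is the part that lends itself to a small sketch (the DnCNN itself requires a deep-learning framework). A method-of-moments fit like the following could be used to estimate Gumbel parameters when synthesizing training noise; it is an illustration, not the authors' dataset-construction code.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def fit_gumbel_moments(x):
    """Method-of-moments fit of a Gumbel distribution.

    For Gumbel(mu, beta): mean = mu + gamma*beta, var = pi^2 * beta^2 / 6,
    so beta and mu follow directly from the sample mean and variance.
    """
    x = np.asarray(x, dtype=float)
    beta = np.sqrt(6.0 * x.var()) / np.pi
    mu = x.mean() - EULER_GAMMA * beta
    return mu, beta
```

With the parameters in hand, `np.random.Generator.gumbel(mu, beta)` can generate matched synthetic noise for augmenting a training set.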
25. Noise‐reduction techniques for 1H‐FID‐MRSI at 14.1 T: Monte Carlo validation and in vivo application.
- Author
-
Alves, Brayan, Simicic, Dunja, Mosso, Jessie, Lê, Thanh Phong, Briand, Guillaume, Bogner, Wolfgang, Lanz, Bernard, Strasser, Bernhard, Klauser, Antoine, and Cudalbu, Cristina
- Subjects
MAGNETIC resonance imaging ,PROTON magnetic resonance ,MONTE Carlo method ,PRINCIPAL components analysis ,REGIONAL differences - Abstract
Proton magnetic resonance spectroscopic imaging (1H‐MRSI) is a powerful tool that enables the multidimensional non‐invasive mapping of the neurochemical profile at high resolution over the entire brain. The constant demand for higher spatial resolution in 1H‐MRSI has led to increased interest in post‐processing‐based denoising methods aimed at reducing noise variance. The aim of the present study was to implement two noise‐reduction techniques, Marchenko–Pastur principal component analysis (MP‐PCA) based denoising and low‐rank total generalized variation (LR‐TGV) reconstruction, and to test their potential with and impact on preclinical 14.1 T fast in vivo 1H‐FID‐MRSI datasets. Since there is no known ground truth for in vivo metabolite maps, additional evaluations of the performance of both noise‐reduction strategies were conducted using Monte Carlo simulations. Results showed that both denoising techniques increased the apparent signal‐to‐noise ratio (SNR) while preserving noise properties in each spectrum for both in vivo and Monte Carlo datasets. Relative metabolite concentrations were not significantly altered by either method and brain regional differences were preserved in both synthetic and in vivo datasets. Increased precision of metabolite estimates was observed for the two methods, with inconsistencies noted for lower‐concentration metabolites. Our study provided a framework for how to evaluate the performance of MP‐PCA and LR‐TGV methods for preclinical 1H‐FID MRSI data at 14.1 T. While gains in apparent SNR and precision were observed, concentration estimations ought to be treated with care, especially for low‐concentration metabolites. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
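The MP-PCA technique evaluated above rests on a simple principle: eigenvalues of a pure-noise data matrix fall below the Marchenko-Pastur upper edge, so principal components above that edge are presumed signal. A minimal sketch, assuming a known noise level sigma and an m <= n data matrix; the published method also estimates sigma from the spectrum itself, which is omitted here.

```python
import numpy as np

def mppca_denoise(X, sigma):
    """Keep only principal components whose eigenvalues exceed the
    Marchenko-Pastur upper edge for i.i.d. noise of std `sigma`.

    X: (m, n) data matrix, e.g. m spectral points x n voxels, with m <= n.
    Returns the denoised matrix and the number of retained components.
    """
    m, n = X.shape
    mean = X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigvals = s**2 / n
    lam_plus = sigma**2 * (1.0 + np.sqrt(m / n))**2  # MP upper edge
    k = int((eigvals > lam_plus).sum())
    return (U[:, :k] * s[:k]) @ Vt[:k] + mean, k
```

Because the threshold adapts to the matrix aspect ratio m/n, no manual rank selection is needed, which is the main practical appeal of MP-PCA for MRSI data.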
26. Self‐supervised learning for denoising of multidimensional MRI data.
- Author
-
Kang, Beomgu, Lee, Wonil, Seo, Hyunseok, Heo, Hye‐Young, and Park, HyunWook
- Subjects
MAGNETIZATION transfer ,DATA scrubbing ,MAGNETIC resonance imaging ,QUANTITATIVE research ,NOISE - Abstract
Purpose: To develop a fast denoising framework for high‐dimensional MRI data based on a self‐supervised learning scheme, which does not require ground truth clean images. Theory and Methods: Quantitative MRI faces limitations in SNR, because the variation of signal amplitude in a large set of images is the key mechanism for quantification. In addition, the complex non‐linear signal models make the fitting process vulnerable to noise. To address these issues, we propose a fast deep‐learning framework for denoising, which efficiently exploits the redundancy in multidimensional MRI data. A self‐supervised model was designed to use only noisy images for training, bypassing the challenge of clean data paucity in clinical practice. For validation, we used two different datasets, a simulated magnetization transfer contrast MR fingerprinting (MTC‐MRF) dataset and an in vivo DWI dataset, to show generalizability. Results: The proposed method drastically improved denoising performance in the presence of mild‐to‐severe noise regardless of noise distributions, compared to the previous methods BM3D, tMPPCA, and Patch2self. The improvements were even more pronounced in the subsequent quantification results from the denoised images. Conclusion: The proposed MD‐S2S (Multidimensional‐Self2Self) denoising technique could be further applied to various multi‐dimensional MRI data and improve the quantification accuracy of tissue parameter maps. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. A systematic review of deep learning-based denoising for low-dose computed tomography from a perceptual quality perspective.
- Author
-
Kim, Wonjin, Jeon, Sun-Young, Byun, Gyuri, Yoo, Hongki, and Choi, Jang-Hwan
- Abstract
Low-dose computed tomography (LDCT) scans are essential in reducing radiation exposure but often suffer from significant image noise that can impair diagnostic accuracy. While deep learning approaches have enhanced LDCT denoising capabilities, the predominant reliance on objective metrics like PSNR and SSIM has resulted in over-smoothed images that lack critical detail. This paper explores advanced deep learning methods tailored specifically to improve perceptual quality in LDCT images, focusing on generating diagnostic-quality images preferred in clinical practice. We review and compare current methodologies, including perceptual loss functions and generative adversarial networks, addressing the significant limitations of current benchmarks and the subjective nature of perceptual quality evaluation. Through a systematic analysis, this study underscores the urgent need for developing methods that balance both perceptual and diagnostic quality, proposing new directions for future research in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Remote sensing image denoising based on deformable convolution and attention-guided filtering in progressive framework.
- Author
-
Liu, Hualin, Li, Zhe, Lin, Shijie, and Cheng, Libo
- Abstract
Remote sensing image denoising tasks are challenged by complex noise distributions and multiple noise types, including a mixture of additive Gaussian white noise (AWGN) and impulse noise (IN). For better image recovery, complex contextual information needs to be balanced while maintaining spatial details. In this paper, a denoising model based on multilevel progressive image recovery is proposed to address the problem of remote sensing image denoising. In our model, the deformable convolution improves spatial feature sampling to effectively capture image details. Meanwhile, attention-guided filtering is used to pass the output images from the first and second stages to the third stage in order to prevent information loss and optimize the image recovery effect. The experimental results show that under the mixed noise scene of Gaussian and pepper noise, our proposed model shows superior performance relative to several existing methods in terms of both visual effect and objective evaluation indexes. Our model can effectively reduce the influence of image noise and recover more realistic image details. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
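The attention-guided filtering used to pass early-stage outputs to the final stage builds on the classic guided filter; a plain (unattended) guided filter in NumPy gives the flavor. The window radius, `eps`, and the use of a clean guide are illustrative assumptions, and the paper's learned attention weighting is not reproduced here.

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)^2 window via an integral image (edge padding)."""
    p = np.pad(a, r, mode="edge")
    c = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    H, W = a.shape
    k = 2 * r + 1
    return (c[k:k + H, k:k + W] - c[:H, k:k + W]
            - c[k:k + H, :W] + c[:H, :W]) / (k * k)

def guided_filter(guide, src, r=4, eps=1e-3):
    """He et al.-style guided filter: fit a local linear model src ~ a*guide + b
    in each window, then average the coefficients."""
    mg, ms = box_mean(guide, r), box_mean(src, r)
    a = (box_mean(guide * src, r) - mg * ms) / (box_mean(guide * guide, r) - mg**2 + eps)
    b = ms - a * mg
    return box_mean(a, r) * guide + box_mean(b, r)
```

Because the output is locally linear in the guide, edges present in the guide survive the smoothing, which is why guided filtering suits progressive stage-to-stage information transfer.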
29. Event Stream Denoising Method Based on Spatio-Temporal Density and Time Sequence Analysis.
- Author
-
Jiang, Haiyan, Wang, Xiaoshuang, Tang, Wei, Song, Qinghui, Song, Qingjun, and Hao, Wenchao
- Subjects
- *
NOISE control , *RETINAL imaging , *SEQUENCE analysis , *NOISE , *PIXELS , *HIGH dynamic range imaging - Abstract
An event camera is a neuromimetic sensor inspired by the human retinal imaging principle, which has the advantages of high dynamic range, high temporal resolution, and low power consumption. Due to the interference of hardware, software, and other factors, the event stream output from the event camera usually contains a large amount of noise, and traditional denoising algorithms cannot be applied to the event stream. To better deal with different kinds of noise and enhance the robustness of the denoising algorithm, based on the spatio-temporal distribution characteristics of effective events and noise, an event stream noise reduction and visualization algorithm is proposed. The event stream enters fine filtering after the background activity (BA) noise is filtered based on spatio-temporal density. The fine filtering performs time sequence analysis on the event pixels and the neighboring pixels to filter out hot noise. The proposed visualization algorithm adaptively overlaps the events of the previous frame according to the event density difference to obtain clear and coherent event frames. We conducted denoising and visualization experiments on real scenes and public datasets, respectively, and the experiments show that our algorithm is effective in filtering noise and obtaining clear and coherent event frames under different event stream densities and noise backgrounds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
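The coarse spatio-temporal density stage described above can be sketched as a neighbour-support filter over a timestamp-sorted event stream: an event survives only if enough other events fall nearby in both space and time. The radius, time window, and support threshold are illustrative values, and the paper's fine-grained time-sequence analysis for hot pixels is not reproduced.

```python
import numpy as np

def denoise_events(events, radius=1, dt=10_000, min_support=2):
    """Keep events with at least `min_support` neighbours within `radius`
    pixels (Chebyshev distance) and `dt` time units.

    events: list of (t, x, y, polarity) tuples sorted by timestamp t.
    """
    ts = np.array([e[0] for e in events])
    kept = []
    for i, (t, x, y, pol) in enumerate(events):
        lo = np.searchsorted(ts, t - dt, side="left")
        hi = np.searchsorted(ts, t + dt, side="right")
        support = 0
        for j in range(lo, hi):
            if j == i:
                continue
            _, xj, yj, _ = events[j]
            if abs(xj - x) <= radius and abs(yj - y) <= radius:
                support += 1
        if support >= min_support:
            kept.append((t, x, y, pol))
    return kept
```

The binary search over timestamps keeps the temporal window cheap; production implementations instead maintain a per-pixel timestamp map so each event is checked in O(1).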
30. A Deep-learning-based Auto Encoder-Decoder Model for Denoising Electrocardiogram Signals.
- Author
-
Das, Maumita and Sahana, Bikash Chandra
- Subjects
- *
CONVOLUTIONAL neural networks , *ADDITIVE white Gaussian noise , *MEAN square algorithms , *SIGNAL-to-noise ratio , *SIGNAL denoising - Abstract
Learning-based denoising techniques have become superior to the traditional assumption-based denoising methods in this modern era. Also, with the advancement of wearable technologies and remote electrocardiogram (ECG) monitoring systems, the requirement for optimal storage has increased due to the limited availability of hardware resources. Therefore, denoising and compression both are essential at the preprocessing stage of the ECG signal. Deep learning-based denoising auto encoder-decoder (DAED) models guarantee cutting-edge performance for these tasks. This article presents a lightweight, adaptive, hybrid Convolutional Neural Network-Gated Recurrent Unit (CNN-GRU) based DAED model that achieves a signal compression ratio of 64 with high signal-to-noise ratio improvement for the elimination of ECG noises. The novelty of this work lies in the customization of the CNN layers and utilization of the advantages of the GRU layer in a proper channel for compression and denoising ECG signals. The comparative study with other complex deep learning-based DAED arrangements and state-of-the-art denoising techniques shows the proposed model has simplicity in construction and an improved signal-to-noise ratio with minimum mean square error. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. LuminanceGAN: Controlling the brightness of generated images for various night conditions.
- Author
-
Seo, Junghyun, Wang, Sungjun, Jeon, Hyeonjae, Kim, Taesoo, Jin, Yongsik, Kwon, Soon, Kim, Jeseok, and Lim, Yongseob
- Abstract
There are diverse datasets available for training deep learning models utilized in autonomous driving. However, most of these datasets are composed of images obtained in day conditions, leading to a data imbalance issue when dealing with night condition images. Several day-to-night image translation models have been proposed to resolve the insufficiency of the night condition dataset, but these models often generate artifacts and cannot control the brightness of the generated image. In this study, we propose LuminanceGAN, a model for controlling the brightness of generated night-condition images to produce realistic outputs. The proposed novel Y-control loss drives the brightness of the output image toward a specified luminance value. Furthermore, the implementation of the self-attention module effectively reduces artifacts in the generated images. Consequently, in qualitative comparisons, our model demonstrates superior performance in day-to-night image translation. Additionally, a quantitative evaluation was conducted using lane detection models, showing that our proposed method improves performance in night lane detection tasks. Moreover, the quality of the generated indoor dark images was assessed using an evaluation metric. The results show that our model generates images most similar to real dark images compared to other image translation models. • Our novel Y-control loss is proposed to control the brightness degree of the generated night condition images. • We reduced the artifacts in the generated night condition images by incorporating a self-attention module. • Data imbalance issues are addressed with the varied-brightness night images generated by our proposed model. • Our model exhibits general applicability across various domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. LL-Diff: Low-Light Image Enhancement Utilizing Langevin Sampling Diffusion.
- Author
-
Ding, Boren, Zhang, Xiaofeng, Yu, Zekun, and Hui, Zheng
- Subjects
- *
IMAGE intensifiers , *PARTICLE motion , *SAMPLING methods , *SPEED , *NOISE - Abstract
In this paper, we propose a new algorithm called LL-Diff, which is innovative compared to traditional augmentation methods in that it introduces the sampling method of Langevin dynamics. This sampling approach simulates the motion of particles in complex environments and can better handle noise and details in low-light conditions. We also incorporate a causal attention mechanism to achieve causality and address the issue of confounding effects. This attention mechanism enables us to better capture local information while avoiding over-enhancement. We have conducted experiments on the LOL-V1 and LOL-V2 datasets, and the results show that LL-Diff significantly improves computational speed and several evaluation metrics, demonstrating the superiority and effectiveness of our method for low-light image enhancement tasks. The code will be released on GitHub when the paper has been accepted. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
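The core of Langevin-dynamics sampling, as adopted in the enhancement model above, is a one-line update: a gradient step on the log-density plus scaled Gaussian noise. A minimal unadjusted-Langevin sketch on a 1-D toy density follows; the paper's conditional image-space sampler and causal attention mechanism are not reproduced, and the step size and step count are illustrative.

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=0.05, n_steps=20_000, rng=None):
    """Unadjusted Langevin algorithm:
    x <- x + step * grad_log_p(x) + sqrt(2 * step) * z,  z ~ N(0, 1).
    Returns the full chain of samples."""
    rng = rng or np.random.default_rng(0)
    x = float(x0)
    out = np.empty(n_steps)
    for i in range(n_steps):
        x += step * grad_log_p(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        out[i] = x
    return out
```

For a Gaussian target N(mu, 1), grad_log_p(x) = -(x - mu), so the chain is an Ornstein-Uhlenbeck process whose stationary distribution approximates the target up to a discretization bias of order `step`.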
33. Estimation of Interferences in Magnetoencephalography (MEG) Brain Data Using Intelligent Methods for BCI-based Neurorehabilitation Applications.
- Author
-
Philip, Beril Susan, Chihi, Inès, Prasad, Girijesh, and Hemanth, Jude
- Subjects
- *
STANDARD deviations , *BRAIN-computer interfaces , *COGNITIVE therapy , *MOTOR imagery (Cognition) , *BRAIN mapping - Abstract
Brain-Computer Interface (BCI) neurorehabilitation offers the potential to improve recovery and quality of life for stroke survivors. It aims to restore lost physical and mental abilities through motor and cognitive therapies. Magnetoencephalography (MEG) signals are a major advancement in BCI technology as they provide accurate and consistent assessments of brain activity for control and interaction applications. MEG is indispensable for recording the magnetic fields produced in the brain during motor imagery tasks due to its capability to evaluate cerebral activity with remarkable temporal resolution. However, one of the major challenges associated with MEG recording is the loss of signal quality due to physiological artifacts and ambient noise. Additionally, the head movement of the individual during the recording process can result in the introduction of artifacts into the recorded data, which can distort the spatial mapping of brain activity. This, in turn, can jeopardize the reliability and accuracy of the results obtained. This study aims to identify the most effective technique for removing artifacts from MEG signals by conducting a comparative performance analysis of prominent denoising algorithms, such as Infomax, FastICA, SOBI, and SWT. The findings conclude that Infomax is the most effective algorithm for removing physiological artifacts from a signal while maintaining the integrity and essential features of the original data. FastICA was found to be the second most effective algorithm. Infomax outperformed FastICA in Power Spectral Density (PSD) and Percentage Root mean square error Difference (PRD) measurements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
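The PRD figure of merit used in the comparison above has a standard closed form; a sketch follows (this is the common definition, and the study's exact normalization, e.g. whether the signal mean is removed first, may differ).

```python
import numpy as np

def prd(original, processed):
    """Percentage root-mean-square difference between two signals:
    100 * sqrt( sum((x - y)^2) / sum(x^2) )."""
    original = np.asarray(original, dtype=float)
    processed = np.asarray(processed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - processed)**2) / np.sum(original**2))
```

Lower PRD means the cleaned signal stays closer to the reference, which is why it complements PSD inspection when ranking artifact-removal algorithms.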
34. Assessment of multi-modal magnetic resonance imaging for glioma based on a deep learning reconstruction approach with the denoising method.
- Author
-
Sun, Jun, Xu, Siyao, Guo, Yiding, Ding, Jinli, Zhuo, Zhizheng, Zhou, Dabiao, and Liu, Yaou
- Subjects
- *
MAGNETIC resonance imaging , *DEEP learning , *GRAY matter (Nerve tissue) , *SIGNAL-to-noise ratio , *GLIOMAS - Abstract
Background: Deep learning reconstruction (DLR) with denoising has been reported as potentially improving the image quality of magnetic resonance imaging (MRI). Multi-modal MRI is a critical non-invasive method for tumor detection, surgery planning, and prognosis assessment; however, the DLR on multi-modal glioma imaging has not been assessed. Purpose: To assess multi-modal MRI for glioma based on the DLR method. Material and Methods: We assessed multi-modal images of 107 glioma patients (49 preoperative and 58 postoperative). All the images were reconstructed with both DLR and conventional reconstruction methods, encompassing T1-weighted (T1W), contrast-enhanced T1W (CE-T1), T2-weighted (T2W), and T2 fluid-attenuated inversion recovery (T2-FLAIR). The image quality was evaluated using signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and edge sharpness. Visual assessment and diagnostic assessment were performed blindly by neuroradiologists. Results: In contrast with conventionally reconstructed images, (residual) tumor SNR for all modalities and tumor to white/gray matter CNR from DLR images were higher in T1W, T2W, and T2-FLAIR sequences. The visual assessment of DLR images demonstrated the superior visualization of tumor in T2W, edema in T2-FLAIR, enhanced tumor and necrosis part in CE-T1, and fewer artifacts in all modalities. Improved diagnostic efficiency and confidence were observed for preoperative cases with DLR images. Conclusion: DLR of multi-modal MRI reconstruction prototype for glioma has demonstrated significant improvements in image quality. Moreover, it increased diagnostic efficiency and confidence of glioma. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Satellite Remote Sensing Grayscale Image Colorization Based on Denoising Generative Adversarial Network.
- Author
-
Fu, Qing, Xia, Siyuan, Kang, Yifei, Sun, Mingwei, and Tan, Kai
- Subjects
- *
GENERATIVE adversarial networks , *GRAYSCALE model , *REMOTE sensing , *IMAGE denoising - Abstract
Aiming to solve the challenges of difficult training, mode collapse in current generative adversarial networks (GANs), and the efficiency issue of requiring multiple samples for Denoising Diffusion Probabilistic Models (DDPM), this paper proposes a satellite remote sensing grayscale image colorization method using a denoising GAN. Firstly, a denoising optimization method based on U-ViT for the generator network is introduced to further enhance the model's generation capability, along with two optimization strategies to significantly reduce the computational burden. Secondly, the discriminator network is optimized by proposing a feature statistical discrimination network, which imposes fewer constraints on the generator network. Finally, grayscale image colorization comparative experiments are conducted on three real satellite remote sensing grayscale image datasets. The results compared with existing typical colorization methods demonstrate that the proposed method can generate color images of higher quality, achieving better performance in both subjective human visual perception and objective metric evaluation. Experiments in building object detection show that the generated color images can improve target detection performance compared to the original grayscale images, demonstrating significant practical value. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Deep learning-based multi-frequency denoising for myocardial perfusion SPECT.
- Author
-
Du, Yu, Sun, Jingzhang, Li, Chien-Ying, Yang, Bang-Hung, Wu, Tung-Hsin, and Mok, Greta S. P.
- Subjects
- *
GENERATIVE adversarial networks , *COMPUTED tomography , *DEEP learning , *FOURIER transforms , *PHYSICAL mobility , *SINGLE-photon emission computed tomography , *IMAGE denoising - Abstract
Background: Deep learning (DL)-based denoising has been proven to improve image quality and quantitation accuracy of low dose (LD) SPECT. However, conventional DL-based methods used SPECT images with mixed frequency components. This work aims to develop an integrated multi-frequency denoising network to further enhance LD myocardial perfusion (MP) SPECT denoising. Methods: Fifty anonymized patients who underwent routine 99mTc-sestamibi stress SPECT/CT scans were retrospectively recruited. Three LD datasets were obtained by reducing the 10 s acquisition time of full dose (FD) SPECT to 5, 2, and 1 s per projection based on the list mode data for a total of 60 projections. FD and LD projections were Fourier transformed to magnitude and phase images, which were then separated into two or three frequency bands. Each frequency band was then inverse Fourier transformed back to the image domain. We propose a 3D integrated attention-guided multi-frequency conditional generative adversarial network (AttMFGAN) and compare it with AttGAN and with separate AttGANs for multi-frequency band denoising (AttGAN-MF). The multi-frequency FD and LD projections of 35, 5 and 10 patients were paired for training, validation and testing. The LD projections to be tested were separated into multi-frequency components and input to the corresponding networks to get the denoised components, which were summed to get the final denoised projections. Voxel-based error indices were measured on the cardiac region on the reconstructed images. The perfusion defect size (PDS) was also analyzed. Results: AttGAN-MF and AttMFGAN have superior performance on all physical and clinical indices as compared to conventional AttGAN. The integrated AttMFGAN is better than AttGAN-MF. Multi-frequency denoising with two frequency bands generally gives better results than the corresponding three-frequency-band methods. Conclusions: AttGAN-MF and AttMFGAN are promising to further improve LD MP SPECT denoising. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
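The frequency-band separation at the heart of this pipeline, transform, split into bands, denoise each band, and sum, can be sketched for a 2-D image with a hard radial low-pass mask. The cutoff and the use of a single hard mask are illustrative assumptions; the paper splits the magnitude and phase of SPECT projections and feeds each band to its own network.

```python
import numpy as np

def split_frequency_bands(img, cutoff):
    """Split an image into low- and high-frequency components with a
    radial mask in the 2-D Fourier domain; the parts sum back to the image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    H, W = img.shape
    y, x = np.ogrid[:H, :W]
    r = np.hypot(y - H // 2, x - W // 2)      # distance from the DC component
    low_mask = (r <= cutoff).astype(float)
    low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(F * (1.0 - low_mask))).real
    return low, high
```

Because the masks partition the spectrum exactly, summing the per-band outputs is lossless before denoising; any residual difference after denoising is attributable to the networks, not the decomposition.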
37. Enhancing clinical diagnostics: novel denoising methodology for brain MRI with adaptive masking and modified non-local block.
- Author
-
Velayudham, A, Kumar, K Madhan, and Priya M S, Krishna
- Abstract
Medical image denoising has been a subject of extensive research, with various techniques employed to enhance image quality and facilitate more accurate diagnostics. Denoising methods have achieved impressive results but struggle to strike a balance between noise reduction and edge preservation, which limits their applicability in various domains. This paper presents a novel methodology that integrates an adaptive masking strategy, a transformer-based U-Net prior generator, an edge enhancement module, and a modified non-local block (MNLB) for denoising brain MRI clinical images. The adaptive masking strategy preserves vital information through dynamic mask generation, while the prior generator regenerates high-quality prior MRI images by capturing hierarchical features. Finally, these images are fed to the edge enhancement module to boost structural information by maintaining crucial edge details, and the MNLB produces the denoised output by deriving non-local contextual information. A comprehensive experimental assessment is performed on two datasets, a brain tumor MRI dataset and an Alzheimer's dataset, across diverse metrics and compared with conventional denoising approaches. The proposed denoising methodology achieves a PSNR of 40.965 and SSIM of 0.938 on the Alzheimer's dataset, and a PSNR of 40.002 and SSIM of 0.926 on the brain tumor MRI dataset, at a noise level of 50%, demonstrating its superiority in noise minimization. Furthermore, the impact of different masking ratios on denoising performance is analyzed, revealing that the proposed method achieves a PSNR of 40.965, SSIM of 0.938, MAE of 5.847, and MSE of 3.672 at a masking ratio of 60%. Moreover, the findings pave the way for the advancement of clinical image processing, facilitating precise detection of tumors in clinical MRI images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
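The PSNR figures quoted in abstracts like the one above follow directly from the mean squared error. A minimal sketch, assuming images with a given peak value (the exact data range the authors used is not stated):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 1.0          # every pixel off by 1, so MSE = 1
value = psnr(ref, noisy)   # 10 * log10(255^2) ≈ 48.13 dB
```

Higher PSNR means less residual error; a jump from the high 30s to around 41 dB, as reported above, corresponds to roughly halving the RMS error.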
38. Safety early-warning classification of in-service bridges based on joint analysis of multiple data processing methods.
- Author
-
马耀华
- Abstract
Copyright of Fly Ash Comprehensive Utilization is the property of Hebei Fly Ash Comprehensive Utilization Magazine Co., Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
39. Radar signal modulation recognition method based on synchroextracting transform denoising.
- Author
-
邓志安, 王治国, 王盛鳌, and 司伟建
- Subjects
CONVOLUTIONAL neural networks ,SIGNAL-to-noise ratio ,VITERBI decoding ,IMAGE denoising ,FEATURE extraction ,TIME-frequency analysis - Abstract
Copyright of Systems Engineering & Electronics is the property of Journal of Systems Engineering & Electronics Editorial Department and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
40. Diffusion tensor brain imaging at 0.55T: A feasibility study.
- Author
-
Kung, Hao‐Ting, Cui, Sophia X., Kaplan, Jonas T., Joshi, Anand A., Leahy, Richard M., Nayak, Krishna S., and Haldar, Justin P.
- Subjects
DIFFUSION tensor imaging ,DIFFUSION magnetic resonance imaging ,FEASIBILITY studies ,SIGNAL-to-noise ratio ,SCANNING systems - Abstract
Purpose: To investigate the feasibility of diffusion tensor brain imaging at 0.55T with comparisons against 3T. Methods: Diffusion tensor imaging data with 2 mm isotropic resolution was acquired on a cohort of five healthy subjects using both 0.55T and 3T scanners. The signal‐to‐noise ratio (SNR) of the 0.55T data was improved using a previous SNR‐enhancing joint reconstruction method that jointly reconstructs the entire set of diffusion weighted images from k‐space using shared‐edge constraints. Quantitative diffusion tensor parameters were estimated and compared across field strengths. We also performed a test–retest assessment of repeatability at each field strength. Results: After applying SNR‐enhancing joint reconstruction, the diffusion tensor parameters obtained from 0.55T data were strongly correlated (R² ≥ 0.70) with those obtained from 3T data. Test–retest analysis showed that SNR‐enhancing reconstruction improved the repeatability of the 0.55T diffusion tensor parameters. Conclusion: High‐resolution in vivo diffusion MRI of the human brain is feasible at 0.55T when appropriate noise‐mitigation strategies are applied. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Better electrobiological markers and an improved automated diagnostic classifier for schizophrenia—based on a new EEG effective information estimation framework.
- Author
-
Jing, Tianyu, Wang, Jiao, Guo, Zhifen, Ma, Fengbin, Xu, Xindong, and Fu, Longyue
- Subjects
PSYCHIATRIC diagnosis ,SIGNAL-to-noise ratio ,AUTOMATIC classification ,SIGNAL denoising ,MENTAL illness - Abstract
Advances in AI techniques have fueled research on using EEG data for psychiatric disorder diagnosis. Despite EEG's cost-effectiveness and high temporal resolution, its low Signal-to-Noise Ratio (SNR) hampers critical marker extraction and model improvement, while denoising techniques can discard effective information from the EEG. The aim of this study is to employ AI methods for processing raw EEG data, with two primary objectives: first, to acquire more reliable markers for schizophrenia, and second, to construct a superior automatic classifier for schizophrenia. To remove noise while retaining as much task-related (classification-relevant) effective information as possible, we introduce an Effective Information Estimation Framework (EIEF) based on three key principles: a task-centered approach, leveraging 1D-CNNs' test metrics to gauge the proportion of effective information, and feedback. We establish a theoretical foundation by integrating these principles into mathematical derivations that yield the EIEF model. In experiments, we established a pool of 66 denoising paradigms, with EIEF successfully identifying the optimal paradigms (on two datasets) for restoring effective information. Utilizing the processed dataset, we trained a 3D-CNN for automatic schizophrenia diagnosis, achieving outstanding test accuracies of 99.94 % on dataset 1 and 98.02 % on dataset 2 in subject-dependent evaluations, and accuracies of 89.85 % on dataset 1 and 98.02 % on dataset 2 in subject-independent evaluations. Additionally, we extracted 38 features from each channel of both the processed and raw datasets, revealing that 20.86 % (dataset 1) of the feature-distribution differences between patients and healthy controls changed significantly after applying the optimal paradigm. We thereby enhance model performance and extract more reliable electrobiological markers. 
These findings have promising implications for advancing the clinical diagnosis and pathological analysis of schizophrenia. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Interpolation-Filtering Method for Image Improvement in Digital Holography.
- Author
-
Kozlov, Alexander V., Cheremkhin, Pavel A., Svistunov, Andrey S., Rodin, Vladislav G., Starikov, Rostislav S., and Evtikhiev, Nikolay N.
- Subjects
SPECKLE interference ,DIGITAL image processing ,IMAGE reconstruction ,HOLOGRAPHY ,INTERPOLATION - Abstract
Digital holography is actively used for the characterization of objects and 3D scenes, tracking changes in medium parameters, 3D shape reconstruction, detection of micro-object positions, etc. To obtain high-quality images of objects, it is often necessary to register a set of holograms or to select a noise suppression method for specific experimental conditions. In this paper, we propose a method to improve filtering in digital holography. The method requires only a single hologram. It utilizes interpolation upscaling of the reconstructed image size, filtering (e.g., median, BM3D, or NLM), and interpolation back to the original image size. The method is validated on computer-generated and experimentally registered digital holograms. Interpolation method coefficients and filter parameters were analyzed. Compared with direct digital image filtering, quality improves by up to 1.4 times in speckle contrast on the registered holograms and by up to 17% and 29% in SSIM and NSTD values on the computer-generated holograms. The proposed method is convenient in practice since its realization requires only small changes to standard filters, improving the quality of the reconstructed image. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
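The upscale-filter-downscale pipeline described above can be sketched as follows. This is a toy NumPy version assuming nearest-neighbor interpolation and a 3×3 median filter; the paper also evaluates BM3D and NLM filters and proper interpolation kernels, which are omitted here.

```python
import numpy as np

def median3(img):
    """3x3 median filter with edge replication (pure NumPy, for clarity)."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def upscale_filter_downscale(img, factor=2):
    """Interpolate up (nearest-neighbor here), filter, interpolate back down."""
    up = np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
    filtered = median3(up)
    return filtered[::factor, ::factor]  # back to the original size

rng = np.random.default_rng(1)
clean = np.ones((32, 32))
noisy = clean + 0.5 * rng.standard_normal((32, 32))
out = upscale_filter_downscale(noisy)
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((out - clean) ** 2)
```

Filtering at the interpolated scale lets a fixed-size kernel act on a finer grid, which is the intuition behind the reported quality gains over filtering at the native resolution.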
43. A Microseismic Signal Denoising Algorithm Combining VMD and Wavelet Threshold Denoising Optimized by BWOA.
- Author
-
Rao, Dijun, Huang, Min, Shi, Xiuzhi, Yu, Zhi, and He, Zhengxiang
- Subjects
OPTIMIZATION algorithms ,HILBERT-Huang transform ,SIGNAL denoising ,STANDARD deviations ,PROBLEM solving ,SIGNAL-to-noise ratio - Abstract
The denoising of microseismic signals is a prerequisite for subsequent analysis and research. In this research, a new microseismic signal denoising algorithm called the Black Widow Optimization Algorithm (BWOA) optimized Variational Mode Decomposition (VMD) joint Wavelet Threshold Denoising (WTD) algorithm (BVW) is proposed. The BVW algorithm integrates VMD and WTD, both of which are optimized by BWOA. Specifically, the algorithm utilizes VMD to decompose the microseismic signal to be denoised into several Band-Limited Intrinsic Mode Functions (BLIMFs). Subsequently, the BLIMFs whose correlation coefficients with the signal exceed a threshold are selected as effective mode functions, and these are denoised using WTD to filter out residual low- and intermediate-frequency noise. Finally, the denoised microseismic signal is obtained through reconstruction. The ideal VMD and WTD parameter values are found by searching with BWOA, achieving the best VMD decomposition performance and avoiding the reliance on experience and the large workload otherwise required to apply the WTD algorithm. The outcomes of simulated experiments indicate that the algorithm achieves good denoising performance under noise of different intensities, significantly better than the commonly used VMD and Empirical Mode Decomposition (EMD) algorithms. The BVW algorithm filters noise more efficiently: the waveform after denoising is smoother, its amplitude is closest to the original signal, and the signal-to-noise ratio (SNR) and root mean square error after denoising are more satisfactory. 
A case study based on the Fankou Lead-Zinc Mine shows that, for microseismic signals with different intensities of noise monitored on-site, the BVW algorithm filters noise more efficiently than VMD and EMD, and the SNR after denoising is higher. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
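The wavelet-threshold-denoising (WTD) stage described above can be sketched with a single-level Haar transform and soft thresholding. This pure-NumPy sketch uses the common universal threshold with a median-based noise estimate; the VMD mode selection and the BWOA parameter search from the paper are omitted, and the threshold rule is an illustrative stand-in rather than the authors' optimized one.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero: sign(c) * max(|c| - t, 0)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(2)
n = 256
t_axis = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 4 * t_axis)
noisy = clean + 0.3 * rng.standard_normal(n)

a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745        # robust noise estimate
d_dn = soft_threshold(d, sigma * np.sqrt(2 * np.log(n)))  # universal threshold
denoised = haar_idwt(a, d_dn)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

In the full BVW pipeline this shrinkage would be applied per effective mode function, with the wavelet family, level, and threshold chosen by the BWOA search rather than fixed as here.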
44. PENGUIN: A rapid and efficient image preprocessing tool for multiplexed spatial proteomics
- Author
-
A.M. Sequeira, M.E. Ijsselsteijn, M. Rocha, and Noel F.C.C. de Miranda
- Subjects
Multiplex imaging ,Immunophenotyping ,Denoising ,Normalization ,Background subtraction ,Spatial omics ,Biotechnology ,TP248.13-248.65 - Abstract
Multiplex spatial proteomic methodologies can provide a unique perspective on the molecular and cellular composition of complex biological systems. Several challenges are associated with the analysis of imaging data, specifically with regard to normalizing signal-to-noise ratios across images and subtracting background noise. However, there is a lack of user-friendly solutions for denoising multiplex imaging data that can be applied to large datasets. We have developed PENGUIN – Percentile Normalization GUI Image deNoising: a straightforward image preprocessing tool for multiplexed spatial proteomics data. Compared to existing approaches, PENGUIN distinguishes itself by eliminating the need for manual annotation or machine learning models. It effectively preserves signal intensity differences while reducing noise, improving downstream tasks such as cell segmentation and phenotyping. PENGUIN's simplicity, speed, and intuitive interface, available as both a script and a Jupyter notebook, make it easy to adjust image processing parameters, providing a user-friendly experience. We further demonstrate the effectiveness of PENGUIN by comparing it to conventional image processing techniques and solutions tailored for multiplex imaging data.
- Published
- 2024
- Full Text
- View/download PDF
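PENGUIN's core preprocessing idea, percentile normalization followed by background suppression, can be sketched as below. The percentile and threshold values are illustrative defaults, not the tool's actual parameters; the real tool exposes such settings interactively per channel.

```python
import numpy as np

def percentile_normalize(channel, p=99.0, threshold=0.5):
    """Percentile-normalize one marker channel, then zero sub-threshold pixels.

    Intensities are scaled by the p-th percentile and clipped to [0, 1],
    so a few bright outliers do not dominate the dynamic range; values
    below `threshold` are treated as background and suppressed.
    """
    hi = np.percentile(channel, p)
    if hi <= 0:
        return np.zeros_like(channel, dtype=float)
    norm = np.clip(channel / hi, 0.0, 1.0)
    norm[norm < threshold] = 0.0  # suppress background noise
    return norm

rng = np.random.default_rng(3)
channel = rng.exponential(scale=10.0, size=(128, 128))  # mock marker channel
out = percentile_normalize(channel)
```

Because the scaling is per channel, relative intensity differences within a marker are preserved while channels become comparable across images, which is what downstream segmentation and phenotyping rely on.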
45. Wavelet-based vibration denoising for structural health monitoring
- Author
-
Ahmed Silik, Mohammad Noori, Zhishen Wu, Wael A. Altabey, Ji Dang, and Nabeel S. D. Farhan
- Subjects
Discrete wavelet transform ,Denoising ,Thresholding ,Structural responses ,Cities. Urban geography ,GF125 ,Technology - Abstract
Abstract In the context of civil engineering applications, vibration responses are complex, exhibiting variations in time and space and often containing nonlinearity and uncertainties not considered during data collection. These responses can also be contaminated by various sources, impacting damage identification processes. A significant challenge is how to effectively remove noise from these data to obtain reliable damage indicators that are insensitive to noise and environmental factors. This study proposes a new denoising algorithm based on the discrete wavelet transform (DWT) that addresses this issue. The suggested method denoises using distinct thresholds for positive and negative coefficient values in each band and applies the denoising process to both detail and trend components. The results prove the effectiveness of the technique and show that Bayes thresholding performs better than the other techniques in terms of the evaluated metrics, suggesting that it is a more accurate and robust thresholding technique than other common approaches.
- Published
- 2024
- Full Text
- View/download PDF
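The distinctive element of the method above, distinct thresholds for positive and negative coefficient values, can be sketched as an asymmetric soft-thresholding rule. This shows only the shrinkage step; the DWT band decomposition and the Bayes threshold selection the study favors are omitted.

```python
import numpy as np

def asymmetric_soft_threshold(coeffs, t_pos, t_neg):
    """Soft-threshold coefficients with separate thresholds per sign.

    Positive coefficients are shrunk by t_pos, negative ones by t_neg,
    allowing asymmetric noise distributions to be handled per band.
    """
    out = np.zeros_like(coeffs, dtype=float)
    pos = coeffs > 0
    neg = coeffs < 0
    out[pos] = np.maximum(coeffs[pos] - t_pos, 0.0)
    out[neg] = np.minimum(coeffs[neg] + t_neg, 0.0)
    return out

c = np.array([-3.0, -0.5, 0.0, 0.4, 2.0])
shrunk = asymmetric_soft_threshold(c, t_pos=0.5, t_neg=1.0)
# -> [-2.0, 0.0, 0.0, 0.0, 1.5]
```

With `t_pos == t_neg` this reduces to ordinary soft thresholding; the asymmetric form is what lets each band treat the two tails of its coefficient distribution differently.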
46. WiTUnet: A U-shaped architecture integrating CNN and Transformer for improved feature alignment and local information fusion
- Author
-
Bin Wang, Fei Deng, Peifan Jiang, Shuang Wang, Xiao Han, and Zhixuan Zhang
- Subjects
Low-dose computed tomography (LDCT) ,Denoising ,Convolutional Neural Network ,Transformer ,Medicine ,Science - Abstract
Abstract Low-dose computed tomography (LDCT) has emerged as the preferred technology for diagnostic medical imaging due to the potential health risks associated with X-ray radiation and conventional computed tomography (CT) techniques. While LDCT utilizes a lower radiation dose than standard CT, it results in increased image noise, which can impair the accuracy of diagnoses. To mitigate this issue, advanced deep learning-based LDCT denoising algorithms have been developed. These primarily utilize Convolutional Neural Networks (CNNs) or Transformer networks and often employ the Unet architecture, which enhances image detail by integrating feature maps from the encoder and decoder via skip connections. However, existing methods focus excessively on optimizing the encoder and decoder structures while overlooking potential enhancements to the Unet architecture itself. This oversight can be problematic due to significant differences in feature map characteristics between the encoder and decoder, where simple fusion strategies may hinder effective image reconstruction. In this paper, we introduce WiTUnet, a novel LDCT image denoising method that utilizes nested, dense skip pathways in place of traditional skip connections to improve feature integration. Additionally, to address the high computational demands of conventional Transformers on large images, WiTUnet incorporates a windowed Transformer structure that processes images in smaller, non-overlapping segments, significantly reducing computational load. Moreover, our approach includes a Local Image Perception Enhancement (LiPe) module within both the encoder and decoder to replace the standard multi-layer perceptron (MLP) in Transformers, thereby improving the capture and representation of local image features. 
Through extensive experimental comparisons, WiTUnet has demonstrated superior performance over existing methods in critical metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Root Mean Square Error (RMSE), significantly enhancing noise removal and image quality. The code is available on GitHub: https://github.com/woldier/WiTUNet
- Published
- 2024
- Full Text
- View/download PDF
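The windowed-Transformer idea above starts by partitioning a feature map into small non-overlapping windows so attention is computed per window rather than over the whole image. A minimal NumPy sketch of the partition and its exact inverse (the attention itself and the LiPe module are omitted):

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W, C) feature map into non-overlapping win x win windows,
    returning (num_windows, win*win, C) token sequences for per-window attention."""
    H, W, C = x.shape
    assert H % win == 0 and W % win == 0, "H and W must be divisible by win"
    x = x.reshape(H // win, win, W // win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, C)

def window_reverse(windows, win, H, W):
    """Exact inverse of window_partition."""
    C = windows.shape[-1]
    x = windows.reshape(H // win, W // win, win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

feat = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
wins = window_partition(feat, win=4)       # 4 windows of 16 tokens each
restored = window_reverse(wins, 4, 8, 8)   # lossless round trip
```

Per-window attention over `win*win` tokens costs O(win⁴) per window instead of O((H·W)²) for global attention, which is the computational saving the abstract refers to.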
47. Deep learning-based multi-frequency denoising for myocardial perfusion SPECT
- Author
-
Yu Du, Jingzhang Sun, Chien-Ying Li, Bang-Hung Yang, Tung-Hsin Wu, and Greta S. P. Mok
- Subjects
Deep learning ,Myocardial perfusion SPECT ,Generative adversarial network ,Denoising ,Medical physics. Medical radiology. Nuclear medicine ,R895-920 - Abstract
Abstract Background Deep learning (DL)-based denoising has been proven to improve image quality and quantitation accuracy of low dose (LD) SPECT. However, conventional DL-based methods used SPECT images with mixed frequency components. This work aims to develop an integrated multi-frequency denoising network to further enhance LD myocardial perfusion (MP) SPECT denoising. Methods Fifty anonymized patients who underwent routine 99mTc-sestamibi stress SPECT/CT scans were retrospectively recruited. Three LD datasets were obtained by reducing the 10 s acquisition time of full dose (FD) SPECT to 5, 2 and 1 s per projection based on the list mode data for a total of 60 projections. FD and LD projections were Fourier transformed to magnitude and phase images, which were then separated into two or three frequency bands. Each frequency band was then inverse Fourier transformed back to the image domain. We proposed a 3D integrated attention-guided multi-frequency conditional generative adversarial network (AttMFGAN) and compared it with AttGAN and with separate AttGANs for multi-frequency band denoising (AttGAN-MF). The multi-frequency FD and LD projections of 35, 5 and 10 patients were paired for training, validation and testing. The LD projections to be tested were separated into multi-frequency components and input to the corresponding networks to obtain the denoised components, which were summed to form the final denoised projections. Voxel-based error indices were measured on the cardiac region of the reconstructed images. The perfusion defect size (PDS) was also analyzed. Results AttGAN-MF and AttMFGAN show superior performance on all physical and clinical indices compared to the conventional AttGAN. The integrated AttMFGAN is better than AttGAN-MF. Multi-frequency denoising with two frequency bands generally yields better results than the corresponding three-band methods. Conclusions AttGAN-MF and AttMFGAN are promising approaches to further improve LD MP SPECT denoising.
- Published
- 2024
- Full Text
- View/download PDF
48. AI algorithmically-enhanced motion suppression simulating an osteochondral defect in a young child
- Author
-
Gregory A. Aird, MD, Paul G. Thacker, MD, MHA, and Kimberly K. Amrami, MD
- Subjects
Denoising ,Artificial intelligence ,MRI ,Medical physics. Medical radiology. Nuclear medicine ,R895-920 - Abstract
Artificial intelligence (AI) in radiology has rapidly expanded and stands to allow more accurate diagnosis, quicker interpretations, easier workflows, and improved image quality. However, with the superior image quality produced with the help of AI algorithms, one could begin to discount or even eliminate the review of non-algorithmically enhanced images. At least currently, these images remain important. This case report demonstrates a unique anomaly simulating disease that resulted from AI-enhanced motion suppression. On the original images, patient motion and an atypical linear motion artifact are obvious. However, the images reproduced using our AI motion-artifact suppression algorithm suppressed nearly all (but not all) of the motion artifact, resulting in what appeared to be an osteochondral lesion in a child's knee. This case illustrates the necessity for the interpreting radiologist to review both the original acquisitions and the AI-enhanced images, at least for the time being.
- Published
- 2024
- Full Text
- View/download PDF
49. Joint learning of nonlinear representation and projection for fast constrained MRSI reconstruction.
- Author
-
Li, Yahang, Ruhm, Loreen, Wang, Zepeng, Zhao, Ruiyang, Anderson, Aaron, Arnold, Paul, Huesmann, Graham, Henning, Anke, and Lam, Fan
- Abstract
Purpose: To develop and evaluate a novel method for computationally efficient reconstruction from noisy MR spectroscopic imaging (MRSI) data. Methods: The proposed method features (a) a novel strategy that jointly learns a nonlinear low‐dimensional representation of high‐dimensional spectroscopic signals and a neural‐network‐based projector to recover the low‐dimensional embeddings from noisy/limited data; (b) a formulation that integrates the forward encoding model, a regularizer exploiting the learned representation, and a complementary spatial constraint; and (c) a highly efficient algorithm enabled by the learned projector within an alternating direction method of multipliers (ADMM) framework, circumventing the computationally expensive network inversion subproblem. Results: The proposed method has been evaluated using simulations as well as in vivo ¹H and ³¹P MRSI data, demonstrating improved performance over state‐of‐the‐art methods, with about 6× fewer averages needed than standard Fourier reconstruction for similar metabolite estimation variances and up to 100× reduction in processing time compared to a prior neural network constrained reconstruction method. Computational and theoretical analyses were performed to offer further insights into the effectiveness of the proposed method. Conclusion: A novel method was developed for fast, high‐SNR spatiospectral reconstruction from noisy MRSI data. We expect our method to be useful for enhancing the quality of MRSI or other high‐dimensional spatiospectral imaging data. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
50. Detrended partial cross-correlation analysis-random matrix theory for denoising network construction.
- Author
-
Wang, Fang, Zhang, Zehui, Wang, Min, and Ling, Guang
- Abstract
A denoised complex network framework employing a detrended partial cross-correlation analysis-based coefficient for achieving the intrinsic scale-dependent correlations between each pair of variables is developed to explore the interrelatedness of multiple nonstationary variables in the real-world. In doing this, we start with introducing the detrended partial cross-correlation coefficient into random matrix theory, and executing a denoising process through correlation matrix reconfiguration, which is followed by utilizing the denoised correlation matrix to construct a planar maximally filtered graph network. It allows us assess the interactions among complex objects more accurately. The effectiveness of our proposed method is validated through the numerical experiments simulating the eigenvalue distribution, and the results show that our method accurately locates the maximum eigenvalue at a specific scale, but existing methods fail to achieve. As a practical application, we also apply the proposed denoising network framework to investigate the co-movement behavior of PM 2.5 air pollution of North China and the linkage of commodity futures prices in China. The results show that the denoising process significantly enhances the information content of the network, revealing several interesting insights regarding network properties. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
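The random-matrix-theory denoising step described above can be sketched generically: eigenvalues of the correlation matrix that fall inside the Marchenko-Pastur bulk are treated as noise and flattened before reconstruction. This NumPy sketch uses a plain Pearson correlation matrix for illustration; the paper instead builds the matrix from DPCCA-based coefficients and then constructs a planar maximally filtered graph, neither of which is shown here.

```python
import numpy as np

def rmt_denoise_correlation(corr, n_obs):
    """Denoise a correlation matrix via Marchenko-Pastur eigenvalue clipping.

    Eigenvalues below the MP upper edge (1 + sqrt(p/n))^2 are consistent
    with pure noise; they are replaced by their average (preserving the
    trace) and the matrix is rebuilt and rescaled to unit diagonal.
    """
    p = corr.shape[0]
    q = p / n_obs
    lam_max = (1 + np.sqrt(q)) ** 2
    vals, vecs = np.linalg.eigh(corr)
    noise = vals < lam_max
    if noise.any():
        vals = vals.copy()
        vals[noise] = vals[noise].mean()
    denoised = (vecs * vals) @ vecs.T
    d = np.sqrt(np.diag(denoised))
    return denoised / np.outer(d, d)  # back to a correlation matrix

rng = np.random.default_rng(4)
n_obs, p = 500, 20
data = rng.standard_normal((n_obs, p))
data[:, 1] += 0.8 * data[:, 0]  # inject one genuine correlation
corr = np.corrcoef(data, rowvar=False)
clean = rmt_denoise_correlation(corr, n_obs)
```

The genuinely correlated pair survives the clipping (its eigenvalue sits above the MP edge), while spurious sample correlations among the remaining variables are flattened, which is what makes the subsequent network construction more informative.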