303 results for "Jong Chul Ye"
Search Results
2. Generative Models for Inverse Imaging Problems: From mathematical foundations to physics-driven applications
- Author
-
Zhizhen Zhao, Jong Chul Ye, and Yoram Bresler
- Subjects
Applied Mathematics, Signal Processing, Electrical and Electronic Engineering
- Published
- 2023
3. Task-Agnostic Vision Transformer for Distributed Learning of Image Processing
- Author
-
Boah Kim, Jong Chul Ye, and Jeongsol Kim
- Subjects
Computer Graphics and Computer-Aided Design, Software
- Published
- 2023
4. Multi-Scale Hybrid Vision Transformer for Learning Gastric Histology: AI-Based Decision Support System for Gastric Cancer Treatment
- Author
-
Yujin Oh, Go Eun Bae, Kyung-Hee Kim, Min-Kyung Yeo, and Jong Chul Ye
- Subjects
Health Information Management, Health Informatics, Electrical and Electronic Engineering, Computer Science Applications
- Published
- 2023
5. Enhanced Diagnosis of Plaque Erosion by Deep Learning in Patients With Acute Coronary Syndromes
- Author
-
Sangjoon Park, Makoto Araki, Akihiro Nakajima, Hang Lee, Valentin Fuster, Jong Chul Ye, and Ik-Kyung Jang
- Subjects
Deep Learning, Cross-Sectional Studies, Treatment Outcome, Humans, Acute Coronary Syndrome, Cardiology and Cardiovascular Medicine, Plaque, Atherosclerotic, Tomography, Optical Coherence
- Abstract
Acute coronary syndromes caused by plaque erosion might be managed conservatively without stenting. Currently, the diagnosis of plaque erosion requires expertise in optical coherence tomographic (OCT) image interpretation. In addition, the current deep learning (DL) approaches for OCT image interpretation are based on a single frame, without integrating the information from adjacent frames. The aim of this study was to develop a novel DL model to facilitate an accurate diagnosis of plaque erosion. A novel "Transformer"-based DL model was developed that integrates information from adjacent frames, emulating the cardiologists who review consecutive OCT frames to make a diagnosis, and was compared with a standard convolutional neural network (CNN) DL model. A total of 237,021 cross-sectional OCT images from 581 patients were used for training and internal validation, and 65,394 images from 292 patients from another dataset were used for external validation. Model performances were evaluated using the area under the receiver-operating characteristic curve (AUC). For the frame-level diagnosis of plaque erosion, the Transformer model outperformed the CNN model, with an AUC of 0.94 compared with 0.85 in the external validation. For the lesion-level diagnosis, the Transformer model showed improved diagnostic performance compared with the CNN model, with an AUC of 0.91 compared with 0.84 in the external validation. This newly developed Transformer model will help cardiologists diagnose plaque erosion with high accuracy in patients with acute coronary syndromes.
- Published
- 2022
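The frame-integration idea in the abstract above — letting the prediction for one OCT frame borrow evidence from its neighbours, as a cardiologist does when scrolling through consecutive frames — can be illustrated with plain self-attention over per-frame feature vectors. This is a minimal NumPy sketch; the dimensions, weights, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def frame_attention(frames, Wq, Wk, Wv):
    """Single-head self-attention across consecutive frame embeddings.

    frames: (T, d) per-frame feature vectors (e.g. from a CNN/ViT backbone).
    Returns (T, d) features in which each frame has attended to all others,
    so a frame-level prediction can use context from adjacent frames.
    """
    Q, K, V = frames @ Wq, frames @ Wk, frames @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])   # (T, T) frame-to-frame affinities
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
T, d = 5, 8                                  # 5 adjacent frames, 8-dim embeddings
frames = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = frame_attention(frames, Wq, Wk, Wv)
print(out.shape)                             # one context-fused vector per frame
```

In the paper's setting a classification head would then score each fused vector; here the point is only the (T, T) attention pattern that mixes information across frames.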
6. Physics-Driven Machine Learning for Computational Imaging: Part 2 [From the Guest Editors]
- Author
-
Bihan Wen, Saiprasad Ravishankar, Zhizhen Zhao, Raja Giryes, and Jong Chul Ye
- Subjects
Applied Mathematics, Signal Processing, Electrical and Electronic Engineering
- Published
- 2023
7. Physics-Driven Machine Learning for Computational Imaging [From the Guest Editor]
- Author
-
Bihan Wen, Saiprasad Ravishankar, Zhizhen Zhao, Raja Giryes, and Jong Chul Ye
- Subjects
Applied Mathematics, Signal Processing, Electrical and Electronic Engineering
- Published
- 2023
8. Low-Dose Sparse-View HAADF-STEM-EDX Tomography of Nanocrystals Using Unsupervised Deep Learning
- Author
-
Eunju Cha, Hyungjin Chung, Jaeduck Jang, Junho Lee, Eunha Lee, and Jong Chul Ye
- Subjects
Microscopy, Electron, Scanning Transmission, Electron Microscope Tomography, Deep Learning, General Engineering, Nanoparticles, General Physics and Astronomy, General Materials Science, Tomography, X-Ray Computed
- Abstract
High-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) can be acquired together with energy-dispersive X-ray (EDX) spectroscopy to give complementary information on the nanoparticles being imaged. Recent deep learning approaches show potential for accurate 3D tomographic reconstruction in these applications, but a large number of high-quality electron micrographs are usually required for supervised training, which may be difficult to collect due to electron-beam damage to the particles. To overcome these limitations and enable tomographic reconstruction even in low-dose, sparse-view conditions, here we present an unsupervised deep learning method for HAADF-STEM-EDX tomography. Specifically, to improve EDX image quality under low-dose conditions, a HAADF-constrained unsupervised denoising approach is proposed. Additionally, to enable extreme sparse-view tomographic reconstruction, an unsupervised view-enrichment scheme is proposed in the projection domain. Extensive experiments with different types of quantum dots show that the proposed method offers a high-quality reconstruction even with only
- Published
- 2022
9. Optical coherence tomography in coronary atherosclerosis assessment and intervention
- Author
-
Makoto Araki, Seung-Jung Park, Harold L. Dauerman, Shiro Uemura, Jung-Sun Kim, Carlo Di Mario, Thomas W. Johnson, Giulio Guagliumi, Adnan Kastrati, Michael Joner, Niels Ramsing Holm, Fernando Alfonso, William Wijns, Tom Adriaenssens, Holger Nef, Gilles Rioufol, Nicolas Amabile, Geraud Souteyrand, Nicolas Meneveau, Edouard Gerbaud, Maksymilian P. Opolski, Nieves Gonzalo, Guillermo J. Tearney, Brett Bouma, Aaron D. Aguirre, Gary S. Mintz, Gregg W. Stone, Christos V. Bourantas, Lorenz Räber, Sebastiano Gili, Kyoichi Mizuno, Shigeki Kimura, Toshiro Shinke, Myeong-Ki Hong, Yangsoo Jang, Jin Man Cho, Bryan P. Yan, Italo Porto, Giampaolo Niccoli, Rocco A. Montone, Vikas Thondapu, Michail I. Papafaklis, Lampros K. Michalis, Harmony Reynolds, Jacqueline Saw, Peter Libby, Giora Weisz, Mario Iannaccone, Tommaso Gori, Konstantinos Toutouzas, Taishi Yonetsu, Yoshiyasu Minami, Masamichi Takano, O. Christopher Raffel, Osamu Kurihara, Tsunenari Soeda, Tomoyo Sugiyama, Hyung Oh Kim, Tetsumin Lee, Takumi Higuma, Akihiro Nakajima, Erika Yamamoto, Krzysztof L. Bryniarski, Luca Di Vito, Rocco Vergallo, Francesco Fracassi, Michele Russo, Lena M. Seegers, Iris McNulty, Sangjoon Park, Marc Feldman, Javier Escaned, Francesco Prati, Eloisa Arbustini, Fausto J. Pinto, Ron Waksman, Hector M. Garcia-Garcia, Akiko Maehara, Ziad Ali, Aloke V. Finn, Renu Virmani, Annapoorna S. Kini, Joost Daemen, Teruyoshi Kume, Kiyoshi Hibi, Atsushi Tanaka, Takashi Akasaka, Takashi Kubo, Satoshi Yasuda, Kevin Croce, Juan F. Granada, Amir Lerman, Abhiram Prasad, Evelyn Regar, Yoshihiko Saito, Mullasari Ajit Sankardas, Vijayakumar Subban, Neil J. Weissman, Yundai Chen, Bo Yu, Stephen J. Nicholls, Peter Barlis, Nick E. J. 
West, Armin Arbab-Zadeh, Jong Chul Ye, Jouke Dijkstra, Hang Lee, Jagat Narula, Filippo Crea, Sunao Nakamura, Tsunekazu Kakuta, James Fujimoto, Valentin Fuster, Ik-Kyung Jang, CarMeN, laboratoire, Massachusetts General Hospital [Boston, MA, USA], Harvard Medical School [Boston] (HMS), Asan Medical Center [Seoul, South Korea] (AMC), University of Vermont [Burlington], Kawasaki Medical School [Okayama, Japan] (KMS), Yonsei University College of Medicine [Seoul, South Korea] (YUCM), Azienda Ospedaliero-Universitaria Careggi [Firenze] (AOUC), University Hospitals Bristol, Azienda Ospedaliera Ospedale Papa Giovanni XXIII [Bergamo, Italy], Technische Universität München = Technical University of Munich (TUM), Munich Heart Alliance [Munich, Allemagne] (MHA), German Heart Center = Deutsches Herzzentrum München [Munich, Germany] (GHC), Aarhus University Hospital [Skejby, Denmark] (AUH), Hospital Universitario de La Princesa, National University of Ireland [Galway] (NUI Galway), University Hospitals Leuven [Leuven], Technische Hochschule Mittelhessen - University of Applied Sciences [Giessen] (THM), Cardiovasculaire, métabolisme, diabétologie et nutrition (CarMeN), Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Hospices Civils de Lyon (HCL)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE), Hospices Civils de Lyon (HCL), Université de Lyon, Institut Mutualiste de Montsouris (IMM), CHU Clermont-Ferrand, Centre Hospitalier Régional Universitaire de Besançon (CHRU Besançon), Centre de recherche Cardio-Thoracique de Bordeaux [Bordeaux] (CRCTB), Université Bordeaux Segalen - Bordeaux 2-CHU Bordeaux [Bordeaux]-Institut National de la Santé et de la Recherche Médicale (INSERM), National Institute of Cardiology [Warsaw, Poland] (NIC), Instituto de Investigación Sanitaria del Hospital Clínico San Carlos [Madrid, Spain] (IdISSC), Massachusetts 
General Hospital [Boston], Cardiovascular Research Foundation [New York, NY, USA] (CRF), Icahn School of Medicine at Mount Sinai [New York] (MSSM), Barts Health NHS Trust [London, UK], Queen Mary University of London (QMUL), Bern University Hospital [Berne] (Inselspital), Centro Cardiologico Monzino [Milan, Italy] (2CM), Istituti di Ricovero e Cura a Carattere Scientifico (IRCCS), Mitsukoshi Health and Welfare Foundation [Tokyo, Japan] (MHWF), Yokohama Minami Kyosai Hospital [Kanagawa, Japan] (YMKH), Showa University Hospital [Tokyo, Japan] (SUH), Kyung Hee University [Seoul, South Korea] (KHU), The Chinese University of Hong Kong [Hong Kong], Università degli studi di Genova = University of Genoa (UniGe), Università degli studi di Parma = University of Parma (UNIPR), Catholic University of the Sacred Heart [Rome, Italy] (CUSH), University Hospital [Ioannina, Greece] (UH), New York University School of Medicine (NYU Grossman School of Medicine), Vancouver General Hospital [Vancouver, British Columbia, Canada] (VGH), University of British Columbia (UBC), Brigham and Women’s Hospital [Boston, MA], New York Presbyterian Hospital, Columbia University Medical Center (CUMC), Columbia University [New York], Ospedale San Giovanni Bosco [Turin, Italy] (OSGB), Johannes Gutenberg - Universität Mainz = Johannes Gutenberg University (JGU), National and Kapodistrian University of Athens (NKUA), Tokyo Medical and Dental University [Japan] (TMDU), Kitasato University, Nippon Medical School Chiba Hokusoh Hospital [Chiba, Japan] (NMSC2H), The Prince Charles Hospital, Nara Medical University [Nara, Japan] (NMU), Tsuchiura Kyodo General Hospital [Ibaraki, Japan] (TKGH), Japanese Red Cross Musashino Hospital [Tokyo], St. 
Marianna University School of Medicine [Kanagawa, Japan], Kyoto University Graduate School of Medicine [Kyoto, Japan] (KUGSM), Jagiellonian University - Medical College (JUMC), Uniwersytet Jagielloński w Krakowie = Jagiellonian University (UJ), Mazzoni Hospital [Ascoli Piceno, Italy] (MH), Korea Advanced Institute of Science and Technology (KAIST), University of Texas Health Science Center, The University of Texas Health Science Center at Houston (UTHealth), Saint Camillus International University of Health Sciences [Rome, Italy] (SCIUHS), Fondazione IRCCS Policlinico San Matteo [Pavia], Università degli Studi di Pavia = University of Pavia (UNIPV), Universidade de Lisboa = University of Lisbon (ULISBOA), MedStar Washington Hospital Center [Washington, DC, USA] (MedStar WHC), CV Path Institute [Gaithersburg, MD, USA] (CV-PI), Erasmus University Medical Center [Rotterdam] (Erasmus MC), Yokohama City University (YCU), Wakayama University, Tohoku University [Sendai], Mayo Clinic [Rochester, MN, USA], Mayo Clinic [Rochester], University hospital of Zurich [Zurich], Gifu University Graduate School of Medicine, Madras Medical Mission [Chennai, India] (3M), MedStar Health Research Institute [Washington, DC, USA] (MedStar-HRI), Chinese People's Liberation Army General Hospital [Beijing, China] (CPLAGH), Harbin Medical University [China] (HMU), Monash university, University of Melbourne, Royal Papworth Hospital [Cambridge, UK] (RPH), Johns Hopkins University (JHU), Leiden University Medical Center (LUMC), The Open University of Japan [Chiba] (OUJ), and Massachusetts Institute of Technology (MIT)
- Subjects
[SDV] Life Sciences [q-bio], Cardiology and Cardiovascular Medicine
- Abstract
Optical coherence tomography (OCT) has been widely adopted in research on coronary atherosclerosis and used clinically to optimize percutaneous coronary intervention. In this Review, Jang and colleagues summarize this rapidly progressing field, with the aim of standardizing the use of OCT in coronary atherosclerosis. Since optical coherence tomography (OCT) was first performed in humans two decades ago, this imaging modality has been widely adopted in research on coronary atherosclerosis and adopted clinically for the optimization of percutaneous coronary intervention. In the past 10 years, substantial advances have been made in the understanding of in vivo vascular biology using OCT. Identification by OCT of culprit plaque pathology could potentially lead to a major shift in the management of patients with acute coronary syndromes. Detection by OCT of healed coronary plaque has been important in our understanding of the mechanisms involved in plaque destabilization and healing with the rapid progression of atherosclerosis. Accurate detection by OCT of sequelae from percutaneous coronary interventions that might be missed by angiography could improve clinical outcomes. In addition, OCT has become an essential diagnostic modality for myocardial infarction with non-obstructive coronary arteries. Insight into neoatherosclerosis from OCT could improve our understanding of the mechanisms of very late stent thrombosis. The appropriate use of OCT depends on accurate interpretation and understanding of the clinical significance of OCT findings. In this Review, we summarize the state of the art in cardiac OCT and facilitate the uniform use of this modality in coronary atherosclerosis. Contributions have been made by clinicians and investigators worldwide with extensive experience in OCT, with the aim that this document will serve as a standard reference for future research and clinical application.
- Published
- 2022
10. Diagnosis of coronary layered plaque by deep learning
- Author
-
Makoto Araki, Sangjoon Park, Akihiro Nakajima, Hang Lee, Jong Chul Ye, and Ik-Kyung Jang
- Subjects
Multidisciplinary
- Abstract
Healed coronary plaques, morphologically characterized by a layered phenotype, are signs of previous plaque destabilization and healing. Recent optical coherence tomography (OCT) studies demonstrated that layered plaque is associated with higher levels of local and systemic inflammation and rapid plaque progression. However, the diagnosis of layered plaque needs expertise in OCT image analysis and is susceptible to inter-observer variability. We developed a deep learning (DL) model for an accurate diagnosis of layered plaque. A Visual Transformer (ViT)-based DL model that integrates information from adjacent frames emulating the cardiologists who review consecutive OCT frames to make a diagnosis was developed and compared with the standard convolutional neural network (CNN) model. A total of 237,021 cross-sectional OCT images from 581 patients collected from 8 sites were used for training and internal validation, and 65,394 images from 292 patients collected from another site were used for external validation. In the five-fold cross-validation, the ViT-based model provided better performance (area under the curve [AUC]: 0.860; 95% confidence interval [CI]: 0.855–0.866) than the standard CNN-based model (AUC: 0.799; 95% CI: 0.792–0.805). The ViT-based model (AUC: 0.845; 95% CI: 0.837–0.853) also surpassed the standard CNN-based model (AUC: 0.791; 95% CI: 0.782–0.800) in the external validation. The ViT-based DL model can accurately diagnose a layered plaque, which could help risk stratification for cardiac events.
- Published
- 2023
11. Unsupervised Deep Learning Methods for Biological Image Reconstruction and Enhancement: An overview from a signal processing perspective
- Author
-
Mehmet Akcakaya, Burhaneddin Yaman, Hyungjin Chung, and Jong Chul Ye
- Subjects
Applied Mathematics, Signal Processing, Electrical and Electronic Engineering, Article
- Abstract
Recently, deep learning approaches have become the main research frontier for biological image reconstruction and enhancement problems thanks to their high performance, along with their ultra-fast inference times. However, due to the difficulty of obtaining matched reference data for supervised learning, there has been increasing interest in unsupervised learning approaches that do not need paired reference data. In particular, self-supervised learning and generative models have been successfully used for various biological imaging applications. In this paper, we overview these approaches from a coherent perspective in the context of classical inverse problems, and discuss their applications to biological imaging, including electron, fluorescence and deconvolution microscopy, optical diffraction tomography and functional neuroimaging.
- Published
- 2022
12. Switchable and Tunable Deep Beamformer Using Adaptive Instance Normalization for Medical Ultrasound
- Author
-
Jong Chul Ye, Shujaat Khan, and Jaeyoung Huh
- Subjects
Flexibility (engineering), Normalization (statistics), Radiological and Ultrasound Technology, Phantoms, Imaging, Computer science, Deep learning, Phase (waves), Inference, Data Compression, Computer Science Applications, Image (mathematics), Image Processing, Computer-Assisted, Electronic engineering, Artificial intelligence, Electrical and Electronic Engineering, Medical ultrasound, Algorithms, Software, Ultrasonography, Generator (mathematics)
- Abstract
Recent proposals of deep learning-based beamformers for ultrasound imaging (US) have attracted significant attention as computationally efficient alternatives to adaptive and compressive beamformers. Moreover, deep beamformers are versatile in that image post-processing algorithms can be readily combined. Unfortunately, with the existing technology, a large number of beamformers need to be trained and stored for different probes, organs, depth ranges, operating frequencies, and desired target 'styles', demanding significant resources such as training data. To address this problem, here we propose a switchable and tunable deep beamformer that can switch between various types of outputs, such as DAS, MVBF, DMAS, and GCF, and also adjust noise-removal levels at the inference phase using a simple switch or tunable knob. This novel mechanism is implemented through Adaptive Instance Normalization (AdaIN) layers, so that distinct outputs can be generated by a single generator merely by changing the AdaIN codes. Experimental results using B-mode focused ultrasound confirm the flexibility and efficacy of the proposed method for various applications.
- Published
- 2022
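The switching mechanism described above — one generator, many output styles — hinges on Adaptive Instance Normalization: normalize each feature channel, then re-scale and re-shift it with a style-specific code. A minimal NumPy sketch of the AdaIN operation itself follows; the shapes and the two example codes are illustrative, not the paper's configuration.

```python
import numpy as np

def adain(x, gamma, beta, eps=1e-5):
    """Adaptive Instance Normalization on one feature map.

    x: (C, H, W) generator features; gamma, beta: (C,) style code.
    Each channel is normalized to zero mean / unit variance, then
    re-scaled, so swapping (gamma, beta) swaps the output 'style'
    without retraining or duplicating the generator.
    """
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    return gamma[:, None, None] * (x - mu) / (sigma + eps) + beta[:, None, None]

rng = np.random.default_rng(1)
feat = rng.standard_normal((4, 8, 8))
out_a = adain(feat, gamma=np.ones(4), beta=np.zeros(4))     # one AdaIN code
out_b = adain(feat, gamma=2 * np.ones(4), beta=np.ones(4))  # another code
print(out_a.shape)   # same features, two different output statistics
```

In the paper's beamformer the codes would correspond to output types (DAS-like, MVBF-like, ...) and noise-removal levels; here they are just two arbitrary settings showing the switch.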
13. Unsupervised CT Metal Artifact Learning Using Attention-Guided β-CycleGAN
- Author
-
Jong Chul Ye, Junghyun Lee, and Jawook Gu
- Subjects
Radiological and Ultrasound Technology, Computer science, Feature vector, Deep learning, Supervised learning, Pattern recognition, Iterative reconstruction, Computer Science Applications, Reduction (complexity), Metal Artifact, Metals, Feature (computer vision), Image Processing, Computer-Assisted, Unsupervised learning, Attention, Artificial intelligence, Electrical and Electronic Engineering, Artifacts, Tomography, X-Ray Computed, Software
- Abstract
Metal artifact reduction (MAR) is one of the most important research topics in computed tomography (CT). With the advance of deep learning approaches to image reconstruction, various deep learning methods have been suggested for metal artifact reduction, among which supervised learning methods are the most popular. However, matched pairs of metal-artifact-free and metal-artifact-corrupted images are difficult to obtain in real CT acquisition. Recently, a promising unsupervised learning approach for MAR was proposed using feature disentanglement, but the resulting network architecture is so complicated that it is difficult to handle large clinical images. To address this, here we propose a simple and effective unsupervised learning method for MAR. The proposed method is based on a novel β-CycleGAN architecture derived from optimal transport theory for appropriate feature-space disentanglement. Moreover, by adding convolutional block attention module (CBAM) layers in the generator, we show that the network can focus more on the metal artifacts so that they can be effectively removed. Experimental results confirm that we can achieve improved metal artifact reduction that preserves the detailed texture of the original image.
- Published
- 2021
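The CBAM layers mentioned above combine channel and spatial attention; the channel half alone already conveys the gating idea — squeeze each channel with average- and max-pooling, run both through a shared MLP, and use the result to reweight channels. A hedged NumPy sketch (the weight shapes and reduction ratio are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_channel_attention(x, W1, W2):
    """Channel-attention branch of CBAM.

    x: (C, H, W) features; W1: (C, C // r) and W2: (C // r, C) form the
    shared two-layer MLP (reduction ratio r). The sigmoid gate in (0, 1)
    reweights channels, letting the network emphasize artifact-relevant
    feature channels.
    """
    avg = x.mean(axis=(1, 2))                    # (C,) average-pooled squeeze
    mx = x.max(axis=(1, 2))                      # (C,) max-pooled squeeze
    mlp = lambda v: np.maximum(v @ W1, 0) @ W2   # shared MLP with ReLU
    gate = sigmoid(mlp(avg) + mlp(mx))           # (C,) per-channel gate
    return x * gate[:, None, None]

rng = np.random.default_rng(2)
x = rng.standard_normal((8, 6, 6))
W1 = rng.standard_normal((8, 2))                 # reduction ratio r = 4
W2 = rng.standard_normal((2, 8))
out = cbam_channel_attention(x, W1, W2)
print(out.shape)                                 # (8, 6, 6), channels reweighted
```

The full module would follow this with a spatial-attention map; the channel gate alone suffices to show how attention steers the generator toward the artifacts.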
14. Optical Coherence Tomography of Plaque Erosion
- Author
-
Ik-Kyung Jang, Valentin Fuster, Peter Libby, Dhaval Kolte, Taishi Yonetsu, and Jong Chul Ye
- Subjects
Endothelium, Cardiovascular risk factors, Neutrophil extracellular traps, Optical coherence tomography, Coronary thrombosis, Internal medicine, Antithrombotic, Cardiology, In patient, Cardiology and Cardiovascular Medicine, Plaque erosion
- Abstract
Plaque erosion, a distinct histopathological and clinical entity, accounts for over 30% of acute coronary syndromes (ACS). Optical coherence tomography allows in vivo diagnosis of plaque erosion. Local flow perturbation with activation of Toll-like receptor 2 and CD8+ T cells and subsequent desquamation of endothelium and neutrophil extracellular trap formation contribute to mechanisms of plaque erosion. Compared with ACS patients with plaque rupture, those with plaque erosion are younger, have fewer traditional cardiovascular risk factors, have lower plaque burden, and are more likely to present with non-ST-segment elevation ACS. Early evidence suggests that in patients with ACS caused by plaque erosion, antithrombotic therapy without stenting may be a safe and effective option. Future randomized trials are needed to validate these findings. Clinical studies to develop noninvasive point-of-care biomarkers that distinguish plaque rupture from erosion, and to test novel therapies that target molecular pathways involved in plaque erosion are needed.
- Published
- 2021
15. Unsupervised Denoising for Satellite Imagery Using Wavelet Directional CycleGAN
- Author
-
Dae-Soon Park, Joonyoung Song, Doochun Seo, Hyunho Kim, Jong Chul Ye, and Jae-Heon Jeong
- Subjects
Noise measurement, Computer science, Noise reduction, Supervised learning, Multispectral image, Wavelet, General Earth and Planetary Sciences, Unsupervised learning, Computer vision, Noise (video), Artificial intelligence, Electrical and Electronic Engineering, Image sensor
- Abstract
Multispectral satellite imaging sensors acquire various spectral band images and have a unique spectroscopic property in each band. Unfortunately, image artifacts from imaging sensor noise often affect the quality of scenes and have a negative impact on applications for satellite imagery. Recently, deep learning approaches have been extensively explored to remove noise in satellite imagery. Most deep learning denoising methods, however, follow a supervised learning scheme, which requires matched noisy image and clean image pairs that are difficult to collect in real situations. In this article, we propose a novel unsupervised multispectral denoising method for satellite imagery using a wavelet directional cycle-consistent adversarial network (WavCycleGAN). The proposed method is based on an unsupervised learning scheme using adversarial loss and cycle-consistency loss to overcome the lack of paired data. Moreover, in contrast to the standard image-domain cycleGAN, we introduce a wavelet directional learning scheme for effective denoising without sacrificing high-frequency components such as edges and detailed information. Experimental results for the removal of vertical stripes and wave noise in satellite imaging sensors demonstrate that the proposed method effectively removes noise and preserves important high-frequency features of satellite images.
- Published
- 2021
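The wavelet directional learning scheme above rests on splitting each image into subbands and working on the detail bands where stripe and wave noise concentrates, while leaving the low-frequency band alone. A one-level Haar decomposition, the simplest instance of such a split, can be written directly; this is a generic sketch, not the paper's filter bank.

```python
import numpy as np

def haar_subbands(img):
    """One-level 2D Haar transform of an even-sized image.

    LL holds the coarse content; LH, HL, HH are the directional detail
    bands. Stripe-like sensor noise concentrates in particular detail
    bands, which is what lets a wavelet-domain method remove it without
    sacrificing other high-frequency content such as edges.
    """
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2
    LH = (a + b - c - d) / 2
    HL = (a - b + c - d) / 2
    HH = (a - b - c + d) / 2
    return LL, LH, HL, HH

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar_subbands(img)
print(LL.shape)   # each subband is half-size: (2, 2)
# the split is invertible, e.g. img[0::2, 0::2] == (LL + LH + HL + HH) / 2
```

Because the transform is invertible, denoising a detail band and transforming back changes only the frequency content the band carries.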
16. Wavelet subband discriminator for efficient unsupervised chest X-ray image restoration
- Author
-
Joonyoung Song and Jong Chul Ye
- Subjects
General Medicine - Abstract
Chest X-ray (CXR) images are commonly used to show the internal structure of the human body without invasive intervention. The quality of CXR is an important factor, as it affects the accuracy of a clinical diagnosis. Unfortunately, it is difficult to always get good-quality CXR scans due to noise and scatter. Recently, wavelet directional CycleGAN (WavCycleGAN) has shown promising results in image restoration tasks by removing noise and artifacts without sacrificing high-frequency components of the input image. Unfortunately, WavCycleGAN directly reconstructs wavelet directional images, which requires a wavelet transform in both the training and test phases, resulting in additional processing steps and unnatural artifacts originating from the wavelet-domain image. In addition, WavCycleGAN can only process artifact-related subbands, so it is difficult to apply when different levels of artifacts are present in all subbands. To address this, here we present a novel unsupervised CXR image restoration scheme with similar or even better artifact-removal performance than WavCycleGAN, even though the wavelet transform is applied only in the training phase. We introduce a novel wavelet subband discriminator, which can be combined with CycleGAN or switchable CycleGAN, where the wavelet transform is applied only in the training phase for the discriminators to match the distribution of wavelet subband components. In our framework, the image restoration network can still be applied in the image domain, preventing unnatural artifacts of the wavelet-domain image with the help of the image-domain cycle-consistency loss. In addition, the wavelet subband discriminator makes it possible to remove artifacts in all subbands by utilizing frequency-specific wavelet subband discriminators. Through extensive experiments on noise and scatter removal in CXRs, we confirm that our method provides competitive performance compared to existing approaches without additional processing steps in the test phase. Furthermore, we show that our wavelet subband discriminator combined with the switchable CycleGAN provides flexibility by generating different levels of artifact removal. The proposed wavelet subband discriminator can be combined with existing CycleGAN or switchable CycleGAN structures to construct an efficient unsupervised CXR image restoration scheme. Its advantage is that, unlike the original WavCycleGAN, it requires no additional processing steps in the testing phase and does not generate unnatural artifacts originating from the wavelet-domain image. We believe that our wavelet subband discriminator can be applied to various CXR image applications.
- Published
- 2022
17. Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation
- Author
-
Sangjoon Park, Gwanghyun Kim, Yujin Oh, Joon Beom Seo, Sang Min Lee, Jin Hwan Kim, Sungjun Moon, Jae-Kwang Lim, Chang Min Park, and Jong Chul Ye
- Subjects
Radiography, Multidisciplinary, X-Rays, COVID-19, Humans, General Physics and Astronomy, Diagnosis, Computer-Assisted, General Chemistry, Algorithms, General Biochemistry, Genetics and Molecular Biology
- Abstract
Although deep learning-based computer-aided diagnosis systems have recently achieved expert-level performance, developing a robust model requires large, high-quality data with annotations that are expensive to obtain. This situation poses a conundrum: annually collected chest X-rays cannot be utilized due to the absence of labels, especially in deprived areas. In this study, we present a framework named distillation for self-supervision and self-train learning (DISTL), inspired by the learning process of radiologists, which can improve the performance of a vision transformer simultaneously with self-supervision and self-training through knowledge distillation. In external validation from three hospitals for diagnosis of tuberculosis, pneumothorax, and COVID-19, DISTL offers gradually improved performance as the amount of unlabeled data increases, even better than the fully supervised model with the same amount of labeled data. We additionally show that the model obtained with DISTL is robust to various real-world nuisances, offering better applicability in clinical settings.
- Published
- 2022
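Two ingredients of the DISTL-style self-training loop described above are easy to isolate: a teacher kept as an exponential moving average of the student, and confidence-filtered pseudo-labels on unlabeled scans that feed back into student training. A toy NumPy sketch follows; the momentum, threshold, and dict-of-weights representation are illustrative assumptions, not the paper's code.

```python
import numpy as np

def ema_update(teacher, student, momentum=0.9):
    """Distillation-style teacher update: the teacher slowly tracks an
    exponential moving average of the student weights, so its outputs on
    unlabeled images are stable targets for self-training."""
    return {k: momentum * teacher[k] + (1 - momentum) * student[k]
            for k in teacher}

def pseudo_labels(logits, threshold=0.8):
    """Keep only confident teacher predictions as training targets."""
    probs = 1 / (1 + np.exp(-logits))
    keep = np.abs(probs - 0.5) >= (threshold - 0.5)  # confidence filter
    return probs.round(), keep

teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
teacher = ema_update(teacher, student)           # teacher drifts toward student
labels, keep = pseudo_labels(np.array([3.0, 0.1, -4.0]))
print(labels, keep)   # confident positive/negative kept, the 0.1 logit dropped
```

In the full framework these pieces sit inside a loop: the student trains on the kept pseudo-labels plus a self-supervised objective, and the teacher is refreshed by the EMA step.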
18. Deep Learning in Biological Image and Signal Processing [From the Guest Editors]
- Author
-
Erik Meijering, Vince D. Calhoun, Gloria Menegaz, David J. Miller, and Jong Chul Ye
- Subjects
Applied Mathematics, Signal Processing, Electrical and Electronic Engineering
- Published
- 2022
19. Variational Formulation of Unsupervised Deep Learning for Ultrasound Image Artifact Removal
- Author
-
Jong Chul Ye, Jaeyoung Huh, and Shujaat Khan
- Subjects
Artifact (error), Acoustics and Ultrasonics, Deep learning, Supervised learning, Pattern recognition, Image Artifact, Image Processing, Computer-Assisted, Unsupervised learning, Artificial intelligence, Electrical and Electronic Engineering, Artifacts, Instrumentation, Ultrasound image, Supervised training, Ultrasonography
- Abstract
Recently, deep learning approaches have been successfully used for ultrasound (US) image artifact removal. However, paired high-quality images for supervised training are difficult to obtain in many practical situations. Inspired by the recent theory of unsupervised learning using optimal transport driven CycleGAN (OT-CycleGAN), here, we investigate the applicability of unsupervised deep learning for US artifact removal problems without matched reference data. Two types of OT-CycleGAN approaches are employed: one with the partial knowledge of the image degradation physics and the other with the lack of such knowledge. Various US artifact removal problems are then addressed using the two types of OT-CycleGAN. Experimental results for various unsupervised US artifact removal tasks confirmed that our unsupervised learning method delivers results comparable to supervised learning in many practical applications.
- Published
- 2021
20. DeepRegularizer: Rapid Resolution Enhancement of Tomographic Imaging Using Deep Learning
- Author
-
Geon Kim, Dongmin Ryu, Donghun Ryu, Yong-Ki Lee, YoonSeok Baek, Young Seo Kim, Yoosik Kim, Hyungjoo Cho, Hyun-Seok Min, YongKeun Park, and Jong Chul Ye
- Subjects
Iterative method, Computer science, FOS: Physical sciences, Lateral resolution, Convolutional neural network, Diffraction tomography, Deep Learning, Optical transfer function, FOS: Electrical engineering, electronic engineering, information engineering, Image Processing, Computer-Assisted, Humans, Tomography, Optical, Physics - Biological Physics, Electrical and Electronic Engineering, Tomographic reconstruction, Radiological and Ultrasound Technology, Phantoms, Imaging, Image and Video Processing (eess.IV), Pattern recognition, Computer Science Applications, Transformation (function), Neural Networks, Computer, Tomography, Artificial intelligence, Algorithms, Software, Physics - Optics
- Abstract
Optical diffraction tomography measures the three-dimensional refractive index map of a specimen and visualizes biochemical phenomena at the nanoscale in a non-destructive manner. One major drawback of optical diffraction tomography is poor axial resolution due to limited access to the three-dimensional optical transfer function. This missing cone problem has been addressed through regularization algorithms that use a priori information, such as non-negativity and sample smoothness. However, the iterative nature of these algorithms and their parameter dependency make real-time visualization impossible. In this article, we propose and experimentally demonstrate a deep neural network, which we term DeepRegularizer, that rapidly improves the resolution of a three-dimensional refractive index map. Trained with pairs of datasets (a raw refractive index tomogram and a resolution-enhanced refractive index tomogram via the iterative total variation algorithm), the three-dimensional U-net-based convolutional neural network learns a transformation between the two tomogram domains. The feasibility and generalizability of our network are demonstrated using bacterial cells and a human leukaemic cell line, and by validating the model across different samples. DeepRegularizer offers more than an order of magnitude faster regularization performance compared to the conventional iterative method. We envision that the proposed data-driven approach can bypass the high time complexity of various image reconstructions in other imaging modalities.
- Published
- 2021
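The missing cone problem named in the abstract can be reproduced in a few lines of numpy by zeroing out a cone of 3D Fourier coefficients; the 30° half-angle and the cubic phantom are arbitrary toy choices, not values from the paper:

```python
import numpy as np

# Toy illustration of the missing cone: discard Fourier coefficients whose
# axial frequency k_z lies inside a cone about the k_z axis.
n = 32
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k_lateral = np.sqrt(kx**2 + ky**2)
missing = np.abs(kz) > np.tan(np.deg2rad(30)) * k_lateral   # toy cone about k_z

vol = np.zeros((n, n, n)); vol[12:20, 12:20, 12:20] = 1.0   # cubic phantom
spec = np.fft.fftn(vol)
spec[missing] = 0.0                 # frequencies ODT cannot measure
recon = np.fft.ifftn(spec).real    # axially smeared reconstruction
assert not np.allclose(recon, vol)  # information is lost along the axial direction
```

Regularizers such as total variation try to fill this cone using priors; DeepRegularizer replaces that iterative filling with a single network pass.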
21. Deep learning–based denoising algorithm in comparison to iterative reconstruction and filtered back projection: a 12-reader phantom study
- Author
-
Kyeorye Lee, Kyoung Ho Lee, Jong Chul Ye, Young Hoon Kim, Eun Hee Kang, Hae Young Kim, Won Chang, Ji Hoon Park, Yoon Jin Lee, Youngjune Kim, and Dong Yul Oh
- Subjects
medicine.medical_specialty ,Image quality ,Iterative reconstruction ,Radiation Dosage ,Imaging phantom ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Deep Learning ,0302 clinical medicine ,Humans ,Medicine ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Ground truth ,Receiver operating characteristic ,Radon transform ,Phantoms, Imaging ,business.industry ,General Medicine ,030220 oncology & carcinogenesis ,Radiographic Image Interpretation, Computer-Assisted ,Standard algorithms ,Noise (video) ,Artificial intelligence ,Radiology ,Tomography, X-Ray Computed ,business ,Algorithms - Abstract
(1) To compare the low-contrast detectability of a deep learning–based denoising algorithm (DLA) with that of ADMIRE and FBP, and (2) to compare image quality parameters of the DLA with those of reconstruction methods from two different CT vendors (ADMIRE, IMR, and FBP). Using abdominal CT images of 100 patients reconstructed via ADMIRE and FBP, we trained the DLA by feeding FBP images as input and ADMIRE images as the ground truth. To measure low-contrast detectability, randomized repeat scans of a Catphan® phantom were performed under various radiation exposure conditions. Twelve radiologists evaluated the presence/absence of a target on a five-point confidence scale. The multi-reader multi-case area under the receiver operating characteristic curve (AUC) was calculated, and non-inferiority tests were performed. Using the American College of Radiology CT accreditation phantom, the contrast-to-noise ratio, target transfer function, noise magnitude, and detectability index (d') of the DLA, ADMIRE, IMR, and the FBPs were computed. The AUC of the DLA in low-contrast detectability was non-inferior to that of ADMIRE (p < .001) and superior to that of FBP (p < .001). The DLA improved image quality in terms of all physical measurements compared with FBP from both CT vendors and showed physical measurement profiles similar to those of ADMIRE. The low-contrast detectability of the proposed deep learning–based denoising algorithm was non-inferior to that of ADMIRE and superior to that of FBP. The DLA successfully improved image quality compared with FBP while showing physical profiles similar to those of ADMIRE. • Low-contrast detectability in the images denoised using the deep learning algorithm was non-inferior to that in the images reconstructed using standard algorithms. • The proposed deep learning algorithm showed physical measurement profiles similar to those of the advanced iterative reconstruction algorithm (ADMIRE).
- Published
- 2021
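The per-reader AUC underlying the study's detectability comparison can be computed directly from the five-point confidence ratings via the Mann–Whitney statistic; the ratings below are hypothetical, and the study's full multi-reader multi-case analysis adds variance modelling not shown here:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """Empirical AUC: P(pos score > neg score) + 0.5 * P(tie)."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# Hypothetical five-point confidence ratings from one reader.
target_present = [5, 4, 4, 3, 5]
target_absent  = [1, 2, 3, 2, 1]
print(auc_mann_whitney(target_present, target_absent))  # 0.98
```

A rater who cannot distinguish the classes scores 0.5 by construction, which is why AUC is the natural non-inferiority endpoint here.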
22. Reusability report: Feature disentanglement in generating a three-dimensional structure from a two-dimensional slice with sliceGAN
- Author
-
Hyungjin Chung and Jong Chul Ye
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Structure (category theory) ,Pattern recognition ,Human-Computer Interaction ,Artificial Intelligence ,Feature (computer vision) ,Wafer ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,Reusability - Published
- 2021
23. Multi-domain CT translation by a routable translation network
- Author
-
Hyunjong Kim, Gyutaek Oh, Joon Beom Seo, Hye Jeon Hwang, Sang Min Lee, Jihye Yun, and Jong Chul Ye
- Subjects
Radiological and Ultrasound Technology ,Image Processing, Computer-Assisted ,Radiology, Nuclear Medicine and imaging ,Tomography, X-Ray Computed ,Algorithms - Abstract
Objective. To unify the style of computed tomography (CT) images from multiple sources, we propose a novel multi-domain image translation network to convert CT images from different scan parameters and manufacturers by simply changing a routing vector. Approach. Unlike the existing multi-domain translation techniques, our method is based on a shared encoder and a routable decoder architecture to maximize the expressivity and conditioning power of the network. Main results. Experimental results show that the proposed CT image conversion can minimize the variation of image characteristics caused by imaging parameters, reconstruction algorithms, and hardware designs. Quantitative results and clinical evaluation from radiologists also show that our method can provide accurate translation results. Significance. Quantitative evaluation of CT images from multi-site or longitudinal studies has been a difficult problem due to the image variation depending on CT scan parameters and manufacturers. The proposed method can be utilized to address this for the quantitative analysis of multi-domain CT images.
- Published
- 2022
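The shared-encoder/routable-decoder idea can be sketched with toy linear layers, where a one-hot routing vector selects the decoder branch; the maps, shapes, and mixing rule below are illustrative stand-ins, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(1)
W_enc = rng.standard_normal((16, 8))     # shared encoder: 8-dim input -> 16-dim latent
W_dec = rng.standard_normal((3, 8, 16))  # three toy decoders, one per target CT domain

def translate(x, route):
    """Encode once with the shared encoder, decode via the routed decoder."""
    z = W_enc @ x                            # shared encoding, domain-agnostic
    W = np.tensordot(route, W_dec, axes=1)   # (8, 16); one-hot route picks one decoder
    return W @ z

x = rng.standard_normal(8)               # toy input "CT image"
route = np.array([0.0, 1.0, 0.0])        # routing vector selecting target domain 1
y = translate(x, route)
assert np.allclose(y, W_dec[1] @ (W_enc @ x))  # one-hot routing == selected decoder
```

Changing only `route` retargets the translation, which is the property the abstract describes as converting CT styles "by simply changing a routing vector".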
24. Contrast Agent Removal for Brain CT Angiography Using Switchable CycleGAN with AdaIN and Histogram Equalization
- Author
-
Inhwa Han, Boah Kim, Eung Yeop Kim, and Jong Chul Ye
- Published
- 2022
25. Deep learning enables reference-free isotropic super-resolution for volumetric fluorescence microscopy
- Author
-
Hyoungjun Park, Myeongsu Na, Bumju Kim, Soohyun Park, Ki Hean Kim, Sunghoe Chang, and Jong Chul Ye
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Multidisciplinary ,Computer Science - Artificial Intelligence ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,General Physics and Astronomy ,Machine Learning (stat.ML) ,General Chemistry ,General Biochemistry, Genetics and Molecular Biology ,Machine Learning (cs.LG) ,Deep Learning ,Imaging, Three-Dimensional ,Artificial Intelligence (cs.AI) ,Microscopy, Fluorescence ,Statistics - Machine Learning ,Anisotropy - Abstract
Volumetric imaging by fluorescence microscopy is often limited by anisotropic spatial resolution, in which the axial resolution is inferior to the lateral resolution. To address this problem, we present a deep-learning-enabled unsupervised super-resolution technique that enhances anisotropic images in volumetric fluorescence microscopy. In contrast to the existing deep learning approaches that require matched high-resolution target images, our method greatly reduces the effort to be put into practice as the training of a network requires only a single 3D image stack, without a priori knowledge of the image formation process, registration of training data, or separate acquisition of target data. This is achieved based on the optimal transport-driven cycle-consistent generative adversarial network that learns from an unpaired matching between high-resolution 2D images in the lateral image plane and low-resolution 2D images in other planes. Using fluorescence confocal microscopy and light-sheet microscopy, we demonstrate that the trained network not only enhances axial resolution but also restores suppressed visual details between the imaging planes and removes imaging artifacts.
- Published
- 2022
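The unpaired matching the method exploits — high-resolution lateral (x-y) planes against low-resolution axial planes cut from the same stack — can be sketched in numpy; the volume shape is an arbitrary toy value:

```python
import numpy as np

# Toy anisotropic volume: (z, y, x) stack where axial (z) sampling is coarse.
rng = np.random.default_rng(2)
vol = rng.standard_normal((16, 128, 128))

# Two unpaired training sets drawn from the single stack (names illustrative):
# sharp lateral planes vs blurry axial planes.
lateral_slices = [vol[z, :, :] for z in range(vol.shape[0])]   # 16 images, 128 x 128
axial_slices   = [vol[:, y, :] for y in range(vol.shape[1])]   # 128 images, 16 x 128

assert lateral_slices[0].shape == (128, 128)
assert axial_slices[0].shape == (16, 128)
```

The cycle-consistent network then learns to map the axial-plane distribution onto the lateral-plane distribution, which is why no separate high-resolution target acquisition is needed.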
26. Medical ultrasound image speckle reduction and resolution enhancement using texture compensated multi-resolution convolution neural network
- Author
-
Muhammad Moinuddin, Shujaat Khan, Abdulrahman U. Alsaggaf, Mohammed Jamal Abdulaal, Ubaid M. Al-Saggaf, and Jong Chul Ye
- Subjects
Physiology ,Physiology (medical) - Abstract
Ultrasound (US) imaging is a mature technology with widespread applications, especially in the healthcare sector. Despite its popularity, it has an inherent disadvantage: ultrasound images are prone to speckle and other kinds of noise. Image quality in low-cost ultrasound imaging systems is degraded by such noise and by the low resolution of these systems. Herein, we propose a method for image enhancement in which the overall quality of US images is improved by simultaneous resolution enhancement and noise suppression. To avoid over-smoothing and to preserve structural/texture information, we devise texture compensation in our proposed method to retain useful anatomical features. Moreover, we utilize knowledge of US image formation physics to generate augmentation datasets, which improves the training of our proposed method. Our experimental results showcase the performance of the proposed network as well as the effectiveness of using US physics knowledge to generate augmentation datasets.
- Published
- 2022
27. Multi-Domain Unpaired Ultrasound Image Artifact Removal Using a Single Convolutional Neural Network
- Author
-
Jaeyoung Huh, Shujaat Khan, and Jong Chul Ye
- Published
- 2022
28. Deep learning STEM-EDX tomography of nanocrystals
- Author
-
Byeong Gyu Chae, Yoseob Han, Jun-Ho Lee, Hee Goo Kim, Jong Chul Ye, Shinae Jun, Jaeduck Jang, Tae-Gon Kim, Eunha Lee, Sungwoo Hwang, Eunju Cha, Hyungjin Chung, and Myoungho Jeong
- Subjects
0301 basic medicine ,Materials science ,Tomographic reconstruction ,Computer Networks and Communications ,business.industry ,Nanoparticle ,Human-Computer Interaction ,03 medical and health sciences ,030104 developmental biology ,0302 clinical medicine ,Artificial Intelligence ,Quantum dot ,Scanning transmission electron microscopy ,Optoelectronics ,Quantum efficiency ,Computer Vision and Pattern Recognition ,Tomography ,Spectroscopy ,business ,Nanoscopic scale ,030217 neurology & neurosurgery ,Software - Abstract
Energy-dispersive X-ray spectroscopy (EDX) is often performed simultaneously with high-angle annular dark-field scanning transmission electron microscopy (STEM) for nanoscale physico-chemical analysis. However, high-quality STEM-EDX tomographic imaging is still challenging due to fundamental limitations such as sample degradation with prolonged scan time and the low probability of X-ray generation. To address this, we propose an unsupervised deep learning method for high-quality 3D EDX tomography of core–shell nanocrystals, which are usually permanently damaged by prolonged electron beam exposure. The proposed deep learning STEM-EDX tomography method was used to accurately reconstruct Au nanoparticles and the InP/ZnSe/ZnS core–shell quantum dots used in commercial display devices. Furthermore, the shape and thickness uniformity of the reconstructed ZnSe/ZnS shell closely correlates with optical properties of the quantum dots, such as quantum efficiency and chemical stability. Advanced electron microscopy and spectroscopy techniques can reveal useful structural and chemical details at the nanoscale. An unsupervised deep learning approach helps to reconstruct 3D images and observe the relationship between optical and structural properties of semiconductor nanocrystals, of interest in optoelectronic applications.
- Published
- 2021
29. Unpaired Training of Deep Learning tMRA for Flexible Spatio-Temporal Resolution
- Author
-
Eung Yeop Kim, Eunju Cha, Jong Chul Ye, and Hyungjin Chung
- Subjects
Discriminator ,Radiological and Ultrasound Technology ,Artificial neural network ,Computer science ,business.industry ,Deep learning ,Angiography ,Iterative reconstruction ,Magnetic Resonance Imaging ,030218 nuclear medicine & medical imaging ,Computer Science Applications ,Data set ,03 medical and health sciences ,Deep Learning ,0302 clinical medicine ,Temporal resolution ,Dynamic contrast-enhanced MRI ,Computer Simulation ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Image resolution ,Algorithm ,Software ,Generator (mathematics) - Abstract
Time-resolved MR angiography (tMRA) has been widely used for dynamic contrast enhanced MRI (DCE-MRI) due to its highly accelerated acquisition. In tMRA, the periphery of the k-space data are sparsely sampled so that neighbouring frames can be merged to construct one temporal frame. However, this view-sharing scheme fundamentally limits the temporal resolution, and it is not possible to change the view-sharing number to achieve different spatio-temporal resolution trade-offs. Although many deep learning approaches have been recently proposed for MR reconstruction from sparse samples, the existing approaches usually require matched fully sampled k-space reference data for supervised training, which is not suitable for tMRA due to the lack of high spatio-temporal resolution ground-truth images. To address this problem, here we propose a novel unpaired training scheme for deep learning using optimal transport driven cycle-consistent generative adversarial network (cycleGAN). In contrast to the conventional cycleGAN with two pairs of generator and discriminator, the new architecture requires just a single pair of generator and discriminator, which makes the training much simpler but still improves the performance. Reconstruction results using in vivo tMRA and simulation data set confirm that the proposed method can immediately generate high quality reconstruction results at various choices of view-sharing numbers, allowing us to exploit better trade-off between spatial and temporal resolution in time-resolved MR angiography.
- Published
- 2021
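The view-sharing trade-off the abstract describes can be sketched with a toy sampling schedule: every frame fully samples the k-space centre plus an interleaved quarter of the periphery, and merging more neighbouring frames recovers more phase-encode lines at the cost of temporal resolution. The schedule below is illustrative, not an actual vendor trajectory:

```python
import numpy as np

n_frames, n_lines = 12, 64
center = slice(28, 36)   # fully sampled k-space centre, acquired every frame

def sampled_lines(frame, share=1):
    """Phase-encode lines available after merging `share` adjacent frames."""
    lines = set(range(center.start, center.stop))
    for f in range(frame, frame + share):
        lines |= set(range(f % 4, n_lines, 4))   # interleaved periphery, 4 subsets
    return sorted(lines)

# More view sharing -> more k-space lines, but coarser temporal resolution.
assert len(sampled_lines(0, share=1)) < len(sampled_lines(0, share=4))
assert len(sampled_lines(0, share=4)) == n_lines  # 4 interleaves cover every line
```

The proposed unpaired network is trained so that any choice of `share` can be reconstructed at inference time, instead of fixing it at acquisition.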
30. Missing Cone Artifact Removal in ODT Using Unsupervised Deep Learning in the Projection Domain
- Author
-
Jaeyoung Huh, Geon Kim, Jong Chul Ye, Hyungjin Chung, and YongKeun Park
- Subjects
Artifact (error) ,Computer science ,business.industry ,Plane (geometry) ,Deep learning ,Holography ,Pattern recognition ,Iterative reconstruction ,Computer Science Applications ,law.invention ,Computational Mathematics ,law ,Signal Processing ,Probability distribution ,Tomography ,Artificial intelligence ,Projection (set theory) ,business - Abstract
Optical diffraction tomography (ODT) produces a three-dimensional distribution of the refractive index (RI) by measuring scattering fields at various angles. Although the distribution of the RI is highly informative, due to the missing cone problem stemming from the limited-angle acquisition of holograms, reconstructions have very poor resolution along the axial direction compared to the horizontal imaging plane. To solve this issue, we present a novel unsupervised deep learning framework that learns the probability distribution of missing projection views through an optimal transport-driven CycleGAN. The experimental results show that missing cone artifacts in ODT data can be significantly resolved by the proposed method.
- Published
- 2021
31. Deep Learning-Enabled Detection of Pneumoperitoneum in Supine and Erect Abdominal Radiography: Modeling Using Transfer Learning and Semi-Supervised Learning
- Author
-
Sangjoon Park, Jong Chul Ye, Eun Sun Lee, Gyeongme Cho, Jin Woo Yoon, Joo Hyeok Choi, Ijin Joo, and Yoon Jin Lee
- Subjects
Radiology, Nuclear Medicine and imaging - Published
- 2023
32. Deep learning for tomographic image reconstruction
- Author
-
Jong Chul Ye, Bruno De Man, and Ge Wang
- Subjects
0301 basic medicine ,Modern medicine ,Tomographic reconstruction ,Computer Networks and Communications ,Computer science ,business.industry ,Image quality ,Deep learning ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Context (language use) ,Convolutional neural network ,Field (computer science) ,Human-Computer Interaction ,03 medical and health sciences ,030104 developmental biology ,0302 clinical medicine ,Artificial Intelligence ,Medical imaging ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,030217 neurology & neurosurgery ,Software - Abstract
Deep-learning-based tomographic imaging is an important application of artificial intelligence and a new frontier of machine learning. Deep learning has been widely used in computer vision and image analysis, which deal with existing images, improve these images, and produce features from them. Since 2016, deep learning techniques have been actively researched for tomographic imaging, especially in the context of biomedicine, with impressive results and great potential. Tomographic reconstruction produces images of multi-dimensional structures from externally measured ‘encoded’ data in the form of various tomographic transforms (integrals, harmonics, echoes and so on). In this Review, we provide a general background, highlight representative results with an emphasis on medical imaging, and discuss key issues that need to be addressed in this emerging field. In particular, tomographic imaging is an integral part of modern medicine, and will play a key role in personalized, preventive and precision medicine and make it intelligent, inexpensive and indiscriminate. The popularity of deep learning is leading to new areas in biomedical applications. Wang and colleagues summarize in this Review the recent development and future directions of deep neural networks for superior image quality in the tomographic imaging field.
- Published
- 2020
33. Differentiated Backprojection Domain Deep Learning for Conebeam Artifact Removal
- Author
-
Junyoung Kim, Yoseob Han, and Jong Chul Ye
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Iterative method ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,Machine Learning (stat.ML) ,Computed tomography ,Iterative reconstruction ,Machine Learning (cs.LG) ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,symbols.namesake ,Deep Learning ,0302 clinical medicine ,Statistics - Machine Learning ,Image Processing, Computer-Assisted ,FOS: Electrical engineering, electronic engineering, information engineering ,medicine ,Electrical and Electronic Engineering ,Radiological and Ultrasound Technology ,medicine.diagnostic_test ,Phantoms, Imaging ,business.industry ,Deep learning ,Image and Video Processing (eess.IV) ,Electrical Engineering and Systems Science - Image and Video Processing ,Cone-Beam Computed Tomography ,Reconstruction method ,Computer Science Applications ,Coronal plane ,symbols ,Hilbert transform ,Deconvolution ,Artificial intelligence ,Artifacts ,Tomography, X-Ray Computed ,business ,Algorithm ,Algorithms ,Software - Abstract
Conebeam CT using a circular trajectory is quite often used for various applications due to its relatively simple geometry. For conebeam geometry, the Feldkamp, Davis and Kress (FDK) algorithm is regarded as the standard reconstruction method, but it suffers from so-called conebeam artifacts as the cone angle increases. Various model-based iterative reconstruction methods have been developed to reduce conebeam artifacts, but these algorithms usually require multiple applications of computationally expensive forward and backprojections. In this paper, we develop a novel deep learning approach for accurate conebeam artifact removal. In particular, our deep network, designed on the differentiated backprojection domain, performs a data-driven inversion of an ill-posed deconvolution problem associated with the Hilbert transform. The reconstruction results along the coronal and sagittal directions are then combined using a spectral blending technique to minimize the spectral leakage. Experimental results show that our method outperforms the existing iterative methods despite significantly reduced runtime complexity. Accepted for IEEE Transactions on Medical Imaging.
- Published
- 2020
34. Editorial: Introduction to the Issue on Domain Enriched Learning for Medical Imaging
- Author
-
Jong Chul Ye, Scott T. Acton, Abd-Krim Seghouane, Vishal Monga, and Arrate Muñoz-Barrutia
- Subjects
Structure (mathematical logic) ,Computer science ,business.industry ,Deep learning ,Image segmentation ,Data science ,Domain (software engineering) ,Variety (cybernetics) ,Signal Processing ,Medical imaging ,Domain knowledge ,Segmentation ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
The nineteen papers in this special section focus on domain enriched learning for medical imaging. In recent years, learning based methods have emerged to complement traditional model and feature based methods for a variety of medical imaging problems such as image formation, classification and segmentation, quality enhancement etc. In the case of deep neural networks, many solutions have achieved unprecedented performance gains and have defined a new state of the art. Despite the progress, compelling open challenges remain. One such key challenge is that many learning frameworks (notably deep learning) are purely data-driven approaches and their performance depends strongly on the quantity and quality of training image data available. When training is limited or noisy, the performance drops sharply. Deep neural networks based approaches additionally face the challenge of often not being straightforward to interpret. Fortunately, exciting recent progress has emerged in enriching learning frameworks with domain knowledge and signal structure. As a couple of representative examples: in image reconstruction problems, this may involve using statistical/structural image priors; for image segmentation, shape and anatomical knowledge (conveyed by an expert) may be leveraged, etc. This special issue brings together contributions that combine signal, image priors and other flavors of domain knowledge with machine learning methods for solving many diverse medical imaging problems.
- Published
- 2020
35. Optical coherence tomography in coronary atherosclerosis assessment and intervention
- Author
-
Makoto, Araki, Seung-Jung, Park, Harold L, Dauerman, Shiro, Uemura, Jung-Sun, Kim, Carlo, Di Mario, Thomas W, Johnson, Giulio, Guagliumi, Adnan, Kastrati, Michael, Joner, Niels Ramsing, Holm, Fernando, Alfonso, William, Wijns, Tom, Adriaenssens, Holger, Nef, Gilles, Rioufol, Nicolas, Amabile, Geraud, Souteyrand, Nicolas, Meneveau, Edouard, Gerbaud, Maksymilian P, Opolski, Nieves, Gonzalo, Guillermo J, Tearney, Brett, Bouma, Aaron D, Aguirre, Gary S, Mintz, Gregg W, Stone, Christos V, Bourantas, Lorenz, Räber, Sebastiano, Gili, Kyoichi, Mizuno, Shigeki, Kimura, Toshiro, Shinke, Myeong-Ki, Hong, Yangsoo, Jang, Jin Man, Cho, Bryan P, Yan, Italo, Porto, Giampaolo, Niccoli, Rocco A, Montone, Vikas, Thondapu, Michail I, Papafaklis, Lampros K, Michalis, Harmony, Reynolds, Jacqueline, Saw, Peter, Libby, Giora, Weisz, Mario, Iannaccone, Tommaso, Gori, Konstantinos, Toutouzas, Taishi, Yonetsu, Yoshiyasu, Minami, Masamichi, Takano, O Christopher, Raffel, Osamu, Kurihara, Tsunenari, Soeda, Tomoyo, Sugiyama, Hyung Oh, Kim, Tetsumin, Lee, Takumi, Higuma, Akihiro, Nakajima, Erika, Yamamoto, Krzysztof L, Bryniarski, Luca, Di Vito, Rocco, Vergallo, Francesco, Fracassi, Michele, Russo, Lena M, Seegers, Iris, McNulty, Sangjoon, Park, Marc, Feldman, Javier, Escaned, Francesco, Prati, Eloisa, Arbustini, Fausto J, Pinto, Ron, Waksman, Hector M, Garcia-Garcia, Akiko, Maehara, Ziad, Ali, Aloke V, Finn, Renu, Virmani, Annapoorna S, Kini, Joost, Daemen, Teruyoshi, Kume, Kiyoshi, Hibi, Atsushi, Tanaka, Takashi, Akasaka, Takashi, Kubo, Satoshi, Yasuda, Kevin, Croce, Juan F, Granada, Amir, Lerman, Abhiram, Prasad, Evelyn, Regar, Yoshihiko, Saito, Mullasari Ajit, Sankardas, Vijayakumar, Subban, Neil J, Weissman, Yundai, Chen, Bo, Yu, Stephen J, Nicholls, Peter, Barlis, Nick E J, West, Armin, Arbab-Zadeh, Jong Chul, Ye, Jouke, Dijkstra, Hang, Lee, Jagat, Narula, Filippo, Crea, Sunao, Nakamura, Tsunekazu, Kakuta, James, Fujimoto, Valentin, Fuster, and Ik-Kyung, Jang
- Subjects
Percutaneous Coronary Intervention ,Myocardial Infarction ,Humans ,Stents ,Coronary Artery Disease ,Atherosclerosis ,Coronary Angiography ,Coronary Vessels ,Plaque, Atherosclerotic ,Tomography, Optical Coherence ,Article - Abstract
Since optical coherence tomography (OCT) was first performed in humans two decades ago, this imaging modality has been widely adopted in research on coronary atherosclerosis and adopted clinically for the optimization of percutaneous coronary intervention. In the past 10 years, substantial advances have been made in the understanding of in vivo vascular biology using OCT. Identification by OCT of culprit plaque pathology could potentially lead to a major shift in the management of patients with acute coronary syndromes. Detection by OCT of healed coronary plaque has been important in our understanding of the mechanisms involved in plaque destabilization and healing with the rapid progression of atherosclerosis. Accurate detection by OCT of sequelae from percutaneous coronary interventions that might be missed by angiography could improve clinical outcomes. In addition, OCT has become an essential diagnostic modality for myocardial infarction with non-obstructive coronary arteries. Insight into neoatherosclerosis from OCT could improve our understanding of the mechanisms of very late stent thrombosis. The appropriate use of OCT depends on accurate interpretation and understanding of the clinical significance of OCT findings. In this Review, we summarize the state of the art in cardiac OCT and facilitate the uniform use of this modality in coronary atherosclerosis. Contributions have been made by clinicians and investigators worldwide with extensive experience in OCT, with the aim that this document will serve as a standard reference for future research and clinical application.
- Published
- 2022
36. CXR Segmentation by AdaIN-Based Domain Adaptation and Knowledge Distillation
- Author
-
Jong Chul Ye and Yujin Oh
- Published
- 2022
37. DiffuseMorph: Unsupervised Deformable Image Registration Using Diffusion Model
- Author
-
Inhwa Han, Boah Kim, and Jong Chul Ye
- Published
- 2022
38. Summary and Outlook
- Author
-
Jong Chul Ye
- Published
- 2022
39. Normalization and Attention
- Author
-
Jong Chul Ye
- Published
- 2022
40. Diffusion Deformable Model for 4D Temporal Medical Image Generation
- Author
-
Boah Kim and Jong Chul Ye
- Published
- 2022
41. Patch-Wise Deep Metric Learning for Unsupervised Low-Dose CT Denoising
- Author
-
Chanyong Jung, Joonhyung Lee, Sunkyoung You, and Jong Chul Ye
- Published
- 2022
42. Geometry of Deep Neural Networks
- Author
-
Jong Chul Ye
- Published
- 2022
43. Generative Models and Unsupervised Learning
- Author
-
Jong Chul Ye
- Published
- 2022
44. Geometry of Deep Learning
- Author
-
Jong Chul Ye
- Published
- 2022
45. Bibliography
- Author
-
Jong Chul Ye
- Published
- 2022
46. Generalization Capability of Deep Learning
- Author
-
Jong Chul Ye
- Published
- 2022
47. Adaptive and Compressive Beamforming Using Deep Learning for Medical Ultrasound
- Author
-
Shujaat Khan, Jaeyoung Huh, and Jong Chul Ye
- Subjects
FOS: Computer and information sciences ,Beamforming ,Computer Science - Machine Learning ,Acoustics and Ultrasonics ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Computer Science - Computer Vision and Pattern Recognition ,Image processing ,01 natural sciences ,Machine Learning (cs.LG) ,Deep Learning ,0103 physical sciences ,Image Processing, Computer-Assisted ,FOS: Electrical engineering, electronic engineering, information engineering ,Humans ,Electrical and Electronic Engineering ,010301 acoustics ,Instrumentation ,Ultrasonography ,Artificial neural network ,Phantoms, Imaging ,business.industry ,Deep learning ,Image and Video Processing (eess.IV) ,Detector ,Electrical Engineering and Systems Science - Image and Video Processing ,Radio frequency ,Artificial intelligence ,business ,Algorithm ,Adaptive beamformer ,Algorithms ,Communication channel - Abstract
In ultrasound (US) imaging, various types of adaptive beamforming techniques have been investigated to improve the resolution and contrast-to-noise ratio of the delay and sum (DAS) beamformers. Unfortunately, the performance of these adaptive beamforming approaches degrades when the underlying model is not sufficiently accurate and the number of channels decreases. To address this problem, here we propose a deep learning-based beamformer to generate significantly improved images over widely varying measurement conditions and channel subsampling patterns. In particular, our deep neural network is designed to directly process full or sub-sampled radio-frequency (RF) data acquired at various subsampling rates and detector configurations so that it can generate high quality ultrasound images using a single beamformer. The origin of such input-dependent adaptivity is also theoretically analyzed. Experimental results using B-mode focused ultrasound confirm the efficacy of the proposed methods. This is a significantly extended version of the original paper in arXiv:1901.01706, accepted for IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control.
- Published
- 2020
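The delay-and-sum (DAS) baseline that the learned beamformer improves upon can be sketched in a few lines of numpy; this toy version uses integer-sample delays only, whereas real beamformers interpolate fractional delays computed from array geometry:

```python
import numpy as np

def delay_and_sum(rf, delays):
    """Conventional DAS beamforming: delay each channel, then average.
    rf: (channels, samples) radio-frequency data; delays: integer sample shifts."""
    ch, n = rf.shape
    out = np.zeros(n)
    for c in range(ch):
        out += np.roll(rf[c], -delays[c])   # nearest-sample delay (toy)
    return out / ch

# A point echo arriving one sample later on each successive channel:
rf = np.zeros((4, 16))
for c in range(4):
    rf[c, 5 + c] = 1.0
aligned = delay_and_sum(rf, delays=np.arange(4))
print(int(np.argmax(aligned)))  # 5 -> echoes add coherently at the true depth
```

Adaptive and learned beamformers replace the uniform averaging with data-dependent channel weights, which is what degrades gracefully (or not) under channel subsampling.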
48. Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets
- Author
-
Sangjoon Park, Jong Chul Ye, and Yujin Oh
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Coronavirus disease 2019 (COVID-19) ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Training data sets ,Radiography ,Pneumonia, Viral ,Computer Science - Computer Vision and Pattern Recognition ,Machine Learning (stat.ML) ,Disease ,Machine learning ,computer.software_genre ,Convolutional neural network ,Machine Learning (cs.LG) ,030218 nuclear medicine & medical imaging ,Betacoronavirus ,03 medical and health sciences ,Deep Learning ,0302 clinical medicine ,Statistics - Machine Learning ,Image Interpretation, Computer-Assisted ,FOS: Electrical engineering, electronic engineering, information engineering ,Humans ,Electrical and Electronic Engineering ,Lung ,Pandemics ,Radiological and Ultrasound Technology ,Artificial neural network ,SARS-CoV-2 ,business.industry ,Deep learning ,Image and Video Processing (eess.IV) ,COVID-19 ,Image segmentation ,Electrical Engineering and Systems Science - Image and Video Processing ,Triage ,Computer Science Applications ,Data set ,Radiography, Thoracic ,Artificial intelligence ,Coronavirus Infections ,business ,computer ,Algorithms ,Software - Abstract
Under the global pandemic of COVID-19, the use of artificial intelligence to analyze chest X-ray (CXR) image for COVID-19 diagnosis and patient triage is becoming important. Unfortunately, due to the emergent nature of the COVID-19 pandemic, a systematic collection of the CXR data set for deep neural network training is difficult. To address this problem, here we propose a patch-based convolutional neural network approach with a relatively small number of trainable parameters for COVID-19 diagnosis. The proposed method is inspired by our statistical analysis of the potential imaging biomarkers of the CXR radiographs. Experimental results show that our method achieves state-of-the-art performance and provides clinically interpretable saliency maps, which are useful for COVID-19 diagnosis and patient triage. Accepted for IEEE Transactions on Medical Imaging, Special Issue on Imaging-based Diagnosis of COVID-19.
- Published
- 2020
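The patch-based pipeline described in the abstract above can be sketched as follows: classify local patches of a radiograph independently with a small network, then aggregate the per-patch decisions by majority vote. This is a minimal illustrative sketch, not the authors' implementation; `classify_patch`, the patch size, and the stride are all hypothetical stand-ins for the trained small CNN and its settings.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a square window over the image and collect patches."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def patch_vote_diagnosis(image, classify_patch, patch_size=224, stride=112):
    """Classify each patch independently, then take a majority vote.

    `classify_patch` is a placeholder for the trained small CNN; it maps
    a single patch to an integer class label.
    """
    patches = extract_patches(image, patch_size, stride)
    labels = np.array([classify_patch(p) for p in patches])
    counts = np.bincount(labels)           # votes per class
    return counts.argmax(), len(patches)   # winning class, number of votes
```

Because each patch is classified on its own, the per-patch labels can also be rendered as a coarse saliency map, which is one plausible route to the clinically interpretable maps the abstract mentions.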
49. Assessing the importance of magnetic resonance contrasts using collaborative generative adversarial networks
- Author
-
Dongwook Lee, Jong Chul Ye, and Won-Jin Moon
- Subjects
Computer Networks and Communications ,Computer science ,Machine learning ,Adversarial system ,Artificial Intelligence ,Medical imaging ,Contrast (statistics) ,Magnetic resonance imaging ,Human-Computer Interaction ,Generative model ,Scalability ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Software - Abstract
A unique advantage of magnetic resonance imaging (MRI) is its mechanism for generating various image contrasts depending on tissue-specific parameters, which provides useful clinical information. Unfortunately, a complete set of MR contrasts is often difficult to obtain in a real clinical environment. Recently, there have been claims that generative models such as generative adversarial networks (GANs) can synthesize MR contrasts that were not acquired. However, the poor scalability of existing GAN-based image synthesis poses a fundamental challenge to understanding the nature of MR contrasts: which contrasts matter, and which cannot be synthesized by generative models? Here, we show that these questions can be addressed systematically by learning the joint manifold of multiple MR contrasts using collaborative generative adversarial networks. Our experimental results show that the exogenous contrast provided by contrast agents is not replaceable, whereas endogenous contrasts such as T1 and T2 can be synthesized from other contrasts. These findings provide important guidance for the design of MR acquisition protocols in clinical environments. Magnetic resonance scans use different contrast agents to generate different images, each giving specific clinical information. Lee et al. use a collaborative generative model to synthesize some magnetic resonance contrasts from others, providing guidance for how clinical imaging times can be reduced.
- Published
- 2020
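The collaborative, many-to-one setup described in the abstract above — one generator that synthesizes any missing contrast from all remaining ones — implies an input construction along the following lines: mask out the target contrast and tell the generator which contrast to produce. This is a hedged sketch of that input assembly only, assuming a channel-stacked representation and a one-hot target code; the actual network, losses, and data layout of the paper are not reproduced here.

```python
import numpy as np

def collagan_input(contrasts, target_idx):
    """Build a many-to-one generator input for contrast imputation.

    `contrasts` has shape (n_contrasts, H, W). The target contrast is
    zeroed out (it is the one to be synthesized), and a one-hot target
    code is appended as extra channels so a single generator can serve
    every missing-contrast combination.
    """
    n, h, w = contrasts.shape
    x = contrasts.copy()
    x[target_idx] = 0.0            # the missing contrast is masked out
    code = np.zeros((n, h, w))
    code[target_idx] = 1.0         # tells the generator which contrast to make
    return np.concatenate([x, code], axis=0)
```

Training the same generator over every choice of `target_idx` is what lets such a model probe the joint manifold of contrasts, and hence reveal which contrasts (e.g. the exogenous, agent-based one) cannot be recovered from the others.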
50. CycleGAN With a Blur Kernel for Deconvolution Microscopy: Optimal Transport Geometry
- Author
-
Byeongsu Sim, Sungjun Lim, Sunghoe Chang, Sangeun Lee, Hyoungjun Park, and Jong Chul Ye
- Subjects
Optimization algorithm ,Computer science ,Image processing and computer vision ,Convolutional neural network ,Computer Science Applications ,Computational Mathematics ,Kernel (image processing) ,Robustness (computer science) ,Signal Processing ,Microscopy ,Unsupervised learning ,Deconvolution ,Algorithm ,Supervised training - Abstract
Deconvolution microscopy has been extensively used to improve the resolution of wide-field fluorescence microscopy, but the performance of classical approaches depends critically on the accuracy of the model and the optimization algorithm. Recently, convolutional neural network (CNN) approaches have been studied as a fast, high-performance alternative. Unfortunately, CNN approaches usually require matched high-resolution images for supervised training. In this article, we present a novel unsupervised cycle-consistent generative adversarial network (cycleGAN) with a linear blur kernel, which can be used for both blind and non-blind image deconvolution. In contrast to conventional cycleGAN approaches that require two deep generators, the proposed cycleGAN approach needs only a single deep generator and a linear blur kernel, which significantly improves the robustness and efficiency of network training. We show that the proposed architecture is indeed a dual formulation of an optimal transport problem that uses a special form of the penalized least squares cost as a transport cost. Experimental results using simulated and real experimental data confirm the efficacy of the algorithm.
- Published
- 2020
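The asymmetric cycle in the abstract above — a deep deblurring generator in one direction and only a linear blur kernel in the other — can be sketched as a cycle-consistency check: deblur, then re-blur with the kernel, and compare against the input. This is a minimal numerical sketch under assumed choices (a Gaussian kernel, an L1 cycle cost); `deblur_net` is a hypothetical stand-in for the paper's single deep generator, and no adversarial loss is shown.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian, standing in for the linear blur kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def conv2d(image, kernel):
    """'Same' 2-D convolution with zero padding (the linear blur branch)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = (padded[y:y + kh, x:x + kw] * kernel).sum()
    return out

def cycle_loss(blurred, deblur_net, kernel):
    """blurred -> deblur (deep generator G) -> re-blur (linear kernel).

    The cycle-consistency term penalizes the mismatch between the
    re-blurred estimate and the original blurred measurement.
    """
    estimate = deblur_net(blurred)          # deep generator direction
    reblurred = conv2d(estimate, kernel)    # single linear blur kernel
    return np.abs(reblurred - blurred).mean()
```

Because the blur branch is just a (possibly learnable) linear kernel rather than a second deep generator, only one network has to be trained, which is the source of the robustness and efficiency gains the abstract claims.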