12 results on '"Jong Chul Ye"'
Search Results
2. Optical coherence tomography in coronary atherosclerosis assessment and intervention
- Author
-
Makoto Araki, Seung-Jung Park, Harold L. Dauerman, Shiro Uemura, Jung-Sun Kim, Carlo Di Mario, Thomas W. Johnson, Giulio Guagliumi, Adnan Kastrati, Michael Joner, Niels Ramsing Holm, Fernando Alfonso, William Wijns, Tom Adriaenssens, Holger Nef, Gilles Rioufol, Nicolas Amabile, Geraud Souteyrand, Nicolas Meneveau, Edouard Gerbaud, Maksymilian P. Opolski, Nieves Gonzalo, Guillermo J. Tearney, Brett Bouma, Aaron D. Aguirre, Gary S. Mintz, Gregg W. Stone, Christos V. Bourantas, Lorenz Räber, Sebastiano Gili, Kyoichi Mizuno, Shigeki Kimura, Toshiro Shinke, Myeong-Ki Hong, Yangsoo Jang, Jin Man Cho, Bryan P. Yan, Italo Porto, Giampaolo Niccoli, Rocco A. Montone, Vikas Thondapu, Michail I. Papafaklis, Lampros K. Michalis, Harmony Reynolds, Jacqueline Saw, Peter Libby, Giora Weisz, Mario Iannaccone, Tommaso Gori, Konstantinos Toutouzas, Taishi Yonetsu, Yoshiyasu Minami, Masamichi Takano, O. Christopher Raffel, Osamu Kurihara, Tsunenari Soeda, Tomoyo Sugiyama, Hyung Oh Kim, Tetsumin Lee, Takumi Higuma, Akihiro Nakajima, Erika Yamamoto, Krzysztof L. Bryniarski, Luca Di Vito, Rocco Vergallo, Francesco Fracassi, Michele Russo, Lena M. Seegers, Iris McNulty, Sangjoon Park, Marc Feldman, Javier Escaned, Francesco Prati, Eloisa Arbustini, Fausto J. Pinto, Ron Waksman, Hector M. Garcia-Garcia, Akiko Maehara, Ziad Ali, Aloke V. Finn, Renu Virmani, Annapoorna S. Kini, Joost Daemen, Teruyoshi Kume, Kiyoshi Hibi, Atsushi Tanaka, Takashi Akasaka, Takashi Kubo, Satoshi Yasuda, Kevin Croce, Juan F. Granada, Amir Lerman, Abhiram Prasad, Evelyn Regar, Yoshihiko Saito, Mullasari Ajit Sankardas, Vijayakumar Subban, Neil J. Weissman, Yundai Chen, Bo Yu, Stephen J. Nicholls, Peter Barlis, Nick E. J. 
West, Armin Arbab-Zadeh, Jong Chul Ye, Jouke Dijkstra, Hang Lee, Jagat Narula, Filippo Crea, Sunao Nakamura, Tsunekazu Kakuta, James Fujimoto, Valentin Fuster, Ik-Kyung Jang, CarMeN, laboratoire, Massachusetts General Hospital [Boston, MA, USA], Harvard Medical School [Boston] (HMS), Asan Medical Center [Seoul, South Korea] (AMC), University of Vermont [Burlington], Kawasaki Medical School [Okayama, Japan] (KMS), Yonsei University College of Medicine [Seoul, South Korea] (YUCM), Azienda Ospedaliero-Universitaria Careggi [Firenze] (AOUC), University Hospitals Bristol, Azienda Ospedaliera Ospedale Papa Giovanni XXIII [Bergamo, Italy], Technische Universität München = Technical University of Munich (TUM), Munich Heart Alliance [Munich, Allemagne] (MHA), German Heart Center = Deutsches Herzzentrum München [Munich, Germany] (GHC), Aarhus University Hospital [Skejby, Denmark] (AUH), Hospital Universitario de La Princesa, National University of Ireland [Galway] (NUI Galway), University Hospitals Leuven [Leuven], Technische Hochschule Mittelhessen - University of Applied Sciences [Giessen] (THM), Cardiovasculaire, métabolisme, diabétologie et nutrition (CarMeN), Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université de Lyon-Hospices Civils de Lyon (HCL)-Institut National de la Santé et de la Recherche Médicale (INSERM)-Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (INRAE), Hospices Civils de Lyon (HCL), Université de Lyon, Institut Mutualiste de Montsouris (IMM), CHU Clermont-Ferrand, Centre Hospitalier Régional Universitaire de Besançon (CHRU Besançon), Centre de recherche Cardio-Thoracique de Bordeaux [Bordeaux] (CRCTB), Université Bordeaux Segalen - Bordeaux 2-CHU Bordeaux [Bordeaux]-Institut National de la Santé et de la Recherche Médicale (INSERM), National Institute of Cardiology [Warsaw, Poland] (NIC), Instituto de Investigación Sanitaria del Hospital Clínico San Carlos [Madrid, Spain] (IdISSC), Massachusetts 
General Hospital [Boston], Cardiovascular Research Foundation [New York, NY, USA] (CRF), Icahn School of Medicine at Mount Sinai [New York] (MSSM), Barts Health NHS Trust [London, UK], Queen Mary University of London (QMUL), Bern University Hospital [Berne] (Inselspital), Centro Cardiologico Monzino [Milan, Italy] (2CM), Istituti di Ricovero e Cura a Carattere Scientifico (IRCCS), Mitsukoshi Health and Welfare Foundation [Tokyo, Japan] (MHWF), Yokohama Minami Kyosai Hospital [Kanagawa, Japan] (YMKH), Showa University Hospital [Tokyo, Japan] (SUH), Kyung Hee University [Seoul, South Korea] (KHU), The Chinese University of Hong Kong [Hong Kong], Università degli studi di Genova = University of Genoa (UniGe), Università degli studi di Parma = University of Parma (UNIPR), Catholic University of the Sacred Heart [Rome, Italy] (CUSH), University Hospital [Ioannina, Greece] (UH), New York University School of Medicine (NYU Grossman School of Medicine), Vancouver General Hospital [Vancouver, British Columbia, Canada] (VGH), University of British Columbia (UBC), Brigham and Women’s Hospital [Boston, MA], New York Presbyterian Hospital, Columbia University Medical Center (CUMC), Columbia University [New York], Ospedale San Giovanni Bosco [Turin, Italy] (OSGB), Johannes Gutenberg - Universität Mainz = Johannes Gutenberg University (JGU), National and Kapodistrian University of Athens (NKUA), Tokyo Medical and Dental University [Japan] (TMDU), Kitasato University, Nippon Medical School Chiba Hokusoh Hospital [Chiba, Japan] (NMSC2H), The Prince Charles Hospital, Nara Medical University [Nara, Japan] (NMU), Tsuchiura Kyodo General Hospital [Ibaraki, Japan] (TKGH), Japanese Red Cross Musashino Hospital [Tokyo], St. 
Marianna University School of Medicine [Kanagawa, Japan], Kyoto University Graduate School of Medicine [Kyoto, Japan] (KUGSM), Jagiellonian University - Medical College (JUMC), Uniwersytet Jagielloński w Krakowie = Jagiellonian University (UJ), Mazzoni Hospital [Ascoli Piceno, Italy] (MH), Korea Advanced Institute of Science and Technology (KAIST), University of Texas Health Science Center, The University of Texas Health Science Center at Houston (UTHealth), Saint Camillus International University of Health Sciences [Rome, Italy] (SCIUHS), Fondazione IRCCS Policlinico San Matteo [Pavia], Università degli Studi di Pavia = University of Pavia (UNIPV), Universidade de Lisboa = University of Lisbon (ULISBOA), MedStar Washington Hospital Center [Washington, DC, USA] (MedStar WHC), CV Path Institute [Gaithersburg, MD, USA] (CV-PI), Erasmus University Medical Center [Rotterdam] (Erasmus MC), Yokohama City University (YCU), Wakayama University, Tohoku University [Sendai], Mayo Clinic [Rochester, MN, USA], Mayo Clinic [Rochester], University hospital of Zurich [Zurich], Gifu University Graduate School of Medicine, Madras Medical Mission [Chennai, India] (3M), MedStar Health Research Institute [Washington, DC, USA] (MedStar-HRI), Chinese People's Liberation Army General Hospital [Beijing, China] (CPLAGH), Harbin Medical University [China] (HMU), Monash university, University of Melbourne, Royal Papworth Hospital [Cambridge, UK] (RPH), Johns Hopkins University (JHU), Leiden University Medical Center (LUMC), The Open University of Japan [Chiba] (OUJ), and Massachusetts Institute of Technology (MIT)
- Subjects
[SDV] Life Sciences [q-bio], Cardiology and Cardiovascular Medicine - Abstract
Optical coherence tomography (OCT) has been widely adopted in research on coronary atherosclerosis and adopted clinically to optimize percutaneous coronary intervention. In this Review, Jang and colleagues summarize this rapidly progressing field, with the aim of standardizing the use of OCT in coronary atherosclerosis. Since optical coherence tomography (OCT) was first performed in humans two decades ago, this imaging modality has been widely adopted in research on coronary atherosclerosis and adopted clinically for the optimization of percutaneous coronary intervention. In the past 10 years, substantial advances have been made in the understanding of in vivo vascular biology using OCT. Identification by OCT of culprit plaque pathology could potentially lead to a major shift in the management of patients with acute coronary syndromes. Detection by OCT of healed coronary plaque has been important in our understanding of the mechanisms involved in plaque destabilization and healing with the rapid progression of atherosclerosis. Accurate detection by OCT of sequelae from percutaneous coronary interventions that might be missed by angiography could improve clinical outcomes. In addition, OCT has become an essential diagnostic modality for myocardial infarction with non-obstructive coronary arteries. Insight into neoatherosclerosis from OCT could improve our understanding of the mechanisms of very late stent thrombosis. The appropriate use of OCT depends on accurate interpretation and understanding of the clinical significance of OCT findings. In this Review, we summarize the state of the art in cardiac OCT and facilitate the uniform use of this modality in coronary atherosclerosis. Contributions have been made by clinicians and investigators worldwide with extensive experience in OCT, with the aim that this document will serve as a standard reference for future research and clinical application.
- Published
- 2022
3. Diagnosis of coronary layered plaque by deep learning
- Author
-
Makoto Araki, Sangjoon Park, Akihiro Nakajima, Hang Lee, Jong Chul Ye, and Ik-Kyung Jang
- Subjects
Multidisciplinary - Abstract
Healed coronary plaques, morphologically characterized by a layered phenotype, are signs of previous plaque destabilization and healing. Recent optical coherence tomography (OCT) studies demonstrated that layered plaque is associated with higher levels of local and systemic inflammation and rapid plaque progression. However, the diagnosis of layered plaque requires expertise in OCT image analysis and is susceptible to inter-observer variability. We developed a deep learning (DL) model for accurate diagnosis of layered plaque. A Vision Transformer (ViT)-based DL model that integrates information from adjacent frames, emulating cardiologists who review consecutive OCT frames to make a diagnosis, was developed and compared with a standard convolutional neural network (CNN) model. A total of 237,021 cross-sectional OCT images from 581 patients collected from 8 sites were used for training and internal validation, and 65,394 images from 292 patients collected from another site were used for external validation. In five-fold cross-validation, the ViT-based model provided better performance (area under the curve [AUC]: 0.860; 95% confidence interval [CI]: 0.855–0.866) than the standard CNN-based model (AUC: 0.799; 95% CI: 0.792–0.805). The ViT-based model (AUC: 0.845; 95% CI: 0.837–0.853) also surpassed the standard CNN-based model (AUC: 0.791; 95% CI: 0.782–0.800) in the external validation. The ViT-based DL model can accurately diagnose layered plaque, which could help risk stratification for cardiac events.
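The AUC values reported above can be read through the rank-based definition of AUC: the probability that a randomly chosen positive frame receives a higher model score than a randomly chosen negative one. A minimal pure-Python sketch with made-up scores (not data from the study):

```python
def auc(labels, scores):
    """Rank-based AUC: probability that a random positive outscores
    a random negative (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for four frames (two layered = 1, two non-layered = 0)
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
result = auc(labels, scores)  # 3 of the 4 positive-negative pairs are ranked correctly
```

This pairwise estimate is equivalent to the area under the empirical ROC curve, which is what multi-site validation studies such as this one report.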
- Published
- 2023
4. Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation
- Author
-
Sangjoon Park, Gwanghyun Kim, Yujin Oh, Joon Beom Seo, Sang Min Lee, Jin Hwan Kim, Sungjun Moon, Jae-Kwang Lim, Chang Min Park, and Jong Chul Ye
- Subjects
Radiography ,Multidisciplinary ,X-Rays ,COVID-19 ,Humans ,General Physics and Astronomy ,Diagnosis, Computer-Assisted ,General Chemistry ,Algorithms ,General Biochemistry, Genetics and Molecular Biology - Abstract
Although deep learning-based computer-aided diagnosis systems have recently achieved expert-level performance, developing a robust model requires large, high-quality data with annotations that are expensive to obtain. This poses a conundrum: the chest X-rays collected each year cannot be utilized because labels are absent, especially in deprived areas. In this study, we present a framework named distillation for self-supervision and self-train learning (DISTL), inspired by the learning process of radiologists, which can improve the performance of a vision transformer simultaneously with self-supervision and self-training through knowledge distillation. In external validation from three hospitals for diagnosis of tuberculosis, pneumothorax, and COVID-19, DISTL offers gradually improved performance as the amount of unlabeled data increases, even surpassing the fully supervised model trained with the same amount of labeled data. We additionally show that the model obtained with DISTL is robust to various real-world nuisances, offering better applicability in clinical settings.
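The abstract does not give DISTL's exact loss, but the knowledge-distillation component it builds on is standard: the student network is trained to match the teacher's temperature-softened class distribution. A hedged pure-Python sketch of that Hinton-style objective (the actual framework combines it with self-supervision and self-training, which are not shown here):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax; larger T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    as in Hinton-style knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher incurs zero loss; a divergent one does not
loss_same = distillation_loss([2.0, 0.5], [2.0, 0.5])
loss_diff = distillation_loss([2.0, 0.5], [0.5, 2.0])
```

Minimizing this loss on unlabeled images is what lets the teacher's knowledge propagate without manual annotations.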
- Published
- 2022
5. Deep learning–based denoising algorithm in comparison to iterative reconstruction and filtered back projection: a 12-reader phantom study
- Author
-
Kyeorye Lee, Kyoung Ho Lee, Jong Chul Ye, Young Hoon Kim, Eun Hee Kang, Hae Young Kim, Won Chang, Ji Hoon Park, Yoon Jin Lee, Youngjune Kim, and Dong Yul Oh
- Subjects
Image quality, Iterative reconstruction, Radiation Dosage, Imaging phantom, Deep Learning, Humans, Medicine, Radiology, Nuclear Medicine and Imaging, Computer vision, Ground truth, Receiver operating characteristic, Radon transform, Phantoms, Imaging, General Medicine, Radiographic Image Interpretation, Computer-Assisted, Standard algorithms, Noise (video), Artificial intelligence, Radiology, Tomography, X-Ray Computed, Algorithms - Abstract
(1) To compare the low-contrast detectability of a deep learning–based denoising algorithm (DLA) with that of ADMIRE and FBP, and (2) to compare image quality parameters of the DLA with those of reconstruction methods from two different CT vendors (ADMIRE, IMR, and FBP). Using abdominal CT images of 100 patients reconstructed via ADMIRE and FBP, we trained the DLA by feeding FBP images as input and ADMIRE images as the ground truth. To measure low-contrast detectability, randomized repeat scans of a Catphan® phantom were performed under various radiation exposures. Twelve radiologists evaluated the presence/absence of a target on a five-point confidence scale. The multi-reader multi-case area under the receiver operating characteristic curve (AUC) was calculated, and non-inferiority tests were performed. Using the American College of Radiology CT accreditation phantom, the contrast-to-noise ratio, target transfer function, noise magnitude, and detectability index (d’) of DLA, ADMIRE, IMR, and FBP were computed. The AUC of the DLA in low-contrast detectability was non-inferior to that of ADMIRE (p < .001) and superior to that of FBP (p < .001). The DLA improved image quality in terms of all physical measurements compared with FBP from both CT vendors and showed physical measurement profiles similar to those of ADMIRE. The low-contrast detectability of the proposed deep learning–based denoising algorithm was non-inferior to that of ADMIRE and superior to that of FBP. The DLA successfully improved image quality compared with FBP while showing physical profiles similar to those of ADMIRE. • Low-contrast detectability in the images denoised using the deep learning algorithm was non-inferior to that in the images reconstructed using standard algorithms. • The proposed deep learning algorithm showed profiles of physical measurements similar to those of the advanced iterative reconstruction algorithm (ADMIRE).
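Of the physical metrics listed, the contrast-to-noise ratio is the simplest to state precisely: the absolute difference of the ROI means divided by the background noise. A small illustrative sketch with invented pixel values (the study's actual ROI placement follows the ACR phantom protocol and is not reproduced here):

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| / std(background)."""
    contrast = abs(statistics.mean(signal_roi) - statistics.mean(background_roi))
    noise = statistics.stdev(background_roi)
    return contrast / noise

# Invented HU samples: a low-contrast target ROI vs. a uniform background ROI
target = [110, 112, 111, 109]
background = [100, 102, 98, 100]
ratio = cnr(target, background)
```

Denoising raises CNR mainly by shrinking the background standard deviation, which is why it is a natural yardstick for comparing DLA, ADMIRE, IMR, and FBP.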
- Published
- 2021
6. Reusability report: Feature disentanglement in generating a three-dimensional structure from a two-dimensional slice with sliceGAN
- Author
-
Hyungjin Chung and Jong Chul Ye
- Subjects
Computer Networks and Communications, Computer science, Structure (category theory), Pattern recognition, Human-Computer Interaction, Artificial Intelligence, Feature (computer vision), Wafer, Computer Vision and Pattern Recognition, Software, Reusability - Published
- 2021
7. Deep learning enables reference-free isotropic super-resolution for volumetric fluorescence microscopy
- Author
-
Hyoungjun Park, Myeongsu Na, Bumju Kim, Soohyun Park, Ki Hean Kim, Sunghoe Chang, and Jong Chul Ye
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Multidisciplinary ,Computer Science - Artificial Intelligence ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,General Physics and Astronomy ,Machine Learning (stat.ML) ,General Chemistry ,General Biochemistry, Genetics and Molecular Biology ,Machine Learning (cs.LG) ,Deep Learning ,Imaging, Three-Dimensional ,Artificial Intelligence (cs.AI) ,Microscopy, Fluorescence ,Statistics - Machine Learning ,Anisotropy - Abstract
Volumetric imaging by fluorescence microscopy is often limited by anisotropic spatial resolution, in which the axial resolution is inferior to the lateral resolution. To address this problem, we present a deep-learning-enabled unsupervised super-resolution technique that enhances anisotropic images in volumetric fluorescence microscopy. In contrast to existing deep learning approaches that require matched high-resolution target images, our method greatly reduces the effort required to put it into practice, as training the network requires only a single 3D image stack, without a priori knowledge of the image formation process, registration of training data, or separate acquisition of target data. This is achieved with an optimal-transport-driven cycle-consistent generative adversarial network that learns from an unpaired matching between high-resolution 2D images in the lateral image plane and low-resolution 2D images in other planes. Using fluorescence confocal microscopy and light-sheet microscopy, we demonstrate that the trained network not only enhances axial resolution but also restores suppressed visual details between the imaging planes and removes imaging artifacts.
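The cycle-consistency constraint at the heart of this approach is easy to state: mapping an image through the degradation generator and then the restoration generator should return the original. A toy pure-Python illustration, where a moving-average blur stands in for the axial degradation and plain functions replace the networks (purely illustrative, not the paper's model):

```python
def blur(signal):
    """3-tap moving average standing in for the axial degradation generator G."""
    n = len(signal)
    return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def cycle_loss(restore, signal):
    """L1 cycle-consistency: restore(blur(x)) should reproduce x."""
    restored = restore(blur(signal))
    return sum(abs(a - b) for a, b in zip(restored, signal)) / len(signal)

# An identity "restorer" fails to undo the blur, so the cycle loss is nonzero;
# training drives the restoration network toward making this loss vanish.
x = [0.0, 0.0, 1.0, 0.0, 0.0]
loss = cycle_loss(lambda s: s, x)
```

In the actual method this loss, combined with adversarial terms, is what allows training from unpaired lateral and axial slices.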
- Published
- 2022
8. Deep learning STEM-EDX tomography of nanocrystals
- Author
-
Byeong Gyu Chae, Yoseob Han, Jun-Ho Lee, Hee Goo Kim, Jong Chul Ye, Shinae Jun, Jaeduck Jang, Tae-Gon Kim, Eunha Lee, Sungwoo Hwang, Eunju Cha, Hyungjin Chung, and Myoungho Jeong
- Subjects
Materials science, Tomographic reconstruction, Computer Networks and Communications, Nanoparticle, Human-Computer Interaction, Artificial Intelligence, Quantum dot, Scanning transmission electron microscopy, Optoelectronics, Quantum efficiency, Computer Vision and Pattern Recognition, Tomography, Spectroscopy, Nanoscopic scale, Software - Abstract
Energy-dispersive X-ray spectroscopy (EDX) is often performed simultaneously with high-angle annular dark-field scanning transmission electron microscopy (STEM) for nanoscale physico-chemical analysis. However, high-quality STEM-EDX tomographic imaging is still challenging due to fundamental limitations such as sample degradation with prolonged scan time and the low probability of X-ray generation. To address this, we propose an unsupervised deep learning method for high-quality 3D EDX tomography of core–shell nanocrystals, which are usually permanently damaged by prolonged electron-beam exposure. The proposed deep learning STEM-EDX tomography method was used to accurately reconstruct Au nanoparticles and InP/ZnSe/ZnS core–shell quantum dots used in commercial display devices. Furthermore, the shape and thickness uniformity of the reconstructed ZnSe/ZnS shell closely correlates with the optical properties of the quantum dots, such as quantum efficiency and chemical stability. Advanced electron microscopy and spectroscopy techniques can reveal useful structural and chemical details at the nanoscale. An unsupervised deep learning approach helps to reconstruct 3D images and observe the relationship between optical and structural properties of semiconductor nanocrystals, of interest in optoelectronic applications.
- Published
- 2021
9. Deep learning for tomographic image reconstruction
- Author
-
Jong Chul Ye, Bruno De Man, and Ge Wang
- Subjects
Modern medicine, Tomographic reconstruction, Computer Networks and Communications, Computer science, Image quality, Deep learning, Image processing and computer vision, Context (language use), Convolutional neural network, Field (computer science), Human-Computer Interaction, Artificial Intelligence, Medical imaging, Computer vision, Computer Vision and Pattern Recognition, Software - Abstract
Deep-learning-based tomographic imaging is an important application of artificial intelligence and a new frontier of machine learning. Deep learning has been widely used in computer vision and image analysis, which deal with existing images, improve these images, and produce features from them. Since 2016, deep learning techniques have been actively researched for tomographic imaging, especially in the context of biomedicine, with impressive results and great potential. Tomographic reconstruction produces images of multi-dimensional structures from externally measured ‘encoded’ data in the form of various tomographic transforms (integrals, harmonics, echoes and so on). In this Review, we provide a general background, highlight representative results with an emphasis on medical imaging, and discuss key issues that need to be addressed in this emerging field. In particular, tomographic imaging is an integral part of modern medicine, and will play a key role in personalized, preventive and precision medicine and make it intelligent, inexpensive and indiscriminate. The popularity of deep learning is leading to new areas in biomedical applications. Wang and colleagues summarize in this Review the recent development and future directions of deep neural networks for superior image quality in the tomographic imaging field.
- Published
- 2020
10. Assessing the importance of magnetic resonance contrasts using collaborative generative adversarial networks
- Author
-
Dongwook Lee, Jong Chul Ye, and Won-Jin Moon
- Subjects
Computer Networks and Communications, Computer science, Machine learning, Adversarial system, Artificial Intelligence, Medical imaging, Set (psychology), Contrast (statistics), Magnetic resonance imaging, Human-Computer Interaction, Generative model, Scalability, Computer Vision and Pattern Recognition, Software, Generative grammar - Abstract
A unique advantage of magnetic resonance imaging (MRI) is its mechanism for generating various image contrasts depending on tissue-specific parameters, which provides useful clinical information. Unfortunately, a complete set of MR contrasts is often difficult to obtain in a real clinical environment. Recently, there have been claims that generative models such as generative adversarial networks (GANs) can synthesize MR contrasts that are not acquired. However, the poor scalability of existing GAN-based image synthesis poses a fundamental challenge to understanding the nature of MR contrasts: which contrasts matter, and which cannot be synthesized by generative models? Here, we show that these questions can be addressed systematically by learning the joint manifold of multiple MR contrasts using collaborative generative adversarial networks. Our experimental results show that the exogenous contrast provided by contrast agents is not replaceable, but endogenous contrasts such as T1 and T2 can be synthesized from other contrasts. These findings provide important guidance for the acquisition-protocol design of MR in clinical environments. Magnetic resonance scans use different contrast agents to generate different images, each giving specific clinical information. Lee et al. use a collaborative generative model to synthesize some magnetic resonance contrasts from others, providing guidance for how clinical imaging times can be reduced.
- Published
- 2020
11. Compressed sensing MRI: a review from signal processing perspective
- Author
-
Jong Chul Ye
- Subjects
Cultural Studies, Linguistics and Language, History, lcsh:Medical technology, Computer science, lcsh:Biotechnology, Dynamic imaging, Review, Language and Linguistics, lcsh:TP248.13-248.65, MRI, compressed sensing, k-space, Computer vision, Signal processing, Modality (human–computer interaction), Echo (computing), Perspective (graphical), Magnetic resonance imaging, Compressed sensing, lcsh:R855-855.5, Anthropology, Artificial intelligence - Abstract
Magnetic resonance imaging (MRI) is an inherently slow imaging modality, since it acquires multi-dimensional k-space data through 1-D free induction decay or echo signals. This often limits the use of MRI, especially for high-resolution or dynamic imaging. Accordingly, many investigators have developed various acceleration techniques to allow fast MR imaging. Over the last two decades, one of the most important breakthroughs in this direction has been the introduction of compressed sensing (CS), which allows accurate reconstruction from sparsely sampled k-space data. The recent FDA approval of compressed sensing products for clinical scans clearly reflects the maturity of this technology. This paper therefore reviews the basic idea of CS and how this technology has evolved to address various MR imaging problems.
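The core CS reconstruction problem is the l1-regularized least-squares program min_x ||Ax − y||² + λ||x||₁, classically solved by iterative soft-thresholding (ISTA). A self-contained toy sketch on a 3-pixel "image" with 2 measurements (illustrative only; real CS-MRI uses Fourier-domain sampling operators and far larger problems):

```python
def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def ista(A, y, lam=0.1, step=0.1, iters=2000):
    """Iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual and gradient of the data term: 2 A^T (A x - y)
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(y))]
        g = [2 * sum(A[i][j] * r[i] for i in range(len(y))) for j in range(n)]
        x = soft([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# Toy: 2 measurements of a 1-sparse, 3-pixel signal [0, 2, 0]
A = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
y = [2.0, 2.0]
x_hat = ista(A, y)
```

With λ small, the recovered signal is close to the true 1-sparse solution, slightly shrunk by the l1 penalty (to about 1.975 here); sparsity is exactly the prior that lets CS undo k-space undersampling.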
- Published
- 2019
12. [Untitled]
- Author
-
Pierre Moulin, Jong Chul Ye, and Yoram Bresler
- Subjects
Level set (data structures), Level set method, Pixel, Computer science, Iterative reconstruction, Inverse problem, Image (mathematics), Fourier transform, Artificial Intelligence, Conjugate gradient method, Computer Vision and Pattern Recognition, Algorithm, Software - Abstract
We address an ill-posed inverse problem of image estimation from sparse samples of its Fourier transform. The problem is formulated as the joint estimation of the supports of unknown sparse objects in the image and the pixel values on these supports. The support domain and the pixel values are alternately estimated using the level-set method and the conjugate gradient method, respectively. Our level-set evolution exhibits a unique switching behavior, which stabilizes the evolution. Furthermore, the trade-off between the stability and the speed of evolution can be easily controlled by the number of conjugate gradient steps, thus avoiding the re-initialization steps of conventional level-set approaches.
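The pixel-value step above solves a symmetric positive-definite linear system on the current support. A minimal textbook conjugate gradient sketch in pure Python (a generic implementation, not the authors' code):

```python
def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Conjugate gradient for Ax = b with symmetric positive-definite A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - Ax for the zero initial guess
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 2x2 SPD system: CG reaches the exact answer in at most two iterations
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
solution = conjugate_gradient(A, b)
```

Capping the iteration count, as the abstract notes, directly trades reconstruction speed against the stability of the alternating scheme.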
- Published
- 2002
Discovery Service for Jio Institute Digital Library