21 results on '"Jung, Ho Yub"'
Search Results
2. Consistent color and detail transfer from multiple source images for video and images
- Author
- Heo, Yong Seok, Lee, Soochahn, and Jung, Ho Yub
- Published
- 2016
- Full Text
- View/download PDF
3. Window annealing for pixel-labeling problems
- Author
- Jung, Ho Yub, Lee, Kyoung Mu, and Lee, Sang Uk
- Published
- 2013
- Full Text
- View/download PDF
4. Semi-supervised atmospheric component learning in low-light image problem.
- Author
- Fahim, Masud An Nur Islam, Saqib, Nazmus, and Jung, Ho Yub
- Subjects
- LIGHT transmission, IMAGE reconstruction, WEATHER, SUPERVISED learning, PHOTOGRAPHS, NETWORK performance
- Abstract
Ambient lighting conditions play a crucial role in determining the perceptual quality of images from photographic devices. In general, inadequate transmission light and undesired atmospheric conditions jointly degrade image quality. If we know the desired ambient factors associated with a given low-light image, we can recover the enhanced image easily. Typical deep networks perform enhancement mappings without investigating the light distribution and color formulation properties, which leads to a lack of instance-adaptive performance in practice. On the other hand, physical model-driven schemes suffer from the need for inherent decompositions and multiple objective minimizations. Moreover, the above approaches are rarely data-efficient or free of post-prediction tuning. Motivated by these issues, this study presents a semi-supervised training method using no-reference image quality metrics for low-light image restoration. We incorporate the classical haze distribution model to explore the physical properties of the given image, learn the effect of atmospheric components, and minimize a single objective for restoration. We validate the performance of our network on six widely used low-light datasets. Experimental studies show that the proposed method achieves competitive performance on no-reference metrics compared to current state-of-the-art methods. We also show the improved generalization performance of our method, which is effective in preserving face identities in extreme low-light scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
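The abstract above leans on the classical haze distribution model. A minimal sketch of that model, I(x) = J(x)·t(x) + A·(1 − t(x)), is shown below; the transmission map and atmospheric light here are illustrative placeholders, not the paper's learned components.

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1-t).

    I: observed image, float array in [0, 1], shape (H, W, 3)
    t: per-pixel transmission map, shape (H, W)
    A: global atmospheric light, shape (3,)
    """
    t = np.clip(t, t_min, 1.0)[..., None]        # avoid division blow-up
    J = (I - A * (1.0 - t)) / t                  # recover scene radiance
    return np.clip(J, 0.0, 1.0)

# Toy usage with placeholder estimates (in the paper these are learned).
I = np.random.rand(4, 4, 3).astype(np.float32)
t = np.full((4, 4), 0.6, dtype=np.float32)       # hypothetical transmission
A = np.array([0.9, 0.9, 0.9], dtype=np.float32)  # hypothetical airlight
J = recover_scene(I, t, A)
```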
5. Can hippocampal subfield measures supply information that could be used to improve the diagnosis of Alzheimer's disease?
- Author
- Kannappan, Balaji, te Nijenhuis, Jan, Choi, Yu Yong, Lee, Jang Jae, Choi, Kyu Yeong, Balzekas, Irena, Jung, Ho Yub, Choe, Youngshik, Song, Min Kyung, Chung, Ji Yeon, Ha, Jung-Min, Choi, Seong-Min, Kim, Hoowon, Kim, Byeong C., Jo, Hang Joon, and Lee, Kun Ho
- Subjects
- ALZHEIMER'S disease, INFORMATION measurement, HIPPOCAMPUS (Brain), DENTATE gyrus, GRANULE cells, NEUROPSYCHOLOGICAL tests
- Abstract
The diagnosis of Alzheimer's disease (AD) needs to be improved. We investigated whether hippocampal subfield volumes, measured by structural imaging, could supply information to improve the diagnosis of AD. In this study, subjects were classified based on clinical and neuropsychological assessment and on amyloid positivity or negativity from PET scans. Data from 478 elderly Korean subjects were used, grouped as cognitively unimpaired β-amyloid-negative (NC), cognitively unimpaired β-amyloid-positive (aAD), mildly cognitively impaired β-amyloid-positive (pAD), mildly cognitively impaired with specific variations not due to dementia β-amyloid-negative (CIND), severe cognitive impairment β-amyloid-positive (ADD+), and severe cognitive impairment β-amyloid-negative (ADD-). The NC and aAD groups did not show significant volume differences in any subfield. The CIND group did not show significant volume differences when compared with either the NC or the aAD group (except L-HATA). However, the pAD group showed significant volume differences in Sub, PrS, ML, Tail, GCMLDG, CA1, CA4, HATA, and CA3 when compared with the NC and aAD groups. The pAD group also showed significant differences in the hippocampal tail, CA1, CA4, molecular layer, granule cells/molecular layer/dentate gyrus, and CA3 when compared with the CIND group. The ADD- group had significantly larger volumes than the ADD+ group in the bilateral tail, SUB, PrS, and left ML. The results suggest that early amyloid depositions in cognitively normal stages are not accompanied by significant bilateral subfield volume atrophy. There might be intense and accelerated subfield volume atrophy in the later stages, associated with the cognitive impairment in the pAD stage, which could subsequently drive the progression to AD dementia. Early subfield volume atrophy associated with the β-amyloid burden may be characterized by more symmetrical atrophy in CA regions than in other subfields. We conclude that hippocampal subfield volumetric differences from structural imaging show promise for improving the diagnosis of Alzheimer's disease. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
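The group differences reported above are classical two-sample volume comparisons. The abstract does not state the exact statistical test, so the sketch below uses Welch's t-test purely as a plausible stand-in, with entirely synthetic subfield volumes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic CA1 volumes (mm^3) for two hypothetical groups.
ca1_pAD = rng.normal(580, 60, size=40)   # MCI, amyloid-positive
ca1_NC  = rng.normal(640, 55, size=60)   # unimpaired, amyloid-negative

# Welch's t-test (unequal variances), run per subfield.
t, p = stats.ttest_ind(ca1_pAD, ca1_NC, equal_var=False)
print(f"CA1: t = {t:.2f}, p = {p:.4f}")
```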
6. Denoising Single Images by Feature Ensemble Revisited.
- Author
- Fahim, Masud An Nur Islam, Saqib, Nazmus, Siam, Shafkat Khan, and Jung, Ho Yub
- Subjects
- IMAGE reconstruction, COMPUTER vision, IMAGE denoising
- Abstract
Image denoising is still a challenging issue in many computer vision subdomains. Recent studies have shown that significant improvements are possible in a supervised setting. However, a few challenges, such as spatial fidelity and cartoon-like smoothing, remain unresolved or largely overlooked. Our study proposes a simple yet efficient architecture for the denoising problem that addresses these issues. The proposed architecture revisits the concept of modular concatenation, instead of long and deeper cascaded connections, to recover a cleaner approximation of the given image. We find that different modules can capture versatile representations, and that a concatenated representation creates a richer subspace for low-level image restoration. The proposed architecture has fewer parameters than most previous networks yet still achieves significant improvements over current state-of-the-art networks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
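A hedged sketch of the "modular concatenation instead of long cascades" idea from the abstract above: parallel feature modules whose outputs are concatenated and fused by a 1×1 convolution. The module design, widths, and residual formulation are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Module3x3(nn.Module):
    """One illustrative feature module; the paper's modules may differ."""
    def __init__(self, ch, dilation=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation),
        )
    def forward(self, x):
        return self.body(x)

class ConcatDenoiser(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        # Parallel modules capture different receptive fields.
        self.branches = nn.ModuleList(
            [Module3x3(ch, d) for d in (1, 2, 3, 4)])
        self.fuse = nn.Conv2d(ch * 4, 3, 1)  # fuse the concatenated subspace

    def forward(self, x):
        f = self.head(x)
        feats = torch.cat([m(f) for m in self.branches], dim=1)
        return x - self.fuse(feats)          # residual: predict the noise

x = torch.randn(1, 3, 64, 64)
print(ConcatDenoiser()(x).shape)             # torch.Size([1, 3, 64, 64])
```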
7. Rethinking Gradient Weight's Influence over Saliency Map Estimation.
- Author
- Fahim, Masud An Nur Islam, Saqib, Nazmus, Siam, Shafkat Khan, and Jung, Ho Yub
- Subjects
- ARTIFICIAL neural networks
- Abstract
A class activation map (CAM) helps to formulate saliency maps that aid in interpreting a deep neural network's prediction. Gradient-based methods are generally faster than other branches of vision interpretability and independent of human guidance. The performance of CAM-like methods depends on the governing model's layer response and the influence of the gradients. Typical gradient-oriented CAM studies rely on weighted aggregation for saliency map estimation, projecting the gradient maps into single-weight values, which may lead to an over-generalized saliency map. To address this issue, we use a global guidance map to rectify the weighted aggregation operation during saliency estimation, so that the resulting interpretations are comparatively cleaner and instance-specific. We obtain the global guidance map by performing elementwise multiplication between the feature maps and their corresponding gradient maps. To validate our study, we compare the proposed method with nine different saliency visualizers. In addition, we use seven commonly used evaluation metrics for quantitative comparison. The proposed scheme achieves significant improvements over the test images from the ImageNet, MS-COCO 14, and PASCAL VOC 2012 datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
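The core operation stated in the abstract above is an elementwise product of feature maps and their gradients, in place of projecting gradients to single channel weights. A minimal sketch follows; the channel aggregation and normalization steps are assumptions.

```python
import numpy as np

def guided_cam(feats, grads):
    """feats, grads: (C, H, W) activations and their gradients
    w.r.t. the target class score.

    Classic Grad-CAM would weight each channel by the spatially pooled
    gradient; here the guidance map keeps per-location gradient detail.
    """
    guidance = feats * grads                 # elementwise global guidance
    sal = np.maximum(guidance, 0).sum(0)     # aggregate channels, keep + evidence
    sal -= sal.min()
    return sal / (sal.max() + 1e-8)          # normalize to [0, 1]

feats = np.random.rand(256, 14, 14)
grads = np.random.randn(256, 14, 14)
print(guided_cam(feats, grads).shape)        # (14, 14)
```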
8. Ensemble of ROI-based convolutional neural network classifiers for staging the Alzheimer disease spectrum from magnetic resonance imaging.
- Author
- Ahmed, Samsuddin, Kim, Byeong C., Lee, Kun Ho, and Jung, Ho Yub
- Subjects
- CONVOLUTIONAL neural networks, ALZHEIMER'S disease, MILD cognitive impairment, VOLUMETRIC analysis, DISEASE progression
- Abstract
Patches from three orthogonal views of selected cerebral regions can be utilized to learn convolutional neural network (CNN) models for staging the Alzheimer disease (AD) spectrum, including preclinical AD, mild cognitive impairment due to AD, dementia due to AD, and normal controls. Hippocampi, amygdalae, and insulae were selected from the volumetric analysis of structural magnetic resonance images (MRIs). Three-view patches (TVPs) from these regions were fed to the CNN for training. MRIs were classified with the SoftMax-normalized scores of individual model predictions on TVPs. The significance of each region of interest (ROI) for staging the AD spectrum was evaluated and reported. The results of the ensemble classifier are compared with state-of-the-art methods using the same evaluation metrics. Patch-based ROI ensembles provide comparable diagnostic performance for AD staging. In this work, TVP-based ROI analysis using a CNN provides informative landmarks in cerebral MRIs and may have significance in clinical studies and computer-aided diagnosis system design. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
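The abstract above classifies MRIs with SoftMax-normalized scores from per-ROI models. A minimal sketch of that ensembling step is shown below; averaging the per-ROI softmax outputs is an assumption, and all logits are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits from three ROI models (hippocampus, amygdala, insula)
# over 4 stages: NC, preclinical AD, MCI due to AD, AD dementia.
logits = {
    "hippocampus": np.array([1.2, 0.4, 2.1, 0.3]),
    "amygdala":    np.array([0.9, 0.2, 1.7, 0.8]),
    "insula":      np.array([1.0, 0.5, 1.1, 0.6]),
}
scores = np.mean([softmax(z) for z in logits.values()], axis=0)
print("predicted stage:", int(np.argmax(scores)))
```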
9. Automatic aortic valve landmark localization in coronary CT angiography using colonial walk.
- Author
- Al, Walid Abdullah, Jung, Ho Yub, Yun, Il Dong, Jang, Yeonggul, Park, Hyung-Bok, and Chang, Hyuk-Jae
- Subjects
- HEART valve prosthesis implantation, CORONARY circulation, ARTIFICIAL intelligence, RANDOM walks, AORTIC valve surgery
- Abstract
Minimally invasive transcatheter aortic valve implantation (TAVI) is the most prevalent method to treat aortic valve stenosis. For pre-operative surgical planning, contrast-enhanced coronary CT angiography (CCTA) is used as the imaging technique to acquire 3-D measurements of the valve. Accurate localization of the eight aortic valve landmarks in CT images plays a vital role in the TAVI workflow, because a small error risks blocking the coronary circulation. In order to examine the valve and mark the landmarks, physicians prefer a view parallel to the hinge plane instead of the conventional axial, coronal, or sagittal views. However, customizing the view is a difficult and time-consuming task because of unclear aorta pose and the various artifacts of CCTA. Therefore, automatic localization of the landmarks can serve as a useful guide for physicians customizing the viewpoint. In this paper, we present an automatic method to localize the aortic valve landmarks using colonial walk, a regression tree-based machine-learning algorithm. For efficient learning from the training set, we propose a two-phase optimized search space learning model, in which a representative point inside the valvular area is first learned from the whole CT volume. All eight landmarks are then learned from a smaller area around that point. Experiments with pre-procedural CCTA images of patients undergoing TAVI showed that our method is robust under high stenotic variation and notably efficient, as it requires only 12 milliseconds to localize all eight landmarks, as tested on a 3.60 GHz single-core CPU. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
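The sketch below illustrates only the two-phase search-space reduction described above (learn a representative point from the whole volume, then landmarks from a crop around it). The data, features, and sklearn forests are stand-ins; colonial walk itself is a specific regression-tree walk not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
N = 200
# Synthetic stand-ins: global features of each CT volume and targets.
X_global = rng.normal(size=(N, 16))
center = rng.normal(size=(N, 3))             # representative valve point
landmark = center + rng.normal(0.1, 0.05, size=(N, 3))  # one of 8 landmarks

# Phase 1: coarse regressor from whole-volume features to the center.
phase1 = RandomForestRegressor(n_estimators=50).fit(X_global, center)

# Phase 2: refine inside a crop around the predicted center; the crop
# features here are again synthetic placeholders.
X_local = np.hstack([phase1.predict(X_global), rng.normal(size=(N, 8))])
phase2 = RandomForestRegressor(n_estimators=50).fit(X_local, landmark)
print(phase2.predict(X_local[:1]))           # refined landmark estimate
```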
10. A Sequential Approach to 3D Human Pose Estimation: Separation of Localization and Identification of Body Joints.
- Author
- Jung, Ho Yub, Suh, Yumin, Moon, Gyeongsik, and Lee, Kyoung Mu
- Published
- 2016
- Full Text
- View/download PDF
11. Geodesic Distance Algorithm for Extracting the Ascending Aorta from 3D CT Images.
- Author
- Jang, Yeonggul, Jung, Ho Yub, Hong, Youngtaek, Cho, Iksung, Shim, Hackjoon, and Chang, Hyuk-Jae
- Subjects
- GEODESIC distance, IMAGE segmentation, ALGORITHMS, COMPUTED tomography, AORTA radiography, ENERGY function
- Abstract
This paper presents a method for the automatic 3D segmentation of the ascending aorta from coronary computed tomography angiography (CCTA). The segmentation is performed in three steps. First, the initial seed points are selected by minimizing a newly proposed energy function across the Hough circles. Second, the ascending aorta is segmented by geodesic distance transformation. Third, the seed points are effectively transferred to the next axial slice by a novel transfer function. Experiments are performed on a database composed of 10 patients' CCTA images. For the experiment, the ground truths were annotated manually on the axial image slices by a medical expert. A comparative evaluation with state-of-the-art commercial aorta segmentation algorithms shows that our approach is computationally more efficient and more accurate under the Dice Similarity Coefficient (DSC) measure. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
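A minimal sketch of the geodesic distance transformation used in step two above, as a Dijkstra-style propagation from seed points on a 2D slice. The edge cost (spatial step plus intensity jump), the 4-neighborhood, and the lumen threshold are all assumptions.

```python
import heapq
import numpy as np

def geodesic_distance(img, seeds, beta=1.0):
    """Dijkstra-style geodesic distance on a 2D slice.

    Edge cost between 4-neighbors mixes spatial step and intensity jump,
    so the distance grows slowly inside the homogeneous aorta lumen.
    """
    H, W = img.shape
    dist = np.full((H, W), np.inf)
    heap = [(0.0, s) for s in seeds]
    for _, (y, x) in heap:
        dist[y, x] = 0.0
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                nd = d + 1.0 + beta * abs(img[ny, nx] - img[y, x])
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist

slice_ = np.random.rand(64, 64)
dist = geodesic_distance(slice_, seeds=[(32, 32)])
mask = dist < 10.0        # hypothetical threshold for the lumen region
```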
12. Image Segmentation by Edge Partitioning over a Nonsubmodular Markov Random Field.
- Author
- Jung, Ho Yub and Lee, Kyoung Mu
- Subjects
- IMAGE segmentation, SUBMODULAR functions, MARKOV random fields, NUMBER theory, ESTIMATION theory, ALGORITHMS
- Abstract
Edge weight-based segmentation methods, such as normalized cut or minimum cut, require a partition number to be specified in their energy formulation. The number of partitions plays an important role in the overall segmentation quality. However, finding a suitable partition number is a nontrivial problem, and the numbers are ordinarily assigned manually. This is an aspect of the general partition problem, where finding the partition number is an important and difficult issue. In this paper, the edge weights, instead of the pixels, are partitioned to segment the images. By partitioning the edge weights into two disjoint sets, that is, cut and connect, an image can be partitioned into any possible set of disjoint segments. The proposed energy function is independent of the number of segments. The energy is minimized by iterating between the QPBO-α-expansion algorithm over the pairwise Markov random field and the mean estimation of the cut and connected edges. Experiments on the Berkeley database show that the proposed segmentation method can obtain equivalently accurate segmentation results without designating the number of segments. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
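A sketch of one half of the alternation described above: the mean estimation of cut versus connected edges given a current edge labeling. The data term (assign each edge to the closer mean) is an assumption; the spatial consistency half requires a dedicated QPBO-α-expansion solver and is not reproduced here.

```python
import numpy as np

def update_means(weights, labels):
    """weights: per-edge weights; labels: 1 = cut, 0 = connect."""
    mu_cut = weights[labels == 1].mean() if (labels == 1).any() else weights.max()
    mu_con = weights[labels == 0].mean() if (labels == 0).any() else weights.min()
    return mu_cut, mu_con

def relabel(weights, mu_cut, mu_con):
    # Assign each edge to the closer mean (data term only; the pairwise
    # consistency term is what the QPBO-alpha-expansion step enforces).
    return (np.abs(weights - mu_cut) < np.abs(weights - mu_con)).astype(int)

w = np.random.rand(1000)
labels = (w > w.mean()).astype(int)           # crude initialization
for _ in range(10):                           # alternate until stable
    mu_cut, mu_con = update_means(w, labels)
    labels = relabel(w, mu_cut, mu_con)
```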
13. A Novel Cascade Classifier for Automatic Microcalcification Detection.
- Author
- Shin, Seung Yeon, Lee, Soochahn, Yun, Il Dong, Jung, Ho Yub, Heo, Yong Seok, Kim, Sun Mi, and Lee, Kyoung Mu
- Subjects
- CALCIFICATIONS of the breast, MAMMOGRAMS, RECEIVER operating characteristic curves, BREAST cancer diagnosis, BOLTZMANN machine
- Abstract
In this paper, we present a novel cascaded classification framework for the automatic detection of individual microcalcifications (μCs) and clusters of μCs. Our framework comprises three classification stages: i) a random forest (RF) classifier on simple features capturing the second-order local structure of individual μCs, where non-μC pixels in the target mammogram are efficiently eliminated; ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for the μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and iii) a detector for clusters of μCs based on the individual μC detection results, using two different criteria. With the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as to learn implicit features that can further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for the detection of individual μCs, and the free-response receiver operating characteristic (FROC) curve for the detection of clustered μCs. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
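A sketch of the cascade control flow described above, on synthetic data. sklearn's RandomForestClassifier stands in for stage i); since a discriminative RBM is not available in sklearn, logistic regression is used as a stage ii) stand-in; the stage iii) clustering criterion (at least three detections within a radius) is an illustrative assumption.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))              # synthetic pixel features
y = (X[:, 0] + X[:, 1] > 2.0).astype(int)    # synthetic uC labels

# Stage i): cheap RF on simple features eliminates most non-uC pixels.
rf = RandomForestClassifier(n_estimators=30).fit(X, y)
keep = rf.predict_proba(X)[:, 1] > 0.1       # permissive threshold

# Stage ii): stronger model only on surviving candidates
# (logistic regression is a stand-in for the paper's DRBM).
clf2 = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
is_uc = np.zeros(len(X), bool)
is_uc[keep] = clf2.predict(X[keep]).astype(bool)

# Stage iii): group detections into clusters by mutual distance
# (illustrative criterion: >= 3 detections within a small radius).
coords = rng.uniform(0, 512, size=(len(X), 2))[is_uc]
tree = cKDTree(coords)
clusters = [i for i, nb in enumerate(tree.query_ball_point(coords, r=20.0))
            if len(nb) >= 3]
```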
14. Forest Walk Methods for Localizing Body Joints from Single Depth Image.
- Author
- Jung, Ho Yub, Lee, Soochahn, Heo, Yong Seok, and Yun, Il Dong
- Subjects
- JOINT physiology, POSTURE, RANDOM forest algorithms, POSE estimation (Computer vision), COMPUTATIONAL biology, DISTRIBUTION (Probability theory)
- Abstract
We present multiple random forest methods for human pose estimation from single depth images that can operate at very high frame rates. We introduce four algorithms: random forest walk, greedy forest walk, random forest jumps, and greedy forest jumps. The proposed approaches can accurately infer the 3D positions of body joints without additional information such as temporal priors. A regression forest is trained to estimate the probability distribution of the direction or offset toward a particular joint, relative to the adjacent position. During pose estimation, the new position is chosen from a set of representative directions or offsets. The distribution for the next position is found by traversing the regression tree from the new position. The continual sampling of positions through 3D space eventually produces an expectation over sample positions, which we take as the estimated joint position. The experiments show that the accuracy is higher than that of current state-of-the-art pose estimation methods, with an additional advantage in computation time. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
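A toy sketch of the walk loop from the abstract above: a regression forest predicts the offset toward the joint from a descriptor of the current position, the walker steps repeatedly, and the running mean of visited positions is the joint estimate. The 2D setup, the position-as-descriptor shortcut, and the damping factor are assumptions (a real system would use depth-difference features in 3D).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
joint = np.array([30.0, 40.0])               # hypothetical 2D joint

# Training: descriptor of a position -> offset toward the joint.
P = rng.uniform(0, 64, size=(2000, 2))
forest = RandomForestRegressor(n_estimators=40).fit(P, joint - P)

# Forest walk: step by the predicted offset, average visited positions.
pos, visited = np.array([5.0, 5.0]), []
for _ in range(30):
    step = forest.predict(pos[None])[0]
    pos = pos + 0.5 * step                   # damped step toward the joint
    visited.append(pos.copy())
print("estimate:", np.mean(visited[10:], axis=0), "truth:", joint)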
15. Stereo reconstruction using high order likelihood.
- Author
- Jung, Ho Yub, Lee, Kyoung Mu, and Lee, Sang Uk
- Published
- 2011
- Full Text
- View/download PDF
16. Toward Global Minimum through Combined Local Minima.
- Author
- Jung, Ho Yub, Lee, Kyoung Mu, and Lee, Sang Uk
- Abstract
There are many local and greedy algorithms for energy minimization over Markov Random Fields (MRFs), such as iterated conditional modes (ICM) and various gradient descent methods. Local minimum solutions can be obtained with simple implementations and usually require less computation time than global algorithms. Also, methods such as ICM can be readily implemented for various difficult problems that may involve MRFs with cliques larger than pairwise. However, their shortcomings are evident in comparison to newer methods such as graph cut and belief propagation: the local minimum depends largely on the initial state, which is the fundamental weakness of such methods. In this paper, the disadvantages of local minima techniques are addressed by proposing ways to combine multiple local solutions. First, multiple ICM solutions are obtained using different initial states. The solutions are then combined with a random partitioning-based greedy algorithm called Combined Local Minima (CLM). There are numerous MRF problems that cannot be efficiently implemented with graph cut and belief propagation, and so, by introducing ways to effectively combine local solutions, we present a method to dramatically improve many of the pre-existing local minima algorithms. The proposed approach is shown to be effective on the pairwise stereo MRF compared with graph cut and sequential tree-reweighted belief propagation (TRW-S). Additionally, we tested our algorithm against belief propagation (BP) over randomly generated 30 × 30 MRFs with 2 × 2 clique potentials, and we experimentally illustrate CLM's advantage over message-passing algorithms in computational complexity and performance. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
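A small sketch of the combination idea above on a Potts-model grid: run ICM from several random initializations, then greedily paste random patches between solutions whenever the paste lowers the total energy. The energy form, patch size, and acceptance rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 30; L = 4
unary = rng.random((H, W, L))                # random unary potentials
lam = 0.5                                    # Potts smoothness weight

def energy(x):
    e = unary[np.arange(H)[:, None], np.arange(W), x].sum()
    e += lam * ((x[1:] != x[:-1]).sum() + (x[:, 1:] != x[:, :-1]).sum())
    return e

def icm(x, iters=20):
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].copy()
                for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                    if 0 <= ni < H and 0 <= nj < W:
                        costs += lam * (np.arange(L) != x[ni, nj])
                x[i, j] = costs.argmin()
    return x

# Multiple ICM local minima from different initial states.
sols = [icm(rng.integers(L, size=(H, W))) for _ in range(4)]
best = min(sols, key=energy)

# Greedy combination: paste random patches from other solutions
# whenever the patch lowers the total energy.
for _ in range(200):
    src = sols[rng.integers(len(sols))]
    i, j = rng.integers(H - 5), rng.integers(W - 5)
    cand = best.copy(); cand[i:i+5, j:j+5] = src[i:i+5, j:j+5]
    if energy(cand) < energy(best):
        best = cand
print("combined energy:", energy(best))
```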
17. Window Annealing over Square Lattice Markov Random Field.
- Author
- Jung, Ho Yub, Lee, Kyoung Mu, and Lee, Sang Uk
- Abstract
Monte Carlo methods and their subsequent simulated annealing are able to minimize general energy functions. However, the slow convergence of simulated annealing compared with more recent deterministic algorithms, such as graph cuts and belief propagation, hinders its popularity over large-dimensional Markov Random Fields (MRFs). In this paper, we propose a new efficient sampling-based optimization algorithm called WA (Window Annealing) over a square lattice MRF, in which cluster sampling and annealing concepts are combined. Unlike the conventional annealing process, in which only the temperature variable is scheduled, we design a series of artificial "guiding" (auxiliary) probability distributions based on the general sequential Monte Carlo framework. These auxiliary distributions lead to the maximum a posteriori (MAP) state by scheduling both the temperature and the maximum size of the windows (rectangular clusters). This new annealing scheme greatly enhances the mixing rate and consequently reduces convergence time. Moreover, by adopting the integral image technique for the computation of the proposal probability of a sampled window, we achieve a dramatic reduction in overall computation. The proposed WA is compared with several existing Monte Carlo-based optimization techniques as well as state-of-the-art deterministic methods, including Graph Cut (GC) and sequential tree-reweighted belief propagation (TRW-S), on the pairwise MRF stereo problem. The experimental results demonstrate that the proposed WA method is comparable with GC in both speed and obtained energy level. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
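The abstract above credits much of the speedup to the integral image technique for window proposal probabilities. Below is a minimal, standard integral image sketch: after O(HW) precomputation, any rectangular window sum costs O(1).

```python
import numpy as np

def integral_image(a):
    """S[i, j] = sum of a[:i, :j]; one extra zero row/column."""
    S = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
    S[1:, 1:] = a.cumsum(0).cumsum(1)
    return S

def window_sum(S, y0, x0, y1, x1):
    """Sum of a[y0:y1, x0:x1] in O(1) after O(HW) precomputation."""
    return S[y1, x1] - S[y0, x1] - S[y1, x0] + S[y0, x0]

a = np.random.rand(100, 100)
S = integral_image(a)
assert np.isclose(window_sum(S, 10, 20, 40, 60), a[10:40, 20:60].sum())
```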
18. Stereo Matching Using Scanline Disparity Discontinuity Optimization.
- Author
- Jung, Ho Yub, Lee, Kyoung Mu, and Lee, Sang Uk
- Abstract
We propose a scanline energy minimization algorithm for stereo vision. The proposed algorithm differs from conventional energy minimization techniques in that it focuses on the relationship between the local match cost solution and the energy minimization solution. The local solution is transformed into the energy minimization solution through the optimization of the disparity discontinuities. In this paper, disparity discontinuities are targeted during the energy minimization instead of the disparities themselves. By eliminating and relocating the disparity discontinuities, the energy can be minimized in iterations of O(n) cost, where n is the number of pixels. Although dynamic programming has been adequate in both speed and performance for scanline stereo, the proposed algorithm is shown to have better performance with comparable speed. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
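A toy sketch of the stated idea on one scanline: start from the local winner-take-all disparities, then try removing each disparity discontinuity (extending either neighboring segment over it) and accept moves that lower a simple match-plus-smoothness energy. The energy terms, move set, and acceptance rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, D = 200, 16
cost = rng.random((n, D))                    # synthetic match costs
lam = 0.4                                    # discontinuity penalty

def energy(d):
    return cost[np.arange(n), d].sum() + lam * np.count_nonzero(np.diff(d))

d = cost.argmin(1)                           # local (WTA) solution
for _ in range(10):                          # sweep until little change
    for i in np.flatnonzero(np.diff(d)):     # discontinuity between i, i+1
        for v in (d[i], d[i + 1]):           # try removing it either way
            cand = d.copy()
            cand[i], cand[i + 1] = v, v
            if energy(cand) < energy(d):
                d = cand
print("final energy:", energy(d))
```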
19. Single Image Dehazing Using End-to-End Deep-Dehaze Network.
- Author
- Fahim, Masud An-Nur Islam, Jung, Ho Yub, and Trzcinski, Tomasz
- Subjects
- IMAGE fusion, COMPUTER vision, VERNACULAR architecture, PERCEPTUAL illusions, WEATHER
- Abstract
Haze is a natural distortion of real-life images caused by specific weather conditions. This distortion limits the perceptual fidelity, as well as the information integrity, of a given image. Dehazing observed images is a complicated task because of its ill-posed nature. This study offers the Deep-Dehaze network to retrieve haze-free images. Given an input, the proposed architecture uses four feature extraction modules to perform nonlinear feature extraction. We adapt the traditional U-Net architecture and the residual network to design our architecture. We also introduce an l1 spatial-edge loss function that enables our system to achieve better performance than the typical l1 and l2 loss functions. Unlike other learning-based approaches, our network does not use any fusion connection for image dehazing. By training the image translation and dehazing network in an end-to-end manner, we obtain better results in both image translation and dehazing. We trained our network in an end-to-end manner and validated it on natural and synthetic hazy datasets. Experimental results on these synthetic and real-world images demonstrate that our model performs favorably against state-of-the-art dehazing algorithms without any post-processing, in contrast to the traditional approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
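A hedged sketch of an l1 spatial-edge loss as the abstract above names it: pixelwise l1 plus l1 on horizontal and vertical image gradients. This is a plausible reading; the paper's exact formulation and weighting may differ.

```python
import torch

def l1_spatial_edge_loss(pred, target, edge_weight=1.0):
    """Pixelwise l1 plus l1 on image gradients; pred, target: (N, C, H, W)."""
    l1 = (pred - target).abs().mean()
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]   # horizontal gradient
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]   # vertical gradient
    edge = (dx(pred) - dx(target)).abs().mean() + \
           (dy(pred) - dy(target)).abs().mean()
    return l1 + edge_weight * edge

pred, target = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
print(l1_spatial_edge_loss(pred, target))
```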
20. Fast Single-Image HDR Tone-Mapping by Avoiding Base Layer Extraction.
- Author
- Fahim, Masud An-Nur Islam and Jung, Ho Yub
- Subjects
- HIGH dynamic range imaging, ALGORITHMS, CAMERAS, IMAGE enhancement (Imaging systems)
- Abstract
A tone-mapping algorithm compresses high dynamic range (HDR) information into the standard dynamic range for regular devices. An ideal tone-mapping algorithm reproduces the HDR image without losing any vital information. The usual tone-mapping algorithms mostly deal with detail-layer enhancement and gradient-domain manipulation with the help of a smoothing operator. However, these approaches often face challenges with over-enhancement, halo effects, and over-saturation. To address these challenges, we propose a two-step solution that performs the tone-mapping operation using contrast enhancement. Our method improves the performance of the camera response model by utilizing improved adaptive parameter selection and weight matrix extraction. Experiments show that our method performs reasonably well for overexposed and underexposed HDR images without producing any ringing or halo effects. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
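The abstract above mentions a camera response model combined with a weight matrix. The sketch below assumes the widely used beta-gamma brightness transform from Ying et al.'s camera response model (an assumption; the abstract does not name its model), fused with an illumination-derived weight matrix; the exposure ratio and weight choice are illustrative.

```python
import numpy as np

def btf(img, k, a=-0.3293, b=1.1258):
    """Beta-gamma brightness transform f(I, k) = beta * I**gamma; the
    constants follow a commonly used camera response model (assumption)."""
    gamma = k ** a
    beta = np.exp(b * (1.0 - gamma))
    return np.clip(beta * img ** gamma, 0.0, 1.0)

def tone_map(img, k=5.0):
    # Weight matrix from the illumination: well-exposed pixels keep their
    # value, dark pixels take the brightened version.
    illum = img.max(axis=2)
    W = illum[..., None] ** 0.5              # illustrative weight choice
    return W * img + (1.0 - W) * btf(img, k)

hdr = np.random.rand(64, 64, 3) ** 2.2       # synthetic underexposed input
out = tone_map(hdr)
```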
21. Enhancing Ensemble Learning Using Explainable CNN for Spoof Fingerprints.
- Author
- Reza, N. and Jung, H. Y.
- Abstract
Convolutional Neural Networks (CNNs) have demonstrated remarkable accuracy in classification problems. However, the lack of interpretability of the predictions made by neural networks has raised concerns about the reliability and robustness of CNN-based systems that use a limited amount of training data. In such cases, ensemble learning using multiple CNNs has demonstrated the capability to improve the robustness of a network, but robustness often trades off against accuracy. In this paper, we propose a novel training method that utilizes a Class Activation Map (CAM) to identify the fingerprint regions that influenced previously trained networks to attain their predictions. The identified regions are concealed during the training of networks with the same architectures, thus enabling the new networks to achieve the same objective from different regions. The resulting networks are then ensembled to ensure that the majority of the fingerprint features are taken into account during classification, yielding significant enhancements in classification accuracy and robustness across multiple sensors in a consistent and reliable manner. The proposed method is evaluated on the LivDet datasets and achieves state-of-the-art accuracy.
- Published
- 2023
- Full Text
- View/download PDF
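A sketch of the training loop described above: compute a Grad-CAM from a trained network, conceal the most influential regions, train a second network of the same architecture on the masked inputs, and average the softmax outputs at test time. The toy architecture, concealment quantile, and use of Grad-CAM as the CAM variant are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net():
    # Toy stand-in architecture; the paper's CNN is not specified here.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),  # CAM taken from here
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

def cam_conceal_mask(net, x, keep_quantile=0.7):
    """Grad-CAM of the predicted class; conceal the top-influence region."""
    acts = {}
    h = net[3].register_forward_hook(lambda m, i, o: acts.update(A=o))
    logits = net(x)
    h.remove()
    A = acts["A"]                                   # (N, C, H, W)
    score = logits.max(1).values.sum()
    (G,) = torch.autograd.grad(score, A)
    cam = F.relu((G.mean((2, 3), keepdim=True) * A).sum(1, keepdim=True))
    thr = torch.quantile(cam.flatten(1), keep_quantile, 1).view(-1, 1, 1, 1)
    return (cam <= thr).float()                     # 0 = concealed region

x = torch.rand(4, 1, 32, 32)                        # toy fingerprint patches
y = torch.randint(0, 2, (4,))                       # live / spoof labels
netA = make_net()                                   # assume trained normally
mask = cam_conceal_mask(netA, x)

netB = make_net()                                   # same architecture
loss = F.cross_entropy(netB(x * mask), y)           # learns from other regions
loss.backward()

with torch.no_grad():                               # test-time ensemble
    p = (F.softmax(netA(x), 1) + F.softmax(netB(x), 1)) / 2
```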