6 results for "Lee, Yeejin"
Search Results
2. Sampling Agnostic Feature Representation for Long-Term Person Re-Identification.
- Author
- Yang S, Kang B, and Lee Y
- Subjects
- Humans, Biometric Identification methods
- Abstract
Person re-identification is the problem of identifying individuals across non-overlapping cameras. Although remarkable progress has been made on the re-identification problem, it remains challenging due to appearance variations of the same person as well as other people of similar appearance. Some prior works addressed these issues by separating the features of positive samples from those of negative ones. However, the performance of existing models depends considerably on the characteristics and statistics of the samples used for training. Thus, we propose a novel framework named the sampling independent robust feature representation network (SirNet), which learns disentangled feature embeddings from randomly chosen samples. A carefully designed sampling independent maximum discrepancy loss is introduced to model samples of the same person as a cluster. As a result, the proposed framework can generate additional hard negatives/positives using the learned features, yielding better discriminability from other identities. Extensive experimental results on large-scale benchmark datasets verify that the proposed model is more effective than prior state-of-the-art models.
- Published
- 2022
- Full Text
- View/download PDF
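The SirNet architecture and the exact form of its sampling independent maximum discrepancy loss are not given in the abstract. As a rough illustration only, a cluster-style loss that pulls same-identity embeddings together and pushes different identities apart by a margin (the function name and margin value are hypothetical, not from the paper) can be sketched as:

```python
import numpy as np

def max_discrepancy_loss(embeddings, labels, margin=1.0):
    """Toy cluster-style re-ID loss: minimize distances within an identity,
    hinge-penalize different identities that sit closer than `margin`."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    intra, inter = [], []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if labels[i] == labels[j]:
                intra.append(d)                      # same person: pull together
            else:
                inter.append(max(0.0, margin - d))   # different: push apart
    intra_term = np.mean(intra) if intra else 0.0
    inter_term = np.mean(inter) if inter else 0.0
    return float(intra_term + inter_term)
```

Tight same-identity clusters that sit farther than the margin from other identities drive this loss toward zero, which matches the behavior the abstract attributes to modeling each person's samples as a cluster.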
3. A Driver's Visual Attention Prediction Using Optical Flow.
- Author
- Kang B and Lee Y
- Subjects
- Motion, Neural Networks, Computer, Automobile Driving, Optic Flow
- Abstract
Motion in videos refers to the pattern of apparent movement of objects, surfaces, and edges over image sequences caused by the relative movement between a camera and a scene. Motion, like scene appearance, is an essential feature for estimating a driver's visual attention allocation in computer vision. However, although driver attention prediction models focusing on scene appearance have been well studied, the role of motion as a crucial factor in driver attention estimation has not been thoroughly studied in the literature. Therefore, in this work, we investigate the usefulness of motion information in estimating a driver's visual attention. To analyze its effectiveness, we develop a deep neural network framework that predicts attention locations and attention levels from optical flow maps, which represent the movement of content in videos. We validate the performance of the proposed motion-based prediction model by comparing it to current state-of-the-art prediction models that use RGB frames. The experimental results on a real-world dataset confirm our hypothesis that motion contributes to prediction accuracy, and that there is a margin for accuracy improvement from using motion features.
- Published
- 2021
- Full Text
- View/download PDF
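The network itself is not described in the abstract, but its core premise, that regions with larger apparent motion attract attention, can be illustrated with a minimal numpy sketch that turns a dense optical flow field into a normalized attention map (a stand-in for the learned model, not the authors' architecture):

```python
import numpy as np

def flow_to_attention(flow):
    """Turn a dense optical-flow field of shape (H, W, 2) into an attention
    map summing to 1: pixels with larger apparent motion get more weight."""
    mag = np.linalg.norm(flow, axis=-1)   # per-pixel motion magnitude
    total = mag.sum()
    if total == 0:                        # static scene: uniform attention
        return np.full(mag.shape, 1.0 / mag.size)
    return mag / total
```

A learned model would of course combine such motion cues with appearance features; this sketch only shows why optical flow magnitude is a usable attention prior.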
4. High-Resolution Neural Network for Driver Visual Attention Prediction.
- Author
- Kang B and Lee Y
- Subjects
- Decision Making, Humans, Neural Networks, Computer, Attention, Automobile Driving, Visual Perception
- Abstract
Driving is a task that puts heavy demands on visual information, so the human visual system plays a critical role in making proper decisions for safe driving. Understanding a driver's visual attention and related behavioral information is a challenging but essential task for advanced driver-assistance systems (ADAS) and efficient autonomous vehicles (AV). Specifically, robust prediction of a driver's attention from images could be a crucial key to assisting intelligent vehicle systems, where a self-driving car is required to move safely while interacting with the surrounding environment. Thus, in this paper, we investigate a human driver's visual behavior from a computer vision perspective to estimate the driver's attention locations in images. First, we show that feature representations at high resolution improve visual attention prediction accuracy and localization performance when fused with features at low resolution. To demonstrate this, we employ a deep convolutional neural network framework that learns and extracts feature representations at multiple resolutions. In particular, the network maintains the highest-resolution feature representation at the original image resolution. Second, attention prediction tends to be biased toward image centers when neural networks are trained on typical visual attention datasets. To avoid overfitting to this center-biased solution, the network is trained on diverse regions of images. Finally, the experimental results verify that our proposed framework improves the prediction accuracy of a driver's attention locations.
- Published
- 2020
- Full Text
- View/download PDF
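The abstract's first finding, that high-resolution features help when fused with low-resolution ones, can be illustrated with a deliberately simplified numpy sketch of the fusion step (nearest-neighbour upsampling plus element-wise addition; in the actual network this fusion is learned, and the function below is ours):

```python
import numpy as np

def fuse_multires(high, low):
    """Fuse a low-resolution feature map into a high-resolution one:
    upsample `low` by integer factors via nearest-neighbour repetition,
    then add it element-wise to `high` (shapes must divide evenly)."""
    fh = high.shape[0] // low.shape[0]
    fw = high.shape[1] // low.shape[1]
    up = np.repeat(np.repeat(low, fh, axis=0), fw, axis=1)
    return high + up
```

The output keeps the high-resolution grid, which is what preserves localization accuracy, while the upsampled coarse map contributes context.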
5. Deep transfer learning-based prostate cancer classification using 3 Tesla multi-parametric MRI.
- Author
- Zhong X, Cao R, Shakeri S, Scalzo F, Lee Y, Enzmann DR, Wu HH, Raman SS, and Sung K
- Subjects
- Adult, Aged, Aged, 80 and over, Biopsy, Diagnosis, Differential, Humans, Male, Middle Aged, Retrospective Studies, Sensitivity and Specificity, Software, Deep Learning, Image Interpretation, Computer-Assisted methods, Magnetic Resonance Imaging methods, Prostatic Neoplasms pathology
- Abstract
Purpose: To propose a deep transfer learning (DTL)-based model to distinguish indolent from clinically significant prostate cancer (PCa) lesions, and to compare the DTL-based model with a deep learning (DL) model trained without transfer learning and with the PIRADS v2 score on 3 Tesla multi-parametric MRI (3T mp-MRI), with whole-mount histopathology (WMHP) validation.
Methods: With IRB approval, 140 patients with 3T mp-MRI and WMHP comprised the study cohort. The DTL-based model was trained on 169 lesions in 110 arbitrarily selected patients and tested on the remaining 47 lesions in 30 patients. We compared the DTL-based model with the same DL model architecture trained from scratch and with classification based on a PIRADS v2 score threshold of 4, using accuracy, sensitivity, specificity, and area under the curve (AUC). Bootstrapping with 2000 resamples was performed to estimate the 95% confidence interval (CI) for the AUC.
Results: In the testing set, the AUC for discriminating indolent from clinically significant PCa lesions was 0.726 (CI [0.575, 0.876]) for the DTL-based model, 0.687 (CI [0.532, 0.843]) for the DL model without transfer learning, and 0.711 (CI [0.575, 0.847]) for PIRADS v2 score ≥ 4. The DTL-based model thus achieved a higher AUC than both alternatives in discriminating clinically significant lesions in the testing set.
Conclusion: The DeLong test indicated that the DTL-based model achieved an AUC comparable to classification based on the PIRADS v2 score (p = 0.89).
- Published
- 2019
- Full Text
- View/download PDF
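The bootstrap procedure in the Methods (2000 resamples for a 95% AUC confidence interval) is standard enough to sketch. Below is an illustrative numpy implementation using the Mann-Whitney formulation of AUC; the function names are ours, and the paper's exact resampling details may differ:

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the Mann-Whitney statistic: fraction of (positive, negative)
    pairs where the positive is scored higher; ties count half."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for AUC: resample cases with replacement,
    recompute AUC, and take the alpha/2 and 1 - alpha/2 percentiles."""
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(set(y_true[idx])) < 2:   # resample must contain both classes
            continue
        stats.append(auc(y_true[idx], scores[idx]))
    return (np.percentile(stats, 100 * alpha / 2),
            np.percentile(stats, 100 * (1 - alpha / 2)))
```

With 2000 resamples, as in the paper, the percentile bounds correspond to the reported 95% CIs such as [0.575, 0.876].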
6. Camera-Aware Multi-Resolution Analysis for Raw Image Sensor Data Compression.
- Author
- Lee Y, Hirakawa K, and Nguyen TQ
- Abstract
We propose novel lossless and lossy compression schemes for color filter array (CFA) sampled images based on Camera-Aware Multi-Resolution Analysis, or CAMRA. Specifically, by CAMRA we refer to modifications we make to the wavelet transform of CFA sampled images in order to achieve a very high degree of decorrelation in the finest-scale wavelet coefficients, and to a series of color processing steps applied to the coarse-scale wavelet coefficients, aimed at limiting the propagation of lossy compression errors through the subsequent camera processing pipeline. We validated our theoretical analysis and the performance of the proposed compression schemes using images of natural scenes captured in raw format. The experimental results verify that the proposed methods improve coding efficiency relative to standard and state-of-the-art compression schemes for CFA sampled images.
- Published
- 2018
- Full Text
- View/download PDF
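The CAMRA modifications themselves are specific to the paper, but the building block they start from, a multi-resolution wavelet decomposition of the sensor image, can be sketched with a one-level 2-D Haar analysis in numpy (even image dimensions assumed; this is generic background, not the paper's transform):

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar analysis: split an image with even dimensions
    into a coarse subband (LL) and detail subbands (LH, HL, HH) using
    pairwise averages and differences along rows, then along columns."""
    a, b = x[0::2, :], x[1::2, :]              # vertical average / difference
    lo, hi = (a + b) / 2.0, (a - b) / 2.0
    def split_cols(m):                         # horizontal average / difference
        return (m[:, 0::2] + m[:, 1::2]) / 2.0, (m[:, 0::2] - m[:, 1::2]) / 2.0
    ll, lh = split_cols(lo)
    hl, hh = split_cols(hi)
    return ll, lh, hl, hh
```

On CFA-mosaicked data, a plain transform like this leaves strong structure in the detail subbands because adjacent pixels sample different colors; the abstract describes CAMRA as modifying the transform so those finest-scale coefficients are decorrelated, and adding color processing on the coarse subband to contain lossy-compression error.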
Discovery Service for Jio Institute Digital Library