110 results for "Junyong Noh"
Search Results
2. StyleCineGAN: Landscape Cinemagraph Generation Using a Pre-trained StyleGAN.
- Author
- Jongwoo Choi, Kwanggyoon Seo, Amirsaman Ashtari, and Junyong Noh
- Published
- 2024
- Full Text
- View/download PDF
3. User Performance in Consecutive Temporal Pointing: An Exploratory Study.
- Author
- Dawon Lee, Sunjun Kim, Junyong Noh, and Byungjoo Lee
- Published
- 2024
- Full Text
- View/download PDF
4. Motion to Dance Music Generation using Latent Diffusion Model.
- Author
- Vanessa Tan, Junghyun Nam, Juhan Nam, and Junyong Noh
- Published
- 2023
- Full Text
- View/download PDF
5. Real-time Content Projection onto a Tunnel from a Moving Subway Train.
- Author
- Jaedong Kim, Haegwang Eom, Jihwan Kim, Younghui Kim, and Junyong Noh
- Published
- 2021
- Full Text
- View/download PDF
6. Virtual Camera Layout Generation using a Reference Video.
- Author
- Jung Eun Yoo, Kwanggyoon Seo, Sanghun Park, Jaedong Kim, Dawon Lee, and Junyong Noh
- Published
- 2021
- Full Text
- View/download PDF
7. 22.1 A 1.1V 16GB 640GB/s HBM2E DRAM with a Data-Bus Window-Extension Technique and a Synergetic On-Die ECC Scheme.
- Author
- Chi-Sung Oh, Ki Chul Chun, Young-Yong Byun, Yong-Ki Kim, So-Young Kim, Yesin Ryu, Jaewon Park, Sinho Kim, Sang-uhn Cha, Dong-Hak Shin, Jungyu Lee, Jong-Pil Son, Byung-Kyu Ho, Seong-Jin Cho, Beomyong Kil, Sungoh Ahn, Baekmin Lim, Yong-Sik Park, Kijun Lee, Myung-Kyu Lee, Seungduk Baek, Junyong Noh, Jae-Wook Lee, Seungseob Lee, Sooyoung Kim, Bo-Tak Lim, Seouk-Kyu Choi, Jin-Guk Kim, Hye-In Choi, Hyuk-Jun Kwon, Jun Jin Kong, Kyomin Sohn, Nam Sung Kim, Kwang-Il Park, and Jung-Bae Lee
- Published
- 2020
- Full Text
- View/download PDF
8. Generating 3D Human Texture from a Single Image with Sampling and Refinement.
- Author
- Sihun Cha, Kwanggyoon Seo, Amirsaman Ashtari, and Junyong Noh
- Published
- 2022
- Full Text
- View/download PDF
9. Saliency Diagrams.
- Author
- Nicolas Nghiem, Richard Roberts, John P. Lewis, and Junyong Noh
- Published
- 2019
- Full Text
- View/download PDF
10. Online Avatar Motion Adaptation to Morphologically-similar Spaces
- Author
- Soojin Choi, Seokpyo Hong, Kyungmin Cho, Chaelin Kim, and Junyong Noh
- Subjects
- Computer Graphics and Computer-Aided Design
- Published
- 2023
11. Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks
- Author
- Sihun Cha, Kwanggyoon Seo, Amirsaman Ashtari, and Junyong Noh
- Subjects
- Computer Vision and Pattern Recognition (cs.CV), Graphics (cs.GR), Computer Graphics and Computer-Aided Design
- Abstract
There has been significant progress in generating an animatable 3D human avatar from a single image. However, recovering texture for the 3D human avatar from a single image has been relatively less addressed. Because the generated 3D human avatar reveals the occluded texture of the given image as it moves, it is critical to synthesize the occluded texture pattern that is unseen from the source image. To generate a plausible texture map for 3D human avatars, the occluded texture pattern needs to be synthesized with respect to the visible texture from the given image. Moreover, the generated texture should align with the surface of the target 3D mesh. In this paper, we propose a texture synthesis method for a 3D human avatar that incorporates geometry information. The proposed method consists of two convolutional networks for the sampling and refining process. The sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh using the geometry information. The sampled texture is further refined and adjusted by the refiner network. To maintain the clear details in the given image, both the sampled and the refined textures are blended to produce the final texture map. To effectively guide the sampler network to achieve its goal, we designed a curriculum learning scheme that starts from a simple sampling task and gradually progresses to the task where the alignment needs to be considered. We conducted experiments to show that our method outperforms previous methods qualitatively and quantitatively.
- Published
- 2023
12. Real-time Shadow Removal using a Volumetric Skeleton Model in a Front Projection System.
- Author
- Jaedong Kim, Hyunggoog Seo, Seunghoon Cha, and Junyong Noh
- Published
- 2017
- Full Text
- View/download PDF
13. PopStage
- Author
- Dawon Lee, Jung Eun Yoo, Kyungmin Cho, Bumki Kim, Gyeonghun Im, and Junyong Noh
- Subjects
- Computer Graphics and Computer-Aided Design
- Abstract
StageMix is a mixed video that is created by concatenating the segments from various performance videos of an identical song in a visually smooth manner by matching the main subject's silhouette presented in the frame. We introduce PopStage, which allows users to generate a StageMix automatically. PopStage is designed based on the StageMix Editing Guideline that we established by interviewing creators as well as observing their workflows. PopStage consists of two main steps: finding an editing path and generating a transition effect at a transition point. Using a reward function that favors visual connection and the optimality of transition timing across the videos, we obtain the optimal path that maximizes the sum of rewards through dynamic programming. Given the optimal path, PopStage then aligns the silhouettes of the main subject from the transitioning video pair to enhance the visual connection at the transition point. The virtual camera view is next optimized to remove the black areas that are often created due to the transformation needed for silhouette alignment, while reducing pixel loss. In this process, we enforce the view to be the maximum size while maintaining the temporal continuity across the frames. Experimental results show that PopStage can generate a StageMix of a similar quality to those produced by professional creators in a greatly reduced production time.
- Published
- 2022
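The editing-path search summarized in the PopStage abstract above (maximizing a sum of rewards across videos via dynamic programming) can be illustrated with a small sketch. The reward values, the simple stay/switch penalty model, and the function name are all hypothetical illustrations, not the authors' implementation:

```python
# Toy dynamic-programming path search: pick which performance video to
# show in each song segment so that the summed reward is maximized.
# Rewards and the transition penalty are invented illustration values.

def best_edit_path(segment_reward, transition_penalty):
    """segment_reward[t][v]: reward for showing video v during segment t."""
    n_seg = len(segment_reward)
    n_vid = len(segment_reward[0])
    # best[t][v] = best total reward of a path ending at video v in segment t
    best = [row[:] for row in segment_reward]
    back = [[0] * n_vid for _ in range(n_seg)]
    for t in range(1, n_seg):
        for v in range(n_vid):
            # staying on the same video costs nothing; switching pays a penalty
            scores = [best[t - 1][u] - (transition_penalty if u != v else 0.0)
                      for u in range(n_vid)]
            u = max(range(n_vid), key=lambda i: scores[i])
            back[t][v] = u
            best[t][v] += scores[u]
    # recover the optimal path by backtracking from the best final state
    v = max(range(n_vid), key=lambda i: best[-1][i])
    path = [v]
    for t in range(n_seg - 1, 0, -1):
        v = back[t][v]
        path.append(v)
    return list(reversed(path)), max(best[-1])
```

With two candidate videos over three segments, the optimizer accepts one transition penalty when the reward gain of switching outweighs it.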
14. StylePortraitVideo: Editing Portrait Videos with Expression Optimization
- Author
- Kwanggyoon Seo, Seoung Wug Oh, Jingwan Lu, Joon-Young Lee, Seonghyeon Kim, and Junyong Noh
- Subjects
- Computer Graphics and Computer-Aided Design
- Published
- 2022
15. A Drone Video Clip Dataset and its Applications in Automated Cinematography
- Author
- Amirsaman Ashtari, Raehyuk Jung, Mingxiao Li, and Junyong Noh
- Subjects
- Computer Graphics and Computer-Aided Design
- Published
- 2022
16. Auto-calibration of multi-projector displays with a single handheld camera.
- Author
- Sanghun Park, Hyunggoog Seo, Seunghoon Cha, and Junyong Noh
- Published
- 2015
- Full Text
- View/download PDF
17. ARbility: re-inviting older wheelchair users to in-store shopping via wearable augmented reality
- Author
- Cholmin Kang, Inhwa Yeom, Amirsaman Ashtari, Woontack Woo, and Junyong Noh
- Subjects
- Human-Computer Interaction, Computer Graphics and Computer-Aided Design, Software
- Published
- 2023
18. Motion recommendation for online character control
- Author
- Kyungmin Cho, Chaelin Kim, Jungjin Park, Joonkyu Park, and Junyong Noh
- Subjects
- Computer Graphics and Computer-Aided Design
- Abstract
Reinforcement learning (RL) has been proven effective in many scenarios, including environment exploration and motion planning. However, its application in data-driven character control has produced relatively simple motion results compared to recent approaches that have used large complex motion data without RL. In this paper, we provide a real-time motion control method that can generate high-quality and complex motion results from various sets of unstructured data while retaining the advantage of using RL, which is the discovery of optimal behaviors by trial and error. We demonstrate the results for a character achieving different tasks, from simple direction control to complex avoidance of moving obstacles. Our system works equally well on biped/quadruped characters, with motion data ranging from 1 to 48 minutes, without any manual intervention. To achieve this, we exploit a finite set of discrete actions, where each action represents full-body future motion features. We first define a subset of actions that can be selected in each state and store these pieces of information in databases during the preprocessing step. The use of this subset of actions enables the effective learning of control policy even from a large set of motion data. To achieve interactive performance at run-time, we adopt a proposal network and a k-nearest neighbor action sampler.
- Published
- 2021
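The k-nearest-neighbor action sampler mentioned at the end of the abstract above can be sketched minimally. The feature vectors and the plain linear scan are illustrative assumptions (an interactive system would typically use an approximate index rather than brute force):

```python
# Toy k-nearest-neighbor action sampler: given the character's current
# pose feature, propose the k most similar candidate actions from a
# database of precomputed future-motion features (values are invented).
import math

def knn_actions(query, action_features, k):
    """Return indices of the k action features closest to `query`."""
    dists = [(math.dist(query, f), i) for i, f in enumerate(action_features)]
    dists.sort()  # nearest first
    return [i for _, i in dists[:k]]
```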
19. Deep Learning-Based Unsupervised Human Facial Retargeting
- Author
- Roger Blanco i Ribera, Seonghyeon Kim, Sunjin Jung, Junyong Noh, and Kwanggyoon Seo
- Subjects
- Human-computer interaction, Deep learning, Retargeting, Animation, Artificial intelligence, Computer Graphics and Computer-Aided Design, Computing Methodologies
- Published
- 2021
20. Overthere
- Author
- Bumki Kim, Junyong Noh, Hyunggoog Seo, Jaedong Kim, and Kwanggyoon Seo
- Subjects
- Computer Networks and Communications, Human-Computer Interaction, Hardware and Architecture, Object registration, Smart environment, Computer vision, Gesture
- Abstract
An absolute mid-air pointing technique requires a preprocess called registration that makes the system remember the 3D positions and types of objects in advance. Previous studies have simply assumed that the information is already available because it requires a cumbersome process performed by an expert in a carefully calibrated environment. We introduce Overthere, which allows the user to intuitively register the objects in a smart environment by pointing to each target object a few times. To ensure accurate and coherent pointing gestures made by the user regardless of individual differences between them, we performed a user study and identified a desirable gesture motion for this purpose. In addition, we provide the user with various feedback to help them understand the current registration progress and adhere to required conditions, which will lead to accurate registration results. The user studies show that Overthere is sufficiently intuitive to be used by ordinary people.
- Published
- 2021
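The registration-by-pointing idea in the abstract above can be illustrated with a 2D toy: the target position is recovered as the least-squares intersection of several pointing rays. The coordinates and the 2D simplification are assumptions for illustration only; the actual system works from tracked 3D pointing gestures.

```python
# Toy 2D illustration of registering an object by pointing: estimate the
# target position as the least-squares intersection of pointing rays,
# each given as an origin and a direction (all values are invented).
import math

def intersect_rays(origins, directions):
    # Solve sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i, a 2x2 system
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (ox, oy), (dx, dy) in zip(origins, directions):
        n = math.hypot(dx, dy)          # normalize the ray direction
        dx, dy = dx / n, dy / n
        m11, m12, m22 = 1 - dx * dx, -dx * dy, 1 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * ox + m12 * oy
        b2 += m12 * ox + m22 * oy
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

Two rays from different origins that both pass through the same point recover that point exactly; with noisy gestures the result is the point closest to all rays in the least-squares sense.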
21. LightShop: An Interactive Lighting System Incorporating the 2D Image Editing Paradigm.
- Author
- Younghui Kim and Junyong Noh
- Published
- 2009
- Full Text
- View/download PDF
22. Generating 3D Korean Sign Language Animation from 2D RGB Video and Proposing Guideline through User Experience
- Author
- Cholmin Kang and Junyong Noh
- Subjects
- Korean sign language, User experience design, Human-computer interaction, RGB color model, Guideline, Animation
- Published
- 2021
23. Real-time tunnel projection from a moving subway train
- Author
- Jaedong Kim, Haegwang Eom, Jihwan Kim, Younghui Kim, and Junyong Noh
- Subjects
- Computer Vision and Pattern Recognition, Computer Graphics and Computer-Aided Design, Software
- Published
- 2022
24. Neural crossbreed
- Author
- Junyong Noh, Sanghun Park, and Kwanggyoon Seo
- Subjects
- Computer Vision and Pattern Recognition (cs.CV), Graphics (cs.GR), Image and Video Processing (eess.IV), Artificial neural network, Generative model, Morphing, Motion interpolation, Computer Graphics and Computer-Aided Design
- Abstract
We propose Neural Crossbreed, a feed-forward neural network that can learn a semantic change of input images in a latent space to create the morphing effect. Because the network learns a semantic change, a sequence of meaningful intermediate images can be generated without requiring the user to specify explicit correspondences. In addition, the semantic change learning makes it possible to perform the morphing between the images that contain objects with significantly different poses or camera views. Furthermore, just as in conventional morphing techniques, our morphing network can handle shape and appearance transitions separately by disentangling the content and the style transfer for rich usability. We prepare a training dataset for morphing using a pre-trained BigGAN, which generates an intermediate image by interpolating two latent vectors at an intended morphing value. This is the first attempt to address image morphing using a pre-trained generative model in order to learn semantic transformation. The experiments show that Neural Crossbreed produces high-quality morphed images, overcoming various limitations associated with conventional approaches. In addition, Neural Crossbreed can be further extended for diverse applications such as multi-image morphing, appearance transfer, and video frame interpolation.
- Published
- 2020
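The dataset-construction step described in the abstract above, interpolating two latent vectors at an intended morphing value before decoding, reduces to a linear interpolation. The decoder below is a hypothetical stand-in for a pre-trained generator such as BigGAN, not the actual model:

```python
# Linear interpolation of two latent vectors at morphing value alpha,
# as used to synthesize intermediate training images. `decode` is a
# hypothetical placeholder for a pre-trained generator.

def lerp_latents(z_a, z_b, alpha):
    """Blend two latent vectors: alpha=0 gives z_a, alpha=1 gives z_b."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(z_a, z_b)]

def decode(z):
    # placeholder: a real pipeline would call generator(z) instead
    return sum(z)

intermediate = decode(lerp_latents([0.0, 2.0], [2.0, 4.0], 0.5))
```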
25. Enhanced Interactive 360° Viewing via Automatic Guidance
- Author
- Younghui Kim, Seunghwa Jeong, Junyong Noh, Jungjin Lee, and Seunghoon Cha
- Subjects
- Automatic guidance, Computer Graphics and Computer-Aided Design
- Abstract
We present a new interactive playback method to enhance 360° viewing experiences. Our method automatically rotates the virtual camera of a 360° panoramic video (360° video) player during interactive viewing to guide the viewer through the most important regions of the video. With this method, the viewer can watch a 360° video with minimal effort to find important events in a scene both in interactive (e.g., HMD) and less-interactive (e.g., PC and TV) viewing environments. To estimate the importance of each viewing direction, we combine spatial and temporal saliency with cluster-based weighting. A maximum backward cumulative importance volume (MBCIV) is then constructed by accumulating this importance in the video space. During playback, which uses a forward tracing scheme through the MBCIV, the initial optimal path is found based on the viewer's viewing direction. A smooth path is then derived using penalized curve fitting. Finally, the virtual camera is rotated to follow the path. The experiments and user studies demonstrate that our method allows the viewer to effectively enjoy 360° videos with minimal interaction effort, or even through a non-interactive display.
- Published
- 2020
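The backward accumulation and forward tracing described in the abstract above can be sketched in a 1D toy form. The discrete viewing directions, the invented importance values, and the plain greedy trace (in place of penalized curve fitting) are all simplifying assumptions:

```python
# Toy backward cumulative-importance computation: for each frame t and
# discrete viewing direction d, accumulate the best importance reachable
# in the future, allowing the camera to rotate by at most one direction
# step per frame (importance values are invented for illustration).

def backward_cumulative_importance(importance):
    n_t = len(importance)
    n_d = len(importance[0])
    vol = [row[:] for row in importance]
    for t in range(n_t - 2, -1, -1):
        for d in range(n_d):
            neighbors = range(max(0, d - 1), min(n_d, d + 2))
            vol[t][d] += max(vol[t + 1][nd] for nd in neighbors)
    return vol

def trace_path(vol, start_d):
    # forward trace: from the viewer's current direction, follow the
    # neighboring direction with the highest remaining importance
    path = [start_d]
    for t in range(1, len(vol)):
        d = path[-1]
        neighbors = range(max(0, d - 1), min(len(vol[0]), d + 2))
        path.append(max(neighbors, key=lambda nd: vol[t][nd]))
    return path
```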
26. Facial Retargeting by Adding Supplemental Blendshapes.
- Author
- Paul Hyunjin Kim, Yeongho Seol, Jaewon Song, and Junyong Noh
- Published
- 2011
- Full Text
- View/download PDF
27. Synthesizing Character Animation with Smoothly Decomposed Motion Layers
- Author
- Seok-Pyo Hong, Kyungmin Cho, Junyong Noh, Sunjin Jung, Haegwang Eom, and Byungkuk Choi
- Subjects
- Character animation, Animation, Motion processing, Computer Graphics and Computer-Aided Design
- Published
- 2019
28. Model Predictive Control with a Visuomotor System for Physics-based Character Animation
- Author
- Joseph S. Shin, Junyong Noh, Daseong Han, and Haegwang Eom
- Subjects
- Model predictive control, Optimal control, Motion control, Partially observable Markov decision process, Differential dynamic programming, Character animation, Computer Graphics and Computer-Aided Design
- Abstract
This article presents a Model Predictive Control framework with a visuomotor system that synthesizes eye and head movements coupled with physics-based full-body motions while placing visual attention on objects of importance in the environment. As the engine of this framework, we propose a visuomotor system based on human visual perception and full-body dynamics with contacts. Relying on partial observations with uncertainty from a simulated visual sensor, an optimal control problem for this system leads to a Partially Observable Markov Decision Process, which is difficult to deal with. We approximate it as a deterministic belief Markov Decision Process for effective control. To obtain a solution for the problem efficiently, we adopt differential dynamic programming, which is a powerful scheme to find a locally optimal control policy for nonlinear system dynamics. Guided by a reference skeletal motion without any a priori gaze information, our system produces realistic eye and head movements together with full-body motions for various tasks such as catching a thrown ball, walking on stepping stones, balancing after being pushed, and avoiding moving obstacles.
- Published
- 2019
29. Physics-based full-body soccer motion control for dribbling and shooting
- Author
- Kyungmin Cho, Joseph S. Shin, Daseong Han, Seok-Pyo Hong, and Junyong Noh
- Subjects
- Model predictive control, Control theory, Optimal control, Motion control, Computer Graphics and Computer-Aided Design
- Abstract
Playing with a soccer ball is not easy even for a real human because of dynamic foot contacts with the moving ball while chasing and controlling it. The problem of online full-body soccer motion synthesis is challenging and has not been fully solved yet. In this paper, we present a novel motion control system that produces physically-correct full-body soccer motions: dribbling forward, dribbling to the side, and shooting, in response to an online user motion prescription specified by a motion type, a running speed, and a turning angle. This system performs two tightly-coupled tasks: data-driven motion prediction and physics-based motion synthesis. Given example motion data, the former synthesizes a reference motion in accordance with an online user input and further refines the motion to make the character kick the ball at a right time and place. Provided with the reference motion, the latter then adopts a Model Predictive Control (MPC) framework to generate a physically-correct soccer motion, by solving an optimal control problem that is formulated based on dynamics for a full-body character and the moving ball together with their interactions. Our demonstration shows the effectiveness of the proposed system that synthesizes convincing full-body soccer motions in various scenarios such as adjusting the desired running speed of the character, changing the velocity and the mass of the ball, and maintaining balance against external forces.
- Published
- 2019
30. On-line Motion Synthesis Using Analytically Differentiable System Dynamics
- Author
- Junyong Noh, Joseph S. Shin, and Daseong Han
- Subjects
- Model predictive control, Control theory, Character animation, Motion synthesis, Trajectory optimization, System dynamics
- Published
- 2019
31. Video Extrapolation Using Neighboring Frames
- Author
- Bumki Kim, Sangwoo Lee, Junyong Noh, Kyehyun Kim, and Jungjin Lee
- Subjects
- Extrapolation, Image warping, Parallax, Peripheral vision, Computer Graphics and Computer-Aided Design
- Abstract
With the popularity of immersive display systems that fill the viewer's field of view (FOV) entirely, demand for wide FOV content has increased. A video extrapolation technique based on reuse of existing videos is one of the most efficient ways to produce wide FOV content. Extrapolating a video poses a great challenge, however, due to the insufficient amount of cues and information that can be leveraged for the estimation of the extended region. This article introduces a novel framework that allows the extrapolation of an input video and consequently converts a conventional content into one with wide FOV. The key idea of the proposed approach is to integrate the information from all frames in the input video into each frame. Utilizing the information from all frames is crucial because it is very difficult to achieve the goal with a two-dimensional transformation-based approach when parallax caused by camera motion is apparent. Warping guided by three-dimensional scene points matches the viewpoints between the different frames. The matched frames are blended to create extended views. Various experiments demonstrate that the results of the proposed method are more visually plausible than those produced using state-of-the-art techniques.
- Published
- 2019
32. Virtual Camera Layout Generation using a Reference Video
- Author
- Jaedong Kim, Kwanggyoon Seo, Jung Eun Yoo, Dawon Lee, Sanghun Park, and Junyong Noh
- Subjects
- Cinematography, Framing, Virtual camera, Computer animation, Computer Graphics and Computer-Aided Design
- Abstract
We propose a method that generates a virtual camera layout of a 3D animation scene by following the cinematic intention of a reference video. From a reference video, cinematic features such as the start frame, end frame, framing, camera movement, and the visual features of the subjects are extracted automatically. The extracted information is used to generate the virtual camera layout, which resembles the camera layout of the reference video. Our method handles stylized as well as human characters with body proportions different from those of humans. We demonstrate the effectiveness of our approach with various reference videos and 3D animation scenes. The user evaluation results show that the generated layouts are comparable to layouts created by the artist, allowing us to assert that our method can provide effective assistance to both novice and professional users when positioning a virtual camera.
- Published
- 2021
33. 22.1 A 1.1V 16GB 640GB/s HBM2E DRAM with a Data-Bus Window-Extension Technique and a Synergetic On-Die ECC Scheme
- Author
- Jin-Guk Kim, Kijun Lee, Junyong Noh, Seungseob Lee, Jung-Bae Lee, Seung-Duk Baek, Jungyu Lee, Sin-Ho Kim, Soo-Young Kim, Hye-In Choi, Ki-Chul Chun, Beomyong Kil, Sanguhn Cha, So-Young Kim, Jae-Won Park, Ryu Ye-Sin, Jun Jin Kong, Dong-Hak Shin, Byung-Kyu Ho, Hyuk-Jun Kwon, Baekmin Lim, Park Yong-Sik, Seouk-Kyu Choi, Chi-Sung Oh, Kyomin Sohn, Myungkyu Lee, Kwang-Il Park, Young-Yong Byun, Jae-Wook Lee, Bo-Tak Lim, Seong-Jin Cho, Jong-Pil Son, Yong-ki Kim, Nam Sung Kim, and S.J. Ahn
- Subjects
- Random access memory, DRAM, Bandwidth, Throughput, Embedded system, System bus
- Abstract
Rapidly evolving artificial intelligence (AI) technology, such as deep learning, has been successfully deployed in various applications such as image recognition, health care, and autonomous driving. Such rapid evolution and successful deployment of AI technology have been possible owing to the emergence of accelerators, such as GPUs and TPUs, that have a higher data throughput. This, in turn, requires an enhanced memory system with large capacity and high bandwidth [1]; HBM has been the most preferred high-bandwidth memory technology due to its high-speed and low-power characteristics, and 1024 IOs facilitated by 2.5D silicon interposer technology, as well as large capacity realized by through-silicon via (TSV) stack technology [2]. Previous-generation HBM2 supports 8GB capacity with a stack of 8 DRAM dies (i.e., 8-high stack) and 341GB/s (2.7Gb/s/pin) bandwidth [3]. The HBM industry trend has been a speed improvement of 15~20% every year, while capacity increases by 1.5-2x every two years. In this paper, we present a 16GB HBM2E with circuit and design techniques to increase its bandwidth up to 640GB/s (5Gb/s/pin), while providing stable bit-cell operation in the 2nd generation of a 10nm DRAM process, featuring (1) a data-bus window-extension technique to cope with reduced $t_{cco}$ , (2) a power delivery network (PDN) designed for stable operation at a high speed, (3) a synergetic on-die ECC scheme to reliably provide large capacity, and (4) an MBIST solution to efficiently test large capacity memory at a high speed.
- Published
- 2020
34. Real-Time Human Shadow Removal in a Front Projection System
- Author
- Hyunggoog Seo, Jaedong Kim, Seunghoon Cha, and Junyong Noh
- Subjects
- Image processing, Shadow, Projection system, Computer Graphics and Computer-Aided Design
- Published
- 2018
35. Object Segmentation Ensuring Consistency Across Multi-Viewpoint Images
- Author
- Junyong Noh, Bumki Kim, Younghui Kim, Seunghwa Jeong, and Jungjin Lee
- Subjects
- Image segmentation, Structure from motion, Markov random field, Upsampling, Computer Vision and Pattern Recognition, Artificial Intelligence, Software
- Abstract
We present a hybrid approach that segments an object by using both color and depth information obtained from views captured from a low-cost RGBD camera and sparsely-located color cameras. Our system begins with generating dense depth information of each target image by using Structure from Motion and Joint Bilateral Upsampling. We formulate the multi-view object segmentation as the Markov Random Field energy optimization on the graph constructed from the superpixels. To ensure inter-view consistency of the segmentation results between color images that have too few color features, our local mapping method generates dense inter-view geometric correspondences by using the dense depth images. Finally, the pixel-based optimization step refines the boundaries of the results obtained from the superpixel-based binary segmentation. We evaluate the validity of our method under various capture conditions such as numbers of views, rotations, and distances between cameras. We compared our method with state-of-the-art methods on standard multi-view datasets. The comparison verified that the proposed method works efficiently, especially in a sparse wide-baseline capture environment.
- Published
- 2018
36. Data-driven bird simulation.
- Author
- Eunjung Ju, Byungkuk Choi, Junyong Noh, and Jehee Lee
- Published
- 2011
- Full Text
- View/download PDF
37. Taming the cat.
- Author
- Junyong Noh
- Published
- 2009
- Full Text
- View/download PDF
38. Sparse Rig Parameter Optimization for Character Animation
- Author
- Mi You, Kyungmin Cho, Roger Blanco i Ribera, Jaewon Song, Byungkuk Choi, Junyong Noh, and J. P. Lewis
- Subjects
- Retargeting, Character animation, Skeletal animation, Computer facial animation, Computer animation, Computer graphics, Computer Graphics and Computer-Aided Design
- Abstract
We propose a novel motion retargeting method that efficiently estimates artist-friendly rig space parameters. Inspired by the workflow typically observed in keyframe animation, our approach transfe...
- Published
- 2017
39. ScreenX: Public Immersive Theatres with Uniform Movie Viewing Experiences
- Author
- Younghui Kim, Jungjin Lee, Sangwoo Lee, and Junyong Noh
- Subjects
- Multimedia, Visualization, Immersion (virtual reality), Signal Processing, Computer Vision and Pattern Recognition, Computer Graphics and Computer-Aided Design, Software
- Abstract
This paper introduces ScreenX, a novel movie viewing platform that turns ordinary movie theatres into multi-projection theatres, allowing the general public to enjoy immersive viewing experiences. The left and right side walls are used to form surrounding screens. This surrounding display environment delivers a strong sense of immersion in general movie viewing. However, naïve display of the content on the side walls results in distorted images depending on the location of the viewer. In addition, differences in width, height, and depth among theatres may lead to different viewing experiences. Therefore, for successful deployment of this novel platform, an approach to providing similar movie viewing experiences across target theatres is presented. The proposed image representation model ensures minimum average distortion of the images displayed on the side walls when viewed from different locations. Furthermore, the proposed model assists with determining the appropriate variation of the content according to the diverse viewing environments of different theatres. The theatre suitability estimation method excludes outlier theatres that have extraordinary dimensions. In addition, the content production guidelines indicate appropriate regions to place scene elements for the side wall, depending on their importance. The experiments demonstrate that the proposed method improves the movie viewing experiences in ScreenX theatres. Finally, ScreenX and the proposed techniques are discussed with regard to various aspects, and the research issues relevant to this movie viewing platform are summarized.
- Published
- 2017
40. Rich360
- Author
-
Kyehyun Kim, Bumki Kim, Younghui Kim, Jungjin Lee, and Junyong Noh
- Subjects
UV mapping, Computer science, Pipeline (computing), Computer Graphics and Computer-Aided Design, Object detection, Image stitching, Sampling (signal processing), Computer graphics (images), Video tracking, Computer vision, Artificial intelligence, Map projection, Parallax - Abstract
This paper presents Rich360, a novel system for creating and viewing a 360° panoramic video obtained from multiple cameras placed on a structured rig. Rich360 provides an as-rich-as-possible 360° viewing experience by effectively resolving two issues that occur in the existing pipeline. First, a deformable spherical projection surface is utilized to minimize the parallax from multiple cameras. The surface is deformed spatio-temporally according to the depth constraints estimated from the overlapping video regions. This enables fast and efficient parallax-free stitching independent of the number of views. Next, a non-uniform spherical ray sampling is performed. The density of the sampling varies depending on the importance of the image region. Finally, for interactive viewing, the non-uniformly sampled video is mapped onto a uniform viewing sphere using a UV map. This approach can preserve the richness of the input videos when the resolution of the final 360° panoramic video is smaller than the overall resolution of the input videos, which is the case for most 360° panoramic videos. We show various results from Rich360 to demonstrate the richness of the output video and the improvement in the stitching results.
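The importance-driven non-uniform sampling can be illustrated in one dimension: output samples are placed at equal steps of cumulative importance, so important regions receive more of the output resolution, and a lookup ("UV") map records where each uniform output coordinate reads from in the source. This is a minimal sketch under assumed importance scores, not the paper's spherical formulation.

```python
def build_uv_map(importance, out_width):
    """Map uniform output coords to normalized non-uniform source coords."""
    cum = [0.0]
    for w in importance:
        cum.append(cum[-1] + w)
    total = cum[-1]
    src_width = len(importance)
    uv = []
    for x in range(out_width):
        target = (x + 0.5) / out_width * total   # equal cumulative-importance steps
        i = 0
        while cum[i + 1] < target:               # find the bracketing source cell
            i += 1
        frac = (target - cum[i]) / (cum[i + 1] - cum[i])
        uv.append((i + frac) / src_width)        # normalized source coordinate
    return uv

importance = [1.0, 1.0, 4.0, 4.0, 1.0, 1.0]      # centre region matters more
uv = build_uv_map(importance, out_width=6)
```

Neighbouring `uv` entries end up closer together over the important centre region, meaning more output samples per unit of source there, which is how the important content keeps its richness after downsampling.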
- Published
- 2016
41. SketchiMo
- Author
-
Junyong Noh, Haegwang Eom, Yeongho Seol, John P. Lewis, Seok-Pyo Hong, Sunjin Jung, Byungkuk Choi, and Roger Blanco i Ribera
- Subjects
Viewport, Sketch recognition, Computer science, Computer Graphics and Computer-Aided Design, Sketch, Motion (physics), Set (abstract data type), Range (mathematics), Sketch-based modeling, Computer graphics (images), Character animation, Computer vision, Artificial intelligence - Abstract
We present SketchiMo, a novel approach for the expressive editing of articulated character motion. SketchiMo solves for the motion given a set of projective constraints that relate the sketch inputs to the unknown 3D poses. We introduce the concept of sketch space, a contextual geometric representation of sketch targets---motion properties that are editable via sketch input---that enhances, right on the viewport, different aspects of the motion. The combination of the proposed sketch targets and space allows for seamless editing of a wide range of properties, from simple joint trajectories to local parent-child spatiotemporal relationships and more abstract properties such as coordinated motions. This is made possible by interpreting the user's input through a new sketch-based optimization engine in a uniform way. In addition, our view-dependent sketch space also serves the purpose of disambiguating the user inputs by visualizing their range of effect and transparently defining the necessary constraints to set the temporal boundaries for the optimization.
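A projective constraint of this kind can be sketched in miniature: a 2D input constrains only the screen-plane coordinates of a 3D quantity, while depth stays at its prior value. The code below is a heavily simplified, hypothetical illustration for a single joint under an orthographic camera; the weight and poses are made up, and the paper solves a full optimization rather than this closed-form blend.

```python
def project(p):
    """Orthographic projection onto the view plane (drop depth)."""
    return (p[0], p[1])

def solve_joint(prior, sketch_2d, weight=0.9):
    """Closed-form least squares: blend screen coords toward the sketch."""
    px, py = project(prior)
    x = (1.0 - weight) * px + weight * sketch_2d[0]
    y = (1.0 - weight) * py + weight * sketch_2d[1]
    return (x, y, prior[2])                  # depth is unconstrained by the sketch

prior_joint = (0.0, 1.0, 2.0)
sketch_point = (0.5, 1.5)
new_joint = solve_joint(prior_joint, sketch_point)
```

The resulting joint moves toward the sketch on screen while its depth, which a 2D stroke cannot determine, remains at the prior pose.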
- Published
- 2016
42. Data-guided Model Predictive Control Based on Smoothed Contact Dynamics
- Author
-
Joseph S. Shin, Haegwang Eom, Daseong Han, and Junyong Noh
- Subjects
Computer science, Trajectory optimization, Optimal control, Computer Graphics and Computer-Aided Design, Contact force, Model predictive control, Jacobian matrix and determinant, Torque, Contact dynamics, Algorithm, Interpolation - Abstract
In this paper, we propose an efficient data-guided method based on Model Predictive Control (MPC) to synthesize a full-body motion. Guided by a reference motion, our method repeatedly plans the full-body motion to produce an optimal control policy for predictive control while sliding the fixed-span window along the time axis. Based on this policy, the method computes the joint torques of a character at every time step. Together with contact forces and external perturbations if there are any, the joint torques are used to update the state of the character. Without including the contact forces in the control vector, our formulation of the trajectory optimization problem enables automatic adjustment of contact timings and positions for balancing in response to environmental changes and external perturbations. For efficiency, we adopt derivative-based trajectory optimization on top of state-of-the-art smoothed contact dynamics. Use of derivatives enables our method to run much faster than the existing sampling-based methods. In order to further accelerate the performance of MPC, we propose efficient numerical differentiation of the system dynamics of a full-body character based on two schemes: data reuse and data interpolation. The former scheme exploits data dependency to reuse physical quantities of the system dynamics at nearby time points. The latter scheme allows the use of derivatives at sparse sample points to interpolate those at other time points in the window. We further accelerate evaluation of the system dynamics by exploiting the sparsity of physical quantities such as the Jacobian matrix resulting from the tree-like structure of the articulated body. Through experiments, we show that the proposed method can efficiently synthesize realistic motions such as locomotion, dancing, gymnastic motions, and martial arts at interactive rates using moderate computing resources.
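The data-interpolation scheme can be illustrated with a toy sketch: evaluate an expensive per-time-step derivative only at sparse samples inside the window and fill the remaining steps by linear interpolation. Here `expensive_derivative` is a made-up stand-in for finite-differencing the full system dynamics, and the window and stride are illustrative assumptions.

```python
def expensive_derivative(t):
    return 2.0 * t                      # placeholder for a costly Jacobian entry

def interpolated_derivatives(times, stride):
    n = len(times)
    idx = list(range(0, n, stride))
    if idx[-1] != n - 1:
        idx.append(n - 1)               # always sample the window boundary
    vals = {i: expensive_derivative(times[i]) for i in idx}
    out = [0.0] * n
    for a, b in zip(idx, idx[1:]):
        for i in range(a, b + 1):
            w = (i - a) / (b - a)       # linear blend between sparse samples
            out[i] = (1.0 - w) * vals[a] + w * vals[b]
    return out

times = [0.1 * k for k in range(11)]    # a one-second window sampled at 10 Hz
approx = interpolated_derivatives(times, stride=5)
```

Because the placeholder derivative happens to be linear in time, interpolation is exact here; in general, the stride trades a little accuracy for far fewer expensive derivative evaluations per window.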
- Published
- 2016
43. Online real‐time locomotive motion transformation based on biomechanical observations
- Author
-
Junyong Noh, Xiaogang Jin, Joseph S. Shin, Seok-Pyo Hong, and Daseong Han
- Subjects
Computer science, Upper body, Computer Graphics and Computer-Aided Design, Naturalness, Motion estimation, Character animation, Preprocessor, Effective method, Graph (abstract data type), Moving speed, Computer vision, Artificial intelligence, Software - Abstract
In this paper, we present an online real-time method for automatically transforming a basic locomotive motion into a desired motion of the same type, based on biomechanical observations. Given an online request for a motion of a certain type with desired moving speed and turning angle, our method first extracts a basic motion of the same type from a motion graph, and then transforms it to achieve the desired moving speed and turning angle by exploiting the following biomechanical observations: contact-driven center-of-mass control, anticipatory reorientation of upper body segments, moving speed adjustment, and whole-body leaning. Exploiting these observations, we propose a simple but effective method to add physical and behavioral naturalness to the resulting locomotive motions without preprocessing. Through experiments, we show that our method enables a character to respond agilely to online user commands while efficiently generating walking, jogging, and running motions with a compact motion library. Our method can also deal with certain dynamical motions such as forward roll. Copyright © 2016 John Wiley & Sons, Ltd.
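One of the listed observations, anticipatory reorientation, lends itself to a toy sketch: the upper body turns toward the desired heading ahead of the pelvis, which follows more gradually. The gains and update loop below are illustrative assumptions, not values from the paper.

```python
def reorient(pelvis_yaw, desired_yaw, upper_gain=0.6, pelvis_gain=0.2):
    delta = desired_yaw - pelvis_yaw
    upper_yaw = pelvis_yaw + upper_gain * delta   # upper body anticipates the turn
    pelvis_yaw += pelvis_gain * delta             # pelvis catches up gradually
    return pelvis_yaw, upper_yaw

pelvis, upper = 0.0, 0.0
for _ in range(3):                                # three update steps toward 1 rad
    pelvis, upper = reorient(pelvis, desired_yaw=1.0)
```

After each step the upper body leads the pelvis toward the requested turning angle, mimicking how humans look and twist into a turn before the lower body follows.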
- Published
- 2016
44. Omnidirectional Environmental Projection Mapping with Single Projector and Single Spherical Mirror
- Author
-
Seunghwa Jeong, Jungjin Lee, Bumki Kim, Younghui Kim, and Junyong Noh
- Subjects
Physics, Projector, Projection mapping, Curved mirror, Computer vision, Artificial intelligence, Omnidirectional antenna - Published
- 2015
45. Simulating Drops Settling in a Still Liquid
- Author
-
Junyong Noh, Donghoon Sagong, Xiaogang Jin, Joseph S. Shin, and Nahyup Kang
- Subjects
Fluid simulation, Settling, Computer science, Drop (liquid), Particle, Particle swarm optimization, Mechanics, Computer Graphics and Computer-Aided Design, Software, Simulation - Abstract
Researchers have devised a physics-based method to reproduce intricate mixing patterns of two miscible liquids. A small drop of one liquid descends in an initially still surrounding liquid, such as an ink drop sinking in water. Modeling the drop as a particle swarm, the researchers reduce the miscible-liquid-mixing problem to that of liquid--particle interactions. This method traces each particle to generate the mixing pattern; it allows two-way interaction between the particles and liquid.
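The two-way interaction between particles and liquid can be sketched in one dimension: drag slows the sinking particle relative to the liquid, and the opposite momentum is transferred to the local liquid velocity. All constants below are illustrative assumptions, not values from the paper's solver.

```python
def step(p_vel, l_vel, dt=0.01, gravity=-9.8, drag=5.0, mass_ratio=0.1):
    rel = p_vel - l_vel
    drag_force = -drag * rel                  # drag opposes relative motion
    p_vel += (gravity + drag_force) * dt      # particle feels gravity + drag
    l_vel += (-drag_force * mass_ratio) * dt  # liquid gains the opposite momentum
    return p_vel, l_vel

p_vel, l_vel = 0.0, 0.0
for _ in range(200):                          # two simulated seconds
    p_vel, l_vel = step(p_vel, l_vel)
```

The particle approaches a terminal sinking speed while the surrounding liquid is slowly dragged downward with it, which is the basic mechanism behind the trailing mixing patterns.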
- Published
- 2015
46. TrCollage: Efficient Image Collage Using Tree-Based Layer Reordering
- Author
-
Ping Li, Shiguang Liu, Junyong Noh, and Xiaobing Wang
- Subjects
Tree (data structure), Computer science, Mobile technology, Layer (object-oriented design), Virtual reality, Algorithm, Image (mathematics), Visualization - Abstract
This paper proposes an efficient image collage approach called TrCollage that uses tree-based layer reordering, taking into account not only the efficiency of the collage processing but also the quality of the final collage. In addition, TrCollage meets the demands arising from the rapid development of mobile technology, which calls for robust and efficient picture collage without computation-intensive processing such as graph cuts or saliency detection. Experimental results demonstrate the efficiency and effectiveness of TrCollage, which produces high-quality image collages through layer reordering.
- Published
- 2017
47. Facial Retargeting with Automatic Range of Motion Alignment
- Author
-
J. P. Lewis, Eduard Zell, Mario Botsch, Roger Blanco i Ribera, and Junyong Noh
- Subjects
Facial motion capture, Computer science, Animation, Computer Graphics and Computer-Aided Design, Motion capture, Character (mathematics), Retargeting, Computer vision, Artificial intelligence, Computer facial animation, Computer animation - Abstract
While facial capture focuses on accurate reconstruction of an actor's performance, facial animation retargeting aims to transfer the animation to another character such that the semantic meaning of the animation remains. Because of the popularity of blendshape animation, this effectively means computing suitable blendshape weights for the given target character. Current methods either require manually created examples of matching expressions of actor and target character, or are limited to characters with similar facial proportions (i.e., realistic models). In contrast, our approach can automatically retarget facial animations from a real actor to stylized characters. We formulate the problem of transferring the blendshapes of a facial rig to an actor as a special case of manifold alignment, by exploring the similarities of the motion spaces defined by the blendshapes and by an expressive training sequence of the actor. In addition, we incorporate a simple yet elegant facial prior based on discrete differential properties to guarantee smooth mesh deformation. Our method requires only sparse correspondences between characters and is thus suitable for retargeting marker-less and marker-based motion capture as well as animation transfer between virtual characters.
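The blendshape-weight computation at the heart of such retargeting can be shown in miniature: project an observed expression offset onto blendshape basis vectors, here assumed orthogonal so the least-squares solution reduces to scaled projections. The vectors are illustrative, and the paper's method additionally aligns the full motion spaces of actor and character; this sketch covers only the weight solve.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def solve_weights(offset, blendshapes):
    """Least squares with an orthogonal basis: w_i = <offset, b_i> / <b_i, b_i>."""
    return [dot(offset, b) / dot(b, b) for b in blendshapes]

# Tiny 4-vertex-offset "blendshapes", purely illustrative
smile = (1.0, 0.0, 0.0, 0.0)
brow_raise = (0.0, 0.0, 2.0, 0.0)
observed = (0.5, 0.0, 1.0, 0.0)          # half smile plus half brow raise
weights = solve_weights(observed, [smile, brow_raise])
```

Applying the recovered weights to the target character's own smile and brow-raise shapes reproduces the expression's meaning rather than the actor's exact geometry.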
- Published
- 2017
48. Data-Driven Reconstruction of Human Locomotion Using a Single Smartphone
- Author
-
Junyong Noh, Haegwang Eom, and Byungkuk Choi
- Subjects
Computer science, Gyroscope, Animation, Computer Graphics and Computer-Aided Design, Motion capture, Computer graphics, Character animation, Computer vision, Segmentation, Artificial intelligence - Abstract
Generating a visually appealing human motion sequence using low-dimensional control signals is a major line of study in the motion research area in computer graphics. We propose a novel approach that allows us to reconstruct full body human locomotion using a single inertial sensing device, a smartphone. Smartphones are among the most widely used devices and incorporate inertial sensors such as an accelerometer and a gyroscope. To find a mapping between a full body pose and smartphone sensor data, we perform low dimensional embedding of full body motion capture data, based on a Gaussian Process Latent Variable Model. Our system ensures temporal coherence between the reconstructed poses by using a state decomposition model for automatic phase segmentation. Finally, application of the proposed nonlinear regression algorithm finds a proper mapping between the latent space and the sensor data. Our framework effectively reconstructs plausible 3D locomotion sequences. We compare the generated animation to ground truth data obtained using a commercial motion capture system.
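The sensor-to-pose mapping can be conveyed with a deliberately simple stand-in: a nearest-neighbour lookup from hypothetical accelerometer features to low-dimensional latent pose coordinates. The paper learns this mapping with a GPLVM and nonlinear regression; this sketch, with entirely made-up data, only conveys the sensor-to-latent-space idea.

```python
def nearest_pose(sensor, database):
    """Return the latent pose whose stored sensor feature is closest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda entry: sq_dist(entry[0], sensor))[1]

# (sensor feature, latent pose coordinate) pairs, purely illustrative
database = [
    ((0.1, 9.8), (0.0, 0.2)),   # standing
    ((2.5, 9.0), (0.8, 0.1)),   # walking
    ((6.0, 7.5), (1.6, 0.4)),   # running
]
pose = nearest_pose((2.3, 9.1), database)
```

Decoding the recovered latent coordinate back through the learned low-dimensional embedding is what would yield the full-body pose in the actual system.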
- Published
- 2014
49. Body Motion Retargeting to Rig-space
- Author
-
Junyong Noh
- Subjects
Computer science, Retargeting, Motion (geometry), Computer vision, Artificial intelligence, Space (mathematics), Motion capture - Published
- 2014
50. On-line real-time physics-based predictive motion control with balance recovery
- Author
-
Joseph S. Shin, Junyong Noh, Daseong Han, Sung Y. Shin, and Xiaogang Jin
- Subjects
Motion field, Computer science, Motion estimation, Character animation, Physics-based, Optimal control, Motion control, Computer Graphics and Computer-Aided Design, Algorithm - Abstract
In this paper, we present an on-line real-time physics-based approach to motion control with contact repositioning based on a low-dimensional dynamics model using example motion data. Our approach first generates a reference motion in run time according to an on-line user request by transforming an example motion extracted from a motion library. Guided by the reference motion, it repeatedly generates an optimal control policy for a small time window, one at a time, for a sequence of partially overlapping windows, each covering a couple of footsteps of the reference motion, which supports on-line performance. On top of this, our system dynamics and problem formulation allow us to derive closed-form derivative functions by exploiting the low-dimensional dynamics model together with example motion data. These derivative functions and their sparse structures facilitate real-time performance. Our approach also allows contact foot repositioning so as to robustly respond to an external perturbation or an environmental change as well as to effectively perform locomotion tasks such as stepping on stones.
- Published
- 2014