9 results for "Kun Lai"
Search Results
2. DeepFaceEditing
- Author
- Yu-Kun Lai, Feng-Lin Liu, Shu-Yu Chen, Chunpeng Li, Lin Gao, Hongbo Fu, and Paul L. Rosin
- Subjects
- FOS: Computer and information sciences, Exploit, Computer science, Computing Methodologies: Image Processing and Computer Vision, Geometry, Computer Graphics and Computer-Aided Design, Graphics (cs.GR), Sketch, Computer Science - Graphics, Salient, Face (geometry), Component (UML), Domain knowledge, Representation (mathematics), Control (linguistics), Computing Methodologies: Computer Graphics
- Abstract
Recent facial image synthesis methods have been mainly based on conditional generative models. Sketch-based conditions can effectively describe the geometry of faces, including the contours of facial components, hair structures, and salient edges (e.g., wrinkles) on face surfaces, but they lack effective control of appearance, which is influenced by color, material, lighting conditions, etc. To gain more control over generated results, one possible approach is to apply existing disentangling methods to separate face images into geometry and appearance representations. However, existing disentangling methods are not optimized for human face editing, and cannot achieve fine control of facial details such as wrinkles. To address this issue, we propose DeepFaceEditing, a structured disentanglement framework specifically designed for face images to support face generation and editing with disentangled control of geometry and appearance. We adopt a local-to-global approach to incorporate face domain knowledge: local component images are decomposed into geometry and appearance representations, which are fused consistently by a global fusion module to improve generation quality. We exploit sketches to assist in extracting a better geometry representation, which also supports intuitive geometry editing via sketching. The resulting method can either extract the geometry and appearance representations from face images, or directly extract the geometry representation from face sketches. Such representations allow users to easily edit and synthesize face images with decoupled control of geometry and appearance. Both qualitative and quantitative evaluations show the superior detail and appearance control of our method compared to state-of-the-art methods.
- Published
- 2021
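The local-to-global control flow in the DeepFaceEditing abstract above is easy to picture in code. Below is a minimal NumPy sketch of the idea, not the paper's implementation: random linear maps stand in for the trained encoders and global fusion module, and the component names and feature sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
COMPONENTS = ["left_eye", "right_eye", "nose", "mouth", "background"]
D_IMG, D_GEO, D_APP = 256, 32, 32  # toy feature sizes (assumptions)

# Stand-ins for trained per-component encoders and a global fusion decoder.
W_geo = {c: rng.standard_normal((D_GEO, D_IMG)) for c in COMPONENTS}
W_app = {c: rng.standard_normal((D_APP, D_IMG)) for c in COMPONENTS}
W_fuse = rng.standard_normal((D_IMG, len(COMPONENTS) * (D_GEO + D_APP)))

def encode(components):
    """Split each local component into (geometry, appearance) codes."""
    return {c: (W_geo[c] @ x, W_app[c] @ x) for c, x in components.items()}

def fuse(codes):
    """Global fusion: concatenate all local codes and decode jointly."""
    z = np.concatenate([np.concatenate(codes[c]) for c in COMPONENTS])
    return W_fuse @ z

# "Edit": geometry codes from one (e.g. sketch-derived) face, appearance
# codes from another, recombined before global fusion.
face_a = {c: rng.standard_normal(D_IMG) for c in COMPONENTS}  # geometry source
face_b = {c: rng.standard_normal(D_IMG) for c in COMPONENTS}  # appearance source
geo, app = encode(face_a), encode(face_b)
mixed = {c: (geo[c][0], app[c][1]) for c in COMPONENTS}
print(fuse(mixed).shape)  # decoded feature for the recombined face
```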
3. Noise-Resilient Reconstruction of Panoramas and 3D Scenes Using Robot-Mounted Unsynchronized Commodity RGB-D Cameras
- Author
- Yu-Kun Lai, Hongbo Fu, Yan-Pei Cao, Sheng Yang, Leif Kobbelt, Shi-Min Hu, and Beichen Li
- Subjects
- Pixel, Panorama, Computer science, Perspective (graphical), Computing Methodologies: Image Processing and Computer Vision, 020207 software engineering, 02 engineering and technology, Computer Graphics and Computer-Aided Design, Image stitching, 0202 electrical engineering, electronic engineering, information engineering, Equirectangular projection, RGB color model, Robot, 020201 artificial intelligence & image processing, Computer vision, Noise (video), Artificial intelligence, Computing Methodologies: Computer Graphics
- Abstract
We present a two-stage approach that first constructs 3D panoramas and then stitches them for noise-resilient reconstruction of large-scale indoor scenes. Our approach uses multiple unsynchronized RGB-D cameras, mounted on a robot platform, which can perform in-place rotations at different locations in a scene. Such cameras rotate on a common (but unknown) axis, which offers a novel perspective for coping with unsynchronized cameras without requiring overlapping fields of view (FoV). Based on this key observation, we propose novel algorithms to track these cameras simultaneously. Furthermore, during the integration of raw frames into an equirectangular panorama, we derive uncertainty estimates from the multiple measurements assigned to the same pixel. This enables us to model the sensing noise appropriately and account for its influence, achieving better noise resilience and improving both the geometric quality of each panorama and the accuracy of global inter-panorama registration. We evaluate and demonstrate the performance of our method for enhancing the geometric quality of scene reconstruction on both real-world and synthetic scans.
- Published
- 2020
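The uncertainty handling described in the abstract above boils down to fusing the multiple depth measurements that land on one equirectangular pixel. The following is a toy NumPy sketch of such inverse-variance fusion; the quadratic depth-noise model is a common RGB-D assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def sensor_variance(depth, a=1e-3):
    """Assumed noise model: measurement std grows roughly with depth squared."""
    return (a * depth**2) ** 2

def fuse_pixel(measurements):
    """Inverse-variance weighted mean and fused variance for one pixel."""
    d = np.asarray(measurements, dtype=float)
    w = 1.0 / sensor_variance(d)
    fused = np.sum(w * d) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Three unsynchronized cameras report slightly different depths for one
# pixel; under this model the largest reading carries the most noise and
# is downweighted accordingly.
depth, var = fuse_pixel([2.02, 1.97, 2.41])  # metres
print(f"fused depth {depth:.3f} m, std {var**0.5:.4f} m")
```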
4. SDM-NET
- Author
- Hongbo Fu, Yu-Jie Yuan, Lin Gao, Yu-Kun Lai, Tong Wu, Hao Zhang, and Jie Yang
- Subjects
- FOS: Computer and information sciences, Artificial neural network, Computer science, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, 020207 software engineering, 02 engineering and technology, Computer Graphics and Computer-Aided Design, Autoencoder, Graphics (cs.GR), Homeomorphism, Computer Science - Graphics, 0202 electrical engineering, electronic engineering, information engineering, Polygon mesh, Computer vision, Artificial intelligence, Generative grammar, Computing Methodologies: Computer Graphics
- Abstract
We introduce SDM-NET, a deep generative neural network that produces structured deformable meshes. Specifically, the network is trained to generate a spatial arrangement of closed, deformable mesh parts which respects the global part structure of a shape collection, e.g., chairs, airplanes, etc. Our key observation is that while the overall structure of a 3D shape can be complex, the shape can usually be decomposed into a set of parts, each homeomorphic to a box, and the finer-scale geometry of each part can be recovered by deforming that box. The architecture of SDM-NET is that of a two-level variational autoencoder (VAE). At the part level, a PartVAE learns a deformable model of part geometries. At the structural level, we train a Structured Parts VAE (SP-VAE), which jointly learns the part structure of a shape collection and the part geometries, ensuring coherence between global shape structure and surface details. Through extensive experiments and comparisons with state-of-the-art deep generative models of shapes, we demonstrate the superiority of SDM-NET in generating meshes with high visual quality, flexible topology, and meaningful structures, which benefit shape interpolation and other subsequent modeling tasks.
Comment: Conditionally accepted to SIGGRAPH Asia 2019.
- Published
- 2019
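The two-level encoding in the SDM-NET abstract above can be outlined in a few lines. The sketch below is illustrative only: random linear maps stand in for the trained PartVAE and SP-VAE networks, and all part counts and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PARTS, D_PART, D_Z, D_S = 4, 300, 16, 24  # e.g. 100 verts x 3 offsets per part

We = rng.standard_normal((2 * D_Z, D_PART))   # PartVAE "encoder"
Wd = rng.standard_normal((D_PART, D_Z))       # PartVAE "decoder"
Se = rng.standard_normal((2 * D_S, N_PARTS * D_Z))  # SP-VAE "encoder"

def reparameterize(stats):
    """Standard VAE trick: split into (mu, logvar) and sample."""
    mu, logvar = np.split(stats, 2)
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def part_vae_encode(part_geometry):   # part level: per-part geometry code
    return reparameterize(We @ part_geometry)

def sp_vae_encode(part_codes):        # structural level: joint shape code
    return reparameterize(Se @ np.concatenate(part_codes))

parts = [rng.standard_normal(D_PART) for _ in range(N_PARTS)]  # box deformations
z_parts = [part_vae_encode(p) for p in parts]
z_shape = sp_vae_encode(z_parts)      # encodes structure + geometry together
recon_part0 = Wd @ z_parts[0]         # decode one part's geometry
print(z_shape.shape, recon_part0.shape)
```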
5. Knitting 4D garments with elasticity controlled for body motion
- Author
- Xiangjia Chen, Yu-Kun Lai, Eugeni L. Doubrovski, Xingjian Han, Zishun Liu, Yuchen Zhang, Emily Whiting, and Charlie C. L. Wang
- Subjects
- Pipeline (computing), Computation, knitting, Mechanical engineering, Body movement, computational fabrication, Clothing, Computer Graphics and Computer-Aided Design, Motion (physics), 4D garment, Graph (abstract data type), Polygon mesh, Elasticity, elasticity control, Computing Methodologies: Computer Graphics
- Abstract
In this paper, we present a new computational pipeline for designing and fabricating 4D garments as knitwear that considers comfort during body movement. This is achieved by careful control of the elasticity distribution to reduce uncomfortable pressure and unwanted sliding caused by body motion. We exploit the ability to knit patterns at different elastic levels using single-jersey jacquard (SJJ) with two yarns. We design the distribution of elasticity for a garment by physics-based computation; the optimized elasticity is then converted into instructions for a digital knitting machine by two algorithms proposed in this paper. Specifically, a graph-based algorithm is proposed to generate knittable stitch meshes that accurately capture the 3D shape of a garment, and a tiling algorithm is employed to assign SJJ patterns on the stitch mesh to realize the designed distribution of elasticity. The effectiveness of our approach is verified in simulation and on specimens physically fabricated by knitting machines.
- Published
- 2021
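The tiling step described in the abstract above amounts to quantizing a computed per-stitch elasticity target to the nearest level a knittable pattern can realize. Here is a toy NumPy sketch with an invented three-pattern SJJ library; the pattern names and elasticity values are made up for illustration.

```python
import numpy as np

# Assumed library: SJJ patterns and the relative elasticity each achieves
# (two-yarn single-jersey jacquard supports a few discrete levels).
PATTERNS = {"tight": 0.2, "medium": 0.5, "stretch": 0.9}

def tile_elasticity(target):
    """Map each stitch's target elasticity to the closest knittable pattern."""
    names = list(PATTERNS)
    levels = np.array([PATTERNS[n] for n in names])
    idx = np.abs(target[..., None] - levels).argmin(axis=-1)
    return np.vectorize(names.__getitem__)(idx)

# A 4x6 patch of the stitch mesh with a smooth elasticity gradient,
# standing in for the physics-optimized distribution.
target = np.tile(np.linspace(0.1, 1.0, 6), (4, 1))
print(tile_elasticity(target))
```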
6. BiggerPicture
- Author
- Ralph R. Martin, Yu-Kun Lai, Shi-Min Hu, Miao Wang, and Yuan Liang
- Subjects
- Matching (graph theory), Computing Methodologies: Image Processing and Computer Vision, Extrapolation, Image processing, Image editing, Computer Graphics and Computer-Aided Design, QA76, Data-driven, Image texture, Source image, Computer vision, Small hole, Artificial intelligence, Algorithm, Mathematics
- Abstract
Filling a small hole in an image with plausible content is well studied. Extrapolating an image to give a distinctly larger one is much more challenging---a significant amount of additional content is needed that matches the original image, especially near its boundaries. We propose a data-driven approach to this problem. Given a source image, and the amount and direction(s) in which it is to be extrapolated, our system determines visually consistent content for the extrapolated regions using library images. In addition to low-level matching, we achieve consistency at a higher level by using graph proxies for regions of the source and library images. Treating images as graphs allows us to find candidates for image extrapolation in feasible time. Consistency of subgraphs in source and library images is used to find good candidates for the additional content; these are then further filtered. Region boundary curves are aligned to ensure consistency where image parts are joined, using a photomontage method. We demonstrate the power of our method in image editing applications.
- Published
- 2014
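The "images as graphs" proxy mentioned in the BiggerPicture abstract above can be illustrated by a region-adjacency graph built from a segmentation label map, so that source and library images can be compared by subgraph consistency rather than pixel by pixel. A minimal sketch follows; the segmentation and the matching step itself are assumed to exist elsewhere.

```python
import numpy as np

def region_adjacency_graph(labels):
    """Return {region: set(neighbouring regions)} from a 2D label map."""
    graph = {int(l): set() for l in np.unique(labels)}
    # Horizontally and vertically adjacent pixels with different labels
    # induce an edge between their regions.
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        for u, v in zip(a[a != b], b[a != b]):
            graph[int(u)].add(int(v))
            graph[int(v)].add(int(u))
    return graph

labels = np.array([[0, 0, 1],
                   [0, 2, 1],
                   [2, 2, 1]])
print(region_adjacency_graph(labels))  # {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
```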
7. Automatic semantic modeling of indoor scenes from low-quality RGB-D data using contextual information
- Author
- Kang Chen, Yuxin Wu, Yu-Kun Lai, Ralph R. Martin, and Shi-Min Hu
- Subjects
- Computer science, Computing Methodologies: Image Processing and Computer Vision, Contextual information, RGB color model, Scene statistics, Computer vision, Artificial intelligence, Computer Graphics and Computer-Aided Design, Computing Methodologies: Computer Graphics, QA76
- Abstract
We present a novel solution for automatic semantic modeling of indoor scenes from a sparse set of low-quality RGB-D images. Such data presents challenges due to noise, low resolution, occlusion and missing depth information. We exploit the knowledge in a scene database containing hundreds of indoor scenes with over 10,000 manually segmented and labeled mesh models of objects. In seconds, we output a visually plausible 3D scene, adapting these models and their parts to fit the input scans. Contextual relationships learned from the database are used to constrain reconstruction, ensuring semantic compatibility between both object models and parts. Small objects and objects with incomplete depth information, which are difficult to recover reliably, are processed with a two-stage approach: major objects are recognized first, providing a known scene structure, and 2D contour-based model retrieval is then used to recover the smaller objects. Evaluations using our own data and two public datasets show that our approach can model typical real-world indoor scenes efficiently and robustly.
- Published
- 2014
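The contextual constraint described in the abstract above can be pictured as re-ranking candidate labels for a scan segment by learned co-occurrence with already-recognized major objects. In the sketch below the prior table and all scores are invented; in the paper such statistics come from the labeled scene database.

```python
# Hypothetical learned prior: P(object | recognized major object nearby).
CONTEXT_PRIOR = {
    ("desk", "monitor"): 0.9, ("desk", "mug"): 0.6, ("desk", "pillow"): 0.05,
    ("bed", "pillow"): 0.9, ("bed", "monitor"): 0.1,
}

def rerank(candidates, anchors):
    """Combine geometric match score with contextual support from anchors."""
    def context(label):
        # Best supporting anchor; 0.2 is an assumed default for unseen pairs.
        return max(CONTEXT_PRIOR.get((a, label), 0.2) for a in anchors)
    return sorted(candidates, key=lambda c: c[1] * context(c[0]), reverse=True)

# Candidate (label, geometric-match score) pairs for one small segment,
# after a desk was recognized in the first stage.
print(rerank([("pillow", 0.8), ("monitor", 0.7), ("mug", 0.6)], ["desk"]))
# -> monitor ranks first: weaker geometry, but strong context near a desk.
```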
8. Diffusion pruning for rapidly and robustly selecting global correspondences using local isometry
- Author
- Ralph R. Martin, Gary K. L. Tam, Paul L. Rosin, and Yu-Kun Lai
- Subjects
- QA75, Diffusion (acoustics), Matching (graph theory), Geodesic, Computation, Topology, Computer Graphics and Computer-Aided Design, Set (abstract data type), Computer graphics, 3D geometry, Pruning (decision trees), Algorithm, Computing Methodologies: Computer Graphics, Mathematics
- Abstract
Finding correspondences between two surfaces is a fundamental operation in various applications in computer graphics and related fields. Candidate correspondences can be found by matching local signatures, but as they only consider local geometry, many are globally inconsistent. We provide a novel algorithm to prune a set of candidate correspondences to those most likely to be globally consistent. Our approach can handle articulated surfaces, and ones related by a deformation which is globally nonisometric, provided that the deformation is locally approximately isometric. Our approach uses an efficient diffusion framework, and only requires geodesic distance calculations in small neighbourhoods, unlike many existing techniques which require computation of global geodesic distances. We demonstrate that, for typical examples, our approach provides significant improvements in accuracy, yet also reduces time and memory costs by a factor of several hundred compared to existing pruning techniques. Our method is furthermore insensitive to holes, unlike many other methods.
- Published
- 2014
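The diffusion idea in the abstract above can be sketched compactly: candidate correspondences vote for each other when they preserve distances within small neighbourhoods, and iterating the normalized vote matrix concentrates weight on the globally consistent subset. In this toy version, Euclidean distances stand in for the paper's geodesics, and all thresholds are illustrative.

```python
import numpy as np

def prune(src, dst, cand, sigma=0.1, radius=0.6, iters=50):
    """Score candidate correspondences (i, i') by diffused local consistency."""
    n = len(cand)
    W = np.zeros((n, n))
    for a, (i, ip) in enumerate(cand):
        for b, (j, jp) in enumerate(cand):
            d1 = np.linalg.norm(src[i] - src[j])
            d2 = np.linalg.norm(dst[ip] - dst[jp])
            if a != b and d1 < radius:           # only nearby pairs vote
                W[a, b] = np.exp(-(d1 - d2) ** 2 / sigma**2)
    x = np.ones(n) / n
    for _ in range(iters):                       # diffusion / power iteration
        x = W @ x
        x /= x.sum()
    return x                                     # high score = keep

src = np.array([[0, 0], [0.3, 0], [0, 0.3], [2, 2]], dtype=float)
dst = src + 0.01                                 # near-isometric copy
cand = [(0, 0), (1, 1), (2, 2), (3, 1)]          # last candidate is spurious
print(np.round(prune(src, dst, cand), 3))        # spurious match scores ~0
```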
9. Automatic and Topology-Preserving Gradient Mesh Generation for Image Vectorization
- Author
- Yu-Kun Lai, Shi-Min Hu, and Ralph R. Martin
- Subjects
- GRAPHIC arts, COMPUTER software, TOPOLOGY, COMPUTER graphics, ALGORITHMS, COMPUTER-aided design
- Abstract
The gradient mesh vector graphics representation, used in commercial software, is a regular grid that stores a position and color, and their gradients, at each grid point. Gradient meshes can compactly represent smoothly changing data, and are typically used for single objects. This paper advances the state of the art for gradient meshes in several significant ways. Firstly, we introduce a topology-preserving gradient mesh representation which allows an arbitrary number of holes. This is important, as objects in images often have holes, due either to occlusion or to their 3D structure. Secondly, our algorithm uses the concept of image manifolds, adapting surface parameterization and fitting techniques to generate the gradient mesh in a fully automatic manner. Existing gradient-mesh algorithms require manual interaction to guide grid construction and to cut objects with holes into disk-like regions. Our new algorithm is empirically at least 10 times faster than previous approaches. Furthermore, image segmentation can be used with our new algorithm to provide automatic gradient mesh generation for a whole image. Finally, fitting errors can be simply controlled to balance quality against storage.
- Published
- 2009
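A gradient-mesh patch of the kind described in the abstract above can be evaluated as a Ferguson (bicubic Hermite) surface from the values and gradients stored at its four corners. Below is a short sketch for one scalar channel, with the twist terms set to zero; that zero-twist choice is a common simplification, and the paper's exact patch formulation may differ.

```python
import numpy as np

H = np.array([[ 2, -2,  1,  1],
              [-3,  3, -2, -1],
              [ 0,  0,  1,  0],
              [ 1,  0,  0,  0]], dtype=float)  # cubic Hermite basis matrix

def ferguson_patch(P, Pu, Pv, u, v):
    """Evaluate the patch at (u, v) in [0,1]^2 from 2x2 corner-data arrays."""
    # Geometry matrix: corner values, u/v derivatives, zero twist terms.
    G = np.block([[P, Pv], [Pu, np.zeros((2, 2))]])
    U = np.array([u**3, u**2, u, 1.0])
    V = np.array([v**3, v**2, v, 1.0])
    return U @ H @ G @ H.T @ V

# One scalar channel (e.g. the red component) of a patch: corner values
# and gradients chosen arbitrarily for illustration.
P  = np.array([[0.2, 0.8], [0.4, 1.0]])   # values at the four corners
Pu = np.zeros((2, 2))                      # u-derivatives at the corners
Pv = np.zeros((2, 2))                      # v-derivatives at the corners
print(ferguson_patch(P, Pu, Pv, 0.5, 0.5))  # smooth interior value (0.6)
```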