33 results for "Kun Zhou"
Search Results
2. Real-Time Hair Simulation With Neural Interpolation
- Author
-
Kun Zhou, Qing Lyu, Menglei Chai, and Xiang Chen
- Subjects
Generator (computer programming), Artificial neural network, Computer science, Heuristic (computer science), Process (computing), Computer Graphics and Computer-Aided Design, Pipeline (software), Signal Processing, Computer Graphics, Computer Simulation, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Algorithms, Software, Computer animation, Hair, Interpolation - Abstract
Traditionally, reduced hair simulation methods are either restricted to heuristic approximations or bound to specific hairstyles. We introduce the first CNN-integrated framework for simulating various hairstyles. The approach produces visually realistic hair at interactive speed. To address the technical challenges, our hair simulation pipeline is designed as a two-stage process. First, we present a fully convolutional neural interpolator as the backbone generator to compute dynamic weights for guide hair interpolation. Then, we adopt a second generator to produce fine-scale displacements that enhance the hair details. We train the neural interpolator with a dedicated loss function and the displacement generator with an adversarial discriminator. Experimental results demonstrate that our method is effective, efficient, and superior to the state of the art on a wide variety of hairstyles. We further propose a performance-driven digital avatar system and an interactive hairstyle editing tool to illustrate the practical applications.
- Published
- 2022
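The guide-hair interpolation at the core of the pipeline above can be sketched as a weighted blend of guide strands. The shapes, names, and weights below are illustrative placeholders, not the paper's learned dynamic weights (which the neural interpolator predicts per frame):

```python
import numpy as np

def interpolate_strands(guide_pos, weights):
    """Blend guide-hair vertex positions into full strands.

    guide_pos : (G, V, 3) -- V vertices for each of G guide hairs
    weights   : (S, G)    -- per-strand blending weights (rows sum to 1)
    returns   : (S, V, 3) interpolated strand vertices
    """
    return np.einsum('sg,gvd->svd', weights, guide_pos)

# toy example: 2 guide hairs with 2 vertices each, 3 rendered strands
guides = np.array([[[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]],
                   [[1.0, 0.0, 0.0], [1.0, 0.0, 1.0]]])
w = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
strands = interpolate_strands(guides, w)   # middle strand sits halfway
```

In the paper the weight matrix is recomputed every frame by the neural interpolator; here it is a fixed array purely to show the blending step.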
3. Personalized Audio-Driven 3D Facial Animation Via Style-Content Disentanglement
- Author
-
Yujin Chai, Tianjia Shao, Yanlin Weng, and Kun Zhou
- Subjects
Signal Processing, Computer Vision and Pattern Recognition, Computer Graphics and Computer-Aided Design, Software - Published
- 2022
4. H-CNN: Spatial Hashing Based CNN for 3D Shape Analysis
- Author
-
Yanlin Weng, Kun Zhou, Tianjia Shao, Yin Yang, and Qiming Hou
- Subjects
Computer science, Hash function, Data structure, Computer Graphics and Computer-Aided Design, Convolutional neural network, Graphics (cs.GR), Hash table, Signal Processing, Memory footprint, Computer Vision and Pattern Recognition, Algorithm, Software - Abstract
We present a novel spatial hashing based data structure to facilitate 3D shape analysis using convolutional neural networks (CNNs). Our method exploits the sparse occupancy of the 3D shape boundary and builds hierarchical hash tables for an input model under different resolutions. Based on this data structure, we design two efficient GPU algorithms, hash2col and col2hash, so that CNN operations like convolution and pooling can be efficiently parallelized. The spatial hashing is nearly minimal, and our data structure is almost the same size as the raw input. Compared with state-of-the-art octree-based methods, our data structure significantly reduces the memory footprint during CNN training. As the input geometry features are more compactly packed, CNN operations also run faster with our data structure. Experiments show that, under the same network structure, our method yields benchmarks comparable or superior to the state of the art while consuming only one-third of the memory. This superior memory performance allows the CNN to handle high-resolution shape analysis.
- Published
- 2020
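The hash2col-style gather described above depends on a spatial hash over occupied boundary voxels. The sketch below uses the classic XOR-of-primes hash with chaining; the paper's nearly-minimal perfect hashing is more involved, so treat this only as a toy illustration of the lookup idea:

```python
import numpy as np

P = (73856093, 19349663, 83492791)   # classic spatial-hash primes

def build_table(voxels, feats, size):
    """Chain sparse voxel features into a hash table keyed by position."""
    table = {}
    for v, f in zip(map(tuple, voxels), feats):
        slot = (v[0] * P[0] ^ v[1] * P[1] ^ v[2] * P[2]) % size
        table.setdefault(slot, []).append((v, f))   # chain on collision
    return table

def lookup(table, voxel, size):
    """Return the feature stored at a voxel, or None for empty space."""
    slot = (voxel[0] * P[0] ^ voxel[1] * P[1] ^ voxel[2] * P[2]) % size
    for v, f in table.get(slot, []):
        if v == tuple(voxel):
            return f
    return None

voxels = np.array([[1, 2, 3], [4, 5, 6]])       # occupied boundary voxels
feats = [np.array([1.0]), np.array([2.0])]      # per-voxel geometry features
table = build_table(voxels, feats, 64)
```

A convolution gather would call `lookup` for each neighbor offset of an occupied voxel, treating `None` as empty space, which is what keeps the structure near the size of the raw input.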
5. Neural Reflectance Capture in the View-Illumination Domain
- Author
-
Cihui Xie, Kaizhang Kang, Kun Zhou, Xuanda Yang, Minyi Gu, and Hongzhi Wu
- Subjects
Artificial neural network, Computer science, Object (computer science), Computer Graphics and Computer-Aided Design, Reflectivity, Multiplexing, Sampling (signal processing), Signal Processing, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Software, Coherence (physics) - Abstract
We propose a novel framework to efficiently capture the unknown reflectance on a non-planar 3D object, by learning to probe the 4D view-lighting domain with a high-performance illumination multiplexing setup. The core of our framework is a deep neural network, specifically tailored to exploit the multi-view coherence for efficiency. It takes as input the photometric measurements of a surface point under learned lighting patterns at different views, automatically aggregates the information and reconstructs the anisotropic reflectance. We also evaluate the impact of different sampling parameters over our network. The effectiveness of our framework is demonstrated on high-quality reconstructions of a variety of physical objects, with an acquisition efficiency outperforming state-of-the-art techniques.
- Published
- 2021
6. Human Bas-Relief Generation From a Single Photograph
- Author
-
Kun Zhou, Youyi Zheng, Zhenjie Yang, Xiang Chen, and Beijia Chen
- Subjects
Computer science, Point set registration, Solid modeling, Iterative reconstruction, Computer Graphics and Computer-Aided Design, Image (mathematics), Imaging, Three-Dimensional, Signal Processing, Normal mapping, Computer Graphics, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Image warping, Pose, Image resolution, Software, Algorithms - Abstract
We present a semi-automatic method for producing human bas-relief from a single photograph. Given an input photo of one or multiple persons, our method first estimates a 3D skeleton for each person in the image. SMPL models are then fitted to the 3D skeletons to generate a 3D guide model. To align the 3D guide model with the image, we compute a 2D warping field to non-rigidly register the projected contours of the guide model with the body contours in the image. Then the normal map of the 3D guide model is warped by the 2D deformation field to reconstruct an overall base shape. Finally, the base shape is integrated with a fine-scale normal map to produce the final bas-relief. To tackle the complex intra- and inter-body interactions, we design an occlusion relationship resolution method that operates at the level of 3D skeletons with minimal user inputs. To tightly register the model contours to the image contours, we propose a non-rigid point matching algorithm harnessing user-specified sparse correspondences. Experiments demonstrate that our human bas-relief generation method is capable of producing perceptually realistic results on various single-person and multi-person images, on which the state-of-the-art depth and pose estimation methods often fail.
- Published
- 2021
7. Physics-Based Quadratic Deformation Using Elastic Weighting
- Author
-
Weiwei Xu, Ran Luo, Kun Zhou, Huamin Wang, and Yin Yang
- Subjects
Computer science, Domain decomposition methods, Computer Graphics and Computer-Aided Design, Weighting, Nonlinear system, Quadratic equation, Robustness (computer science), Signal Processing, Computer Vision and Pattern Recognition, Algorithm, Software, Subspace topology - Abstract
This paper presents a spatial reduction framework for simulating nonlinear deformable objects interactively. The reduced model is built from a small number of overlapping quadratic domains, as we observe that incorporating high-order degrees of freedom (DOFs) is important for simulation quality. Departing from existing multi-domain methods in graphics, our method interprets deformed shapes as blended quadratic transformations from nearby domains. Doing so avoids expensive safeguards against domain coupling and improves numerical robustness under large deformations. We present an algorithm that efficiently computes weight functions for the reduced DOFs in a physics-aware manner. Inspired by the well-known multi-weight enveloping technique, our framework also allows subspace tweaking based on a few representative deformation poses. This elastic weighting mechanism significantly extends the expressivity of the reduced model with lightweight computational effort. Our simulator is versatile and interfaces well with many existing techniques. It also supports local DOF adaptation to incorporate novel deformations (e.g., those induced by collisions). The proposed algorithm complements state-of-the-art model reduction and domain decomposition methods by seeking good trade-offs among animation quality, numerical robustness, pre-computation complexity, and simulation efficiency from an alternative perspective.
- Published
- 2018
8. ShuttleSpace: Exploring and Analyzing Movement Trajectory in Immersive Visualization
- Author
-
Lejun Shen, Xiangtong Chu, Shuainan Ye, Yifan Wang, Siwei Fu, Zhutian Chen, Yingcai Wu, and Kun Zhou
- Subjects
Visual analytics, Computer science, Metaphor, Perspective (graphical), Virtual reality, Computer Graphics and Computer-Aided Design, Visualization, Immersive technology, Data visualization, Human–computer interaction, Signal Processing, Immersion (virtual reality), Computer Vision and Pattern Recognition, Software - Abstract
We present ShuttleSpace, an immersive analytics system that assists experts in analyzing trajectory data in badminton. Trajectories in sports, such as the movements of players and balls, contain rich information on player behavior and thus have been widely analyzed by coaches and analysts to improve players' performance. However, existing visual analytics systems often present the trajectories in court diagrams that are abstractions of reality, making it difficult for experts to imagine the situation on the court and understand why a player acted in a certain way. With recent developments in immersive technologies, such as virtual reality (VR), experts increasingly have the opportunity to see, feel, explore, and understand these 3D trajectories from the player's perspective. Yet little research has studied how to support immersive analysis of sports data from such a perspective. Specific challenges are rooted in data presentation (e.g., how to seamlessly combine 2D and 3D visualizations) and interaction (e.g., how to naturally interact with data without keyboard and mouse) in VR. To address these challenges, we worked closely with domain experts who have served a top national badminton team to design ShuttleSpace. Our system leverages 1) peripheral vision to combine the 2D and 3D visualizations and 2) the VR controller to support natural interaction via a stroke metaphor. We demonstrate the effectiveness of ShuttleSpace through three case studies conducted by the experts, which yielded useful insights. We further conducted interviews with the experts, whose feedback confirms that our first-person immersive analytics system is suitable and useful for analyzing badminton data.
- Published
- 2020
9. DeepSketchHair: Deep Sketch-Based 3D Hair Modeling
- Author
-
Kun Zhou, Youyi Zheng, Hongbo Fu, Changgeng Zhang, and Yuefan Shen
- Subjects
Artificial neural network, Computer science, Deep learning, Solid modeling, Computer Graphics and Computer-Aided Design, Sketch, Graphics (cs.GR), Signal Processing, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Software - Abstract
We present DeepSketchHair, a deep learning based tool for interactive modeling of 3D hair from 2D sketches. Given a 3D bust model as reference, our sketching system takes as input a user-drawn sketch (consisting of the hair contour and a few strokes indicating the hair growing direction within the hair region), and automatically generates a 3D hair model that matches the input sketch both globally and locally. The key enablers of our system are two carefully designed neural networks: S2ONet, which converts an input sketch to a dense 2D hair orientation field, and O2VNet, which maps the 2D orientation field to a 3D vector field. Our system also supports hair editing with additional sketches in new views. This is enabled by another deep neural network, V2VNet, which updates the 3D vector field with respect to the new sketches. All three networks are trained with synthetic data generated from a 3D hairstyle database. We demonstrate the effectiveness and expressiveness of our tool on a variety of hairstyles and also compare our method with prior art.
- Published
- 2020
10. OrthoAligner: Image-based Teeth Alignment Prediction via Latent Style Manipulation
- Author
-
Beijia Chen, Hongbo Fu, Kun Zhou, and Youyi Zheng
- Subjects
Signal Processing, Computer Vision and Pattern Recognition, Computer Graphics and Computer-Aided Design, Software - Abstract
In this paper, we present OrthoAligner, a novel method to predict the visual outcome of orthodontic treatment in a portrait image. Unlike the state-of-the-art method, which relies on a 3D teeth model obtained from dental scanning, our method generates realistic alignment effects in images without requiring additional 3D information as input, thus making our system readily available to average users. The key to our approach is to employ the 3D geometric information encoded in an unsupervised generative model, i.e., StyleGAN in this paper. Instead of directly conducting translation in the image space, we embed the teeth region extracted from a given portrait into the latent space of the StyleGAN generator and propose a novel latent editing method to discover a geometrically meaningful editing path that yields the alignment process in the image space. To blend the edited mouth region with the original portrait image, we further introduce a BlendingNet to remove boundary artifacts and correct color inconsistency. We also extend our method to short video clips by propagating the alignment effects across neighboring frames. We evaluate our method in various orthodontic cases, compare it to the state of the art and competitive baselines, and validate the effectiveness of each component.
- Published
- 2022
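The latent-path editing described above amounts to walking an inverted latent code along an editing direction. A minimal sketch, where both the latent code and the direction are assumed given (the paper discovers the direction; `edit_path` is a hypothetical helper name):

```python
import numpy as np

def edit_path(w, direction, steps=5, strength=1.0):
    """Walk a latent code along an editing direction.

    w         : (d,) latent code of the embedded mouth region
    direction : (d,) editing direction (given here; discovered in the paper)
    Returns latent codes from unedited (alpha = 0) to fully edited.
    """
    return [w + a * direction for a in np.linspace(0.0, strength, steps)]

# toy 4-dimensional latent code walked along an all-ones direction
path = edit_path(np.zeros(4), np.ones(4), steps=3)
# path[0] is the original code; path[-1] is shifted by the full direction
```

Each intermediate code would be decoded by the generator to render one stage of the simulated alignment, which is also how the video extension propagates effects across frames.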
11. Example-Based Subspace Stress Analysis for Interactive Shape Design
- Author
-
Kun Zhou, Changxi Zheng, and Xiang Chen
- Subjects
Discretization, Basis (linear algebra), Computer science, Geometry, Computer Graphics and Computer-Aided Design, Finite element method, Stress (mechanics), Signal Processing, Computer Vision and Pattern Recognition, Algorithm, Software, Subspace topology - Abstract
Stress analysis is a crucial tool for designing structurally sound shapes. However, the expensive computational cost has hampered its use in interactive shape editing tasks. We augment the existing example-based shape editing tools, and propose a fast subspace stress analysis method to enable stress-aware shape editing. In particular, we construct a reduced stress basis from a small set of shape exemplars and possible external forces. This stress basis is automatically adapted to the current user edited shape on the fly, and thereby offers reliable stress estimation. We then introduce a new finite element discretization scheme to use the reduced basis for fast stress analysis. Our method runs up to two orders of magnitude faster than the full-space finite element analysis, with average $L_2$ estimation errors less than 2 percent and maximum $L_2$ errors less than 6 percent. Furthermore, we build an interactive stress-aware shape editing tool to demonstrate its performance in practice.
- Published
- 2017
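The reduced stress basis idea above can be illustrated with a thin SVD over exemplar stress fields: any stress lying in the exemplars' span is recovered by a cheap low-dimensional projection instead of a full finite element solve. The data here is a random stand-in, not FEM output:

```python
import numpy as np

rng = np.random.default_rng(0)

# exemplar per-element stress fields (one column each), standing in for
# offline FEM solves of the shape exemplars under sample external forces
exemplars = rng.standard_normal((1000, 6))      # 1000 elements, 6 exemplars

# orthonormal reduced stress basis via thin SVD
U, _, _ = np.linalg.svd(exemplars, full_matrices=False)

# a "new" stress field that happens to lie in the exemplars' span
sigma = exemplars @ rng.standard_normal(6)

# subspace estimate: a 6-dimensional projection replaces a full solve
sigma_est = U @ (U.T @ sigma)

rel_err = np.linalg.norm(sigma_est - sigma) / np.linalg.norm(sigma)
# rel_err is ~0 here only because sigma lies exactly in the subspace;
# the paper's few-percent errors come from shapes outside the span
```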
12. Coloring 3D Printed Surfaces by Thermoforming
- Author
-
Yizhong Zhang, Kun Zhou, and Yiying Tong
- Subjects
Pressing, 3D printed, Computer science, Shell (structure), 3D printing, Mechanical engineering, Solid modeling, Computer Graphics and Computer-Aided Design, Signal Processing, Computer Vision and Pattern Recognition, Thermoforming, Texture mapping, Software - Abstract
Decorating the surfaces of 3D printed objects with color textures is still not readily available in most consumer-level or even high-end 3D printers. Existing techniques such as hydrographic color transfer suffer from air pockets in concave regions and discoloration in overly stretched regions. We propose a novel thermoforming-based coloring technique that alleviates these problems and simplifies the overall procedure. Thermoforming is a technique widely used in industry for manufacturing thin-shell plastic products by pressing heated plastic sheets onto molds using atmospheric pressure. Prior to heating, we attach a precomputed color-pattern decal to the transparent plastic sheet, which is then adhered to the 3D printed model serving as the mold. The 3D model is thus decorated with the desired color texture, as well as a thin, polished protective cover. The precomputation involves a physical simulation of the thermoforming process to compute the correct color pattern on the plastic sheet, and the vent hole layout on the 3D model for air pocket elimination. We demonstrate the effectiveness and accuracy of our computational model and our prototype thermoforming surface coloring system through physical experiments.
- Published
- 2017
13. Shape Completion from a Single RGBD Image
- Author
-
Hongzhi Wu, Dongping Li, Kun Zhou, and Tianjia Shao
- Subjects
Computer science, Computer Graphics and Computer-Aided Design, Robustness (computer science), Signal Processing, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Software - Abstract
We present a novel approach for constructing a complete 3D model of an object from a single RGBD image. Given an image of an object segmented from the background, a collection of 3D models of the same category are non-rigidly aligned with the input depth to compute a rough initial result. A volumetric-patch-based optimization algorithm then refines the initial result to generate a 3D model that is not only globally consistent with the overall shape expected from the input image but also possesses geometric details similar to those in the input image. The optimization with a set of high-level constraints, such as visibility, surface confidence, and symmetry, achieves more robust and accurate completion than state-of-the-art techniques. We demonstrate the efficiency and robustness of our approach on multiple categories of objects with various geometries and details, including busts, chairs, bikes, toys, vases, and tables.
- Published
- 2017
14. Adaptive Skinning for Interactive Hair-Solid Simulation
- Author
-
Kun Zhou, Menglei Chai, and Changxi Zheng
- Subjects
Computer science, Movement, Image processing, Computer graphics, Skinning, Computer graphics (images), Computer Graphics, Image Processing, Computer-Assisted, Computer Simulation, Computer vision, Animation, Computer Graphics and Computer-Aided Design, Signal Processing, Computer Vision and Pattern Recognition, Artificial intelligence, Algorithms, Software, Hair - Abstract
Reduced hair models have proven successful for interactively simulating a full head of hair strands, building upon a fundamental assumption that only a small set of guide hairs needs to be simulated explicitly, while the rest of the hair moves coherently and thus can be interpolated from the guide hairs. Unfortunately, hair-solid interaction is a pathological case for traditional reduced hair models, as the motion coherence between hair strands can be arbitrarily broken by interacting with solids. In this paper, we propose an adaptive hair skinning method for interactive hair simulation with hair-solid collisions. We precompute many eligible sets of guide hairs and the corresponding interpolation relationships, represented using a compact strand-based hair skinning model. At runtime, we simulate only guide hairs; for interpolating every other hair, we adaptively choose its guide hairs, taking into account motion coherence and potential hair-solid collisions. Further, we introduce a two-way collision correction algorithm to allow sparsely sampled guide hairs to resolve collisions with solids that can have small geometric features. Our method enables interactive simulation of more than 150K hair strands interacting with complex solid objects, using 400 guide hairs. We demonstrate the efficiency and robustness of the method with various hairstyles and user-controlled arbitrary hair-solid interactions.
- Published
- 2017
15. Simultaneous Localization and Appearance Estimation with a Consumer RGB-D Camera
- Author
-
Zhaotian Wang, Hongzhi Wu, and Kun Zhou
- Subjects
Surface (mathematics), Computer science, Iterative reconstruction, Object (computer science), Computer Graphics and Computer-Aided Design, Wavelet, Camera auto-calibration, Signal Processing, RGB color model, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Software - Abstract
Acquiring general material appearance with hand-held consumer RGB-D cameras is difficult for casual users, due to the inaccuracy in reconstructed camera poses and geometry, as well as the unknown lighting that is coupled with materials in measured color images. To tackle these challenges, we present a novel technique for estimating the spatially varying isotropic surface reflectance, solely from color and depth images captured with an RGB-D camera under unknown environment illumination. The core of our approach is a joint optimization, which alternates among solving for plausible camera poses, materials, the environment lighting and normals. To refine camera poses, we exploit the rich spatial and view-dependent variations of materials, treating the object as a localization-self-calibrating model. To recover the unknown lighting, measured color images along with the current estimate of materials are used in a global optimization, efficiently solved by exploiting the sparsity in the wavelet domain. We demonstrate the substantially improved quality of estimated appearance on a variety of daily objects.
- Published
- 2016
16. NNWarp: Neural Network-Based Nonlinear Deformation
- Author
-
Xiang Chen, Huamin Wang, Weiwei Xu, Kun Zhou, Ran Luo, Yin Yang, and Tianjia Shao
- Subjects
Artificial neural network, Computer science, Feature vector, Linear elasticity, Computer Graphics and Computer-Aided Design, Displacement (vector), Nonlinear system, Matrix (mathematics), Signal Processing, Computer Vision and Pattern Recognition, Image warping, Algorithm, Software - Abstract
NNWarp is a highly reusable and efficient neural network (NN) based nonlinear deformable simulation framework. Unlike other machine learning applications such as image recognition, where different inputs have a uniform and consistent format (e.g., an array of all the pixels in an image), the input for deformable simulation is quite variable, high-dimensional, and parametrization-unfriendly. Consequently, even though neural networks are known for their rich expressivity of nonlinear functions, directly using an NN to reconstruct the force-displacement relation for general deformable simulation is nearly impossible. NNWarp obviates this difficulty by partially restoring the force-displacement relation via warping the nodal displacement simulated using a simplistic constitutive model: linear elasticity. In other words, NNWarp yields an incremental displacement fix per mesh node based on a simplified (therefore incorrect) simulation result, rather than synthesizing the unknown displacement directly. We introduce a compact yet effective feature vector, including geodesic, potential, and digression descriptors, to sort training pairs of per-node linear and nonlinear displacement. NNWarp is robust under different model shapes and tessellations. With the assistance of deformation substructuring, one NN training is able to handle a wide range of 3D models of various geometries. Thanks to the linear elasticity and its constant system matrix, the underlying simulator only needs to perform one pre-factorized matrix solve at each time step, which allows NNWarp to simulate large models in real time.
- Published
- 2018
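The warping step above, a cheap linear-elastic solve plus a learned per-node displacement fix, can be sketched with a toy two-layer MLP. The feature columns and weights are random placeholders, not the paper's trained network or its geodesic/potential/digression descriptors:

```python
import numpy as np

def nnwarp_fix(u_linear, features, W1, b1, W2, b2):
    """Add an MLP-predicted incremental displacement fix to a linear solve.

    u_linear : (N, 3) per-node displacements from a linear elasticity solve
    features : (N, F) per-node descriptors (placeholders here)
    """
    h = np.maximum(features @ W1 + b1, 0.0)   # ReLU hidden layer
    return u_linear + (h @ W2 + b2)           # warped displacement

rng = np.random.default_rng(1)
N, F, H = 5, 3, 8                             # nodes, features, hidden units
u_lin = rng.standard_normal((N, 3))
feats = rng.standard_normal((N, F))
W1, b1 = 0.1 * rng.standard_normal((F, H)), np.zeros(H)
W2, b2 = 0.1 * rng.standard_normal((H, 3)), np.zeros(3)

u_warped = nnwarp_fix(u_lin, feats, W1, b1, W2, b2)
```

The per-node formulation is what makes the network reusable across meshes: the MLP never sees the global system, only local descriptors, while the constant linear system stays pre-factorized.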
17. Parallel Style-Aware Image Cloning for Artworks
- Author
-
Kun Zhou, Xiaogang Jin, Hanli Zhao, Meng Ai, Yandan Zhao, and Ying-Qing Xu
- Subjects
Computer science, Image editing, Object (computer science), Computer Graphics and Computer-Aided Design, Composite image filter, Non-photorealistic rendering, Image (mathematics), Signal Processing, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Software - Abstract
We present style-aware image cloning, a novel image editing approach for artworks, which allows users to seamlessly insert any photorealistic or artificial object into an artwork to create a new image that shares the same artistic style as the original artwork. To this end, a real-time image transfer algorithm is developed to stylize the cloned object according to a distance metric based on the artistic styles and semantic information. Several interactive functions, such as layering, shadowing, semantic labeling, and direction field editing, are provided to enhance the harmonization of the composite image. Extensive experimental results demonstrate the effectiveness of our method.
- Published
- 2015
18. Recovering Functional Mechanical Assemblies from Raw Scans
- Author
-
Niloy J. Mitra, Tianjia Shao, Kun Zhou, Minmin Lin, and Youyi Zheng
- Subjects
Computer science, 3D scanning, Computer Graphics and Computer-Aided Design, Signal Processing, Graph (abstract data type), Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Software - Abstract
This paper presents a method to reconstruct a functional mechanical assembly from raw scans. Given multiple input scans of a mechanical assembly, our method first extracts the functional mechanical parts using a motion-guided, patch-based hierarchical registration and labeling algorithm. The extracted functional parts are then parameterized from the segments, and their internal mechanical relations are encoded by a graph. We use a joint optimization to solve for the best geometry, placement, and orientation of each part, obtaining a final workable mechanical assembly. We demonstrate our algorithm on various types of mechanical assemblies with diverse settings and validate the output through physical fabrication.
- Published
- 2017
19. Cone Tracing for Furry Object Rendering
- Author
-
Hao Qin, Kun Zhou, Qiming Hou, Menglei Chai, and Zhong Ren
- Subjects
Computer science, Bounding volume hierarchy, Rendering (computer graphics), Computer graphics, Computer graphics (images), Computer Graphics, Computer vision, Pixel, Supersampling, Image plane, Computer Graphics and Computer-Aided Design, Signal Processing, Compositing, Ray tracing (graphics), Computer Vision and Pattern Recognition, Shading, Artificial intelligence, Cone tracing, Algorithms, Software, Distributed ray tracing, Hair - Abstract
We present a cone-based ray tracing algorithm for high-quality rendering of furry objects with reflection, refraction and defocus effects. By aggregating many sampling rays in a pixel as a single cone, we significantly reduce the high supersampling rate required by the thin geometry of fur fibers. To reduce the cost of intersecting fur fibers with cones, we construct a bounding volume hierarchy for the fiber geometry to find the fibers potentially intersecting with cones, and use a set of connected ribbons to approximate the projections of these fibers on the image plane. The computational cost of compositing and filtering transparent samples within each cone is effectively reduced by approximating away in-cone variations of shading, opacity and occlusion. The result is a highly efficient ray tracing algorithm for furry objects which is able to render images of quality comparable to those generated by alternative methods, while significantly reducing the rendering time. We demonstrate the rendering quality and performance of our algorithm using several examples and a user study.
- Published
- 2014
20. Boundary-Aware Multidomain Subspace Deformation
- Author
-
Weiwei Xu, Kun Zhou, Baining Guo, Xiaohu Guo, and Yin Yang
- Subjects
Deformation (mechanics), Computer science, Boundary (topology), Domain decomposition methods, Elasticity (physics), Topology, Computer Graphics and Computer-Aided Design, Linear subspaces, Finite element method, Lagrange multiplier, Signal Processing, Tetrahedron, Computer Vision and Pattern Recognition, Software, Subspace topology - Abstract
In this paper, we propose a novel framework for multidomain subspace deformation using node-wise corotational elasticity. With a proper construction of subspaces based on the knowledge of the boundary deformation, we can use the Lagrange multiplier technique to impose coupling constraints at the boundary without overconstraining. In our deformation algorithm, the number of constraint equations coupling two neighboring domains is not related to the number of nodes on the boundary but equals the number of selected boundary deformation modes. The crack artifact is not present in our simulation results, and domain decompositions with loops can be easily handled. Experimental results show that a single-core implementation of our algorithm achieves real-time performance when simulating deformable objects with around a quarter million tetrahedral elements.
- Published
- 2013
21. Analytic Double Product Integrals for All-Frequency Relighting
- Author
-
Rui Wang, Kun Zhou, Wei Hua, Minghao Pan, Hujun Bao, Weifeng Chen, and Zhong Ren
- Subjects
Computer science, Visibility (geometry), Mathematical analysis, Image processing, Computer Graphics and Computer-Aided Design, Integral equation, Piecewise linear function, Computer graphics, Product (mathematics), Signal Processing, Point (geometry), Computer vision, Computer Vision and Pattern Recognition, Bidirectional reflectance distribution function, Shading, Specular reflection, Artificial intelligence, Legendre polynomials, Software
This paper presents a new technique for real-time relighting of static scenes with all-frequency shadows from complex lighting and highly specular reflections from spatially varying BRDFs. The key idea is to depict the boundaries of visible regions using piecewise linear functions, and convert the shading computation into double product integrals—the integral of the product of lighting and BRDF on visible regions. By representing lighting and BRDF with spherical Gaussians and approximating their product using Legendre polynomials locally in visible regions, we show that such double product integrals can be evaluated in an analytic form. Given the precomputed visibility, our technique computes the visibility boundaries on the fly at each shading point, and performs the analytic integral to evaluate the shading color. The result is a real-time all-frequency relighting technique for static scenes with dynamic, spatially varying BRDFs, which can generate more accurate shadows than the state-of-the-art real-time PRT methods.
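In our notation (assumed; the paper's symbols may differ), the quantity evaluated at each shading point $x$ is a double product integral over the visible region $\Omega_V(x)$:

```latex
I(x) \;=\; \int_{\Omega_V(x)} L(\omega)\,\rho(\omega)\,\mathrm{d}\omega,
```

where $L$ is the lighting and $\rho$ the BRDF slice, both represented by spherical Gaussians. Approximating their product with low-order Legendre polynomials inside each visible region, whose boundary is piecewise linear, is what makes the integral evaluable in closed form.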
- Published
- 2013
22. TransCut: Interactive Rendering of Translucent Cutouts
- Author
-
Kun Zhou, Zhong Ren, Baining Guo, Dongping Li, Yiying Tong, Stephen Lin, and Xin Sun
- Subjects
Diffusion equation, Computer science, Subsurface scattering, Solver, Image Enhancement, Computer Graphics and Computer-Aided Design, Finite element method, Pattern Recognition, Automated, Rendering (computer graphics), Computer graphics, Refractometry, User-Computer Interface, Matrix (mathematics), Imaging, Three-Dimensional, Computer graphics (images), Image Interpretation, Computer-Assisted, Signal Processing, Computer Graphics, Computer Vision and Pattern Recognition, Algorithms, Software
We present TransCut, a technique for interactive rendering of translucent objects undergoing fracturing and cutting operations. As the object is fractured or cut open, the user can directly examine and intuitively understand the complex translucent interior, as well as edit material properties through painting on cross sections and recombining the broken pieces, all with immediate and realistic visual feedback. This new mode of interaction with translucent volumes is made possible by two technical contributions. The first is a novel solver for the diffusion equation (DE) over a tetrahedral mesh that produces high-quality results comparable to the state-of-the-art finite element method (FEM) of Arbree et al. [1] but at substantially higher speeds. This accuracy and efficiency are obtained by computing the discrete divergences of the diffusion equation and constructing the DE matrix using analytic formulas derived for linear finite elements. The second contribution is a multiresolution algorithm that significantly accelerates our DE solver while adapting to the frequent changes in topological structure of dynamic objects. The entire multiresolution DE solver is highly parallel and easily implemented on the GPU. We believe TransCut provides a novel visual effect for heterogeneous translucent objects undergoing fracturing and cutting operations.
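The diffusion equation being discretized is, in the standard form used for heterogeneous translucent media (our notation, following the classical diffusion approximation rather than the paper's exact symbols):

```latex
\nabla \cdot \big(\kappa(x)\,\nabla \phi(x)\big) \;-\; \sigma_a(x)\,\phi(x) \;=\; -Q(x),
```

where $\phi$ is the radiant fluence, $\kappa$ the spatially varying diffusion coefficient, $\sigma_a$ the absorption coefficient, and $Q$ the source term; discretizing this with linear finite elements over the tetrahedral mesh yields the sparse DE matrix referred to above.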
- Published
- 2013
23. Interactive Shape Interpolation through Controllable Dynamic Deformation
- Author
-
Jin Huang, Yiying Tong, Hujun Bao, Kun Zhou, and Mathieu Desbrun
- Subjects
Sequence, Series (mathematics), Computer science, Modal analysis, Deformation, Computer Graphics and Computer-Aided Design, Matrix decomposition, Finite strain theory, Signal Processing, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Algorithm, Software, Computer animation, Interpolation
In this paper, we introduce an interactive approach to generating physically based shape interpolation between poses. We extend linear modal analysis into an efficient and robust numerical technique that produces physically plausible dynamics even for very large deformations. Our method also provides a rich set of intuitive editing tools with real-time feedback, including control over the vibration frequencies, amplitudes, and damping of the resulting interpolation sequence. We demonstrate the versatility of our approach through a series of complex dynamic shape interpolations.
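The modal-analysis backbone can be summarized as follows (standard linear modal analysis; the paper extends it to large deformations): with mass, damping, and stiffness matrices $M$, $D$, $K$ and modal basis $\Phi$, substituting $x = \Phi q$ into

```latex
M\ddot{x} + D\dot{x} + Kx = f
```

decouples the dynamics into independent oscillators $\ddot{q}_i + d_i \dot{q}_i + \omega_i^2 q_i = f_i$, whose frequencies $\omega_i$, amplitudes, and damping coefficients $d_i$ are exactly the quantities such editing tools can expose.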
- Published
- 2011
24. Data-Parallel Octrees for Surface Reconstruction
- Author
-
Xin Huang, Baining Guo, Minmin Gong, and Kun Zhou
- Subjects
Sparse voxel octree, Marching cubes, Computer science, Point cloud, Parallel computing, Computer Graphics and Computer-Aided Design, Rendering (computer graphics), Computer graphics, Octree, Signal Processing, Triangle mesh, Isosurface, Computer Vision and Pattern Recognition, Barnes–Hut simulation, Algorithm, Software, Surface reconstruction
We present the first parallel surface reconstruction algorithm that runs entirely on the GPU. Like existing implicit surface reconstruction methods, our algorithm first builds an octree for the given set of oriented points, then computes an implicit function over the space of the octree, and finally extracts an isosurface as a watertight triangle mesh. A key component of our algorithm is a novel technique for octree construction on the GPU. This technique builds octrees in real time and uses level-order traversals to exploit the fine-grained parallelism of the GPU. Moreover, the technique produces octrees that provide fast access to the neighborhood information of each octree node, which is critical for fast GPU surface reconstruction. With an octree so constructed, our GPU algorithm performs Poisson surface reconstruction, which produces high-quality surfaces through a global optimization. Given a set of 500 K points, our algorithm runs at the rate of about five frames per second, which is over two orders of magnitude faster than previous CPU algorithms. To demonstrate the potential of our algorithm, we propose a user-guided surface reconstruction technique which reduces the topological ambiguity and improves reconstruction results for imperfect scan data. We also show how to use our algorithm to perform on-the-fly conversion from dynamic point clouds to surfaces as well as to reconstruct fluid surfaces for real-time fluid simulation.
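A serial sketch of the construction idea: sorting points by Morton (Z-order) code makes every octree node own a contiguous range of the array, which is what enables the level-order, fine-grained parallel construction described above. The helper names are ours and this is a CPU illustration, not the authors' GPU code.

```python
# Illustrative serial sketch of the Morton-code step behind level-order
# octree construction (hypothetical helper names, not the paper's code).

def morton3d(x, y, z, bits=10):
    """Interleave the bits of quantized coordinates into a Morton code.
    Points sorted by this key are grouped by octree node at every level."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i + 2)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i)
    return code

def sort_points_by_octree_cell(points, bits=10):
    """Quantize points in [0,1)^3 and sort them by Morton code, so that
    each octree node at depth d owns a contiguous run of the array."""
    scale = 1 << bits
    keyed = []
    for p in points:
        qx, qy, qz = (min(int(c * scale), scale - 1) for c in p)
        keyed.append((morton3d(qx, qy, qz, bits), p))
    keyed.sort(key=lambda kp: kp[0])
    return keyed
```

On the GPU, the sort is replaced by a parallel radix sort and the per-level node ranges are found with parallel compaction, but the Morton ordering is the same.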
- Published
- 2011
25. Visual Abstraction and Exploration of Multi-class Scatterplots
- Author
-
Wei Chen, Wentao Gu, Weifeng Chen, Honghui Mei, Haidong Chen, Kwan-Liu Ma, Zhiqi Liu, and Kun Zhou
- Subjects
Adult, Male, Computer science, Pattern Recognition, Automated, Computer graphics, Young Adult, Data visualization, Computer Graphics, Humans, Abstraction (linguistics), Class (computer programming), Models, Statistical, Sampling (statistics), Computer Graphics and Computer-Aided Design, Visualization, Visual inspection, Signal Processing, Outlier, Female, Computer Vision and Pattern Recognition, Data mining, Software, Algorithms
Scatterplots are widely used to visualize scattered data and to explore outliers, clusters, local trends, and correlations. Depicting multi-class scattered points within a single scatterplot view, however, may suffer from heavy overdraw, making the view inefficient for data analysis. This paper presents a new visual abstraction scheme that employs a hierarchical multi-class sampling technique to produce a feature-preserving simplification. To enhance the density contrast, the colors of the multiple classes are optimized by taking the multi-class point distributions into account. We design a visual exploration system that supports visual inspection and quantitative analysis from different perspectives. We have applied our system to several challenging datasets, and the results demonstrate the efficiency of our approach.
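A minimal sketch of the sampling idea (our simplification; the paper's scheme is hierarchical and feature-preserving): subsample each class at the same rate, so relative class densities survive while total overdraw drops.

```python
# Density-preserving multi-class subsampling, stripped to its core
# (illustrative only; function and parameter names are ours).
import random

def multiclass_sample(points_by_class, rate, seed=0):
    """points_by_class: dict class -> list of points. Returns a subsampled
    dict keeping roughly `rate` of each class, so class proportions are
    preserved while the total number of drawn points shrinks."""
    rng = random.Random(seed)
    sampled = {}
    for cls, pts in points_by_class.items():
        k = max(1, round(len(pts) * rate))
        sampled[cls] = rng.sample(pts, k)
    return sampled
```

Uniform per-class sampling like this keeps density ratios but can lose small clusters; that is the gap the hierarchical, feature-preserving scheme in the paper is designed to close.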
- Published
- 2015
26. FaceWarehouse: a 3D facial expression database for visual computing
- Author
-
Yanlin Weng, Shun Zhou, Kun Zhou, Yiying Tong, and Chen Cao
- Subjects
Male, Databases, Factual, Facial motion capture, Computer science, Video Recording, Facial recognition system, Imaging, Three-Dimensional, Humans, Computer vision, Computer facial animation, Computer animation, Facial expression, Database, Color image, Computer Graphics and Computer-Aided Design, Visual computing, Facial Expression, Feature (computer vision), Face, Signal Processing, Retargeting, Female, Computer Vision and Pattern Recognition, Artificial intelligence, Software
We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7 to 80 from various ethnic backgrounds. For each person, we captured the RGBD data of their different expressions, including the neutral expression and 19 others such as mouth-opening, smile, and kiss. For every raw RGBD record, a set of facial feature points on the color image, such as the eye corners, mouth contour, and nose tip, is automatically localized and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, every person in our database has a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
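The bilinear model can be sketched in standard tensor notation (symbols are ours): with the rank-3 data tensor reduced to a core $C_r$, a face mesh is generated from an identity weight vector $w_{\mathrm{id}}$ and an expression weight vector $w_{\mathrm{exp}}$ by the mode products

```latex
V \;=\; C_r \times_2 w_{\mathrm{id}}^{\top} \times_3 w_{\mathrm{exp}}^{\top},
```

so the two attributes, identity and expression, can be varied independently.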
- Published
- 2014
27. Motion imitation with a handheld camera
- Author
-
Guofeng Zhang, Jin Huang, Kun Zhou, Tien-Tsin Wong, Hujun Bao, Hanqing Jiang, and Jiaya Jia
- Subjects
Motion compensation, Computer science, Computer Graphics and Computer-Aided Design, Motion capture, Motion (physics), Motion field, Match moving, Motion estimation, Computer graphics (images), Signal Processing, Structure from motion, Computer vision, Computer Vision and Pattern Recognition, Artificial intelligence, Motion interpolation, Software
In this paper, we present a novel method to extract the motion of a dynamic object from a video captured by a handheld camera, and apply it to a 3D character. Unlike motion capture techniques, neither special sensors/trackers nor a controllable environment is required. Our system significantly automates motion imitation, which is traditionally conducted by professional animators via manual keyframing. Given the input video sequence, we track the dynamic reference object to obtain trajectories of both 2D and 3D tracking points. With them as constraints, we then transfer the motion to the target 3D character by solving an optimization problem to maintain the motion gradients. We also provide a user-friendly editing environment for users to fine-tune the motion details. As casual videos can be used, our system greatly increases the supply of motion data. Examples of imitating various types of animal motion are shown.
- Published
- 2010
28. Memory-Scalable GPU Spatial Hierarchy Construction
- Author
-
Qiming Hou, Christian Lauterbach, Kun Zhou, Dinesh Manocha, and Xin Sun
- Subjects
Flat memory model, Coprocessor, Computer science, Graphics processing unit, Uniform memory access, Parallel computing, Computer Graphics and Computer-Aided Design, CUDA, Memory management, Signal Processing, Algorithm design, Out-of-core algorithm, Computer Vision and Pattern Recognition, Central processing unit, Massively parallel, Software
Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm, for kd-trees, automatically balances the level of parallelism against intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm performs out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
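The PBFS idea can be illustrated with a small scheduling sketch (ours, not the paper's implementation): expand breadth-first while the frontier fits within a node budget, and once it would overflow, process budget-sized slices of the frontier to completion one at a time, keeping the live frontier, i.e. the "GPU-resident" working set, bounded.

```python
# Partial breadth-first traversal with a bounded frontier (illustrative).

def pbfs_order(root, children_of, budget):
    """Emit nodes in partial-BFS order. Expand level by level while the
    frontier fits in `budget`; on overflow, recurse into budget-sized
    slices sequentially, mimicking the dump of finished nodes to CPU
    memory between iterations. Returns (visit order, peak frontier)."""
    order = []
    peak = 0

    def run(frontier):
        nonlocal peak
        while frontier:
            peak = max(peak, len(frontier))
            if len(frontier) > budget:
                # Overflow: process slices to completion, one at a time.
                for i in range(0, len(frontier), budget):
                    run(frontier[i:i + budget])
                return
            nxt = []
            for node in frontier:
                order.append(node)
                nxt.extend(children_of(node))
            frontier = nxt

    run([root])
    return order, peak
```

On a complete binary tree, a full BFS frontier peaks at the leaf level, whereas this schedule caps the transient frontier at roughly the branching factor times the budget, which is the memory-for-parallelism trade the abstract describes.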
- Published
- 2010
29. Radiance transfer biclustering for real-time all-frequency biscale rendering
- Author
-
Zhong Ren, Qiming Hou, Baining Guo, Kun Zhou, and Xin Sun
- Subjects
Computer science, Graphics hardware, Information Storage and Retrieval, Rendering (computer graphics), Biclustering, Imaging, Three-Dimensional, Image texture, Image Interpretation, Computer-Assisted, Computer Graphics, Computer vision, Lighting, Pixel, Image Enhancement, Computer Graphics and Computer-Aided Design, Transfer matrix, Signal Processing, Radiance, Computer-Aided Design, Ray tracing (graphics), Computer Vision and Pattern Recognition, Artificial intelligence, Shading, Bidirectional texture function, Software, Algorithms
We present a real-time algorithm to render all-frequency radiance transfer at both the macroscale and the mesoscale. At the mesoscale, shading is computed on a per-pixel basis by integrating the product of the local incident radiance and a bidirectional texture function (BTF). At the macroscale, the precomputed transfer matrix, which transfers the global incident radiance to the local incident radiance at each vertex, is losslessly compressed by a novel biclustering technique. The biclustering is applied directly to the radiance transfer represented in a pixel basis, on which the BTF is naturally defined. It exploits the coherence in the transfer matrix and a property of the matrix element values to reduce both storage and runtime computation cost. Our new algorithm renders realistic materials and shadows under all-frequency direct environment lighting at real-time frame rates. Comparisons show that our algorithm generates images that compare favorably with reference ray tracing results, and has clear advantages over alternative methods in storage and preprocessing time.
- Published
- 2010
30. Decorating surfaces with bidirectional texture functions
- Author
-
Peng Du, Jiaoying Shi, Kun Zhou, Heung-Yeung Shum, Yasuyuki Matsushita, Lifeng Wang, and Baining Guo
- Subjects
Surface (mathematics), Computer science, Surface Properties, Information Storage and Retrieval, Texture (geology), Computer graphics, Imaging, Three-Dimensional, Image texture, Computer graphics (images), Image Interpretation, Computer-Assisted, Computer Graphics, Computer vision, Painting, Computational geometry, Image Enhancement, Computer Graphics and Computer-Aided Design, Feature (computer vision), Signal Processing, Paintings, Computer Vision and Pattern Recognition, Artificial intelligence, Bidirectional texture function, Software, Algorithms, Texture synthesis
We present a system for decorating arbitrary surfaces with bidirectional texture functions (BTFs). Our system generates BTFs in two steps. First, we automatically synthesize a BTF over the target surface from a given BTF sample. Then, we let the user interactively paint BTF patches onto the surface such that the painted patches seamlessly integrate with the background patterns. Our system is based on a patch-based texture synthesis approach known as quilting. We present a graphcut algorithm for BTF synthesis on surfaces; it works well for a wide variety of BTF samples, including those that present problems for existing algorithms. We also describe a graphcut texture painting algorithm for creating new surface imperfections (e.g., dirt, cracks, scratches) from existing imperfections found in input BTF samples. Using these algorithms, we can decorate surfaces with real-world textures that have spatially variant reflectance, fine-scale geometry details, and surface imperfections. A particularly attractive feature of BTF painting is that it allows us to capture imperfections of real materials and paint them onto geometry models. We demonstrate the effectiveness of our system with examples.
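The quilting machinery this system builds on can be illustrated with the classic minimum-error boundary cut: given per-pixel overlap errors between two patches, dynamic programming finds the cheapest seam along which to stitch them. The paper generalizes this with graphcuts on surfaces; the 2D sketch below (our code, not the paper's) shows the simplest case.

```python
# Minimum-error boundary cut for patch quilting (2D, top-to-bottom seam).

def min_error_seam(err):
    """err: 2D list (rows x cols) of overlap errors between two patches.
    Returns the column index chosen in each row of the minimum-cost seam,
    where the seam may move at most one column per row."""
    rows, cols = len(err), len(err[0])
    cost = [err[0][:]]  # cumulative cost table, row by row
    for r in range(1, rows):
        row = []
        for c in range(cols):
            lo = max(0, c - 1)
            hi = min(cols - 1, c + 1)
            row.append(err[r][c] + min(cost[r - 1][lo:hi + 1]))
        cost.append(row)
    # Backtrack from the cheapest bottom cell.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo = max(0, c - 1)
        hi = min(cols - 1, c + 1)
        seam.append(min(range(lo, hi + 1), key=lambda cc: cost[r][cc]))
    seam.reverse()
    return seam
```

Pixels on either side of the seam are taken from different patches, hiding the transition along the path of least visible error; the graphcut formulation extends this to arbitrary (non-monotone) cuts on surface meshes.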
- Published
- 2005
31. Parallel Style-Aware Image Cloning for Artworks.
- Author
-
Yandan Zhao, Xiaogang Jin, Yingqing Xu, Hanli Zhao, Meng Ai, and Kun Zhou
- Abstract
We present style-aware image cloning, a novel image editing approach for artworks, which allows users to seamlessly insert any photorealistic or artificial objects into an artwork to create a new image that shares the same artistic style with the original artwork. To this end, a real-time image transfer algorithm is developed to stylize the cloned object according to a distance metric based on the artistic styles and semantic information. Several interactive functions, such as layering, shadowing, semantic labeling, and direction field editing, are provided to enhance the harmonization of the composite image. Extensive experimental results demonstrate the effectiveness of our method.
- Published
- 2015
- Full Text
- View/download PDF