11 results for "Komura T."
Search Results
2. Relationship-Based Point Cloud Completion.
- Author
- Zhao X, Zhang B, Wu J, Hu R, and Komura T
- Abstract
- We propose a partial point cloud completion approach for scenes that are composed of multiple objects. We focus on pairwise scenes where two objects are in close proximity and are contextually related to each other, such as a chair tucked under a desk, a fruit in a basket, a hat on a hook, and a flower in a vase. Different from existing point cloud completion methods, which mainly focus on single objects, we design a network that encodes not only the geometry of the individual shapes, but also the spatial relations between different objects. More specifically, we complete missing parts of the objects in a conditional manner, where the partial or completed point cloud of the other object is used as an additional input to help predict missing parts. Based on the idea of conditional completion, we further propose a two-path network, which is guided by a consistency loss between different sequences of completion. Our method can handle difficult cases where the objects heavily occlude each other. Also, it requires only a small set of training data to reconstruct the interaction area compared to existing completion approaches. We evaluate our method qualitatively and quantitatively via ablation studies and in comparison to state-of-the-art point cloud completion methods.
- Published
- 2022
- Full Text
- View/download PDF
3. Reconstruction of Dexterous 3D Motion Data From a Flexible Magnetic Sensor With Deep Learning and Structure-Aware Filtering.
- Author
- Huang J, Sugawara R, Chu K, Komura T, and Kitamura Y
- Subjects
- Computer Graphics, Magnetic Phenomena, Motion, Neural Networks, Computer, Deep Learning
- Abstract
- We propose IM3D+, a novel approach to reconstructing 3D motion data from a flexible magnetic flux sensor array using deep learning and a structure-aware temporal bilateral filter. Computing the 3D configuration of markers (inductor-capacitor (LC) coils) from flux sensor data is difficult because the existing numerical approaches suffer from system noise, dead angles, the need for initialization, and limitations in the sensor array's layout. We solve these issues with deep neural networks that learn the regression from simulated flux values to the LC coils' 3D configuration, which can be applied to the actual LC coils at any location and orientation within the capture volume. To cope with the influence of system noise and the dead-angle limitation caused by the characteristics of the hardware and sensing principle, we propose a structure-aware temporal bilateral filter for reconstructing motion sequences. Our method can track various movements, including fingers that manipulate objects, beetles that move inside a vivarium with leaves and soil, and the flow of opaque fluid. Since no power supply is needed for the lightweight wireless markers, our method can robustly track movements for a very long time, making it suitable for various types of observations that are difficult to track with existing motion-tracking systems. Furthermore, the flexibility of the flux sensor layout allows users to reconfigure it based on their own applications, thus making our approach suitable for a variety of virtual reality applications.
- Published
- 2022
- Full Text
- View/download PDF
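The temporal filtering idea in the abstract above can be sketched with a plain temporal bilateral filter over one motion channel; the paper's structure-aware variant additionally exploits skeletal structure, which is not reproduced here, and the parameter values below are illustrative assumptions.

```python
import math

def temporal_bilateral_filter(seq, sigma_t=2.0, sigma_v=0.5, radius=4):
    """Smooth a 1D motion channel while preserving sharp transitions.

    Each weight combines temporal closeness (sigma_t) with value
    similarity (sigma_v), so sensor jitter is averaged away while
    genuine fast motions are not smeared across the edge.
    """
    out = []
    for i, v in enumerate(seq):
        num, den = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(seq), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_t ** 2))
                 * math.exp(-((v - seq[j]) ** 2) / (2 * sigma_v ** 2)))
            num += w * seq[j]
            den += w
        out.append(num / den)
    return out

# Noisy step signal: the noise is smoothed, the step edge is preserved.
noisy = [0.0, 0.05, -0.04, 0.02, 1.0, 0.98, 1.03, 1.01]
smoothed = temporal_bilateral_filter(noisy)
```

A plain Gaussian temporal filter would blur the jump between the fourth and fifth samples; the value-similarity term keeps the two plateaus separate.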
4. Localization and Completion for 3D Object Interactions.
- Author
- Zhao X, Hu R, Liu H, Komura T, and Yang X
- Abstract
- Finding where and what objects to put into an existing scene is a common task for scene synthesis and robot/character motion planning. Existing frameworks require the development of hand-crafted features suitable for the task, or a full volumetric analysis that can be memory intensive and imprecise. In this paper, we propose a data-driven framework to discover a suitable location and then place the appropriate objects in a scene. Our approach is inspired by computer vision techniques for localizing objects in images: using an all-directional depth image (ADD-image) that encodes the 360-degree field of view from samples in the scene, our system regresses the images to the positions where the new object can be located. Given several candidate areas around the host object in the scene, our system predicts the partner object whose geometry fits well with the host object. Our approach is highly parallel and memory efficient, and is especially suitable for handling interactions between large and small objects. We show examples where the system can hang bags on hooks, fit chairs in front of desks, put objects onto shelves, insert flowers into vases, and put hangers onto a laundry rack.
- Published
- 2020
- Full Text
- View/download PDF
5. A Sampling Approach to Generating Closely Interacting 3D Pose-Pairs from 2D Annotations.
- Author
- Yin K, Huang H, Ho ESL, Wang H, Komura T, Cohen-Or D, and Zhang H
- Abstract
- We introduce a data-driven method to generate a large number of plausible, closely interacting 3D human pose-pairs for a given motion category, e.g., wrestling or salsa dance. Since close interactions are difficult to acquire with 3D sensors, our approach utilizes abundant existing video data covering many human activities. Instead of treating the data generation problem as one of reconstruction, either through 3D acquisition or direct 2D-to-3D lifting of video annotations, we present a solution based on Markov Chain Monte Carlo (MCMC) sampling. Given a motion category and a set of video frames depicting the motion, with the 2D pose-pair in each frame annotated, we start the sampling with one or a few seed 3D pose-pairs that are manually created based on the target motion category. The initial set is then augmented by MCMC sampling around the seeds via the Metropolis-Hastings algorithm, guided by a probability density function (PDF) defined by two terms that bias the sampling towards 3D pose-pairs that are physically valid and plausible for the motion category. With a focus on efficient sampling over the space of close interactions, rather than pose spaces, we develop a novel representation called interaction coordinates (IC) to encode both poses and their interactions in an integrated manner. Plausibility of a 3D pose-pair is then defined based on the IC and with respect to the annotated 2D pose-pairs from video. We show that our sampling-based approach efficiently synthesizes a large volume of plausible, closely interacting 3D pose-pairs that provide good coverage of the input 2D pose-pairs.
- Published
- 2019
- Full Text
- View/download PDF
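The Metropolis-Hastings loop described in the abstract above can be sketched generically; here the state is a single float and `log_pdf` is a stand-in for the paper's two-term plausibility score over interaction coordinates, which is not reproduced.

```python
import math
import random

def metropolis_hastings(log_pdf, seed, n_samples, step=0.5, rng=None):
    """Grow a sample set around a seed by Metropolis-Hastings:
    propose a perturbation, accept with probability min(1, p'/p)."""
    rng = rng or random.Random(0)
    x = seed
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)          # symmetric Gaussian proposal
        log_alpha = log_pdf(proposal) - log_pdf(x)   # log acceptance ratio
        if math.log(rng.random() + 1e-12) < log_alpha:
            x = proposal                             # accept; else keep x
        samples.append(x)
    return samples

# Toy target: a standard normal density. Starting from a seed at 3.0,
# the chain drifts toward and then concentrates around the mode at 0.
log_normal = lambda x: -0.5 * x * x
samples = metropolis_hastings(log_normal, seed=3.0, n_samples=5000)
mean = sum(samples) / len(samples)
```

In the paper's setting the state would be a pose-pair in interaction coordinates rather than a scalar, but the accept/reject mechanics are the same.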
6. Widening Viewing Angles of Automultiscopic Displays Using Refractive Inserts.
- Author
- Lyu G, Shen X, Komura T, Subr K, and Teng L
- Abstract
- Displays that can portray environments perceivable from multiple views are known as multiscopic displays. Some multiscopic displays enable realistic perception of 3D environments without the need for cumbersome mounts or fragile head-tracking algorithms. These automultiscopic displays carefully control the distribution of emitted light over space, direction (angle) and time, so that even a static displayed image can encode parallax across viewing directions (a lightfield). This allows simultaneous observation by multiple viewers, each perceiving 3D from their own (correct) perspective. Currently, the illusion can only be maintained effectively over a narrow range of viewing angles. In this paper, we propose and analyze a simple solution to widen the range of viewing angles for automultiscopic displays that use parallax barriers. We propose inserting a refractive medium, with a high refractive index, between the display and the parallax barriers. The inserted medium warps the exitant lightfield in a way that increases the potential viewing angle. We analyze the consequences of this warp and build a prototype with a 93% increase in the effective viewing angle.
- Published
- 2018
- Full Text
- View/download PDF
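The widening effect of the refractive insert described above follows from Snell's law at the medium-air interface: a ray travelling at a given angle inside a high-index medium exits into air at a larger angle. This sketch only illustrates that principle with assumed example angles and indices; the paper's full analysis of the warp (and its 93% prototype figure) is not reproduced here.

```python
import math

def widened_half_angle(theta_medium_deg, n):
    """Snell's law at the exit interface: sin(theta_air) = n * sin(theta_medium).
    The barrier geometry fixes theta_medium, so a higher-index insert
    maps the same in-medium angle to a wider angle in air."""
    s = n * math.sin(math.radians(theta_medium_deg))
    if s >= 1.0:
        return 90.0  # beyond the total-internal-reflection limit
    return math.degrees(math.asin(s))

# Example: a 10-degree in-medium half-angle, with and without an n = 1.7 insert.
base = widened_half_angle(10.0, 1.0)   # no insert: the angle is unchanged
wide = widened_half_angle(10.0, 1.7)   # insert bends rays away from the normal
gain = (wide - base) / base            # fractional increase in half-angle
```

With these illustrative numbers the half-angle grows from 10 to roughly 17 degrees; the exact gain depends on the barrier geometry and the chosen index.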
7. Learning Inverse Rig Mappings by Nonlinear Regression.
- Author
- Holden D, Saito J, and Komura T
- Abstract
- We present a framework to design inverse rig functions: functions that map low-level representations of a character's pose, such as joint positions or surface geometry, to the representation used by animators, called the animation rig. Animators design scenes using an animation rig, a framework widely adopted in animation production that allows animators to design character poses and geometry via intuitive parameters and interfaces. Yet most state-of-the-art computer animation techniques control characters through raw, low-level representations such as joint angles, joint positions, or vertex coordinates. This difference often prevents the adoption of state-of-the-art techniques in animation production. Our framework solves this issue by learning a mapping between the low-level representations of the pose and the animation rig. We use nonlinear regression techniques, learning from example animation sequences designed by the animators. When new motions are provided in the skeleton space, the learned mapping is used to estimate the rig controls that reproduce such a motion. We introduce two nonlinear functions for producing such a mapping: Gaussian process regression and feedforward neural networks. The appropriate solution depends on the nature of the rig and the amount of data available for training. We show our framework applied to various examples including articulated biped characters, quadruped characters, facial animation rigs, and deformable characters. With our system, animators have the freedom to apply any motion synthesis algorithm to arbitrary rigging and animation pipelines for immediate editing. This greatly improves the productivity of 3D animation, while retaining the flexibility and creativity of artistic input.
- Published
- 2017
- Full Text
- View/download PDF
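The core idea above is learning a pose-to-rig mapping from animator-made examples. The paper uses Gaussian process regression or a feedforward network for this; the sketch below substitutes the smallest possible stand-in, a 1-D linear least-squares fit, and the joint-angle/slider example pairs are invented for illustration.

```python
def fit_linear_map(poses, rigs):
    """Fit rig = a * pose + b to example (pose, rig) pairs by ordinary
    least squares; a toy stand-in for the nonlinear regressors used in
    the paper, showing the same learn-from-examples workflow."""
    n = len(poses)
    mx = sum(poses) / n
    my = sum(rigs) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(poses, rigs))
    var = sum((x - mx) ** 2 for x in poses)
    a = cov / var
    b = my - a * mx
    return lambda x: a * x + b

# Hypothetical training pairs: joint angle (degrees) -> rig slider value.
examples_pose = [0.0, 30.0, 60.0, 90.0]
examples_rig = [0.0, 0.25, 0.5, 0.75]
inverse_rig = fit_linear_map(examples_pose, examples_rig)

# A new motion arrives in skeleton space; estimate the rig control.
pred = inverse_rig(45.0)
```

A real rig is high-dimensional and nonlinear, which is exactly why the paper reaches for Gaussian processes and neural networks instead of a linear fit.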
8. An Energy-Driven Motion Planning Method for Two Distant Postures.
- Author
- Wang H, Ho ES, and Komura T
- Abstract
- In this paper, we present a local motion planning algorithm for character animation. We focus on motion planning between two distant postures where linear interpolation leads to penetrations. Our framework has two stages. The motion planning problem is first solved as a Boundary Value Problem (BVP) on an energy graph that encodes penetrations, motion smoothness and user control. Having established a mapping from the configuration space to the energy graph, a fast and robust local motion planning algorithm is introduced to solve the BVP, generating motions that previously could only be computed by global planning methods. In the second stage, a projection of the solution motion onto a constraint manifold is proposed to give the user more control. Our method can be integrated into current keyframing techniques. It also has potential applications in motion planning problems in robotics.
- Published
- 2015
- Full Text
- View/download PDF
9. Interactive formation control in complex environments.
- Author
- Henry J, Shum HP, and Komura T
- Abstract
- The degrees of freedom of a crowd are much higher than those provided by a standard user input device. Typically, crowd-control systems require multiple passes to design crowd movements: specifying waypoints, then defining character trajectories and crowd formation. Such multi-pass control would spoil the responsiveness and excitement of real-time control systems. In this paper, we propose a single-pass algorithm to control a crowd in complex environments. We observe that low-level details in crowd movement are related to interactions between characters and the environment, such as diverging/merging at cross points, or climbing over obstacles. Therefore, we simplify the problem by representing the crowd with a deformable mesh, and allow the user, via multitouch input, to specify high-level movements and formations that are important for context delivery. To help prevent congestion, our system dynamically reassigns characters in the formation by employing a mass transport solver to minimize their overall movement. The solver uses a cost function to evaluate the impact from the environment, including obstacles and areas affecting movement speed. Experimental results show realistic crowd movement created with minimal high-level user input. Our algorithm is particularly useful for real-time applications, including strategy games and interactive animation creation.
- Published
- 2014
- Full Text
- View/download PDF
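The character reassignment step above is an assignment problem: match characters to formation slots so that total movement cost is minimized. This sketch brute-forces the optimum over permutations with a plain squared-distance cost; the paper's mass transport solver scales to real crowds and folds obstacles and speed-affecting areas into the cost, neither of which is modeled here.

```python
from itertools import permutations

def reassign(characters, slots, cost):
    """Find the character-to-slot assignment minimizing total cost.
    Brute force over all permutations: fine for a toy crowd, not for
    the crowd sizes a mass transport solver is built for."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(slots))):
        c = sum(cost(characters[i], slots[p]) for i, p in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    return best, best_cost

# Cost: squared distance a character must travel to reach a slot.
sq_dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

chars = [(0, 0), (5, 0), (0, 5)]   # current character positions
slots = [(0, 6), (1, 0), (6, 0)]   # target formation slots
assignment, total = reassign(chars, slots, sq_dist)
```

Each character is sent to its nearest compatible slot, so no character crosses the formation, which is the congestion-avoidance behavior the reassignment is there to produce.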
10. Simulating multiple character interactions with collaborative and adversarial goals.
- Author
- Shum HP, Komura T, and Yamazaki S
- Abstract
- This paper proposes a new methodology for synthesizing animations of multiple characters, allowing them to intelligently compete with one another in dense environments while still satisfying requirements set by an animator. To achieve these two conflicting objectives simultaneously, our method separately evaluates the competition and collaboration of the interactions, integrating the scores to select an action that maximizes both criteria. We extend the idea of min-max search, normally used for strategic games such as chess. Using our method, animators can efficiently produce scenes of dense character interactions such as those in collective sports or martial arts. The method is especially effective for producing animations along storylines, where the characters must follow multiple objectives while still accommodating geometric and kinematic constraints from the environment.
- Published
- 2012
- Full Text
- View/download PDF
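The min-max search the abstract above builds on can be sketched as a plain recursive minimax over character actions; the paper's extension additionally integrates a collaboration score so the interaction serves the animator's goals, which this competitive-only sketch omits. The duel payoffs below are hypothetical, not from the paper.

```python
def minimax(state, depth, maximizing, actions, apply_action, score):
    """Plain min-max search: one character picks the action maximizing
    the score, assuming the opponent then picks the minimizing reply."""
    if depth == 0:
        return score(state), None
    best_val, best_act = None, None
    for a in actions(state, maximizing):
        val, _ = minimax(apply_action(state, a, maximizing), depth - 1,
                         not maximizing, actions, apply_action, score)
        if best_val is None or (val > best_val if maximizing else val < best_val):
            best_val, best_act = val, a
    return best_val, best_act

# Toy duel: the state is a single advantage number; each action adds to
# it for the maximizer and subtracts from it for the minimizer.
payoffs = {"strike": 3, "feint": 1, "guard": -1}
list_actions = lambda s, _: list(payoffs)
step = lambda s, a, maxing: s + (payoffs[a] if maxing else -payoffs[a])

value, action = minimax(0, 2, True, list_actions, step, lambda s: s)
```

With symmetric payoffs the best reply cancels the best opening, so a two-ply search returns the strongest action with a neutral expected advantage.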
11. Guest editors' introduction: Special section on ACM VRST 2010.
- Author
- Komura T, Peng Q, Baciu G, and Lau RW
- Subjects
- Humans, Software, User-Computer Interface
- Published
- 2012
- Full Text
- View/download PDF