9 results for "Yuting Ye"
Search Results
2. Virtual hands in VR
- Author
Michael Neff, Franziska Mueller, Victor Zordan, Yuting Ye, and Sophie Jörg
- Subjects
Computer science, Grasp, Virtual reality, Motion capture, Mixed reality, Human–computer interaction, Perception, Character animation, Augmented reality
- Abstract
We use our hands every day: to grasp a cup of coffee, write text on a keyboard, or signal that we are about to say something important. We use our hands to interact with our environment and to help us communicate with each other without thinking about it. Wouldn't it be great to be able to do the same in virtual reality? However, accurate hand motions are not trivial to capture. In this course, we present the current state of the art in virtual hands. Starting with current examples of controlling and depicting hands in virtual reality (VR), we dive into the latest methods and technologies for capturing hand motions. Because hands cannot currently be captured in every situation, and because the physical constraints that stop us from intersecting with objects are typically absent in VR, we present research on how to synthesize hand motions and simulate grasping. Finally, we provide an overview of our knowledge of how virtual hands are perceived, resulting in practical tips on how to represent and handle them. Our goals are (a) to present a broad state of the art of the current usage of hands in VR, (b) to provide more in-depth knowledge about how current hand motion tracking and hand motion synthesis methods work, (c) to give insights into our perception of hand motions in VR and how to use those insights when developing new applications, and finally (d) to identify gaps in knowledge that might be investigated next. While the focus of this course is on VR, many parts also apply to augmented reality, mixed reality, and character animation in general, and some content originates from these areas.
- Published
- 2020
- Full Text
- View/download PDF
3. Virtual Grasping Feedback and Virtual Hand Ownership
- Author
Yuting Ye, Yu Sun, Sophie Jörg, Aline Normoyle, Massimiliano Di Luca, and Ryan Canales
- Subjects
High fidelity, Computer science, Human–computer interaction, Grasp, Virtual reality, Haptic technology, Visualization
- Abstract
In this work, we investigate the influence of different visualizations on a manipulation task in virtual reality (VR). Without the haptic feedback of the real world, grasping in VR can result in intersections with virtual objects. As people are highly sensitive to perceived collisions, it might look more appealing to avoid intersections and visualize non-colliding hand motions. However, correcting the position of the hand or fingers introduces a visual-proprioceptive discrepancy and must be used with caution. Furthermore, the lack of haptic feedback in the virtual world might result in slower actions, as a user might not know exactly when a grasp has occurred. This reduced performance could be remedied with adequate visual feedback. In this study, we analyze the performance, level of ownership, and user preference of eight different visual feedback techniques for virtual grasping. Three techniques show the tracked hand (with or without grasping feedback), even if it intersects with the grasped object. Another three techniques display a hand without intersections with the object, called the outer hand, simulating the look of a real-world interaction. One visualization is a compromise between the two groups, showing both a primary outer hand and a secondary tracked hand. Finally, in the last visualization the hand disappears during the grasping activity. In an experiment, users perform a pick-and-place task for each feedback technique. We use high-fidelity marker-based hand tracking to control the virtual hands in real time. We found that the tracked hand visualizations result in better performance; however, the outer hand visualizations were preferred. We also found indications that ownership is higher with the outer hand visualizations. [A minimal sketch of one way to compute such a non-intersecting outer hand follows this entry.]
- Published
- 2019
- Full Text
- View/download PDF
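The paper does not spell out how the non-intersecting outer hand is computed, so the following is only a minimal sketch under strong assumptions: the grasped object is approximated by a sphere, and penetrating hand joints are pushed radially back to its surface. The function name and 21-joint layout are illustrative, not from the paper; a production system would instead correct joint angles via inverse kinematics.

```python
import numpy as np

def outer_hand_pose(joint_positions, sphere_center, sphere_radius):
    """Project hand joints that penetrate a spherical proxy object back onto
    its surface, yielding an intersection-free "outer hand".
    Hypothetical sketch; the paper's actual correction is not described here."""
    corrected = joint_positions.copy()
    for k, p in enumerate(joint_positions):
        offset = p - sphere_center
        dist = np.linalg.norm(offset)
        if dist < sphere_radius:  # joint is inside the object
            # push the joint out along the radial direction
            corrected[k] = sphere_center + offset / max(dist, 1e-8) * sphere_radius
    return corrected

# Usage: 21 tracked hand joints near an object approximated by a 4 cm sphere.
joints = np.random.randn(21, 3) * 0.05
print(outer_hand_pose(joints, sphere_center=np.zeros(3), sphere_radius=0.04))
```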
4. Hierarchical planning and control for complex motor tasks
- Author
Stelian Coros, Yuting Ye, Daniel Zimmermann, Robert W. Sumner, and Markus Gross
- Subjects
Computer science, Human–computer interaction, Motion planning, Physics engine, Motion capture, Simulation, Motor skill, Crawling, Handrail
- Abstract
We present a planning and control framework that enables physically simulated characters to perform various types of motor tasks. To create physically valid motion plans, our method uses a hierarchical set of simplified models. Computational resources are therefore focused where they matter most: motion plans for the immediate future are generated using higher-fidelity models, while coarser models are used to create motion plans with longer time horizons. Our framework can be used for different types of motor skills, including ones where the actions of the arms and legs must be precisely coordinated. We demonstrate controllers for tasks such as getting up from a chair, crawling onto a raised platform, or using a handrail while climbing stairs. All of the motions are simulated using a black-box physics engine from high-level user commands, without requiring any motion capture data. [A toy sketch of the coarse-to-fine planning loop follows this entry.]
- Published
- 2015
- Full Text
- View/download PDF
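To make the coarse-to-fine idea concrete, here is a toy sketch with invented stand-in models: a cheap planner covers the whole task horizon, while an expensive detailed step refines only the immediate future. Class and function names are hypothetical; the paper's actual models are physics-based simplifications of the character, not the toy arithmetic below.

```python
import numpy as np

class CoarseModel:
    """Cheap long-horizon planner, e.g. a point-mass abstraction."""
    def plan(self, state, goal, horizon):
        # straight-line waypoints toward the goal
        return [state + (goal - state) * (t + 1) / horizon for t in range(horizon)]

class DetailedModel:
    """Expensive short-horizon refinement, e.g. full-body optimization."""
    def refine(self, state, waypoint):
        # move partway toward the first coarse waypoint
        return state + 0.5 * (waypoint - state)

def hierarchical_step(state, goal, coarse, detailed, long_horizon=20):
    waypoints = coarse.plan(state, goal, long_horizon)  # coarse: whole task
    return detailed.refine(state, waypoints[0])         # detailed: next step only

state, goal = np.zeros(3), np.array([1.0, 0.0, 0.5])
for _ in range(5):
    state = hierarchical_step(state, goal, CoarseModel(), DetailedModel())
print(state)  # creeps toward the goal; real models converge far faster
```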
5. Feature-based texture stretch compensation for 3D meshes
- Author
Kevin Sprout, Yuting Ye, and Stephane Grabli
- Subjects
Computer science, Computer graphics, Computer vision, Polygon mesh
- Abstract
One subtle artifact that still often reveals the synthetic nature of a digital creature on screen is the stretching of a supposedly rigid feature, such as a dinosaur scale or a callus, under deformation. We introduce a technique that attempts to preserve the shape of user-defined features when a 3D mesh deforms. Our approach places no restriction on the type of features or their distribution: it can handle features of very high resolution, and it is designed to fit into a texture-painting-driven pipeline. [A sketch of a standard per-triangle stretch measure, the kind of quantity such a technique must compensate for, follows this entry.]
- Published
- 2015
- Full Text
- View/download PDF
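The paper's compensation algorithm is not reproduced here, but the quantity it must fight, texture stretch under deformation, has a standard per-triangle measure: the singular values of the Jacobian of the UV-to-surface map. The sketch below computes that measure; the function names are ours, and the compensation step itself (re-warping UVs around user-painted features) is omitted.

```python
import numpy as np

def uv_jacobian(uv, pts):
    """Jacobian of the UV -> 3D map for one triangle (a 3x2 matrix)."""
    (s1, t1), (s2, t2), (s3, t3) = uv
    p1, p2, p3 = pts
    area2 = (s2 - s1) * (t3 - t1) - (s3 - s1) * (t2 - t1)  # 2x signed UV area
    Ss = (p1 * (t2 - t3) + p2 * (t3 - t1) + p3 * (t1 - t2)) / area2
    St = (p1 * (s3 - s2) + p2 * (s1 - s3) + p3 * (s2 - s1)) / area2
    return np.column_stack([Ss, St])

def stretch_ratio(uv, rest_pts, deformed_pts):
    """How much a textured feature on this triangle stretches under
    deformation: ratio of the largest singular values of the two maps."""
    s_rest = np.linalg.svd(uv_jacobian(uv, rest_pts), compute_uv=False)
    s_def = np.linalg.svd(uv_jacobian(uv, deformed_pts), compute_uv=False)
    return s_def[0] / s_rest[0]

# Usage: a unit triangle whose deformed pose is pulled 50% wider in x.
uv = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
rest = [np.zeros(3), np.array([1.0, 0, 0]), np.array([0, 1.0, 0])]
deformed = [np.zeros(3), np.array([1.5, 0, 0]), np.array([0, 1.0, 0])]
print(stretch_ratio(uv, rest, deformed))  # 1.5: feature is being stretched
```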
6. Multi-resolution geometric transfer for Jurassic World
- Author
Rachel Rose and Yuting Ye
- Subjects
Computer science, Computer graphics, Skinning, Vertex (geometry), Interpolation
- Abstract
For Jurassic World at Industrial Light & Magic (ILM), artists brought to life a wide range of dinosaurs to populate the ill-fated amusement park. Key to the realism of these dinosaurs is the detail built into their geometric representations. During asset development and shot work, this realistic detail can slow down an artist's workflow, so ILM developed a new transfer-based system for creating and maintaining multiple resolution versions of each creature that allows an artist to work at whatever resolution is best for the current situation.

The ILM multi-resolution transfer tool automates the process of moving changes in the geometric shape or properties of the creature between corresponding vertices; changes to vertices without a direct correspondence are interpolated using a Poisson interpolation scheme. The tool is capable of maintaining overall body shape, skinning weights, blendshapes, simulation mappings, etc. from any resolution to any other without losing the added detail at the target resolution. [A sketch of one reading of this interpolation scheme follows this entry.]
- Published
- 2015
- Full Text
- View/download PDF
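The abstract's "Poisson interpolation scheme" admits a simple reading: treat the per-vertex deltas at corresponded vertices as Dirichlet constraints and solve a graph Laplace equation for everyone else. The sketch below implements that reading with uniform edge weights; ILM's production tool surely differs in weighting and detail, and all names here are ours.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_interpolate(n_verts, edges, known_idx, known_deltas):
    """Propagate displacement deltas from corresponded vertices (Dirichlet
    constraints) across the mesh graph by solving a Laplace equation."""
    i, j = np.array(edges).T
    W = sp.coo_matrix((np.ones(len(edges)), (i, j)), shape=(n_verts, n_verts))
    W = (W + W.T).tocsr()                         # symmetric adjacency
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W  # graph Laplacian
    L = L.tocsr()

    known_idx = np.asarray(known_idx)
    free = np.setdiff1d(np.arange(n_verts), known_idx)
    # Solve L_ff d_f = -L_fk d_k for each coordinate column.
    rhs = -(L[free][:, known_idx] @ known_deltas)
    solve = spla.factorized(L[free][:, free].tocsc())
    d_free = np.column_stack([solve(rhs[:, c]) for c in range(rhs.shape[1])])

    deltas = np.zeros((n_verts, known_deltas.shape[1]))
    deltas[known_idx] = known_deltas
    deltas[free] = d_free
    return deltas

# Usage: a 4-vertex chain with deltas pinned at both ends; interior
# vertices interpolate harmonically (x-deltas become 0, 1, 2, 3).
edges = [(0, 1), (1, 2), (2, 3)]
print(poisson_interpolate(4, edges, [0, 3],
                          np.array([[0.0, 0, 0], [3.0, 0, 0]])))
```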
7. High fidelity facial animation capture and retargeting with contours
- Author
Yuting Ye, Kiran S. Bhat, Rony Goldenthal, Ronald Mallet, and Michael Koperwas
- Subjects
Facial expression, Facial motion capture, Motion capture, Animation, Contour line, Retargeting, Computer vision, Computer animation, Computer facial animation, Computer science
- Abstract
Human beings are naturally sensitive to subtle cues in facial expressions, especially in the areas of the eyes and mouth. Current facial motion capture methods fail to accurately reproduce motions in those areas due to multiple limitations. In this paper, we present a new performance capture method that focuses on the perceptually important contour features of the face. The output of our two-step optimization scheme is also easily editable by an animator. To illustrate the strength of our system, we present a retargeting application that incorporates primary contour lines to map a performance with lip-sync from an actor to a creature. [A drastically simplified sketch of contour-driven fitting follows this entry.]
- Published
- 2013
- Full Text
- View/download PDF
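As a drastically simplified stand-in for the paper's two-step contour optimization, the following sketch fits blendshape weights to detected 2D contour offsets by regularized least squares. The matrix layout, regularization, and final clipping to [0, 1] are all our assumptions; a faithful solver would use bounded least squares and the paper's actual energy terms.

```python
import numpy as np

def fit_blendshapes_to_contours(B, contour_targets, reg=1e-3):
    """Least-squares fit of blendshape weights so that projected contour
    vertices match 2D contours detected in the footage.

    B: (2k x m) matrix of per-contour-vertex 2D displacements for each of
    m blendshapes; contour_targets: stacked 2D offsets of the detected
    contours from the neutral pose. Hypothetical simplification."""
    m = B.shape[1]
    # Tikhonov regularization keeps weights small and the system well posed.
    A = np.vstack([B, np.sqrt(reg) * np.eye(m)])
    b = np.concatenate([contour_targets, np.zeros(m)])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Crude bound handling; a proper solver would constrain, not clip.
    return np.clip(w, 0.0, 1.0)

# Usage: 20 contour points (x, y stacked) driven by 8 blendshapes.
B = np.random.randn(40, 8)
targets = B @ np.array([0.2, 0.8, 0, 0, 0.5, 0, 0, 0.1])
print(fit_blendshapes_to_contours(B, targets))  # recovers the weights
```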
8. Optimization-based interactive motion synthesis for virtual characters
- Author
Yuting Ye, C. Karen Liu, and Sumit Jain
- Subjects
Computer science, Controllability, Human–computer interaction, Perception, Skeletal animation, Immersion (virtual reality), Character animation, Physics engine, Computer animation
- Abstract
Modeling the reactions of human characters to a dynamic environment is crucial for achieving perceptual immersion in applications such as video games, training simulations, and movies. Virtual characters in these applications need to react realistically to environmental events and precisely follow high-level user commands. Most existing physics engines for computer animation facilitate the synthesis of passive motion but remain unsuccessful at generating motion that requires active control, such as character animation. We present an optimization-based approach to synthesizing active motion for articulated characters, emphasizing both physical realism and user controllability. At each time step, we optimize the motion based on a set of goals specified by higher-level decision makers, subject to the Lagrangian dynamics and the physical limitations of the character. Our framework represents each decision maker as a controller. [A one-degree-of-freedom sketch of this per-timestep optimization follows this entry.]
- Published
- 2007
- Full Text
- View/download PDF
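A one-degree-of-freedom illustration of per-timestep optimization in this spirit: a "controller" turns a goal into a desired acceleration, and the optimizer picks the torque that best realizes it subject to the equation of motion and a torque limit. The 1-DOF dynamics, gains, and names are illustrative stand-ins for the paper's full Lagrangian formulation over an articulated character.

```python
import numpy as np

def step_active_control(q, qdot, q_goal, M, tau_max, dt, kp=50.0, kd=10.0):
    """One optimization step for a single joint: minimize
    (qddot - qddot_desired)^2 subject to M * qddot = tau, |tau| <= tau_max.
    For this scalar case the constrained minimizer is a simple clip."""
    qddot_desired = kp * (q_goal - q) - kd * qdot        # PD goal from a "controller"
    tau = np.clip(M * qddot_desired, -tau_max, tau_max)  # optimal torque under limits
    qddot = tau / M                                      # forward dynamics
    qdot = qdot + qddot * dt                             # semi-implicit Euler
    q = q + qdot * dt
    return q, qdot

# Usage: drive a joint from 0 toward a goal angle of 1 radian.
q, qdot = 0.0, 0.0
for _ in range(200):
    q, qdot = step_active_control(q, qdot, q_goal=1.0, M=2.0, tau_max=40.0, dt=0.01)
print(q)  # settles near the goal despite the torque limit
```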
9. Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA 2023, Los Angeles, CA, USA, August 4-6, 2023
- Author
Chenfanfu Jiang, Mridul Aanjaneya, Huamin Wang, and Yuting Ye
- Published
- 2023
- Full Text
- View/download PDF