15 results for "Daniel Thalmann"
Search Results
2. Non-verbal speech cues as objective measures for negative symptoms in patients with schizophrenia.
- Author
-
Yasir Tahir, Zixu Yang, Debsubhra Chakraborty, Nadia Thalmann, Daniel Thalmann, Yogeswary Maniam, Nur Amirah Binte Abdul Rashid, Bhing-Leet Tan, Jimmy Lee Chee Keong, and Justin Dauwels
- Subjects
Medicine, Science - Abstract
Negative symptoms in schizophrenia are associated with significant burden and have little to no robust treatment in clinical practice today. One key obstacle impeding the development of better treatments is the lack of an objective measure. Since negative symptoms almost always adversely affect speech production in patients, speech dysfunction has been considered a viable objective measure. However, researchers have mostly focused on the verbal aspects of speech, with scant attention to the non-verbal cues in speech. In this paper, we explore non-verbal speech cues as objective measures of negative symptoms of schizophrenia. We collected an interview corpus of 54 subjects with schizophrenia and 26 healthy controls. To validate the non-verbal speech cues, we computed the correlation between these cues and the NSA-16 ratings assigned by expert clinicians. Significant correlations were obtained between these non-verbal speech cues and certain NSA indicators: for instance, the correlation between Turn Duration and Restricted Speech is -0.5, and between Response Time and NSA Communication is 0.4, indicating that poor communication is reflected in the objective measures and thus validating our claims. Moreover, certain NSA indices can be classified into observable and non-observable classes from the non-verbal speech cues by means of supervised classification methods; in particular, the accuracies for Restricted Speech Quantity and Prolonged Response Time are 80% and 70%, respectively. We were also able to classify patients and healthy controls using non-verbal speech features with 81.3% accuracy.
- Published
- 2019
- Full Text
- View/download PDF
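The validation idea in the abstract above (correlating an objective non-verbal cue against a clinician's NSA-16 item rating) can be sketched in a few lines. The data and variable names below are illustrative toys, not the study's corpus; only the sign of the correlation mirrors the reported result.

```python
# Hypothetical sketch: correlate a non-verbal speech cue (mean turn
# duration per interview) with a clinician-assigned NSA item score.

def pearson_r(xs, ys):
    """Plain-Python Pearson correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Toy data: longer turns correspond to milder "restricted speech"
# ratings, mirroring the reported negative correlation (about -0.5).
turn_duration = [9.0, 7.5, 6.0, 4.0, 3.0, 2.0]   # seconds
restricted_speech = [1, 1, 2, 3, 4, 4]           # NSA item rating

print(round(pearson_r(turn_duration, restricted_speech), 2))
```

On real interview data the correlation would be weaker than on this monotone toy sample; the paper reports about -0.5.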
3. The Making of a 3D-Printed, Cable-Driven, Single-Model, Lightweight Humanoid Robotic Hand
- Author
-
Li Tian, Nadia Magnenat Thalmann, Daniel Thalmann, and Jianmin Zheng
- Subjects
robotic hand, modeling, 3D printing, cable-driven system, grasp planning, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Dexterous robotic hands can greatly enhance the functionality of humanoid robots (Cummings, 1996), but making such hands with not only a human-like appearance but also the capability of performing the natural movements of social robots is a challenging problem. The first challenge is to create the hand's articulated structure, and the second is to actuate it to move like a human hand. A robotic hand for a humanoid robot should look and behave human-like; at the same time, it needs to be light and cheap for widespread use. We start by studying the biomechanical features of a human hand and propose a simplified mechanical model of robotic hands that can achieve the important local motions of the hand. Then, we use 3D modeling techniques to create a single interlocked hand model that integrates pin and ball joints. Compared to other robotic hands, our design saves the time required for assembly and adjustment, which makes our robotic hand ready to use right after 3D printing is completed. Finally, the actuation of the hand is realized by cables and motors. Based on this approach, we have designed a cost-effective, 3D-printable, compact, and lightweight robotic hand. Our robotic hand weighs 150 g and has 15 joints, similar to a real human hand, and 6 degrees of freedom (DOFs). It is actuated by only six small actuators. The wrist connecting part is also integrated into the hand model and can be customized for different robots such as the Nadine robot (Magnenat Thalmann et al., 2017). The compact servo bed can be hidden inside the Nadine robot's sleeve, and the whole robotic hand platform adds no extra load to her arm, as the total weight (150 g robotic hand and 162 g artificial skin) is almost the same as her previous unarticulated robotic hand, which weighed 348 g. The paper also reports our test results with and without silicone artificial hand skin, and on the Nadine robot.
- Published
- 2017
- Full Text
- View/download PDF
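For a cable-driven hand like the one described above, a common first-order model (not necessarily the authors' exact design) relates the cable length a servo must reel in to the sum of joint flexions times the cable's routing radius at each joint. The joint names and 5 mm radii below are assumptions for illustration.

```python
import math

# Illustrative cable-excursion model for a tendon-driven finger:
# one cable routed over the finger joints; the required cable pull is
# roughly the sum of joint angle (rad) x routing radius at each joint.

def cable_excursion(joint_angles_deg, routing_radii_mm):
    """Cable length to reel in (mm) for the given joint flexions."""
    return sum(math.radians(a) * r
               for a, r in zip(joint_angles_deg, routing_radii_mm))

# Three flexion joints (MCP, PIP, DIP) with assumed 5 mm routing radii.
pull = cable_excursion([60, 45, 30], [5.0, 5.0, 5.0])
print(round(pull, 2))  # mm of cable one servo must wind up
```

This kind of estimate lets a designer check that a small servo's spool travel covers a full finger curl before printing anything.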
4. Standardized Virtual Reality, Are We There Yet?
- Author
-
Mario A. Gutierrez A., Frederic Vexo, and Daniel Thalmann
- Published
- 2006
- Full Text
- View/download PDF
5. A virtual 3D mobile guide in the INTERMEDIA project.
- Author
-
Nadia Magnenat-Thalmann, Achille Peternier, Xavier Righetti, Mingyu Lim, George Papagiannakis, Tasos Fragopoulos, Kyriaki Lambropoulou, Paolo Barsocchi, and Daniel Thalmann
- Subjects
INTERACTIVE multimedia, STOCHASTIC convergence, VIRTUAL machine systems, MULTIMEDIA systems - Abstract
In this paper, we introduce a European research project, Interactive Media with Personal Networked Devices (INTERMEDIA), in which we seek to progress beyond home- and device-centric convergence toward truly user-centric convergence of multimedia. Our vision is to make the user the multimedia center: the user as the point at which multimedia services and the means for interacting with them converge. This paper proposes the main research goals in providing users with a personalized interface and content independent of physical networked devices, space, and time. As a case study, we describe an indoor mobile mixed-reality guide system: Chloe@University. With a see-through head-mounted display (HMD) connected to a small wearable computing device, Chloe@University provides an efficient way to guide someone in a building: a 3D virtual character in front of the user guides him/her to the required destination. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
6. Visual creation of inhabited 3D environments.
- Author
-
Alejandra García-Rojas, Mario Gutiérrez, and Daniel Thalmann
- Subjects
VIRTUAL reality, COMPUTER programming, COMPUTER graphics, TECHNOLOGICAL complexity - Abstract
The creation of virtual reality applications and 3D environments is a complex task that requires good programming skills and expertise in computer graphics and many other disciplines. The complexity increases when we want to include complex entities such as virtual characters and animate them. In this paper, we present a system that assists in the tasks of setting up a 3D scene and configuring several parameters affecting the behavior of virtual entities such as objects and autonomous virtual humans. Our application is based on a visual programming paradigm supported by a semantic representation: an ontology for virtual environments. The ontology allows us to store and organize the components of a 3D scene, together with the knowledge associated with them, and is also used to expose functionalities of the underlying 3D engine. Based on a formal representation of its components, the proposed architecture provides a scalable VR system. Using this system, non-experts can set up interactive scenarios with minimal effort; no programming skills or advanced knowledge are required. [ABSTRACT FROM AUTHOR]
- Published
- 2008
7. Haptic feedback in mixed-reality environment.
- Author
-
Renaud Ott, Daniel Thalmann, and Frédéric Vexo
- Subjects
COMPUTER systems, COST control, TESTING, HUMAN-computer interaction - Abstract
The training process in industries is assisted with computer solutions to reduce costs. Normally, computer systems created to simulate assembly or machine manipulation are implemented with traditional human-computer interfaces (keyboard, mouse, etc.), but this usually leads to systems that are far from the real procedures and thus not efficient in terms of training. Two techniques could improve this: mixed reality and haptic feedback. In this paper, we investigate the integration of both within a single framework. We present the hardware used to design our training system. A feasibility study allowed us to establish a testing protocol. The results of these tests convinced us that such a system should not try to realistically simulate the interaction between real and virtual objects as if they were all real objects. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
8. An ontology of virtual humans.
- Author
-
Mario Gutiérrez A., Alejandra García-Rojas, Daniel Thalmann, Frederic Vexo, Laurent Moccozet, Nadia Magnenat-Thalmann, Michela Mortara, and Michela Spagnuolo
- Subjects
VIRTUAL reality, COMPUTER simulation, GEOMETRY, SEMANTICS - Abstract
Most of the efforts concerning graphical representations of humans (Virtual Humans) have been focused on synthesizing geometry for static or animated shapes. The next step is to consider a human body not only as a 3D shape, but as an active semantic entity with features, functionalities, interaction skills, etc. We are currently working on an ontology-based approach to make Virtual Humans more active and understandable both for humans and machines. The ontology for Virtual Humans we are defining will provide the "semantic layer" required to reconstruct, store, retrieve, reuse, and share content and knowledge related to Virtual Humans. [ABSTRACT FROM AUTHOR]
- Published
- 2007
9. A wearable system for mobility improvement of visually impaired people.
- Author
-
Sylvain Cardin, Daniel Thalmann, and Frédéric Vexo
- Subjects
DETECTORS, ELECTRONIC navigation, SONAR, ENGINEERING instruments - Abstract
Degradation of the visual system can lead to a dramatic reduction of mobility by limiting a person to the senses of touch and hearing. This paper presents the development of an obstacle detection system for visually impaired people. While moving through the environment, the user is alerted to close obstacles in range. The system we propose detects obstacles around the user using a multi-sonar system and sends appropriate vibrotactile feedback. The system aims at increasing the mobility of visually impaired people by offering new sensing abilities. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
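The core of a sonar-to-vibrotactile pipeline like the one described above is a mapping from measured distance to vibration strength. The thresholds and linear ramp below are illustrative assumptions, not values from the paper.

```python
# Hypothetical distance-to-vibration mapping: closer obstacles produce
# stronger feedback; beyond a maximum range the motor stays off.

MAX_RANGE_CM = 300.0   # ignore obstacles farther than this (assumed)
MIN_RANGE_CM = 30.0    # full-strength vibration at or under this (assumed)

def vibration_duty(distance_cm):
    """Map a sonar reading to a PWM duty cycle in [0.0, 1.0]."""
    if distance_cm >= MAX_RANGE_CM:
        return 0.0
    if distance_cm <= MIN_RANGE_CM:
        return 1.0
    # Linear ramp between the two thresholds.
    return (MAX_RANGE_CM - distance_cm) / (MAX_RANGE_CM - MIN_RANGE_CM)

for d in (400, 300, 165, 30):
    print(d, round(vibration_duty(d), 2))
```

With one such mapping per sonar, each vibrotactile motor on the body can encode the direction and proximity of the nearest obstacle in its sector.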
10. Dynamic obstacle avoidance for real-time character animation.
- Author
-
Pascal Glardon, Ronan Boulic, and Daniel Thalmann
- Subjects
COMPUTER-generated imagery, COMPUTER users, TIME perspective, ELECTRONIC systems - Abstract
This paper proposes a novel method to control virtual characters in dynamic environments. A virtual character is animated by a locomotion and jumping engine, enabling production of continuous parameterized motions. At any time during runtime, flat obstacles (e.g. a puddle of water) can be created and placed in front of a character. The method first decides whether the character is able to get around or jump over the obstacle. Then the motion parameters are accordingly modified. The transition from locomotion to jump is performed with an improved motion blending technique. While traditional blending approaches let the user choose the transition time and duration manually, our approach automatically controls transitions between motion patterns whose parameters are not known in advance. In addition, according to the animation context, blending operations are executed during a precise period of time to preserve specific physical properties. This ensures coherent movements over the parameter space of the original input motions. The initial locomotion type and speed are smoothly varied with respect to the required jump type and length. This variation is carefully computed in order to place the take-off foot as close to the created obstacle as possible. [ABSTRACT FROM AUTHOR]
- Published
- 2006
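The control flow sketched in the abstract above (decide whether to clear the obstacle, then blend between motion patterns) can be outlined as follows. The thresholds, the smoothstep blend, and all names are illustrative assumptions, not the paper's tuned method.

```python
# Illustrative sketch: given an upcoming flat obstacle, decide whether the
# character jumps or steers around it, then blend from the locomotion clip
# to the jump clip over a fixed transition window.

MAX_JUMP_LENGTH = 1.5   # m, longest obstacle the jump motion can clear (assumed)

def choose_action(obstacle_length_m, lateral_clearance_m):
    if obstacle_length_m <= MAX_JUMP_LENGTH:
        return "jump"
    if lateral_clearance_m > 0.5:   # assumed minimum side clearance
        return "go_around"
    return "stop"

def blend_weight(t, t_start, duration):
    """Weight of the jump clip: 0 before the transition, 1 after it."""
    u = (t - t_start) / duration
    u = min(1.0, max(0.0, u))
    return u * u * (3 - 2 * u)   # smoothstep for a C1-continuous blend

print(choose_action(1.0, 0.0))                 # short obstacle
print(round(blend_weight(0.25, 0.0, 0.5), 3))  # halfway through the blend
```

The paper's contribution is precisely that the transition timing and duration are chosen automatically from physical properties of the motions rather than fixed by the user, as in this sketch.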
11. Robust on-line adaptive footplant detection and enforcement for locomotion.
- Author
-
Pascal Glardon, Ronan Boulic, and Daniel Thalmann
- Abstract
A common problem in virtual character computer animation concerns the preservation of the basic foot-floor constraint (or footplant), which consists of detecting it before enforcing it. This paper describes a system capable of generating motion while continuously preserving the footplants for a real-time, dynamically evolving context. This system introduces a constraint detection method that improves classical techniques by adaptively selecting threshold values according to motion type and quality. The footplants are then enforced using a numerical inverse kinematics solver. As opposed to previous approaches, we define the footplant by attaching to it two effectors whose position at the beginning of the constraint can be modified, in order to place the foot on the ground, for example. However, the corrected posture at the constraint beginning is needed before it starts, to ensure smoothness between the unconstrained and constrained states. We therefore present a new approach based on motion anticipation, which computes animation postures in advance according to time-evolving motion parameters, such as locomotion speed and type. We illustrate our on-line approach with continuously modified locomotion patterns, and demonstrate its ability to correct motion artifacts such as foot sliding, to change the constraint position, and to modify a straight walk into a curved one. [ABSTRACT FROM AUTHOR]
- Published
- 2006
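The detection half of the footplant problem above can be sketched as thresholding foot speed and height, with the thresholds adapted to the current motion. The linear scaling rule and all numbers below are illustrative assumptions, not the authors' formula.

```python
# Sketch of adaptive footplant detection: a frame is flagged as planted
# when the foot's speed and height both fall under thresholds scaled to
# the current locomotion speed (faster motion tolerates larger values).

def detect_footplants(foot_speeds, foot_heights, locomotion_speed,
                      base_speed_thresh=0.05, base_height_thresh=0.02):
    # Assumed linear scaling so running is not over-constrained.
    s_thresh = base_speed_thresh * (1.0 + locomotion_speed)
    h_thresh = base_height_thresh * (1.0 + locomotion_speed)
    return [v < s_thresh and h < h_thresh
            for v, h in zip(foot_speeds, foot_heights)]

# Toy walk cycle: the foot slows and touches down mid-sequence.
speeds = [0.8, 0.3, 0.04, 0.03, 0.05, 0.6]   # m/s
heights = [0.20, 0.08, 0.01, 0.00, 0.01, 0.15]  # m
print(detect_footplants(speeds, heights, locomotion_speed=1.0))
```

Once the planted frames are known, an inverse kinematics solver (as in the paper) pins the foot effectors in place over that interval to remove sliding.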
12. Virtual humans: thirty years of research, what next?
- Author
-
Nadia Magnenat-Thalmann and Daniel Thalmann
- Abstract
In this paper, we present research results and future challenges in creating realistic and believable Virtual Humans. To realize these modeling goals, real-time realistic representation is essential, but we also need interactive and perceptive Virtual Humans to populate the Virtual Worlds. Three levels of modeling should be considered to create these believable Virtual Humans: 1) realistic appearance modeling, 2) realistic, smooth, and flexible motion modeling, and 3) realistic high-level behavior modeling. First, the issues of creating virtual humans with better skeletons and realistic deformable bodies are illustrated. To achieve believable behavior, the challenges lie in generating flexible motion on the fly and complex behaviors of Virtual Humans inside their environments, using a realistic perception of the environment. Interactivity and group behaviors are also important parameters for creating believable Virtual Humans, which raises the challenges of creating believable relationships between real and virtual humans based on emotion and personality, and of simulating realistic and believable behaviors of groups and crowds. Finally, issues in generating realistically clothed and haired virtual people are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2005
13. A multi‐GPU finite element computation and hybrid collision handling process framework for brain deformation simulation.
- Author
-
Daniel Thalmann, Ye Tian, Yong Hu, and Xukun Shen
- Subjects
BRAIN anatomy, FINITE element method, GRAPHICS processing units, COLLISION detection (Computer animation), PARALLEL algorithms - Abstract
This paper offers a fast multi-graphics-processing-unit (GPU) parallel simulation framework for the problem of real-time, nonlinear finite element computation of brain deformation. A load balancing strategy is proposed to ensure the efficient distribution of nonlinear finite element computation across multiple GPUs. A data storage structure is designed to minimize the amount of data transfer and to make full use of the GPU's ability to overlap transfers with computation, reducing transfer latency between GPUs. We further present a fast central processing unit (CPU)-GPU parallel continuous collision detection and response method, which can not only deal with collisions between the brain and skull but also handle self-collisions of the brain. Our method makes full use of the CPU and GPU to implement parallel computation of deformation and collision detection. Our experimental results show that our method is able to handle a brain geometric model with high-detail gyri composed of more than 40,000 tetrahedron elements, which can improve the fidelity of current virtual brain surgery simulators. We evaluate our approach qualitatively and quantitatively and compare it with related works. In summary, this paper presents a multi-GPU simulation framework for brain deformation capable of solving continuous collision detection efficiently for geometric models of high complexity. We propose a load balancing strategy to distribute nonlinear finite element computation and a data storage structure to minimize data transfer across GPUs, as well as two hybrid CPU/GPU algorithms for parallel continuous collision detection and resolution. In addition, brain region partitioning, memory data coherence, and GPU streaming, among others, are considered in order to optimize performance. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
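The load-balancing idea in the abstract above can be sketched as distributing elements across devices in proportion to each GPU's measured throughput, so all devices finish a solver step at about the same time. The throughput numbers and function are illustrative, not the paper's scheme.

```python
# Illustrative load-balancing sketch: split tetrahedron elements across
# GPUs in proportion to each device's throughput (elements/second).

def partition_elements(n_elements, throughputs):
    """Return how many elements each GPU should own."""
    total = sum(throughputs)
    counts = [int(n_elements * t / total) for t in throughputs]
    counts[-1] += n_elements - sum(counts)  # assign rounding remainder
    return counts

# 40,000 elements over two GPUs, one twice as fast as the other.
print(partition_elements(40_000, [2.0, 1.0]))
```

In a real solver the split would be refined each step from measured timings, and element locality (brain region partitioning, as the paper notes) would constrain which elements may move between devices.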
14. Editorial.
- Author
-
Daniel Thalmann and Alexei Sourin
- Published
- 2007
- Full Text
- View/download PDF
15. Editorial.
- Author
-
Tolga Capin, Selim Balcisoy, Daniel Thalmann, Nadia Magnenat-Thalmann, and Tat-Seng Chua
- Published
- 2008
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library