49 results for "Daniel Thalmann"
Search Results
2. Modeling behaviour for social robots and virtual humans
- Author
-
Nadia Magnenat Thalmann and Daniel Thalmann
- Subjects
Social robot, Human–computer interaction, Computer science, Media Lab Europe's social robots, Simulation
- Published
- 2016
- Full Text
- View/download PDF
3. Modeling human-like non-rationality for social agents
- Author
-
Ah-Hwee Tan, Jaroslaw Kochanowicz, and Daniel Thalmann
- Subjects
Cognitive science, Social psychology (sociology), Management science, Multi-agent system, Irrationality, Rationality, Cognition, Terminology, Affective computing, Psychology, Social simulation
- Abstract
Humans are not rational beings. Deviations from rationality in human thinking are by now well documented [25] as non-reducible to the rational pursuit of egoistic benefit, or to its occasional distortion by temporary emotional excitation, as is often assumed. These deviations occur not only outside conceptual reasoning or rational goal realization, but also subconsciously, often with the certainty that they did not and could not take place 'in my case'. Non-rationality can no longer be perceived as a rare affective abnormality in otherwise rational thinking, but as a systemic, permanent quality, 'a design feature' of human cognition. While social psychology has systematically addressed the non-rationality of human cognition (including its non-emotional aspects) for decades [63], the same cannot be said of computer science, despite its obvious relevance to individual and group behavior modeling. This paper offers a brief survey of work in computational disciplines related to modeling human-like non-rationality, including Social Signal Processing, Cognitive Architectures, Affective Computing, Human-Like Agents, and Normative Multi-agent Systems. It attempts to establish a common terminology and conceptual frame for this highly interdisciplinary issue, to reveal the assumptions about non-rationality underlying the discussed models and disciplines, and to identify their current limitations and potential contributions to a solution. Finally, it presents ideas on possible directions of development, hopefully contributing to the resolution of this challenging issue.
- Published
- 2016
- Full Text
- View/download PDF
4. An evaluation of spatial presence, social presence, and interactions with various 3D displays
- Author
-
Jun Lee, Daniel Thalmann, and Nadia Magnenat Thalmann
- Subjects
Computer science, Oculus Rift, Sense of presence, Stereoscopy, Stereo display, Human–computer interaction, Autostereoscopy, Human factors
- Abstract
This paper presents an immersive volleyball game in which a player plays not only against virtual opponents but also with the support of virtual teammates on his/her side. The game has been implemented for several 3D displays: a stereoscopic display, an autostereoscopic display, the Oculus Rift, and a 320° Immersive Room. We also present a user study of the relations between virtual humans and the sense of presence across the different 3D displays, examining in particular how surrounding virtual humans affect the sense of presence. Results show that users perceived the spatial presence of the virtual environment and the social presence of the virtual humans significantly more strongly with the Oculus Rift and the Immersive Room.
- Published
- 2016
- Full Text
- View/download PDF
5. Multimodal human-machine interaction including virtual humans or social robots
- Author
-
Nadia Magnenat Thalmann and Daniel Thalmann
- Published
- 2015
- Full Text
- View/download PDF
6. AR in Hand
- Author
-
Junsong Yuan, Daniel Thalmann, Nadia Magnenat Thalmann, and Hui Liang
- Subjects
Gesture recognition, Computer science, RGB color model, Computer vision, Augmented reality, Artificial intelligence, Animation, Wearable technology, Camera resectioning, Gesture
- Abstract
Wearable devices such as the Microsoft HoloLens and Google Glass have become highly popular in recent years. As traditional input hardware is difficult to use on such platforms, vision-based hand pose tracking and gesture control techniques are more suitable alternatives. This demo shows how to interact with 3D content using bare hands on wearable devices through two Augmented Reality applications: virtual teapot manipulation and a fountain animation in the hand. Technically, we use a head-mounted depth camera to capture RGB-D images from an egocentric view, and adopt a random forest to simultaneously regress the palm pose and classify the hand gesture via a spatial-voting framework. The predicted pose and gesture are used to render the 3D virtual objects, which are overlaid onto the hand region of the input RGB images using camera calibration parameters for seamless synthesis of the virtual and real scenes.
- Published
- 2015
- Full Text
- View/download PDF
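The spatial-voting idea in the abstract above can be illustrated with a toy example. The per-pixel votes below are invented stand-ins for what a trained random forest would emit per depth pixel; no real forest, depth data, or camera is involved.

```python
# Toy sketch of per-pixel spatial voting for palm position and gesture.
# Each vote is what a trained regressor would hypothetically emit.
from collections import Counter

# Each entry: (pixel position, predicted offset to palm centre, gesture label)
pixel_votes = [
    ((10, 12), (5, 3), "pinch"),
    ((18, 14), (-3, 1), "pinch"),
    ((14, 20), (1, -5), "open"),
    ((16, 16), (-1, -1), "pinch"),
]

def aggregate(votes):
    """Average the palm-centre votes; take a majority vote on the gesture."""
    xs = [px + dx for (px, _), (dx, _), _ in votes]
    ys = [py + dy for (_, py), (_, dy), _ in votes]
    centre = (sum(xs) / len(xs), sum(ys) / len(ys))
    gesture = Counter(label for _, _, label in votes).most_common(1)[0][0]
    return centre, gesture

centre, gesture = aggregate(pixel_votes)
```

Aggregating votes from many pixels makes the estimate robust to individual per-pixel prediction errors, which is the appeal of the voting framework.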
7. Activity recognition in unconstrained RGB-D video using 3D trajectories
- Author
-
Junsong Yuan, Yang Xiao, Daniel Thalmann, and Gangqiang Zhao
- Subjects
Activity recognition, Feature (computer vision), Computer science, Histogram, RGB color model, Clutter, Computer vision, Artificial intelligence, Motion (physics)
- Abstract
Human activity recognition in unconstrained RGB-D videos has extensive applications in surveillance, multimedia data analytics, human-computer interaction, etc., but remains a challenging problem due to background clutter, camera motion, viewpoint changes, and so on. We develop a novel RGB-D activity recognition approach that leverages the dense trajectory feature in RGB videos. By mapping the 2D positions of the dense trajectories from the RGB video to the corresponding positions in the depth video, we can recover the 3D trajectories of the tracked interest points, which capture important motion information along the depth direction. To characterize the 3D trajectories, we apply the motion boundary histogram (MBH) to the depth direction and propose 3D trajectory shape descriptors. The proposed 3D trajectory feature is a good complement to the dense trajectory feature extracted from the RGB video alone. A performance evaluation on a challenging unconstrained RGB-D activity recognition dataset, Hollywood 3D, shows that the proposed method significantly outperforms the STIP-based baseline methods and achieves state-of-the-art performance.
- Published
- 2014
- Full Text
- View/download PDF
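The trajectory-lifting step described in the abstract above can be sketched as follows. All data here is a toy stand-in: real dense trajectories, RGB-depth registration, and the MBH descriptor are not reproduced.

```python
# Sketch of lifting a 2D dense trajectory into 3D via aligned depth frames,
# then computing per-step displacements (including the depth component)
# as a simple trajectory-shape descriptor.

depth_frames = [  # depth value per (x, y) pixel, one dict per frame (toy data)
    {(3, 4): 1.0},
    {(4, 4): 1.2},
    {(5, 5): 1.5},
]
traj_2d = [(3, 4), (4, 4), (5, 5)]  # the tracked interest point per RGB frame

# Lift each 2D position to 3D using the depth frame of the same time step.
traj_3d = [(x, y, depth_frames[t][(x, y)]) for t, (x, y) in enumerate(traj_2d)]

# Descriptor: displacement between consecutive 3D positions.
disp = [(x2 - x1, y2 - y1, z2 - z1)
        for (x1, y1, z1), (x2, y2, z2) in zip(traj_3d, traj_3d[1:])]
```

The third component of each displacement is exactly the motion along the depth direction that a 2D trajectory alone cannot capture.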
8. Multimodal human-machine interaction including virtual humans or social robots
- Author
-
Nadia Magnenat Thalmann, Zerrin Yumak, and Daniel Thalmann
- Subjects
Social robot, Human–computer interaction, Human machine interaction
- Published
- 2014
- Full Text
- View/download PDF
9. From ratings to trust
- Author
-
Neil Yorke-Smith, Anirban Basu, Jie Zhang, Daniel Thalmann, and Guibing Guo
- Subjects
Focus (computing), Information retrieval, Empirical research, Computer science, Similarity (psychology), Trust metric, Data mining, Recommender system
- Abstract
Trust has been extensively studied and its effectiveness demonstrated in recommender systems. Due to the lack of explicit trust information in most systems, many trust metric approaches have been proposed to infer implicit trust from user ratings. However, previous works have not compared these different approaches, and they often focus only on the performance of predicting item ratings. In this paper, we first analyse five kinds of trust metrics in light of the properties of trust. We conduct an empirical study to explore the ability of trust metrics to distinguish explicit trust from implicit trust and to generate accurate predictions. Experimental results on two real-world data sets show that existing trust metrics cannot provide satisfactory performance, and they indicate that future metrics should be designed more carefully.
- Published
- 2014
- Full Text
- View/download PDF
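A minimal example of one family of trust metrics of the kind the paper above compares: inferring implicit trust from rating agreement. The metric, threshold, and ratings here are illustrative assumptions, not the paper's metrics or data.

```python
# Implicit trust as the fraction of co-rated items on which two users'
# ratings agree within a threshold (toy data, illustrative only).

def implicit_trust(ratings_a, ratings_b, threshold=1.0):
    """Share of commonly rated items where the two ratings are close."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0  # no shared evidence, no inferred trust
    agree = sum(1 for item in common
                if abs(ratings_a[item] - ratings_b[item]) <= threshold)
    return agree / len(common)

alice = {"film1": 5, "film2": 3, "film3": 1}
bob = {"film1": 4, "film2": 1, "film4": 5}
trust_ab = implicit_trust(alice, bob)  # agree on film1 but not film2
```

Metrics of this shape illustrate the paper's concern: rating similarity is only a proxy, and two users can agree on ratings without any relationship that deserves the name trust.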
10. Interactive virtual characters
- Author
-
Nadia Magnenat Thalmann and Daniel Thalmann
- Subjects
Facial expression, Social robot, Multimedia, Computer science, Animation, Gaze, Variety (cybernetics), Personality, Humanoid robot, Gesture
- Abstract
In this tutorial, we will describe both virtual characters and realistic humanoid social robots using the same high-level models. In particular, we will describe: 1. How to capture real-time gestures and facial emotions from real people, how to recognize any real person, and how to recognize certain sounds; we will present the state of the art and some new avenues of research. 2. How to model a variety of interactive reactions of virtual humans and social robots (facial expressions, gestures, multiparty dialog, etc.) depending on the input parameters of the real scene. 3. How to define Virtual Characters that have an emotional behavior (personality, mood, and emotions), and how to allow them to remember us and have a believable relationship with us, so that virtual humans and social robots exhibit individual rather than automatic behaviour. We will explain different methods to identify user actions and how to allow Virtual Characters to respond to them. 4. The modelling of long-term and short-term memory, gaze-based interactions between users and virtual humans, and how to model visual attention. We will present the concepts of behavioral animation, group simulation, and intercommunication between virtual humans, social humanoid robots, and real people. Case studies will be presented from the Being There Centre (see http://imi.ntu.edu.sg/BeingThereCentre/Projects/Pages/Project4.aspx), where autonomous virtual humans and social robots react to actions from real people.
- Published
- 2013
- Full Text
- View/download PDF
11. Prior ratings
- Author
-
Jie Zhang, Neil Yorke-Smith, Daniel Thalmann, and Guibing Guo
- Subjects
World Wide Web, Product (business), Modalities, Cold start, Human–computer interaction, Computer science, Information source, Conceptual model (computer science), E-commerce, Recommender system
- Abstract
A lack of motivation to provide ratings, and the fact that users are generally eligible to rate only after purchase, restrain the effectiveness of recommender systems and contribute to the well-known data sparsity and cold start problems. This paper proposes a new information source for recommender systems, called prior ratings. Prior ratings are based on users' experiences of virtual products in a mediated environment, and they can be submitted prior to purchase. A conceptual model of prior ratings is proposed, integrating the environmental factor of presence, whose effects on product evaluation have not been studied previously. A user study conducted in website and virtual store modalities demonstrates the validity of the conceptual model, in that users are more willing and confident to provide prior ratings in virtual environments.
- Published
- 2013
- Full Text
- View/download PDF
12. 3D fingertip and palm tracking in depth image sequences
- Author
-
Junsong Yuan, Hui Liang, and Daniel Thalmann
- Subjects
Computer science, Computer vision, Kalman filter, Artificial intelligence, Palm, Particle filter, Distance transform
- Abstract
We present a vision-based approach for robust 3D fingertip and palm tracking in depth images using a single Kinect sensor. First, the hand is segmented in the depth images by applying depth and morphological constraints. The palm is located by applying a distance transform to the hand contour and is tracked with a Kalman filter. The fingertips are detected by combining three depth-based features and are tracked with a particle filter over successive frames. Quantitative results on synthetic depth sequences show that the proposed scheme can track the fingertips quite accurately. Its capabilities are further demonstrated in a real-life human-computer interaction application.
- Published
- 2012
- Full Text
- View/download PDF
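The palm-location step described in the abstract above can be sketched with a distance transform on a toy binary hand mask: the palm centre is taken as the interior pixel farthest from the background, computed here by BFS from the background pixels. The grid and the BFS formulation are illustrative simplifications, not the paper's implementation.

```python
# Distance transform via BFS on a toy binary mask: "1" = hand, "0" = background.
from collections import deque

mask = [
    "00000",
    "01110",
    "01110",
    "01110",
    "00000",
]

h, w = len(mask), len(mask[0])
dist = [[None] * w for _ in range(h)]
queue = deque()
for y in range(h):
    for x in range(w):
        if mask[y][x] == "0":      # background pixels seed the BFS at distance 0
            dist[y][x] = 0
            queue.append((x, y))

while queue:                        # multi-source BFS fills in distances
    x, y = queue.popleft()
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] is None:
            dist[ny][nx] = dist[y][x] + 1
            queue.append((nx, ny))

# Palm centre: the hand pixel with the largest distance to the background.
palm = max(((x, y) for y in range(h) for x in range(w) if mask[y][x] == "1"),
           key=lambda p: dist[p[1]][p[0]])
```

In a full pipeline this estimate would then be smoothed over time, e.g. with the Kalman filter the abstract mentions.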
13. Vintage radio interface
- Author
-
Frédéric Vexo, Mathieu Hopmann, Daniel Thalmann, and Mario Gutierrez
- Subjects
Vintage, Analog control, Multimedia, Computer science, Interface (computing), Data visualization, Human–computer interaction, Digital collections, Digital audio
- Abstract
We present an interface for navigating digital collections that combines a one-dimensional analog control with a data visualization inspired by old analog radios. Our system takes advantage of inertial control to browse a large data collection in a compelling way, reducing the complexity of similar interfaces present in both desktop-based and portable media players. This vintage radio interface has been used to navigate a digital music collection. We have compared the proposed interface with the most popular current hardware, the iPod. The results of user tests with 24 participants are presented and discussed. The insights gained are encouraging enough to continue the development of one-dimensional analog controls for content discovery and retrieval.
- Published
- 2012
- Full Text
- View/download PDF
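The one-dimensional inertial browsing described above can be sketched as a flick that gives the cursor a velocity which decays with friction, with the resting position selecting an item. The friction and stopping threshold are invented for illustration, not taken from the paper.

```python
# Toy 1D inertial browsing: flick -> velocity -> friction decay -> selection.

def glide(position, velocity, friction=0.9, stop=0.01):
    """Advance the cursor until friction brings it to rest."""
    while abs(velocity) > stop:
        position += velocity
        velocity *= friction
    return position

stations = ["station %d" % i for i in range(100)]
rest = glide(position=10.0, velocity=3.0)          # a forward flick
selected = stations[min(len(stations) - 1, max(0, round(rest)))]
```

A stronger flick travels further before friction stops it, which is what lets a single 1D control cover both fine adjustment and long jumps across a large collection.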
14. Brain activity underlying third person and first person perspective training in virtual environments
- Author
-
Frédéric Vexo, Daniel Thalmann, Olaf Blanke, Patrick Salamin, and Tej Tadi
- Subjects
Multimedia, Third person, Brain activity, First person, Computer science, Human–computer interaction, Perspective (graphical), Virtual reality, Instructional simulation
- Abstract
Over the years, different approaches have been explored to build effective learning methods in virtual reality, but the design of effective 3D manipulation techniques remains an important research problem. To this end, it is important to quantify the behavioral and brain mechanisms underlying the geometrical mappings of the body to the environment and external objects, both within the virtual environment (VE) and the real world, and relative to each other. Successfully mapping such interactions entails studying their fundamental components, such as the origin of the visuo-spatial perspective (first-person, 1PP, or third-person, 3PP) and how it contributes to the user's performance in virtual environments. Here, we report data from a novel set-up exposing participants, during free navigation, to a scene viewed from either 3PP or the habitual first-person perspective (1PP).
- Published
- 2010
- Full Text
- View/download PDF
15. Session details: Image processing and GPU
- Author
-
Daniel Thalmann
- Subjects
Computer science, Computer graphics (images), Image processing, Session (computer science)
- Published
- 2009
- Full Text
- View/download PDF
16. Real-time individualized virtual humans
- Author
-
Nadia Magnenat-Thalmann and Daniel Thalmann
- Subjects
Crowds, Computer science, Computer graphics (images), Polygon mesh, Animation, Static mesh, Computer facial animation, Motion (physics), Gesture, Variety (cybernetics)
- Abstract
This tutorial will present the latest techniques for modeling fast, individualized, animatable virtual humans for real-time applications. As a human is composed of a head and a body, we will analyze how these two parts can be modeled and globally animated as in real life. More precisely, we will show how we can model and deform human bodies and heads. Facial animation will also be addressed, from facial motion capture and voice to the simulation of interactive, realistic talking virtual humans, including personality models and complete body gestures. We will describe how we can model crowds in real time using dynamic meshes, static meshes, and impostors. Techniques to introduce variety in crowds, including individual animation with accessories, will be explained.
- Published
- 2008
- Full Text
- View/download PDF
17. Accurate on-line avatar control with collision anticipation
- Author
-
Manuel Peinado, Daniel Raunhardt, Damien Maupu, Ronan Boulic, D. Meziat, and Daniel Thalmann
- Subjects
Inverse kinematics, Computer science, Virtual reality, Character animation, Body movement, Kinematics, Solver, Motion capture, Collision avoidance, Computer vision, Artificial intelligence
- Abstract
Interactive control of a virtual character through full-body movement has a wide range of applications. However, there is a need for systems that accurately reproduce the motion of a performer while accounting for surrounding obstacles. We propose an approach based on a Prioritized Inverse Kinematics constraint solver. Several markers are placed on the user's body, and a set of kinematic constraints makes the virtual character track these markers. At the same time, we monitor the instantaneous displacements of a set of geometric primitives, called observers, attached to different parts of the virtual character. When an observer enters the influence area of an obstacle, its motion is damped by means of automatically created preventive constraints. The IK solver satisfies both marker and preventive constraints simultaneously, yielding postures of the virtual character that remain close to those of the user while avoiding collisions with the virtual environment. Our performance measurements show the maturity of IK technology for real-time full-body interactions.
- Published
- 2007
- Full Text
- View/download PDF
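The "preventive constraint" idea in the abstract above can be reduced to a 1D toy: once a tracked point enters an obstacle's influence area, its step toward the obstacle is damped in proportion to the remaining clearance. The linear damping law and all numbers are illustrative assumptions, not the paper's prioritized-IK formulation.

```python
# 1D damping of motion toward an obstacle inside its influence area.

def damped_step(pos, step, obstacle, influence=2.0):
    """Apply the step, scaled down when approaching the obstacle surface."""
    gap = abs(obstacle - pos)                 # clearance to the obstacle
    moving_closer = (obstacle - pos) * step > 0
    if moving_closer and gap < influence:
        step *= gap / influence               # damp inside the influence area
    return pos + step

free = damped_step(pos=0.0, step=1.0, obstacle=10.0)  # far away: full step
near = damped_step(pos=9.0, step=1.0, obstacle=10.0)  # close in: half step
```

Because the damping factor shrinks with the clearance, the point asymptotically slows as it approaches the surface instead of penetrating it, which is the behaviour the preventive constraints are meant to produce.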
18. Chloe@University
- Author
-
Mingyu Lim, Paolo Barsocchi, Xavier Righetti, Pierre Davy, Daniel Thalmann, George Papagiannakis, Nadia Magnenat-Thalmann, Yiannis Gialelis, Achille Peternier, Matteo Repettoy, Tasos Fragopoulos, Mathieu Hopmann, Dimitrios Serpanos, and Anna Kirykou
- Subjects
Ubiquitous computing, Multimedia, Computer science, Mobile computing, Wearable computer, Voice command device, Guidance system, Mobile device, Wireless sensor network, Mixed reality
- Abstract
With the advent of ubiquitous and pervasive computing environments, one promising application is guidance systems. In this paper, we propose Chloe@University, a mobile mixed reality guide system for indoor environments. A mobile computing device (Sony's Ultra Mobile PC) is hidden inside a jacket, and the user selects a destination inside a building through voice commands. A 3D virtual assistant then appears in the see-through HMD and guides him/her to the destination; the user simply follows the virtual guide. Chloe@University also suggests the most suitable virtual character (e.g. human guide, dog, cat, etc.) based on user preferences and profiles. Depending on the user profile, different security levels and authorizations for content are previewed. For indoor location tracking, WiFi, RFID, and sensor-based methods are integrated in the system for maximum flexibility. Moreover, smart and transparent wireless connectivity provides the user terminal with fast and seamless transitions among Access Points (APs). Different AR navigation approaches have been studied: [Olwal 2006], [Elmqvist et al.] and [Newman et al.] work indoors, while [Bell et al. 2002] and [Reitmayr and Drummond 2006] are employed outdoors. Accurate tracking and registration is still an open issue; recently it has been tackled not by any single method but through the aggregation of tracking and localization methods, mostly based on handheld AR. A truly wearable, HMD-based mobile AR navigation aid for both indoors and outdoors with rich 3D content remains an open issue and a very active field of multi-disciplinary research.
- Published
- 2007
- Full Text
- View/download PDF
19. The benefits of third-person perspective in virtual and augmented reality?
- Author
-
Daniel Thalmann, Frédéric Vexo, and Patrick Salamin
- Subjects
Immersion, Distance evaluation, Multimedia, Artificial reality, Computer science, Computer-mediated reality, Virtual reality, Metaverse, Mixed reality, Proprio-perception, Third person, Presence, Augmented reality, Exocentric perspective
- Abstract
Unlike in reality, where you can see your own limbs, in virtual reality simulations it is sometimes disturbing not to be able to see your own body. This seems to create an issue in the proprio-perception of users, who do not feel completely integrated in the environment. Seeing one's own body should therefore be beneficial for users. We propose to give people the possibility of using both the first-person and the third-person perspective, as in video games (e.g. GTA). As gamers prefer the third-person perspective for movement and the first-person view for fine operations, we verify whether this behaviour extends to simulations in augmented and virtual reality.
- Published
- 2006
- Full Text
- View/download PDF
20. Advanced virtual reality technologies for surveillance and security applications
- Author
-
Renaud Ott, Daniel Thalmann, Frédéric Vexo, and Mario A. A. Gutiérrez
- Subjects
Multimedia, Blimp, Computer science, Interface (computing), Virtual reality, Cameras, Control room, Security systems, Human–computer interaction, Personal digital assistants, Eye tracking, Zoom, Gesture, Haptic technology
- Abstract
We present a system that exploits advanced Virtual Reality technologies to create a surveillance and security system. Surveillance cameras are carried by a mini blimp that is tele-operated using an innovative Virtual Reality interface with haptic feedback. An interactive control room (CAVE) receives multiple video streams from airborne and fixed cameras. Eye tracking technology turns the user's gaze into the main interaction mechanism: the user in charge can examine, zoom, and select specific views by looking at them. Video streams selected at the control room can be redirected to agents equipped with a PDA. On-field agents can examine the video sent by the control center and locate the actual position of the airborne cameras on a GPS-driven map. The PDA interface reacts to the user's gestures: a tilt sensor recognizes the position in which the PDA is held and adapts the interface accordingly. The prototype we present shows the added value of integrating VR technologies into a complex application and opens up several research directions in the areas of tele-operation, multimodal interfaces, etc.
- Published
- 2006
- Full Text
- View/download PDF
21. Populating virtual environments with crowds
- Author
-
Daniel Thalmann
- Subjects
Crowds, Multimedia, Computer science, Special effects, Augmented reality, Crowd simulation
- Abstract
For many years, the challenge was to produce realistic virtual crowds for special effects in movies. Now there is a new challenge: the production of real-time autonomous virtual crowds. Real-time crowds are necessary for games, for VR systems for training and simulation, and for Augmented Reality applications. Autonomy is the only way to create believable crowds that react to events in real time. This paper presents state-of-the-art techniques and methods.
- Published
- 2006
- Full Text
- View/download PDF
22. Crowd and group animation
- Author
-
Christophe Hery, Hiromi Ono, Seth Lippman, Daniel Thalmann, Stephen Regelous, and Douglas Sutton
- Subjects
Cultural heritage, Crowds, Multimedia, Computer science, Augmented reality, Animation, Motion control, Flocking, Rendering (computer graphics)
- Abstract
A continuous challenge for special effects in movies is the production of realistic virtual crowds, in terms of both rendering and behavior. This course will present state-of-the-art techniques and methods. It will explain in detail the different approaches to creating virtual crowds: particle systems with flocking techniques using attraction and repulsion forces, copy-and-paste techniques, and agent-based methods. The architecture of software tools will be presented, including the MASSIVE software used for the Lord of the Rings trilogy. The course will explore aspects essential to the generation of virtual crowds. In particular, it will present the aspects concerning information (intentions, status, and knowledge), behavior (innate, group, complex, and guided), and control (programmed, autonomous, and guided). It will emphasize essential concepts such as sensory input (vision, audition, touch), versatile motion control, the level of artificial intelligence, and rendering techniques. The course will also present the new challenge of producing real-time crowds for games, VR systems for training and simulation, and Augmented Reality applications for cultural heritage (such as adding a virtual audience to Roman or Greek theaters). The course will be illustrated with many examples from recent movies (Star Wars, Jurassic Park, Lord of the Rings, Shrek 2) and real-time applications in emergency situations and cultural heritage.
- Published
- 2004
- Full Text
- View/download PDF
23. A framework for rapid evaluation of prototypes with augmented reality
- Author
-
Daniel Thalmann, Pascal Fua, Selim Balcisoy, and Marcelo Kallmann
- Subjects
Human–computer interaction, Process (engineering), Computer science, Context (language use), Augmented reality, Test object, Object (computer science), Virtual actor
- Abstract
In this paper we present a new framework, in an Augmented Reality context, for the rapid evaluation of prototypes before manufacture. The design of such prototypes is a time-consuming process, leading to the need for prior evaluation in realistic interactive environments. We have extended the modelling of object geometry with the modelling of object behaviour, making it possible to evaluate objects in a mixed environment. These enhancements allow the development of tools and methods to test object behaviour and to perform interactions between virtual humans and complex real and virtual objects. We propose a framework for testing the design of objects in an augmented reality context, where a virtual human is able to perform evaluation tests with an object composed of real and virtual components. In this paper our framework is described and a case study is presented.
- Published
- 2000
- Full Text
- View/download PDF
24. Requirements for an architecture for believable social agents
- Author
-
Daniel Thalmann and Anthony Guye-Vuillème
- Subjects
Human–computer interaction, Computer science, Multi-agent system, Functional requirement, Architecture, Simulation, Social sciences, Social agents, Agent-based social simulation, Social simulation
- Abstract
This paper introduces four sociological concepts which we argue are important for the creation of autonomous social agents capable of behaving and interacting realistically with each other as virtual humans. A list of functional requirements based on these concepts is then proposed.
- Published
- 2000
- Full Text
- View/download PDF
25. Verbal communication
- Author
-
Jean-Sébastien Monzani and Daniel Thalmann
- Subjects
Nonverbal communication, Multimedia, Software agent, Computer science, Multi-agent system, Sound propagation, Communication language, Natural language, Computer animation
- Abstract
We present a method for generating conversations between human-like agents by proposing specific parameters for inter-agent messages, with an approximation of virtual sound propagation. We are then able to simulate appropriate human-like behaviours automatically. For instance, we demonstrate how to create proper reactions for agents that are not able to understand, but only to hear, the sentences.
- Published
- 2000
- Full Text
- View/download PDF
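The hear-versus-understand distinction described above can be modelled with a toy attenuation rule: a message's intensity falls off with distance, and two thresholds separate "understood", merely "heard", and "unnoticed". The 1/d law and the threshold values are assumptions for illustration only, not the paper's sound model.

```python
# Toy classification of a spoken message by the intensity reaching a listener.

def perceive(loudness, distance, hear_at=0.2, understand_at=0.5):
    """Classify a message by the intensity that reaches the listener."""
    intensity = loudness / max(distance, 1.0)  # simple distance attenuation
    if intensity >= understand_at:
        return "understood"
    if intensity >= hear_at:
        return "heard"       # the agent reacts, but cannot parse the words
    return "unnoticed"

reaction = perceive(loudness=1.0, distance=3.0)
```

Having three outcomes rather than a binary heard/not-heard is what allows the "hears but does not understand" reactions the abstract describes.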
26. Tournament selection for browsing temporal signals
- Author
-
Daniel Thalmann and Ik Soo Lim
- Subjects
Focus (computing), Information retrieval, Computer science, Process (computing), Tournament selection, Presentation, Peripheral vision, Computer vision, Artificial intelligence, User interface, Set (psychology)
- Abstract
Presentation of audio or video files for browsing is difficult due to their serial and transitory nature, whereas texts or photos may simply be placed in the user's visual field. This paper describes a simple technique for browsing and selecting among temporal signals such as audio and video. In the proposed method, only two of a set of signals are presented to the user at a time, and (s)he just needs to select a 'winner' of the two. This process is repeated for many rounds until a single winner remains, analogous to the tournament matches in many sports. We applied this intuitive technique to browsing and selecting among computer-animation clips for motion parameter setting. It would also be applicable to browsing and selecting among retrieved candidates in audio/video retrieval systems. Keywords: browsing, retrieval, tournament, multimedia, audio, video, user interface.
- Published
- 2000
- Full Text
- View/download PDF
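The tournament loop described above can be sketched in a few lines: two candidates are presented at a time, the winner re-enters the pool, and rounds repeat until one remains. The preference function stands in for the user's choice; the rule used here (prefer the longer name) is made up purely so the example is runnable.

```python
# Pairwise tournament selection over a pool of candidates.

def tournament(candidates, prefer):
    """Pit candidates two at a time; return the overall winner."""
    pool = list(candidates)
    while len(pool) > 1:
        a, b = pool.pop(0), pool.pop(0)
        pool.append(prefer(a, b))   # the round's winner advances
    return pool[0]

clips = ["clip_3s", "clip_10s", "clip_7s", "clip_1s"]
best = tournament(clips, prefer=lambda a, b: max(a, b, key=len))
```

Each round only asks the user for a single binary judgment, which is why the scheme suits transitory signals that cannot all be displayed at once.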
27. Real-time animation and motion capture in Web human director (WHD)
- Author
-
Christian Babski and Daniel Thalmann
- Subjects
Computer animation, Internet, Computer science, Animation, Motion capture, Real-time systems, Virtual reality languages, Computer graphics (images), VRML, Skeletal animation, Plug-in, Java, Computer facial animation
- Abstract
Motion capture systems usually work in conjunction with complex 3D applications, such as 3D Studio Max by Kinetix or Maya by Alias/Wavefront. Once models have been created in these applications, motion capture systems provide the necessary data input to animate them. This paper introduces a simple motion capture system integrated into a web-based application, allowing HANIM humanoids to be animated using VRML and Java. Since a Web browser with a VRML plugin is commonly available, the presented motion capture application is easy to use on any platform. Using a standard language such as VRML also makes the produced animations easier to exploit than those of commercial applications with their proprietary formats.
- Published
- 2000
- Full Text
- View/download PDF
28. Direct 3D interaction with smart objects
- Author
-
Marcelo Kallmann and Daniel Thalmann
- Subjects
3D interaction ,data gloves ,Multimedia ,Computer science ,Smart objects ,business.industry ,Wired glove ,Limiting ,Virtual reality ,computer.software_genre ,Task (project management) ,graphical user interfaces ,Human–computer interaction ,Virtual image ,virtual reality ,business ,computer ,human factors ,Graphical user interface - Abstract
Performing 3D interactions with virtual objects easily becomes a complex task, limiting the implementation of larger applications. In order to overcome some of these limitations, this paper describes a framework where the virtual object aids the user in accomplishing pre-programmed interactions. Such objects are called Smart Objects, in the sense that they know how the user can interact with them, giving clues to aid the interaction. We show how such objects are constructed, and exemplify the framework with an application where the user, wearing a data glove, can easily open and close drawers of some furniture.
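The Smart Object idea, an object carrying its own interaction knowledge, can be illustrated with a minimal sketch; the class and method names below are hypothetical, chosen for illustration rather than taken from the paper's actual framework:

```python
class SmartObject:
    """Illustrative sketch of a 'smart object': the object itself
    knows which interactions it supports and how to perform them."""
    def __init__(self, name):
        self.name = name
        self._interactions = {}

    def add_interaction(self, label, handler):
        # pre-program a possible interaction with this object
        self._interactions[label] = handler

    def available_interactions(self):
        # the 'clues' the object can give the user about how
        # it may be manipulated
        return sorted(self._interactions)

    def interact(self, label):
        return self._interactions[label]()

# e.g. the drawer example from the abstract:
drawer = SmartObject("drawer")
drawer.add_interaction("open", lambda: "drawer slides out")
drawer.add_interaction("close", lambda: "drawer slides in")
```

The point of the design is that interaction logic lives with the object rather than in the application, so new objects extend the environment without changing the interaction code.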
- Published
- 1999
- Full Text
- View/download PDF
29. A seamless shape for HANIM compliant bodies
- Author
-
Christian Babski and Daniel Thalmann
- Subjects
computer animation ,Java ,Computer science ,deformation ,External authoring interface ,computer.file_format ,Set (abstract data type) ,real-time systems ,virtual reality languages ,Computer graphics (images) ,VRML ,authoring systems ,software standards ,computer ,Computer animation ,ComputingMethodologies_COMPUTERGRAPHICS ,Humanoid animation ,computer.programming_language - Abstract
A standard for representing avatars in VRML worlds has been specified by the HANIM (Humanoid ANIMation) Working Group. Avatars based on these specifications are generally composed of a set of body surfaces which do not ensure that continuity between connected body parts is satisfied. This paper proposes a generic method for performing real-time body deformations using Java and VRML. This method allows the generation of seamless bodies that are fully compatible with the HANIM standard.
- Published
- 1999
- Full Text
- View/download PDF
30. Integration of motion control techniques for virtual human and avatar real-time animation
- Author
-
Luc Emering, Pascal Bécheiraz, Ronan Boulic, and Daniel Thalmann
- Subjects
computer animation ,Inverse kinematics ,inverse problems ,Computer science ,business.industry ,motion control ,image sequences ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Animation ,Virtual reality ,Motion control ,Motion capture ,real-time systems ,kinematics ,virtual reality ,Computer vision ,Artificial intelligence ,Software architecture ,business ,Computer animation ,software engineering ,ComputingMethodologies_COMPUTERGRAPHICS ,Virtual actor - Abstract
Real-time animation of virtual humans requires a dedicated architecture for the integration of different motion control techniques running as so-called actions. In this paper, we describe a software architecture called AGENTlib for the management of action combination. Considered actions exploit various techniques, from keyframe sequence playback to inverse kinematics and motion capture. Two major requirements have to be enforced from the end user viewpoint: first, that multiple motion controllers can simultaneously control some parts or the whole of the virtual human, and second, that successive actions result in a smooth motion flow.
- Published
- 1997
- Full Text
- View/download PDF
31. VB2
- Author
-
Daniel Thalmann, Jean-Francis Balaguer, and Enrico Gobbetti
- Subjects
3D interaction ,Object-oriented programming ,Computer science ,Human–computer interaction ,Pattern recognition (psychology) ,User interface management systems ,Input device ,Virtual reality ,Visual appearance ,Computer animation - Abstract
The paper describes the VB2 architecture for the construction of three-dimensional interactive applications. The system's state and behavior are uniformly represented as a network of interrelated objects. Dynamic components are modeled by active variables, while multi-way relations are modeled by hierarchical constraints. Daemons are used to sequence between system states in reaction to changes in variable values. The constraint network is efficiently maintained by an incremental constraint solver based on an enhancement of SkyBlue. Multiple devices are used to interact with the synthetic world through the use of various interaction paradigms, including immersive environments with visual and audio feedback. Interaction techniques range from direct manipulation, to gestural input and three-dimensional virtual tools. Adaptive pattern recognition is used to increase input device expressiveness by enhancing sensor data with classification information. Virtual tools, which are encapsulations of visual appearance and behavior, present a selective view of manipulated models' information and offer an interaction metaphor to control it. Since virtual tools are first class objects, they can be assembled into more complex tools, much in the same way that simple tools are built on top of a modeling hierarchy. The architecture is currently being used to build a virtual reality animation system.
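The pairing of active variables and daemons that the abstract describes can be sketched briefly; this is a hedged reconstruction of the concept only, and the names below are not VB2's actual API:

```python
class ActiveVariable:
    """Minimal sketch of an active variable: a dynamic component whose
    value changes trigger daemons (callbacks), as in VB2's model."""
    def __init__(self, value):
        self._value = value
        self._daemons = []

    def watch(self, daemon):
        # register a daemon to fire whenever the value changes
        self._daemons.append(daemon)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        for daemon in self._daemons:
            daemon(old, new)  # sequence system state in reaction to change

# e.g. a daemon reacting to a tracked position update:
log = []
pos = ActiveVariable(0)
pos.watch(lambda old, new: log.append((old, new)))
pos.value = 5
```

In the full architecture these variables additionally participate in multi-way constraints maintained by an incremental solver; the sketch shows only the change-notification half.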
- Published
- 1993
- Full Text
- View/download PDF
32. Dressing animated synthetic actors with complex deformable clothes
- Author
-
Nadia Magnenat Thalmann, Daniel Thalmann, Ying Yang, and Michel Carignan
- Subjects
computer animation ,Cloth modeling ,Garment design ,General Computer Science ,Interface (Java) ,business.industry ,Human body ,Clothing ,Computer Graphics and Computer-Aided Design ,Object (philosophy) ,GeneralLiterature_MISCELLANEOUS ,Motion (physics) ,Computer graphics (images) ,business ,Computer animation ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Discusses the use of physics-based models for animating clothes on synthetic actors in motion. In this approach, cloth pieces are first designed with polygonal panels in two dimensions, and are then seamed and attached to the actor's body in three dimensions. After the clothes are created, physical properties are simulated and then clothes are animated according to the actor's motion in a physical environment. The paper describes the physical models used and then addresses several problems encountered. It examines how to constrain the elements of deformable objects which are either seamed together or attached to rigid moving objects. It also describes a new approach to the problem of handling collisions among the cloth elements themselves, or between a cloth element and a rigid object like the human body. Finally, the paper discusses how to reduce the number of parameters for improving the interface between the animator and the physics-based model.
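A common building block of such physics-based cloth models is an elastic force between neighboring cloth particles; the Hooke-style spring below is an illustrative example of that class of model, not the paper's specific formulation:

```python
import math

def spring_force(pa, pb, rest_length, stiffness):
    """Elastic force on particle A exerted by a spring to particle B.
    Illustrative of physics-based cloth elements: the force pulls the
    pair back toward its rest length."""
    dx = [b - a for a, b in zip(pa, pb)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0.0:
        return [0.0] * len(dx)  # coincident particles: no defined direction
    magnitude = stiffness * (dist - rest_length)  # Hooke's law
    return [magnitude * d / dist for d in dx]
```

Summing such forces over seams and mesh neighbors, plus gravity and collision responses, and integrating over time is what animates the cloth with the actor's motion.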
- Published
- 1992
- Full Text
- View/download PDF
33. The 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI22, Guangzhou, China, December 27-29, 2022
- Author
-
Enhua Wu, Lionel Ming-shuan Ni, Zhigeng Pan, Daniel Thalmann, Ping Li 0016, Charlie C. L. Wang, Lei Zhu 0003, and Minghao Yang
- Published
- 2022
- Full Text
- View/download PDF
34. Proceedings of Computer Graphics International 2018, CGI 2018, Bintan Island, Indonesia, June 11-14, 2018
- Author
-
Nadia Magnenat-Thalmann, Jinman Kim, Holly E. Rushmeier, Bruno Lévy 0001, Hao (Richard) Zhang, and Daniel Thalmann
- Published
- 2018
35. Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry, VRCAI 2018, Hachioji, Japan, December 02-03, 2018
- Author
-
Koji Mikami, Zhigeng Pan, Matt Adcock, Daniel Thalmann, Xubo Yang, Tomoki Itamiya, and Enhua Wu
- Published
- 2018
- Full Text
- View/download PDF
36. Proceedings of the 23rd International ACM Conference on 3D Web Technology, Web3D 2018, Poznań, Poland, June 20-22, 2018
- Author
-
Krzysztof Walczak 0001, Gabriel Zachmann, Jakub Flotynski, Kiyoshi Kiyokawa, and Daniel Thalmann
- Published
- 2018
37. Proceedings of the Computer Graphics International Conference, CGI 2017, Yokohama, Japan, June 27 - 30, 2017
- Author
-
Xiaoyang Mao, Daniel Thalmann, and Marina L. Gavrilova
- Published
- 2017
- Full Text
- View/download PDF
38. Short Paper Proceedings of the 33rd Computer Graphics International, Heraklion, Greece, June 28 - July 1, 2016
- Author
-
George Papagiannakis, Daniel Thalmann, and Panos E. Trahanias
- Published
- 2016
39. Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry, VRCAI 2016, Zhuhai, China, December 3-4, 2016
- Author
-
Yiyu Cai and Daniel Thalmann
- Published
- 2016
- Full Text
- View/download PDF
40. Proceedings of the 29th International Conference on Computer Animation and Social Agents, CASA 2016, Geneva, Switzerland, May 23-25, 2016
- Author
-
Nadia Magnenat-Thalmann, Marina L. Gavrilova, Taku Komura, Frederic Cordier, and Daniel Thalmann
- Published
- 2016
41. Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology, VRST 2015, Beijing, China, November 13-15, 2015
- Author
-
Qinping Zhao, Daniel Thalmann, Stephen N. Spencer, Enhua Wu, Ming C. Lin, and Lili Wang 0006
- Published
- 2015
- Full Text
- View/download PDF
42. The 19th ACM Symposium on Virtual Reality Software and Technology, VRST'13, Singapore, October 6-9, 2013
- Author
-
Nadia Magnenat-Thalmann, Enhua Wu, Susumu Tachi, Daniel Thalmann, Luciana Porcher Nedel, and Weiwei Xu
- Published
- 2013
- Full Text
- View/download PDF
43. Virtual Reality Continuum and its Applications in Industry, VRCAI 2012, Singapore, December 2-4, 2012
- Author
-
Daniel Thalmann, Enhua Wu, Zhigeng Pan, Abdennour El Rhalibi, Nadia Magnenat-Thalmann, and Matt Adcock
- Published
- 2012
44. Proceedings of the 8th International Conference on Virtual Reality Continuum and its Applications in Industry, VRCAI 2009, Yokohama, Japan, December 14-15, 2009
- Author
-
Stephen N. Spencer, Masayuki Nakajima 0001, Enhua Wu, Kazunori Miyata, Daniel Thalmann, and Zhiyong Huang 0001
- Published
- 2009
45. Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST 2009, Kyoto, Japan, November 18-20, 2009
- Author
-
Stephen N. Spencer, Yoshifumi Kitamura, Haruo Takemura, Kiyoshi Kiyokawa, Benjamin Lok, and Daniel Thalmann
- Published
- 2009
- Full Text
- View/download PDF
46. Proceedings of the 7th International Conference on Virtual Reality Continuum and its Applications in Industry, VRCAI 2008, Singapore, December 8-9, 2008
- Author
-
Susanto Rahardja, Enhua Wu, Daniel Thalmann, and Zhiyong Huang 0001
- Published
- 2008
47. Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST 2008, Bordeaux, France, October 27-29, 2008
- Author
-
Steven Feiner, Daniel Thalmann, Pascal Guitton, Bernd Fröhlich 0001, Ernst Kruijff, and Martin Hachet
- Published
- 2008
- Full Text
- View/download PDF
48. Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST 1997, Lausanne, Switzerland, September 15-17, 1997.
- Author
-
Daniel Thalmann, Steve Feiner, and Gurminder Singh
- Published
- 1997
- Full Text
- View/download PDF
49. Grimage
- Author
-
Jean-Sébastien Franco, Jean-Denis Lesage, Benjamin Petit, Edmond Boyer, Bruno Raffin (PERCEPTION and MOAIS teams, Inria Grenoble - Rhône-Alpes, Laboratoire Jean Kuntzmann (LJK), Laboratoire d'Informatique de Grenoble (LIG); IPARLA team, Inria Bordeaux - Sud-Ouest, Laboratoire Bordelais de Recherche en Informatique (LaBRI)), Steven Feiner, Daniel Thalmann, Pascal Guitton, Bernd Fröhlich, Ernst Kruijff, and Martin Hachet; supported by ANR-06-MDCA-0003, DALIA, Data trAnsfert for Large Interactive Applications (2006)
- Subjects
business.industry ,Virtual world ,Computer science ,[INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV] ,020207 software engineering ,3d model ,Input device ,02 engineering and technology ,Work in process ,3D modeling ,Rendering (computer graphics) ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,[INFO.INFO-DC]Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC] ,business - Abstract
Presentation type: demonstration; International audience; Real-time multi-camera 3D modeling provides full-body geometric and photometric data on the objects present in the acquisition space. It can be used as an input device for rendering textured 3D models, and for computing interactions with virtual objects through a physical simulation engine. In this paper we present a work in progress to build a collaborative environment where two distant users, each one 3D modeled in real-time, interact in a shared virtual world.
- Published
- 2008
- Full Text
- View/download PDF