104 results for "Mixed / augmented reality"
Search Results
2. Exploring the Extent of Usability for Augmented Profile Interfaces in Enhancing Conversation Experiences.
- Author
-
Ryu, Hyeyoung, Bang, Hyeonseok, Hwang, Daeun, and Kang, Younah
- Subjects
INFORMATION needs, AUGMENTED reality, USER interfaces, SATISFACTION, CONVERSATION - Abstract
In this study, we investigated how to design a usable augmented reality (AR) profile conversation assistant, focusing on how and which information leads to an enhanced conversation experience and satisfaction. We drew on usability practices including user need interviews, information disposition sessions, and an experiment comparing conversation experiences with and without the AR profile. We provide insights into how to design a user interface that can enhance users' conversation experience and satisfaction compared to existing interfaces, especially in terms of the type, quantity, and placement of information on the AR profile. The three main design insights are to (1) limit the topics to personal information, recent events, preferences, and hobbies; (2) use a text-based card format with emojis and make a clear distinction between preferred and not preferred topics through differences in font size and placement; and (3) limit the information to fewer than four pages. However, we were not able to resolve the guilt and artificiality users feel in acquiring information about others from the AR profile. Thus, to resolve this problem, we suggest shifting from a techno-solutionist perspective to breaking the illusion of the omnipotence of technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. ShellNeRF: Learning a Controllable High‐resolution Model of the Eye and Periocular Region.
- Author
-
Li, G., Sarkar, K., Meka, A., Buehler, M., Mueller, F., Gotardo, P., Hilliges, O., and Beeler, T.
- Subjects
FACE-to-face communication, PROBLEM solving, EYELASHES, EYE, MOTION capture (Human mechanics), SIGNALS & signaling, TELEPRESENCE - Abstract
Eye gaze and expressions are crucial non‐verbal signals in face‐to‐face communication. Visual effects and telepresence demand significant improvements in personalized tracking, animation, and synthesis of the eye region to achieve true immersion. Morphable face models, in combination with coordinate‐based neural volumetric representations, show promise in solving the difficult problem of reconstructing intricate geometry (eyelashes) and synthesizing photorealistic appearance variations (wrinkles and specularities) of eye performances. We propose a novel hybrid representation ‐ ShellNeRF ‐ that builds a discretized volume around a 3DMM face mesh using concentric surfaces to model the deformable 'periocular' region. We define a canonical space using the UV layout of the shells that constrains the space of dense correspondence search. Combined with an explicit eyeball mesh for modeling corneal light‐transport, our model allows for animatable photorealistic 3D synthesis of the whole eye region. Using multi‐view video input, we demonstrate significant improvements over state‐of‐the‐art in expression re‐enactment and transfer for high‐resolution close‐up views of the eye region. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. A comparison study between XR interfaces for driver assistance in take over request
- Author
-
Abhishek Mukhopadhyay, Vinay Krishna Sharma, Prashant Gaikwad Tatyarao, Aumkar Kishore Shah, Ananthram M C Rao, P Raj Subin, and Pradipta Biswas
- Subjects
ISO 9241 pointing task, Lane detection, Automatic lane navigation, Motion path planning, Mixed / Augmented reality, Human factors, Transportation engineering, TA1001-1280 - Abstract
Extended Reality (XR) is the umbrella term for Augmented, Virtual, and Mixed Reality interfaces. XR interfaces also enable natural and intuitive interaction with secondary driving tasks such as maps, music, and calls without the need to take the eyes off the road. This work evaluates the ISO 9241 pointing task in XR interfaces and analyzes the ease of interaction and the physical and mental effort required in augmented and mixed reality interfaces. With fully automated vehicles still some research years away from everyday reality, the driver of a semi-automated vehicle must be prepared to intervene upon a Take Over Request. In such cases, human drivers may not be required to have full manual control of the vehicle throughout its driving operation, but instead intervene as and when required and passively supervise the rest of the time. In this paper, we evaluate the impact of using XR interfaces in assisting drivers with take over requests and during the first second of controlling the vehicle. A prototype of a simulated semi-autonomous driving assistance system was developed with similar interfaces in AR and MR. User studies were performed for a comparative analysis of mixed reality and augmented reality displays/interfaces based on response time to take over requests. In both the ISO 9241 pointing task and the automotive task, the AR interface took significantly less time than the MR interface in terms of task performance. Participants also reported significantly lower mental and physical effort with the screen-based AR interface than with the HoloLens-based MR interface.
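For readers unfamiliar with the ISO 9241 pointing task referenced above, pointing performance in such studies is typically summarized as Fitts'-law throughput. Below is a minimal Python sketch of that standard measure; the target geometry and the AR/MR movement times are illustrative placeholders, not values from the paper.

```python
import math

def fitts_id(distance_mm: float, width_mm: float) -> float:
    """Shannon formulation of Fitts' index of difficulty (bits),
    as used in ISO 9241-9 style pointing evaluations."""
    return math.log2(distance_mm / width_mm + 1.0)

def throughput(distance_mm: float, width_mm: float, movement_time_s: float) -> float:
    """Throughput in bits/s for one pointing condition."""
    return fitts_id(distance_mm, width_mm) / movement_time_s

# Hypothetical AR vs. MR condition: same targets, different mean selection times.
d, w = 300.0, 40.0                      # target distance and width (mm), illustrative
print(f"ID = {fitts_id(d, w):.2f} bits")
print(f"AR throughput: {throughput(d, w, 0.9):.2f} bits/s")   # 0.9 s mean time (made up)
print(f"MR throughput: {throughput(d, w, 1.4):.2f} bits/s")   # 1.4 s mean time (made up)
```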
- Published
- 2023
- Full Text
- View/download PDF
5. The Impacts of Referent Display on Gesture and Speech Elicitation.
- Author
-
Williams, Adam S. and Ortega, Francisco R.
- Subjects
SPEECH & gesture, PARTICIPATORY design, GESTURE, AUTOMATIC speech recognition, SOCIAL interaction, AUGMENTED reality - Abstract
Elicitation studies have become a popular method of participatory design. While traditionally used to examine unimodal gesture interactions, elicitation has started being used with other novel interaction modalities. Unfortunately, no prior work has examined the impact of referent display on elicited interaction proposals. To address that concern, this work provides a detailed comparison between two elicitation studies that were similar in design apart from the way participants were prompted for interaction proposals (i.e., the referents). Based on this comparison, the impact of referent display on speech and gesture interaction proposals is discussed for each modality. The interaction proposals between these elicitation studies were not identical. Gesture proposals were the least impacted by referent display, showing high proposal similarity between the two works. Speech proposals were highly biased by text referents, with proposals directly mirroring text-based referents an average of 69.36% of the time. In short, the way referents are presented during elicitation studies can impact the resulting interaction proposals; however, the level of impact depends on the modality of input elicited. [ABSTRACT FROM AUTHOR]
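Comparisons between elicitation studies such as this one usually rest on agreement scores computed over the collected proposals. As a point of reference, here is a small sketch of the Vatavu & Wobbrock (2015) agreement-rate formula with made-up proposal data; the paper's own similarity analysis may differ.

```python
from collections import Counter

def agreement_rate(proposals):
    """Vatavu & Wobbrock (2015) agreement rate for one referent:
    AR = sum |P_i|(|P_i|-1) / (|P|(|P|-1)), where the P_i are groups of
    identical proposals and P is the multiset of all proposals."""
    n = len(proposals)
    if n < 2:
        return 1.0 if n == 1 else 0.0
    groups = Counter(proposals)
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Hypothetical elicitation data for one referent ("delete object"):
gesture_props = ["swipe-away", "swipe-away", "crumple", "swipe-away", "tap-x"]
speech_props  = ["delete", "delete", "delete", "remove", "delete"]
print(f"gesture AR = {agreement_rate(gesture_props):.2f}")   # 0.30
print(f"speech  AR = {agreement_rate(speech_props):.2f}")    # 0.60
```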
- Published
- 2022
- Full Text
- View/download PDF
6. A3RT: Attention-Aware AR Teleconferencing with Life-Size 2.5D Video Avatars
- Author
-
Wang, Xuanyu, Zhang, Weizhan, and Fu, Hongbo
- Abstract
Augmented Reality (AR) teleconferencing aims to enable remotely separated users to meet with each other in their own physical spaces as if they were face-to-face. Among all solutions, the video-avatar-based approach has the advantage of balancing fidelity and the sense of co-presence using easy-to-set-up devices, including only a camera and an AR Head-Mounted Display (HMD). However, non-verbal cues indicating 'who is looking at whom' are always lost or misdelivered in multiparty teleconferencing experiences. To make users aware of such non-verbal cues, existing solutions explore screen-based visualizations, incorporate additional hardware, or switch to a virtual avatar representation. However, they lack immersion, are less feasible for everyday use due to complex installations, or lose the fidelity of remote users' authentic appearances. In this paper, we decompose such attention awareness into the awareness of being looked at and the awareness of attention between other users, and address them in a decoupled process. Specifically, through a user study, we first find an unobtrusive and reasonable layout, 'Attention Circle', to retarget a looker's head gaze to the one being looked at. We then conduct a second user study to find an effective and intuitive 'rotatable 2.5D video avatar with attention thumbnail' visualization to aid users in being aware of other users' attention. With the design choices distilled from the studies, we implement A3RT, a proof-of-concept prototype system that empowers attention-aware 2.5D-video-avatar-based multiparty AR teleconferencing in an easy, everyday setup. Ablation and usability studies on the prototype verify the effectiveness of our proposed components and the full system.
- Published
- 2024
7. Tilt Map: Interactive Transitions Between Choropleth Map, Prism Map and Bar Chart in Immersive Environments.
- Author
-
Yang, Yalong, Dwyer, Tim, Marriott, Kim, Jenny, Bernhard, and Goodwin, Sarah
- Subjects
MAP design, TASK performance, VIRTUAL reality, DENSITY of states, POPULATION density, PRISMS - Abstract
We introduce Tilt Map, a novel interaction technique for intuitively transitioning between 2D and 3D map visualisations in immersive environments. Our focus is visualising data associated with areal features on maps, for example, population density by state. Tilt Map transitions from 2D choropleth maps to 3D prism maps to 2D bar charts to overcome the limitations of each. Our article includes two user studies. The first study compares subjects' task performance when interpreting population density data using 2D choropleth maps and 3D prism maps in virtual reality (VR). We observed greater task accuracy with prism maps, but faster response times with choropleth maps. The complementarity of these views inspired our hybrid Tilt Map design. Our second study compares Tilt Map to a side-by-side arrangement of the various views and to interactive toggling between views. The results indicate benefits for Tilt Map in user preference, in accuracy (versus side-by-side), and in time (versus toggle). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Exploring Smartphone-enabled Text Selection in AR-HMD.
- Author
-
Darbar, Rajkumar, Prouzeau, Arnaud, Odicio-Vilchez, Joan, Lainé, Thibault, and Hachet, Martin
- Subjects
SMARTPHONES, AUGMENTED reality, SOCIAL acceptance, CROWDSOURCING, COMPUTER graphics - Abstract
Text editing is important and at the core of most complex tasks, like writing an email or browsing the web. Efficient and sophisticated techniques exist on desktops and touch devices, but are still underexplored for Augmented Reality Head-Mounted Displays (AR-HMDs). Performing text selection, a necessary step before text editing, in AR displays commonly relies on techniques such as hand tracking, voice commands, and eye/head gaze, which are cumbersome and lack precision. In this paper, we explore the use of a smartphone as an input device to support text selection in AR-HMDs because of its availability, familiarity, and social acceptability. We propose four eyes-free text selection techniques, all using a smartphone: continuous touch, discrete touch, spatial movement, and raycasting. We compare them in a user study where users have to select text at various granularity levels. Our results suggest that continuous touch, in which the smartphone is used as a trackpad, outperforms the other three techniques in terms of task completion time, accuracy, and user preference. [ABSTRACT FROM AUTHOR]
- Published
- 2021
9. High Dynamic Range Point Clouds for Real‐Time Relighting.
- Author
-
Sabbadin, Manuele, Palma, Gianpaolo, Banterle, Francesco, Boubekeur, Tamy, and Cignoni, Paolo
- Subjects
POINT cloud, AUGMENTED reality, MAP collections, ELECTRONIC data processing, HIGH dynamic range imaging - Abstract
Acquired 3D point clouds make quick modeling of virtual scenes from the real world possible. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, standard relighting environments exploit global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that can cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings or with low-quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step against a perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. Hybrid Touch/Tangible Spatial Selection in Augmented Reality
- Author
-
Mickael Sereno, Stéphane Gosset, Lonni Besançon, and Tobias Isenberg (Inria Saclay / Université Paris-Saclay / CNRS, LISN; Linköping University)
- Subjects
Scientific visualization, InformationSystems_INFORMATIONINTERFACESANDPRESENTATION (e.g., HCI), spatial selection, [SCCO.COMP]Cognitive science/Computer science, Mixed / augmented reality, Touch screens, Human-computer interaction (HCI), Computer Graphics and Computer-Aided Design, ComputingMethodologies_COMPUTERGRAPHICS - Abstract
We study tangible touch tablets combined with Augmented Reality Head-Mounted Displays (AR-HMDs) to perform spatial 3D selections. We are primarily interested in the exploration of 3D unstructured datasets such as point clouds or volumetric datasets. AR-HMDs immerse users by showing datasets stereoscopically, and tablets provide a set of 2D exploration tools. Because AR-HMDs merge the visualization, interaction, and the users' physical spaces, users can also use the tablets as tangible objects in their 3D space. Nonetheless, the tablets' touch displays provide their own visualization and interaction spaces, separated from those of the AR-HMD. This raises several research questions compared to traditional setups. In this paper, we theorize, discuss, and study different available mappings for manual spatial selections using a tangible tablet within an AR-HMD space. We then study the use of this tablet within a 3D AR environment, compared to its use with a 2D external screen.
- Published
- 2022
- Full Text
- View/download PDF
11. HYDROSYS – A Mixed Reality Platform for On-Site Visualization of Environmental Data
- Author
-
Nurminen, Antti, Kruijff, Ernst, and Veas, Eduardo; in: Tanaka, Katsumi, Fröhlich, Peter, and Kim, Kyoung-Sook (editors)
- Published
- 2011
- Full Text
- View/download PDF
12. Towards Efficient Visual Guidance in Limited Field-of-View Head-Mounted Displays.
- Author
-
Bork, Felix, Schnelzer, Christian, Eck, Ulrich, and Navab, Nassir
- Subjects
MIXED reality, HEAD-mounted displays, THREE-dimensional display systems, VIRTUAL reality, RADAR - Abstract
Understanding, navigating, and performing goal-oriented actions in Mixed Reality (MR) environments is a challenging task and requires adequate information conveyance about the location of all virtual objects in a scene. Current Head-Mounted Displays (HMDs) have a limited field-of-view in which augmented objects may be displayed. Furthermore, complex MR environments may comprise a large number of objects distributed in the extended surrounding space of the user. This paper presents two novel techniques for visually guiding the attention of users towards out-of-view objects in HMD-based MR: the 3D Radar and the Mirror Ball. We evaluate our approaches against existing techniques during three different object collection scenarios, which simulate real-world exploratory and goal-oriented visual search tasks. To better understand how the different visualizations guide the attention of users, we analyzed the head rotation data for all techniques and introduce a novel method to evaluate and classify head rotation trajectories. Our findings provide supporting evidence that the type of visual guidance technique impacts the way users search for virtual objects in MR. [ABSTRACT FROM AUTHOR]
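Radar-style guidance of the kind described above boils down to computing each object's bearing relative to the user's head and drawing it on a 2D overlay. A minimal sketch of that bearing computation follows (generic vector math, not the authors' implementation); here positive angles denote objects to the user's left.

```python
import numpy as np

def radar_bearing(head_pos, head_forward, head_up, obj_pos):
    """Signed horizontal angle (degrees) from the head's forward direction
    to an object, measured in the head's ground plane. Positive values are
    counter-clockwise around 'up', i.e. to the user's left."""
    up = np.asarray(head_up, float); up /= np.linalg.norm(up)
    fwd = np.asarray(head_forward, float)
    to_obj = np.asarray(obj_pos, float) - np.asarray(head_pos, float)
    # Project both directions into the plane orthogonal to 'up'.
    fwd = fwd - np.dot(fwd, up) * up
    to_obj = to_obj - np.dot(to_obj, up) * up
    fwd /= np.linalg.norm(fwd); to_obj /= np.linalg.norm(to_obj)
    angle = np.degrees(np.arccos(np.clip(np.dot(fwd, to_obj), -1.0, 1.0)))
    sign = np.sign(np.dot(np.cross(fwd, to_obj), up))
    return sign * angle

# User looks down -Z; an object behind and to the right sits at about -135 degrees.
print(radar_bearing([0, 0, 0], [0, 0, -1], [0, 1, 0], [1, 0, 1]))
```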
- Published
- 2018
- Full Text
- View/download PDF
13. Recent Advances in Projection Mapping Algorithms, Hardware and Applications.
- Author
-
Grundhöfer, A. and Iwai, D.
- Subjects
THREE-dimensional modeling ,GEOMETRIC modeling ,GRAPHICAL projection ,COMPUTER simulation ,DATA modeling - Abstract
This State-of-the-Art Report covers the recent advances in research fields related to projection mapping applications. We summarize the novel enhancements that simplify the 3D geometric calibration task, which can now be reliably carried out either interactively or automatically using self-calibration methods. Furthermore, improvements regarding radiometric calibration and compensation as well as the neutralization of global illumination effects are summarized. We then introduce computational display approaches to overcome technical limitations of current projection hardware in terms of dynamic range, refresh rate, spatial resolution, depth-of-field, view dependency, and color space. These technologies contribute towards creating new application domains related to projection-based spatial augmentations. We summarize these emerging applications and discuss new directions for industries. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. Practical Radiometric Compensation for Projection Display on Textured Surfaces using a Multidimensional Model.
- Author
-
Li, Yuqi, Majumder, Aditi, Gopi, M., Wang, Chong, and Zhao, Jieyu
- Subjects
RADIOMETRIC methods ,PIXELS ,COMPUTER simulation ,MODEL-integrated computing ,REFLECTANCE - Abstract
Radiometric compensation methods remove the effect of the underlying spatially varying surface reflectance when projecting on textured surfaces. All prior work samples the surface-reflectance-dependent radiometric transfer function from the projector to the camera at every pixel, which requires the camera to observe tens or hundreds of images projected by the projector. In this paper, we cast the radiometric compensation problem as the sampling and reconstruction of a multi-dimensional radiometric transfer function that models the color transfer function from the projector to an observing camera and the surface reflectance in a unified manner. Such a multi-dimensional representation makes no assumption about the linearity of the projector-to-camera color transfer function and can therefore handle projectors with non-linear color transfer functions (e.g. DLP, LCOS, LED-based or laser-based). We show that a well-curated sampling of this multi-dimensional function, achieved by exploiting the following key properties, is adequate for its accurate representation: (a) the spectral reflectance of most real-world materials is smooth and can be well represented using a lower-dimensional function; (b) the reflectance properties of the underlying texture have strong redundancies (for example, multiple pixels or even regions can have similar surface reflectance); (c) the color transfer function from the projector to the camera has strong input coherence. The proposed sampling allows us to reduce the number of projected images that need to be observed by a camera by up to two orders of magnitude, the minimum being only two. We then present a new multi-dimensional scattered data interpolation technique to reconstruct the radiometric transfer function at a high spatial density (i.e. at every pixel) to compute the compensation image. We show that the accuracy of our interpolation technique is higher than that of any existing method. [ABSTRACT FROM AUTHOR]
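The paper's multidimensional sampling model is too involved to reproduce here, but the classic per-pixel linear baseline it improves upon is easy to state: the camera response is modeled as c = V p + f, where V folds in the local surface reflectance and f is the ambient offset, and the compensation image inverts that model. A hedged sketch with synthetic data (all values illustrative, not from the paper):

```python
import numpy as np

def fit_pixel_model(projected, captured):
    """Fit V (3x3) and f (3,) for one pixel from N >= 4 projected/captured
    RGB pairs via least squares. projected, captured: (N, 3) arrays."""
    A = np.hstack([projected, np.ones((projected.shape[0], 1))])  # (N, 4)
    X, *_ = np.linalg.lstsq(A, captured, rcond=None)              # (4, 3)
    return X[:3].T, X[3]                                          # V, f

def compensate(target, V, f):
    """Projector input that should make the camera observe 'target'."""
    return np.clip(np.linalg.solve(V, target - f), 0.0, 1.0)

# Synthetic check on one pixel of a reddish surface (illustrative values):
rng = np.random.default_rng(0)
V_true = np.diag([0.9, 0.5, 0.4]); f_true = np.array([0.05, 0.04, 0.03])
p_samples = rng.random((16, 3))
c_samples = p_samples @ V_true.T + f_true
V, f = fit_pixel_model(p_samples, c_samples)
print(compensate(np.array([0.4, 0.3, 0.3]), V, f))  # projector RGB for a gray-ish target
```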
- Published
- 2018
- Full Text
- View/download PDF
15. OA-SLAM: Leveraging Objects for Camera Relocalization in Visual SLAM
- Author
-
Zins, Matthieu, Simon, Gilles, and Berger, Marie-Odile (Inria Nancy - Grand Est / Université de Lorraine / CNRS, LORIA)
- Subjects
FOS: Computer and information sciences, Artificial intelligence, Computer graphics, Graphics systems and interfaces, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, [INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], Computer vision, Mixed / augmented reality, [INFO]Computer Science [cs], Computer vision problem, Computing methodologies - Abstract
In this work, we explore the use of objects in Simultaneous Localization and Mapping (SLAM) in unseen worlds and propose an object-aided system (OA-SLAM). More precisely, we show that, compared to low-level points, the major benefit of objects lies in their higher-level semantic and discriminating power. Points, on the contrary, have better spatial localization accuracy than the generic coarse models used to represent objects (cuboids or ellipsoids). We show that combining points and objects is of great interest for addressing the problem of camera pose recovery. Our main contributions are: (1) we improve the relocalization ability of a SLAM system using high-level object landmarks; (2) we build an automatic system capable of identifying, tracking, and reconstructing objects with 3D ellipsoids; (3) we show that object-based localization can be used to reinitialize or resume camera tracking. Our fully automatic system allows on-the-fly object mapping and enhanced pose-tracking recovery, which, we think, can significantly benefit the AR community. Our experiments show that the camera can be relocalized from viewpoints where classical methods fail. We demonstrate that this localization allows a SLAM system to continue working despite a tracking loss, which can happen frequently with an uninitiated user. Our code and test data are released at gitlab.inria.fr/tangram/oa-slam. (ISMAR 2022)
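As a rough illustration of the relocalization idea (not the authors' actual pipeline, which uses ellipsoid landmarks and solves data association), a coarse camera pose can be recovered from a handful of 2D object detections matched to mapped 3D object centers with a standard PnP solver such as OpenCV's:

```python
import numpy as np
import cv2  # pip install opencv-python

def relocalize_from_objects(centers_3d, detections_2d, K):
    """Coarse camera pose from object landmarks: 'centers_3d' are the centers
    of mapped objects (N >= 4), 'detections_2d' the matched detection centers
    in pixels (association assumed already solved). Returns (R, t), mapping
    world coordinates into the camera frame."""
    obj = np.asarray(centers_3d, np.float64).reshape(-1, 3)
    img = np.asarray(detections_2d, np.float64).reshape(-1, 2)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, None, flags=cv2.SOLVEPNP_EPNP, reprojectionError=8.0)
    if not ok:
        raise RuntimeError("relocalization failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return R, tvec
```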
- Published
- 2022
- Full Text
- View/download PDF
16. Safeguarding our Dance Cultural Heritage
- Author
-
Aristidou, Andreas, Chalmers, Alan, Chrysanthou, Yiorgos, Loscos, Celine, Multon, Franck, Sarupuri, Bhuvan, and Stavrakis, Efstathios (project ANR-17-JPCH-0004 SCHEDAR: Safeguarding the Cultural HEritage of Dance through Augmented Reality, 2017)
- Subjects
CCS Concepts: Computing methodologies → Computer graphics, Animation, Motion processing, Mixed / augmented reality, Virtual reality, Learning paradigms, [INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], [INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR] - Abstract
Folk dancing is a key aspect of intangible cultural heritage that often reflects the socio-cultural and political influences prevailing in different periods and nations; each dance produces a meaning, a story, with the help of music, costumes, and dance moves. It has been transmitted from generation to generation, and to different countries, mainly through the movements of people carrying and disseminating their civilization. However, folk dancing, among other intangible heritage, is at high risk of disappearing due to wars, the moving of populations, economic crises, and modernization, but most importantly because these fragile creations have been modified over time through the process of collective recreation and/or changes in the way of life. In this tutorial, we show how the European project SCHEDAR exploited emerging technologies to digitize, analyze, and holistically document our intangible heritage creations, which is a critical necessity for the preservation and continuity of our identity as Europeans.
- Published
- 2022
- Full Text
- View/download PDF
17. Audio Augmented Reality for Cultural Heritage Outdoors
- Author
-
Kritikos, Yannis, Giariskanis, Fotios, Mania, Katerina, Protopapadaki, Eftychia, Papanastasiou, Anthi, and Papadopoulou, Eleni
- Subjects
CCS Concepts: Human-centered computing → Mixed / augmented reality; Applied computing → Interactive learning environments - Abstract
Audio Augmented Reality (AAR) is a novel area of AR representing the augmentation of reality with auditory input. The way auditory input is combined with 3D superimposed information in AAR is challenging, especially in noisy and busy environments outdoors. This paper presents a novel, work-in-progress, mobile AAR experience that is deployed in a city environment while walking past seven archaeological excavation sites in the city of Chania, Crete, Greece. The proposed AAR experience embeds 3D graphics and audio related to the cultural content while a visitor walks around the city, offering a non-linear narrative. Visual and audio digital elements are accurately geo-located while a personalized AAR SoundScape Generator boosts audience creativity. AAR design is optimized for outdoor use.
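Geo-locating audio as described above typically reduces to a geo-fence check: play a site's soundscape when the visitor is within its trigger radius. A minimal sketch follows; the coordinates and radii are invented placeholders, not the project's actual sites.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 coordinates."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical excavation-site triggers in Chania: name, lat, lon, radius (m).
SITES = [("site_1", 35.5138, 24.0180, 25.0),
         ("site_2", 35.5151, 24.0215, 25.0)]

def active_sites(user_lat, user_lon):
    """Sites whose geo-fence the visitor is currently inside."""
    return [name for name, lat, lon, r in SITES
            if haversine_m(user_lat, user_lon, lat, lon) <= r]

print(active_sites(35.5139, 24.0181))   # ['site_1']
```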
- Published
- 2022
- Full Text
- View/download PDF
18. From Huh? to Aha! – a modern and thrilling approach to the user experience of structural measurements
- Author
-
Riedewald, Sandra, Corbett, David, and Garcia de Alcañiz Alonso, Juan Manuel
- Subjects
Camera metaphor, Artificial intelligence, Interaction design theory, concepts and paradigms, Mixed / augmented reality - Abstract
Does this bridge require maintenance? Is it worthwhile for me, as an investor, to purchase this building? To answer such questions, data is collected with specialized instruments, evaluated by experts, and passed through the hands of many people with varying levels of expertise. Understanding the stakeholders is a key element of both efficiency and error prevention. Screening Eagle Technologies is known in the industry not only for its technical innovations, which have received multiple awards, but also for the ease of use of its measuring devices and data processing software. The article introduces how, based on an amazingly simple operating metaphor, we consistently use AI, 3D, and AR to help our users avoid mistakes and work efficiently when collecting, interpreting, and passing on complex measurement data.
- Published
- 2022
- Full Text
- View/download PDF
19. Building Augmented and Virtual Reality Experiences for Children with Visual Diversity
- Author
-
Navarro-Newball, Andres Adolfo, Sierra Galvis, Martín Vladimir Alonso, Martínez, Juan Carlos, Betancourt, Juan José, Ramirez, Katherine, Velasquez, Andrés, Quinto, Valeria, Restrepo, Gerardo, Castillo, Andrés Darío, Asprilla, Elizabeth, Portilla, Anita, Serrano, Laura Lucia, Rodríguez, Frank Alexander, and Peñaloza, Eliana
- Subjects
Mixed / augmented reality, CCS Concepts: Computing methodologies → Virtual reality - Abstract
Currently, a binational network of universities carries out a collaborative project which seeks to promote inclusion and education in environmental issues for children. The so-called "Colombia-Québec collaborative project" seeks to develop interactive narratives about four Colombian animals to help develop language, cognitive, and motor skills in children while they gain awareness of endangered animals. The chosen animals are the cotton-top tamarin, the jaguar, the spectacled bear, and the condor. We are building several interactive systems which take advantage of augmented and virtual reality technologies to expand narratives developed by speech and language therapists. Our goal is to use these systems to study the effects of the virtuality continuum on visually diverse children's development. We present our advances towards achieving it.
- Published
- 2022
- Full Text
- View/download PDF
20. Content-rich and Expansive Virtual Environments Using Passive Props As World Anchors
- Author
-
Steven G. Wheeler, Robert W. Lindeman, Alexandra Covaci, Simon Hoermann, and George Ghinea
- Subjects
mixed / augmented reality, Computer science, human-centered computing, human computer interaction (HCI), Procedural generation, Virtual space, Virtual reality, Human–computer interaction, Content (measure theory), virtual reality, Limit (mathematics), interaction paradigms, Expansive, Haptic technology - Abstract
In this paper, we present a system that allows developers to add passive haptic feedback into their virtual reality applications by making use of existing physical objects in the user’s real environment. Our approach has minimal dependence on procedural generation and does not limit the virtual space to the dimensions of the physical play-area.
- Published
- 2021
21. Estimating the Pose of a Medical Manikin for Haptic Augmentation of a Virtual Patient in Mixed Reality Training
- Author
-
Scherfgen, David and Schild, Jonas
- Subjects
ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Human centered computing, Mixed / augmented reality, Virtual reality - Abstract
Virtual medical emergency training provides complex yet safe interactions with virtual patients. Haptically integrating a medical manikin into virtual training has the potential to improve the interaction with a virtual patient and the training experience. We present a system that estimates the 3D pose of a medical manikin in order to haptically augment a human model in a virtual reality training environment, allowing users to physically touch a virtual patient. The system uses an existing convolutional neural network (CNN) body keypoint detector to locate relevant 2D keypoints of the manikin in the images of the stereo camera built into a head-mounted display. The manikin's position, orientation, and joint angles are found by non-linear optimization. A preliminary analysis reports an error of 4.3 cm. The system is not yet capable of real-time processing.
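The core of such a system, estimating a pose by minimizing the 2D reprojection error of detected keypoints, can be sketched compactly. The version below fits only a rigid 6-DoF pose (the paper additionally optimizes joint angles) and assumes known pinhole intrinsics; it is a generic sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points_cam, fx, fy, cx, cy):
    """Pinhole projection of camera-space points (N, 3) to pixels (N, 2)."""
    z = points_cam[:, 2:3]
    return np.hstack([fx * points_cam[:, :1] / z + cx,
                      fy * points_cam[:, 1:2] / z + cy])

def rot_xyz(rx, ry, rz):
    """Rotation matrix from XYZ Euler angles (radians)."""
    cx_, sx = np.cos(rx), np.sin(rx)
    cy_, sy = np.cos(ry), np.sin(ry)
    cz_, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx_, -sx], [0, sx, cx_]])
    Ry = np.array([[cy_, 0, sy], [0, 1, 0], [-sy, 0, cy_]])
    Rz = np.array([[cz_, -sz, 0], [sz, cz_, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def fit_rigid_pose(model_pts, observed_px, fx, fy, cx, cy):
    """Rigid 6-DoF pose minimizing keypoint reprojection error."""
    def residuals(theta):
        R, t = rot_xyz(*theta[:3]), theta[3:]
        return (project(model_pts @ R.T + t, fx, fy, cx, cy) - observed_px).ravel()
    x0 = np.zeros(6); x0[5] = 2.0          # start 2 m in front of the camera
    sol = least_squares(residuals, x0)
    return rot_xyz(*sol.x[:3]), sol.x[3:]
```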
- Published
- 2021
- Full Text
- View/download PDF
22. Effective close-range accuracy comparison of Microsoft HoloLens generation one and two using Vuforia ImageTargets
- Author
-
Rieder, J.S.I., van Tol, D.H., and Aschenbrenner, D.
- Abstract
This paper analyzes the effective accuracy of close-range operations for the first and second generation of Microsoft HoloLens in combination with Vuforia Image Targets in a black-box approach. The implementation of Augmented Reality (AR) on optical see-through (OST) head-mounted devices (HMDs) has been proven viable for a variety of tasks, such as assembly, maintenance, or educational purposes. For most of these applications, minor localization errors are tolerated since no accurate alignment between the artificial and the real parts is required. For other potential applications, these accuracy errors represent a major obstacle. The "realistically achievable" accuracy remains largely unknown for close-range usage (e.g. within arm's reach of a user) for both generations of Microsoft HoloLens. Thus, the authors developed a method to benchmark and compare the applicability of these devices for tasks that demand higher accuracy, like composite manufacturing or medical surgery assistance. Furthermore, the method can be used for a broad variety of devices, establishing a platform for benchmarking and comparing these and future devices. This paper analyzes the performance of test users, who were asked to pinpoint the perceived location of holographic cones. The image recognition software package Vuforia was used to determine the spatial transform of the predefined ImageTarget. By comparing the user markings with the algorithmic locations, a mean deviation of 2.59 ±1.79 [mm] (HL 1) and 1.11 ±0.98 [mm] (HL 2) was found, which means that the mean accuracy improved by 57.1% and precision by 45.4%. The highest mean accuracy of a single test user was measured at 0.47 ±1.683 [mm] (HL 1) and 0.085 ±0.567 [mm] (HL 2).
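As a quick sanity check, the reported improvement percentages follow directly from the reported mean deviations (the precision figure lands at 45.3% with these rounded inputs versus the paper's 45.4%):

```python
# Reported mean deviations (mm): HoloLens 1 vs. HoloLens 2.
acc_hl1, acc_hl2 = 2.59, 1.11        # mean accuracy error
prec_hl1, prec_hl2 = 1.79, 0.98      # standard deviation (precision)

improvement = lambda old, new: 100.0 * (old - new) / old
print(f"accuracy improved by {improvement(acc_hl1, acc_hl2):.1f}%")    # ~57.1%
print(f"precision improved by {improvement(prec_hl1, prec_hl2):.1f}%") # ~45.3%
```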
- Published
- 2021
- Full Text
- View/download PDF
23. ProtoSketchAR: Prototyping in Augmented Reality via Sketchings
- Author
-
Arriu, Simone, Cherchi, Gianmarco, and Spano, Lucio Davide
- Subjects
Human centered computing, Mixed / augmented reality - Abstract
Prototyping is a widely used technique in the early stages of system design, and it is an essential part of a new product development process. During this phase, designers identify the main functionalities, concepts and contents of the system without creating a fully functional system. This paper aims to discuss the development of ProtoSketchAR, a tool enabling Augmented Reality (AR) prototyping by sketching. The application has different interaction modes, depending on the performed functionality. Basically, it is possible to create 2D/3D sketches to be placed in the real environment and to manipulate them. These functionalities allow the creation of virtual elements that can be used to prototype screens of AR applications. The application is web-based so that it can be run on any device with a compatible AR browser, regardless of the operating system used.
- Published
- 2021
- Full Text
- View/download PDF
24. A Collaborative Molecular Graphics Tool for Knowledge Dissemination with Augmented Reality and 3D Printing
- Author
-
Noizet, M., Peltier, V., Deleau, H., Dauchez, M., Prévost, S., and Jonquet-Prévoteau, J. (MEDyC UMR 7369 and LICIIS, Université de Reims Champagne-Ardenne)
- Subjects
Scientific visualization, Graphical user interfaces, Modeling methodologies, Mixed / augmented reality, [INFO.INFO-BI]Computer Science [cs]/Bioinformatics [q-bio.QM], [INFO.INFO-MO]Computer Science [cs]/Modeling and Simulation, [INFO.INFO-GR]Computer Science [cs]/Graphics [cs.GR], Computing methodologies, Human-centered computing - Abstract
We propose in this article a concept called "augmented 3D printing with molecular modeling" as an application framework. Visualization is an essential means of representing complex biochemical and biological objects in order to understand their structures and functions. By pairing augmented reality systems with 3D printing, we propose to design a new collaborative molecular graphics tool (under implementation) for scientific visualization and visual analytics. The printed object is then used as a support for the visual augmentation by allowing the superimposition of different visualizations. Thus, while still aware of their environment, users can easily communicate with collaborators while moving around the object. This user-friendly tool, dedicated to non-initiated scientists, will facilitate the dissemination of knowledge and collaboration between interdisciplinary researchers. Here, we present a first prototype and focus on the main molecule-tracking component. Initial feedback from our users suggests that our proposal is valid and shows a real interest in this type of tool, with an intuitive interface.
- Published
- 2021
- Full Text
- View/download PDF
25. SuBloNet: Sparse Super Block Networks for Large Scale Volumetric Fusion
- Author
-
Rückert, Darius and Stamminger, Marc
- Subjects
3D imaging, Mixed / augmented reality, Reconstruction, Computing methodologies - Abstract
Training and inference of convolutional neural networks (CNNs) on truncated signed distance fields (TSDFs) is a challenging task. Large parts of the scene are usually empty, which makes dense implementations inefficient in terms of memory consumption and compute throughput. However, due to the truncation distance, non-zero values are grouped around the surface, creating small dense blocks inside the large empty space. We show that this structure can be exploited by storing the TSDF in a block-sparse tensor and then decomposing it into rectilinear super blocks. A super block is a dense 3D cuboid of variable size and can be processed by conventional CNNs. We analyze the rectilinear decomposition and present a formulation for computing the bandwidth-optimal solution given a specific network architecture. However, this solution is NP-complete, so we also present a heuristic approach for fast training and inference tasks. We verify the effectiveness of SuBloNet and report a speedup of 4x over dense implementations and 1.7x over state-of-the-art sparse implementations. Using the super block architecture, we show that recurrent volumetric fusion is now possible on large-scale scenes. Such a system is able to reconstruct high-quality surfaces from a few noisy depth images.
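The rectilinear decomposition itself is easy to approximate. Below is a simple greedy heuristic in the spirit of the paper's super blocks; it is not the bandwidth-optimal formulation, which the abstract notes is intractable to solve exactly.

```python
import numpy as np

def greedy_super_blocks(occupied):
    """Greedily cover the occupied cells of a 3D block grid with axis-aligned
    cuboids ('super blocks'); a heuristic sketch, not the paper's optimal solver.
    occupied: boolean array (X, Y, Z). Returns a list of (min_corner, size)."""
    occ = occupied.copy()
    blocks = []
    for x, y, z in np.argwhere(occ):
        if not occ[x, y, z]:
            continue  # already covered by an earlier cuboid
        # Grow along x, then y, then z while the extended slab stays occupied.
        sx = 1
        while x + sx < occ.shape[0] and occ[x + sx, y, z]:
            sx += 1
        sy = 1
        while y + sy < occ.shape[1] and occ[x:x + sx, y + sy, z].all():
            sy += 1
        sz = 1
        while z + sz < occ.shape[2] and occ[x:x + sx, y:y + sy, z + sz].all():
            sz += 1
        occ[x:x + sx, y:y + sy, z:z + sz] = False   # mark as covered
        blocks.append(((x, y, z), (sx, sy, sz)))
    return blocks

# Toy grid: an L-shaped occupied region decomposes into two cuboids.
g = np.zeros((4, 4, 1), bool); g[0:4, 0, 0] = True; g[0, 1:3, 0] = True
print(greedy_super_blocks(g))   # [((0,0,0),(4,1,1)), ((0,1,0),(1,2,1))]
```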
- Published
- 2021
- Full Text
- View/download PDF
26. Exploring Upper Limb Segmentation with Deep Learning for Augmented Virtuality
- Author
-
Gruosso, Monica, Capece, Nicola, and Erra, Ugo
- Subjects
Image segmentation, Image processing, Mixed / augmented reality, Perception, Computing methodologies, Virtual reality, Neural networks, ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Sense of presence, immersion, and body ownership are among the main challenges concerning Virtual Reality (VR) and freehand interaction methods. Through specific hand-tracking devices, freehand methods allow users to use their hands to interact with the virtual environment (VE). To visualize the hands and ease freehand interaction, recent approaches take advantage of 3D meshes to represent the user's hands in the VE. However, this can reduce user immersion due to their unnatural correspondence with the real hands. To overcome this limit, we propose an augmented virtuality (AV) pipeline that allows users to visualize their real limbs in the VE. In particular, the limbs are captured by a single monocular RGB camera placed in an egocentric perspective, segmented using a deep convolutional neural network (CNN), and streamed into the VE. In addition, the hands are tracked through a Leap Motion controller to allow user interaction. We introduce two case studies as a preliminary investigation of this approach. Finally, both quantitative and qualitative evaluations of the CNN results are provided and highlight the effectiveness of the proposed CNN, which achieves remarkable results in several real-life unconstrained scenarios.
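The final compositing step of such an AV pipeline, once the CNN has produced a limb matte for an aligned egocentric camera frame, is a plain alpha blend. A minimal sketch follows; the array shapes and the assumption of per-pixel alignment are ours, not the paper's.

```python
import numpy as np

def composite_limbs(ve_frame, camera_frame, limb_mask):
    """Blend the segmented real limbs over the rendered VE frame.
    ve_frame, camera_frame: (H, W, 3) float images in [0, 1];
    limb_mask: (H, W) soft matte from the segmentation CNN in [0, 1]."""
    alpha = np.asarray(limb_mask, float)[..., None]
    return alpha * camera_frame + (1.0 - alpha) * ve_frame

# Toy 2x2 example: the mask keeps the camera pixel only in the top-left corner.
ve = np.zeros((2, 2, 3)); cam = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
print(composite_limbs(ve, cam, mask)[:, :, 0])   # [[1. 0.] [0. 0.]]
```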
- Published
- 2021
- Full Text
- View/download PDF
27. A Tabletop for the Natural Inspection of Decorative Surfaces
- Author
-
Kindsvater, Anton, Eibich, Tom David, Weier, Martin, and Hinkenjann, André
- Subjects
Visualization systems and tools, Human centered computing, Mixed / augmented reality, Visualization design and evaluation methods, Computing methodologies, Reflectance modeling - Abstract
Designs for decorative surfaces, such as flooring, must cover several square meters to avoid visible repeats. While the use of desktop systems is feasible to support the designer, it is challenging for a non-domain expert to get the right impression of the appearances of surfaces due to limited display sizes and a potentially unnatural interaction with digital designs. At the same time, large-format editing of structure and gloss is becoming increasingly important. Advances in the printing industry allow for more faithful reproduction of such surface details. Unfortunately, existing systems for visualizing surface designs cannot adequately account for gloss, especially for non-domain experts. Here, the complex interaction of light sources and the camera position must be controlled using software controls. As a result, only small parts of the data set can be properly inspected at a time. Also, real-world lighting is not considered here. This work presents a system for the processing and realistic visualization of large decorative surface designs. To this end, we present a tabletop solution that is coupled to a live 360° video feed and a spatial tracking system. This allows for reproducing natural view-dependent effects like real-world reflections, live image-based lighting, and the interaction with the design using virtual light sources employing natural interaction techniques that allow for a more accurate inspection even for non-domain experts.
- Published
- 2021
- Full Text
- View/download PDF
28. Material and Lighting Reconstruction for Complex Indoor Scenes with Texture-space Differentiable Rendering
- Author
-
Nimier-David, Merlin, Dong, Zhao, Jakob, Wenzel, and Kaplanyan, Anton
- Subjects
ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Mixed / augmented reality, Ray tracing, Computing methodologies → Reconstruction, Virtual reality, ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Modern geometric reconstruction techniques achieve impressive levels of accuracy in indoor environments. However, such captured data typically keeps lighting and materials entangled. It is then impossible to manipulate the resulting scenes in photorealistic settings, such as augmented / mixed reality and robotics simulation. Moreover, various imperfections in the captured data, such as missing detailed geometry, camera misalignment, uneven coverage of observations, etc., pose challenges for scene recovery. To address these challenges, we present a robust optimization pipeline based on differentiable rendering to recover physically based materials and illumination, leveraging RGB and geometry captures. We introduce a novel texture-space sampling technique and carefully chosen inductive priors to help guide reconstruction, avoiding low-quality or implausible local minima. Our approach enables robust and high-resolution reconstruction of complex materials and illumination in captured indoor scenes. This enables a variety of applications including novel view synthesis, scene editing, local & global relighting, synthetic data augmentation, and other photorealistic manipulations.
- Published
- 2021
- Full Text
- View/download PDF
29. Real-time Content Projection onto a Tunnel from a Moving Subway Train
- Author
-
Kim, Jaedong, Eom, Haegwang, Kim, Jihwan, Kim, Younghui, and Noh, Junyong
- Subjects
ComputerApplications_COMPUTERSINOTHERSYSTEMS, Mixed / augmented reality, Computing methodologies - Abstract
In this study, we present the first actual working system that can project content onto a tunnel wall from a moving subway train so that passengers can enjoy digital content displayed through a train window. To effectively estimate the position of the train in a tunnel, we propose counting sleepers, which are installed at regular intervals along the railway, using a distance sensor. The tunnel profile is constructed using point clouds captured by a depth camera installed next to the projector. The tunnel profile is used to identify projectable sections that will not contain too much interference from possible occluders. The tunnel profile is also used to retrieve the depth at a specific location so that properly warped content can be projected for viewing by passengers through the window while the train is moving at runtime. Here, we show that the proposed system can operate on an actual train.
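The position-estimation idea, counting regularly spaced sleepers with a distance sensor, amounts to edge-counting dead reckoning. A hedged sketch follows; the spacing and threshold are invented values, not the system's calibration.

```python
# Dead-reckoning a train's position from sleeper detections: a reading below
# the threshold means the sensor is currently above a sleeper, and each
# rising edge advances the position by one sleeper spacing.
SLEEPER_SPACING_M = 0.6   # assumed regular spacing along the track

class SleeperOdometer:
    def __init__(self, near_threshold_m=0.35):
        self.threshold = near_threshold_m
        self.count = 0
        self._over_sleeper = False

    def update(self, distance_m):
        """Feed one distance-sensor reading (sensor to track bed) and
        return the running position estimate in meters."""
        near = distance_m < self.threshold
        if near and not self._over_sleeper:
            self.count += 1            # entered a new sleeper
        self._over_sleeper = near
        return self.count * SLEEPER_SPACING_M

odo = SleeperOdometer()
for d in [0.5, 0.32, 0.31, 0.5, 0.5, 0.33, 0.5]:   # synthetic sensor stream
    pos = odo.update(d)
print(f"estimated position: {pos:.1f} m")          # 2 sleepers -> 1.2 m
```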
- Published
- 2021
- Full Text
- View/download PDF
30. The Role of Depth Perception in XR from a Neuroscience Perspective: A Primer and Survey
- Author
-
Hushagen, Vetle, Tresselt, Gustav C., Smit, Noeska N., and Specht, Karsten
- Subjects
Life and medical sciences, Surveys and overviews, Applied computing, Human centered computing, Mixed / augmented reality, General and reference, Virtual reality - Abstract
Augmented and virtual reality (XR) are potentially powerful tools for enhancing the efficiency of interactive visualization of complex data in biology and medicine. The benefits of visualization of digital objects in XR mainly arise from enhanced depth perception due to the stereoscopic nature of XR head mounted devices. With the added depth dimension, XR is in a prime position to convey complex information and support tasks where 3D information is important. In order to inform the development of novel XR applications in the biology and medicine domain, we present a survey which reviews the neuroscientific basis underlying the immersive features of XR. To make this literature more accessible to the visualization community, we first describe the basics of the visual system, highlighting how visual features are combined to objects and processed in higher cortical areas with a special focus on depth vision. Based on state of the art findings in neuroscience literature related to depth perception, we provide several recommendations for developers and designers. Our aim is to aid development of XR applications and strengthen development of tools aimed at molecular visualization, medical education, and surgery, as well as inspire new application areas.
- Published
- 2021
- Full Text
- View/download PDF
31. A Virtual Reality Platform for Immersive Education in Computer Graphics
- Author
-
Hácha, Filip, Vanecek, Petr, and Váša, Libor
- Subjects
Mesh models, Mixed / augmented reality, Computing methodologies, Virtual reality - Abstract
We present a general framework for teaching topics that demand 3D imagination, using commonly available hardware for virtual reality (VR). Our software allows collaborating on 3D objects and supports interaction and annotation in a simple and intuitive manner. Students can follow the exposition from an arbitrary, naturally chosen point of view, and they interact with the shared scene by adding annotations and using a pointing metaphor. The software allows using various VR headsets as well as opting for a traditional 3D rendering on a 2D screen. The framework has been tested in a small course on polygon mesh processing, identifying and eliminating the most serious flaws. In its current form, most of the sources of friction have been addressed and the system can be employed in a variety of teaching scenarios.
- Published
- 2021
- Full Text
- View/download PDF
32. Talk2Hand: Knowledge Board Interaction in Augmented Reality Easing Analysis with Machine Learning Assistants
- Author
-
Hong, Yu-Lun, Watson, Benjamin, Thompson, Kenneth, and Paul, Davis
- Subjects
Visual analytics, Human centered computing, Mixed / augmented reality, Semi-supervised learning, Theory of computation - Abstract
Analysts now often use machine learning (ML) assistants, but find them difficult to use, since most have little ML expertise. Talk2Hand improves the usability of ML assistants by supporting interaction with them using knowledge boards, which intuitively show association, visually aid human recall, and offer natural interaction that eases improvement of displayed associations and addition of new data into emerging models. Knowledge boards are familiar to most and studied by analytics researchers, but not in wide use, because of their large size and the challenges of using them for several projects simultaneously. Talk2Hand uses augmented reality to address these shortcomings, overlaying large but virtual knowledge boards onto typical analyst offices, and enabling analysts to switch easily between different knowledge boards. This paper describes our Talk2Hand prototype.
- Published
- 2021
- Full Text
- View/download PDF
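Entry 32's knowledge boards effectively encode cards and user-drawn associations as a graph that an ML assistant can learn from. A toy sketch under our own assumptions (the paper does not publish this data structure), including a naive semi-supervised step that spreads labels along associations:

    from dataclasses import dataclass, field

    @dataclass
    class Card:
        text: str
        label: str | None = None          # None = unlabeled evidence

    @dataclass
    class Board:
        cards: list[Card] = field(default_factory=list)
        links: set[tuple[int, int]] = field(default_factory=set)

        def associate(self, a: int, b: int) -> None:
            # An analyst-drawn association between two cards.
            self.links.add((min(a, b), max(a, b)))

        def propagate_labels(self, rounds: int = 3) -> None:
            # Naive label propagation: copy labels across links so the
            # emerging model benefits from board-level associations.
            for _ in range(rounds):
                for i, j in self.links:
                    ci, cj = self.cards[i], self.cards[j]
                    if ci.label and not cj.label:
                        cj.label = ci.label
                    elif cj.label and not ci.label:
                        ci.label = cj.label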
33. Projection Alignment Correction for In-Vehicle Projector-Camera System
- Author
-
Amano, Toshiyuki and Kagawa, Taichi
- Subjects
ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Human centered computing ,Mixed / augmented reality ,Computing methodologies - Abstract
In this study, we propose a projection registration method for projections from a continuously moving vehicle, for driver vision assistance during night driving. Accordingly, we employ a context-aware projection technique with adaptive pixel mapping generation. Because vehicle movement leads to misalignment of the projection due to latency, a co-axial projector-camera configuration or high-frame-rate processing cannot solve this problem. However, adaptive pixel mapping corrects the pixel mapping according to the vehicle speed and achieves misalignment-free dynamic projection mapping. The effectiveness of the proposed method was evaluated through experiments using a moving projector-camera system mounted on a motorized linear stage., ICAT-EGVE 2021 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Interaction and Applications, 47, 51, Toshiyuki Amano and Taichi Kagawa, CCS Concepts: Human-centered computing --> Mixed / augmented reality; Computing methodologies --> Mixed / augmented reality
- Published
- 2021
- Full Text
- View/download PDF
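The core of entry 33's speed-adaptive correction is predicting how far the scene moves relative to the projector during the projector-camera latency and pre-shifting the pixel map by that amount. A one-axis sketch, with parameter names of our own choosing rather than the paper's:

    def compensated_mapping(u, v, speed_mps, latency_s, px_per_m):
        # Displacement accumulated during the latency window,
        # converted to pixels and applied along the travel axis
        # (assumed here to be the image x-axis).
        shift_px = speed_mps * latency_s * px_per_m
        return u + shift_px, v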
34. Projection Mapping for In-Situ Surgery Planning by the Example of DIEP Flap Breast Reconstruction
- Author
-
Martschinke, Jana, Klein, Vanessa, Kurth, Philipp, Engel, Klaus, Ludolph, Ingo, Hauck, Theresa, Horch, Raymund, and Stamminger, Marc
- Subjects
Health informatics ,Applied computing ,Ray tracing ,Mixed / augmented reality ,Computing methodologies - Abstract
Nowadays, many surgical procedures require preoperative planning, mostly relying on data from 3D imaging techniques such as computed tomography or magnetic resonance imaging. However, preoperative assessment of this data is carried out on the PC (using classical CT/MR viewing software) and not on the patient's body itself. Surgeons therefore need to transfer both their overall understanding of the patient's individual anatomy and specific markers and labels for important points from the PC to the patient, aided only by imaginative power or approximate measurement. To close the gap between preoperative planning on the PC and surgery on the patient, we propose a system that directly projects preoperative knowledge onto the body surface by projection mapping. As a result, we are able to display both assigned labels and a volumetric, view-dependent view of the 3D data in situ. Furthermore, we offer a method to interactively navigate through the data and add 3D markers directly in the projected volumetric view. We demonstrate the benefits of our approach using DIEP flap breast reconstruction as an example. By means of a small pilot study, we show that our method outperforms standard surgical planning in accuracy and can easily be understood and utilized even by persons without any medical knowledge., Eurographics Workshop on Visual Computing for Biology and Medicine, Conspiring to cut people open, 145, 153, Jana Martschinke, Vanessa Klein, Philipp Kurth, Klaus Engel, Ingo Ludolph, Theresa Hauck, Raymund Horch, and Marc Stamminger, CCS Concepts: Applied computing --> Health informatics; Computing methodologies --> Ray tracing; Mixed / augmented reality
- Published
- 2021
- Full Text
- View/download PDF
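Projecting a preoperatively planned marker onto the patient's skin, as in entry 34, reduces to mapping a 3D point through a calibrated projector treated as an inverse pinhole camera. A minimal sketch; the calibration inputs K, R, t are assumed to come from a standard projector-camera calibration, not from the paper's specific pipeline:

    import numpy as np

    def project_marker(X_world, K, R, t):
        # x ~ K (R X + t): world-space marker -> projector pixel,
        # so the projector lights up exactly that point on the skin.
        Xc = R @ np.asarray(X_world, dtype=float) + t
        x = K @ Xc
        return x[:2] / x[2]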
35. Triggering the Past: Cultural Heritage Interpretation Using Augmented and Virtual Reality at a Living History Museum
- Author
-
Shitut, Kunal, Geigel, Joe, Decker, Juilee, Jacobs, Gary, and Doherty, Amanda
- Subjects
Applied computing ,Mixed / augmented reality ,Computers in other domains ,Computing methodologies ,Virtual reality - Abstract
In this paper, we present a use case for the introduction of historical digital characters in the context of a living history museum. We describe a prototype system and framework that uses augmented and virtual reality to place these characters in the museum space. The system uses a conversational interface for natural interaction, supports the scanning of objects in the museum space for guiding the conversation, and provides a common user experience on a variety of mixed reality devices, including the Microsoft Hololens, mobile devices, and WebXR-enabled Web browsers. We describe our character creation workflow, provide technical details on the implementation, and discuss the user testing of the system. Findings from our testing suggest that, despite the analog, hands-on tradition of living history museums, the use of immersive technologies has the potential to greatly enhance the visitor experience while engaging users within the physical space of the museum., Eurographics Workshop on Graphics and Cultural Heritage, Virtual Museums, 61, 70, Kunal Shitut, Joe Geigel, Juilee Decker, Gary Jacobs, and Amanda Doherty, CCS Concepts: Computing methodologies --> Mixed / augmented reality; Virtual reality; Applied computing --> Computers in other domains
- Published
- 2021
- Full Text
- View/download PDF
36. Conveying Firsthand Experience: The Circuit Parcours Technique for Efficient and Engaging Teaching in Courses about Virtual Reality and Augmented Reality
- Author
-
Dörner, Ralf and Horst, Robin
- Subjects
Multimedia information systems ,Computing education ,Information systems ,Human centered computing ,Mixed / augmented reality ,Social and professional topics - Abstract
Providing the opportunity for hands-on experience is crucial when teaching courses about Virtual Reality (VR) and Augmented Reality (AR). However, the workload on the educator's side for providing these opportunities might be prohibitive. In addition, other organizational challenges can arise; for example, demonstrations of VR/AR applications in a course might be too time-consuming, especially if the course is attended by many students. We present the Circuit Parcours Technique to meet these challenges. Here, in a well-organized event, stations with VR/AR demonstrations are provided in parallel, and students are enlisted to prepare and conduct the demonstrations. The event is embedded in a four-phase model. In this education paper, the technique is precisely described, examples of its flexible usage in different teaching situations are provided, advantages such as time efficiency are discussed, and lessons learned are shared from our experience of using this method for more than 10 years. Moreover, learning goals are identified that can be achieved with this technique besides gaining personal experience., Eurographics 2021 - Education Papers, Education Papers II, 15, 21, Ralf Dörner and Robin Horst, CCS Concepts: Social and professional topics --> Computing education; Information systems --> Multimedia information systems; Human-centered computing --> Mixed / augmented reality
- Published
- 2021
- Full Text
- View/download PDF
37. Compelling AR Earthquake Simulation with AR Screen Shaking
- Author
-
Chotchaicharin, Setthawut, Schirm, Johannes, Isoyama, Naoya, Monteiro, Diego Vilela, Uchiyama, Hideaki, Sakata, Nobuchika, and Kiyokawa, Kiyoshi
- Subjects
Human centered computing ,Mixed / augmented reality ,User studies ,Real-time simulation ,Virtual reality ,Computing methodologies - Abstract
Many countries around the world suffer losses from earthquake disasters. To reduce injuries to individuals, safety training is essential to raise people's preparedness. To conduct virtual training, previous work uses virtual reality (VR) to mimic the real world, without considering augmented reality (AR). Our goal is to simulate earthquakes in a familiar environment, for example in one's own office, helping users to take the simulation more seriously. With this approach, we make it possible to flexibly switch between different environments of different sizes, only requiring developers to adjust the furniture layout. We propose an AR earthquake simulation using a video see-through VR headset, then use real earthquake data and implement a novel AR screen shake technique, which simulates the forces applied to the user's head by shaking the entire view. We ran a user study (n=25) in which participants experienced an earthquake both in a VR scene and in two AR scenes with and without the AR screen shake technique. Along with a questionnaire, we collected real-time heart rate and balance information from participants for analysis. Our results suggest that both AR scenes offer a more compelling experience compared to the VR scene, and that the AR screen shake improved immediacy and was preferred by most participants. This showed us how virtual safety training can greatly benefit from an AR implementation, motivating us to further explore this approach for the case of earthquakes., ICAT-EGVE 2021 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Feedback and Registration, 73, 81, Setthawut Chotchaicharin, Johannes Schirm, Naoya Isoyama, Diego Vilela Monteiro, Hideaki Uchiyama, Nobuchika Sakata, and Kiyoshi Kiyokawa, CCS Concepts: Human-centered computing --> Mixed / augmented reality; Virtual reality; User studies; Computing methodologies --> Real-time simulation
- Published
- 2021
- Full Text
- View/download PDF
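Entry 37's AR screen shake amounts to sampling a recorded ground-displacement trace and adding the result to the camera pose every frame. A sketch with an assumed (t, dx, dy) sample format; the paper's actual data handling may differ:

    def shake_offset(seismogram, t, scale=1.0):
        # Linearly interpolate the recorded ground displacement at
        # time t (seconds); the result is added to the view origin,
        # simulating the forces applied to the user's head.
        for (t0, x0, y0), (t1, x1, y1) in zip(seismogram, seismogram[1:]):
            if t0 <= t <= t1:
                a = (t - t0) / (t1 - t0)
                return (scale * ((1 - a) * x0 + a * x1),
                        scale * ((1 - a) * y0 + a * y1))
        return (0.0, 0.0)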
38. Decoupled Localization and Sensing with HMD-based AR for Interactive Scene Acquisition
- Author
-
Søren Skovsen, Harald Haraldsson, Serge Belongie, Abe Davis, and Henrik Karstoft
- Subjects
SIMPLE (military communications protocol) ,HCI theory, concepts and models ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,02 engineering and technology ,Decoupling (cosmology) ,Visual feedback ,Tracking (particle physics) ,Human-centered computing ,Human computer interaction (HCI) ,Control theory ,0202 electrical engineering, electronic engineering, information engineering ,Structure from motion ,Mixed / augmented reality ,020201 artificial intelligence & image processing ,Computer vision ,Augmented reality ,Artificial intelligence ,business ,Interaction paradigms - Abstract
Real-time tracking and visual feedback make interactive AR-assisted capture systems a convenient and low-cost alternative to specialized sensor rigs and robotic gantries. We present a simple strategy for decoupling localization and visual feedback in these applications from the primary sensor being used to capture the scene. Our strategy is to use an AR HMD and a 6-DOF controller for tracking and feedback, synchronized with a separate primary sensor for capturing the scene. This approach allows for convenient real-time localization of sensors that cannot do their own localization (e.g., microphones). In this poster paper, we present a prototype implementation of this strategy and investigate the accuracy of decoupled tracking by mounting a high-resolution camera as the primary sensor and comparing decoupled runtime pose estimates to those of a high-resolution offline structure-from-motion reconstruction.
- Published
- 2020
- Full Text
- View/download PDF
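The decoupling in entry 38 works because the primary sensor is rigidly mounted to a tracked 6-DOF controller, so its world pose is a fixed-offset composition of homogeneous transforms. A sketch; in practice the mount offset would come from a hand-eye-style calibration:

    import numpy as np

    def sensor_pose(T_world_controller: np.ndarray,
                    T_controller_sensor: np.ndarray) -> np.ndarray:
        # Both inputs are 4x4 homogeneous transforms. The controller
        # pose comes from the HMD's tracking at capture time; the
        # fixed offset describes how the sensor is mounted on it.
        return T_world_controller @ T_controller_sensor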
39. When High Fidelity Matters: AR and VR Improve the Learning of a 3D Object
- Author
-
Jocelyne Troccaz, Amélie Rochet-Capellan, Thibault Louis, François Bérard, Nady Hoyek, Université Grenoble Alpes (UGA), Centre National de la Recherche Scientifique (CNRS), GIPSA - Perception, Contrôle, Multimodalité et Dynamiques de la parole (GIPSA-PCMD), GIPSA Pôle Parole et Cognition (GIPSA-PPC), Grenoble Images Parole Signal Automatique (GIPSA-lab), Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP ), Université Grenoble Alpes (UGA)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP ), Université Grenoble Alpes (UGA)-Grenoble Images Parole Signal Automatique (GIPSA-lab), Université Claude Bernard Lyon 1 (UCBL), Université de Lyon, ANR-16-CE38-0011,Anatomy2020,outils d'interaction avec le corps pour l'apprentissage actif de l'anatomie(2016), and ANR-19-P3IA-0003,MIAI,MIAI @ Grenoble Alpes(2019)
- Subjects
Computer science ,media_common.quotation_subject ,Fidelity ,Optical head-mounted display ,02 engineering and technology ,Virtual reality ,Task (project management) ,High fidelity ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,Mixed / augmented reality ,User studies ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,media_common ,05 social sciences ,Head mounted display ,050301 education ,Spatial augmented reality ,020207 software engineering ,Mental rotation ,Object (computer science) ,Human-centered computing ,Augmented reality ,0503 education ,Learning task - Abstract
Virtual and Augmented Reality environments have long been seen as having strong potential for educational applications. However, research showing actual evidence of their benefits is sparse. Indeed, some recent studies point to unnoticeable benefits, or even a detrimental effect due to an increase in cognitive demand on students when using these environments. In this work, we question whether a clear benefit of AR and VR can be robustly measured for a specific education-related task: learning a 3D object. We ran a controlled study in which we compared three interaction techniques. Two techniques are VR- and AR-based; they offer a High Fidelity (HF) virtual reproduction of observing and manipulating physical objects. The third technique is based on a multi-touch tablet and was used as a baseline. We selected a task of 3D object learning as one potentially benefitting from the HF reproduction of object manipulation. The experiment results indicate that VR and AR HF techniques can have a substantial benefit for education, as the object was recognized more than 27% faster when learnt using the HF techniques than when using the tablet.
- Published
- 2020
- Full Text
- View/download PDF
40. 3D Tabletop AR
- Author
-
Yann Laurillau, Carole Plasson, Laurence Nigay, Dominique Cunin, Laboratoire d'Informatique de Grenoble (LIG), Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP ), Université Grenoble Alpes (UGA), and ESAD Grenoble-Valence (ESAD Grenoble-Valence)
- Subjects
3D interaction ,Computer science ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI)/H.5.2: User Interfaces ,Mid-air ,020207 software engineering ,02 engineering and technology ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI)/H.5.1: Multimedia Information Systems/H.5.1.1: Artificial, augmented, and virtual realities ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI)/H.5.1: Multimedia Information Systems ,Task (project management) ,Human Computer Interaction (HCI) ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI) ,HMD ,Touch ,Human–computer interaction ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Decomposition (computer science) ,Table (database) ,Mixed / augmented reality ,Augmented reality ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Tabletop - Abstract
This paper contributes a first comparative study of three techniques for selecting 3D objects anchored to the table in tabletop Augmented Reality (AR). The impetus for this study is that touch interaction makes more sense when the targeted objects are anchored to the table. We experimentally compare touch and mixed (touch + mid-air) techniques with the common direct mid-air technique. The touch and mixed techniques decompose the 3D task into a 2D task performed by touch on the table, followed by a 1D task performed by touch or mid-air interaction. Results show that: (1) the touch and mixed techniques present completion times similar to the mid-air technique and are more accurate than it; (2) the mixed technique offers a good compromise between the accuracy of touch interaction and the speed of mid-air interaction.
- Published
- 2020
- Full Text
- View/download PDF
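Entry 40's touch and mixed techniques decompose 3D selection into a 2D pick on the table plane followed by a 1D height refinement. A toy version under our own naming assumptions:

    def pick(touch_xy, objects, height=None, radius=0.05):
        # objects: id -> (x, y, z) in table coordinates (metres).
        # Step 1 (2D): candidates whose footprint is near the touch.
        near = [(oid, p) for oid, p in objects.items()
                if (p[0] - touch_xy[0]) ** 2 + (p[1] - touch_xy[1]) ** 2
                < radius ** 2]
        if not near:
            return None
        if height is not None and len(near) > 1:
            # Step 2 (1D): refine by height among stacked candidates.
            return min(near, key=lambda op: abs(op[1][2] - height))[0]
        return min(near, key=lambda op: (op[1][0] - touch_xy[0]) ** 2
                                      + (op[1][1] - touch_xy[1]) ** 2)[0]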
41. Bring2Me: Bringing Virtual Widgets Back to the User's Field of View in Mixed Reality
- Author
-
Laurence Nigay, Charles Bailly, François Leitner, Laboratoire d'Informatique de Grenoble (LIG), Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP ), and Université Grenoble Alpes (UGA)
- Subjects
Focus (computing) ,Forcing (recursion theory) ,Computer science ,Head (linguistics) ,05 social sciences ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI)/H.5.2: User Interfaces ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,020207 software engineering ,Context (language use) ,Field of view ,02 engineering and technology ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI)/H.5.1: Multimedia Information Systems/H.5.1.1: Artificial, augmented, and virtual realities ,Mixed reality ,Field (computer science) ,Pointing task ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI) ,HMD ,Human–computer interaction ,Mixed Reality ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Mixed / Augmented reality ,050107 human factors - Abstract
Current Mixed Reality (MR) Head-Mounted Displays (HMDs) offer a limited Field Of View (FOV) of the mixed environment. Turning the head is thus necessary to visually perceive the virtual objects placed within the real world. However, turning the head also means losing the initial visual context. This limitation is critical in contexts like augmented surgery, where surgeons need to visually focus on the operative field. To address this limitation, we propose to bring virtual objects/widgets back into the user's FOV instead of forcing the user to turn their head. We carry out an initial investigation to demonstrate the approach by designing and evaluating three new menu techniques that first bring the menu back into the user's FOV before an item is selected. Results show that our three menu techniques are 1.5 s faster on average than the baseline head-motion menu technique and are largely preferred by participants.
- Published
- 2020
- Full Text
- View/download PDF
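The idea in entry 41 is to detect when a widget has left the HMD's narrow FOV and reposition it in front of the head rather than force a head turn. A geometric sketch; the threshold and placement policy are our guesses, not the paper's exact design:

    import numpy as np

    def bring_to_fov(widget_pos, head_pos, head_forward,
                     fov_deg=35.0, dist=0.6):
        # Angle between the gaze direction and the widget direction.
        to_widget = widget_pos - head_pos
        to_widget = to_widget / np.linalg.norm(to_widget)
        cos_a = np.clip(np.dot(to_widget, head_forward), -1.0, 1.0)
        if np.degrees(np.arccos(cos_a)) > fov_deg / 2:
            return head_pos + dist * head_forward   # bring it back
        return widget_pos                           # already visible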
42. Target Expansion in Context
- Author
-
Denis Morand, Laurence Nigay, Patrick Perea, Laboratoire d'Informatique de Grenoble (LIG), Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP ), Université Grenoble Alpes (UGA), and Schneider Electric ( SE)
- Subjects
Target Expansion ,Handheld augmented reality ,Point of interest ,Computer science ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI)/H.5.2: User Interfaces ,05 social sciences ,Fixed position ,020207 software engineering ,Context (language use) ,02 engineering and technology ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI)/H.5.1: Multimedia Information Systems/H.5.1.1: Artificial, augmented, and virtual realities ,Cursor (databases) ,Pointing ,ACM: H.: Information Systems/H.5: INFORMATION INTERFACES AND PRESENTATION (e.g., HCI) ,Task (computing) ,Handheld Augmented Reality ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,Mixed / augmented reality ,0501 psychology and cognitive sciences ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Relevant information ,Menu ,050107 human factors - Abstract
Target expansion techniques facilitate pointing by enlarging the effective sizes of targets. As opposed to the numerous studies on target expansion that focus solely on optimizing pointing, we study the compound task of pointing at a Point of Interest (POI) and then interacting with the POI menu in handheld Augmented Reality (AR). A POI menu in AR has a fixed position because it contains relevant information about its location in the real world. We present two techniques that make the cursor jump to the closest opened POI menu after pointing at a POI. Our experimental results show that (1) for selecting a POI, the expansion techniques are 31% faster than the baseline screen-centered crosshair pointing technique; (2) the expansion techniques with and without a jumping cursor offer similar performance; and (3) touch-relative pointing is preferred by participants because it minimizes physical movements.
- Published
- 2020
- Full Text
- View/download PDF
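Target expansion as in entry 42 enlarges each POI's effective radius in screen space and, on selection, snaps the cursor to the chosen POI's opened menu. A screen-space sketch; base_r and gain are illustrative parameters of ours:

    def expanded_target(cursor_xy, pois, base_r=20.0, gain=2.0):
        # pois: iterable of dicts with screen coordinates 'x', 'y'.
        # A POI is selectable when the crosshair falls inside its
        # *expanded* effective radius; the nearest candidate wins,
        # and the cursor would then jump to that POI's opened menu.
        hits = [(p, d) for p in pois
                if (d := ((p['x'] - cursor_xy[0]) ** 2 +
                          (p['y'] - cursor_xy[1]) ** 2) ** 0.5)
                < base_r * gain]
        return min(hits, key=lambda pd: pd[1])[0] if hits else None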
43. Atividade educacional utilizando realidade aumentada para o ensino de física no ensino superior : Educational activity using Augmented Reality for Teaching Physics in Higher Education
- Author
-
Herpich, Fabrício, Lima, Wilson V. C., Nunes, Felipe B., Lobo, Cesar O., and Tarouco, Liane Margarida Rockenbach
- Subjects
Ensino de física ,Aprendizagem móvel ,Simulação computacional ,Qualidade de software ,Mobile learning ,Ciencias Informáticas ,Software quality ,Multimídia ,Physics teaching ,Teaching-learning ,Multimedia ,Realidade aumentada ,Ensino-aprendizagem ,Computational simulation ,Mixed / augmented reality - Abstract
The use of Augmented Reality features on mobile devices for education has been explored more significantly in recent years. The objective of this article was therefore to evaluate the quality of an educational approach in this context in terms of Usability, Engagement, Motivation, and Learning. A study focused on the teaching and learning process of Physics was conducted with 27 students from a federal university, using the MAREEA questionnaire to evaluate the approach. The results were satisfactory and thought-provoking: all four dimensions were evaluated positively, and participants also gave significant feedback for improving the augmented-reality educational resources.
- Published
- 2020
44. Eating together while being apart: A pilot study on the effects of mixed-reality conversations and virtual environments on older eaters’ solitary meal experience and food intake
- Author
-
Federico J.A. Perez-Cueto, Jon Ram Bruun-Pedersen, Dannie Michael Korsgaard, Thomas Bjørner, and Pernille Krog Sørensen
- Subjects
Meal ,Food intake ,Energy (esotericism) ,media_common.quotation_subject ,digestive, oral, and skin physiology ,Information systems applications ,Living room ,Mixed reality ,Human computer interaction (HCI) ,Developmental psychology ,Collaborative and social computing systems and tools ,Mood ,Human-centered computing ,medicine ,Information systems ,Mixed / augmented reality ,Conversation ,Social isolation ,medicine.symptom ,Psychology ,Interaction paradigms ,media_common - Abstract
The aim of this study was to investigate the potential of mixed-reality systems to virtually manipulate the eating experience and facilitate increased food intake among older participants. Social isolation is often associated with undernourishment among older adults receiving care services. Mixed-reality systems that blend real elements and a virtual world can conveniently allow older adults to eat a meal in their home while experiencing having a conversation with friends through virtual avatars in a virtual environment. A within-subjects study on thirty older participants investigated whether the mixed-reality illusion of eating in a living room with and without familiar others contributed positively to the meal experience and increased energy intake. The results did not display any significant changes in energy intake but highlighted that the virtual living room had a more energetic and pleasant atmosphere and that meals eaten in the virtual room were perceived to be of a higher quality compared to meals eaten in the real lab environment. Eating while engaging in avatar-based social interactions with three remotely located friends resulted in lower sensations of being alone and positive mood changes. A discussion of the reasons for the absence of increases in energy intake is included.
- Published
- 2020
- Full Text
- View/download PDF
45. Eating together while being apart: A pilot study on the effects of mixed-reality conversations and virtual environments on older eaters' solitary meal experience and food intake
- Author
-
Korsgaard, Dannie, Bjorner, Thomas, Bruun-Pedersen, Jon R., Sorensen, Pernille K., and Perez-Cueto, Federico J.A.
- Abstract
The aim of this study was to investigate the potential of mixed-reality systems to virtually manipulate the eating experience and facilitate increased food intake among older participants. Social isolation is often associated with undernourishment among older adults receiving care services. Mixed-reality systems that blend real elements and a virtual world can conveniently allow older adults to eat a meal in their home while experiencing having a conversation with friends through virtual avatars in a virtual environment. A within-subjects study on thirty older participants investigated whether the mixed-reality illusion of eating in a living room with and without familiar others contributed positively to the meal experience and increased energy intake. The results did not display any significant changes in energy intake but highlighted that the virtual living room had a more energetic and pleasant atmosphere and that meals eaten in the virtual room were perceived to be of a higher quality compared to meals eaten in the real lab environment. Eating while engaging in avatar-based social interactions with three remotely located friends resulted in lower sensations of being alone and positive mood changes. A discussion of the reasons for the absence of increases in energy intake is included.
- Published
- 2020
46. A Review of Visual Perception Research in Optical See-Through Augmented Reality
- Author
-
Erickson, Austin, Kim, Kangsoo, Bruder, Gerd, and Welch, Gregory F.
- Subjects
Surveys and overviews ,Human centered computing ,Mixed / augmented reality ,General and reference - Abstract
In the field of augmented reality (AR), many applications involve user interfaces (UIs) that overlay visual information on the user's view of their physical environment, e.g., as text, images, or three-dimensional scene elements. In this scope, optical see-through head-mounted displays (OST-HMDs) are particularly interesting, as they typically use an additive light model: the perception of the displayed virtual imagery is a composite of the lighting conditions of one's environment, the coloration of the objects that make up the virtual imagery, and the coloration of physical objects that lie behind them. While a large body of literature has investigated the visual perception of UI elements in immersive and flat-panel displays, comparatively little effort has been spent on OST-HMDs. Due to the unique visual effects of OST-HMDs, we believe it is important to review the field to understand the perceptual challenges, research trends, and future directions. In this paper, we present a systematic survey of literature based on the IEEE and ACM digital libraries that explores users' perception of text-based information displayed on an OST-HMD, and we aim to provide relevant design suggestions based on the meta-analysis results. We carefully review 14 key papers relevant to visual perception research in OST-HMDs with UI elements, and present the current state of the research field, associated trends, noticeable research gaps in the literature, and recommendations for potential future research in this domain., ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Haptic and Visual Perception, 27, 35, Austin Erickson, Kangsoo Kim, Gerd Bruder, and Gregory F. Welch, CCS Concepts: Human-centered computing --> Mixed / augmented reality; General and reference --> Surveys and overviews
- Published
- 2020
- Full Text
- View/download PDF
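The additive light model described in entry 46 can be stated in one line: the eye receives the sum of the display's emitted light and the real background, so dark virtual pixels are effectively transparent and bright backgrounds wash out contrast. A sketch:

    import numpy as np

    def perceived_color(display_rgb, background_rgb):
        # OST-HMDs add light; they cannot subtract it.
        return np.clip(np.asarray(display_rgb, dtype=float)
                       + np.asarray(background_rgb, dtype=float), 0.0, 1.0)

    # White text stays readable against a dark wall but saturates
    # and loses contrast against a bright background:
    print(perceived_color([1, 1, 1], [0.1, 0.1, 0.1]))
    print(perceived_color([1, 1, 1], [0.9, 0.9, 0.9]))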
47. Towards Interactive Virtual Dogs as a Pervasive Social Companion in Augmented Reality
- Author
-
Norouzi, Nahal, Kim, Kangsoo, Bruder, Gerd, and Welch, Greg
- Subjects
Human centered computing ,Mixed / augmented reality - Abstract
Pets and animal-assisted intervention sessions have been shown to benefit humans' mental, social, and physical health. However, for specific populations, factors such as hygiene restrictions, allergies, and care and resource limitations reduce interaction opportunities. In parallel, understanding the capabilities of animals' technological representations, such as robotic and digital forms, has received considerable attention and has fueled the utilization of many of these technological representations. Additionally, recent advances in augmented reality technology have allowed for the realization of virtual animals with flexible appearances and behaviors that exist in the real world. In this demo, we present a companion virtual dog in augmented reality that aims to facilitate a range of interactions with populations such as children and older adults. We discuss the potential benefits and limitations of such a companion and propose future use cases and research directions., ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments - Posters and Demos, Demos, 29, 30, Nahal Norouzi, Kangsoo Kim, Gerd Bruder, and Greg Welch, CCS Concepts: Human-centered computing --> Mixed / augmented reality
- Published
- 2020
- Full Text
- View/download PDF
48. On the Use of Jumping Gestures for Immersive Teleportation in VR
- Author
-
Kruse, Lucie, Jung, Sungchul, Li, Richard, and Lindeman, Robert
- Subjects
Teleportation ,Jumping ,Usability ,Virtual Reality ,Presence ,Cybersickness ,Human centered computing ,Mixed / augmented reality ,Controller input ,User studies ,Gaze input ,Locomotion Techniques - Abstract
Virtual environments can be infinitely large, but users only have a limited amount of space in the physical world. One way to navigate within large virtual environments is through teleportation. Teleportation requires two steps: targeting a place and suddenly shifting to it. Conventional teleportation uses a controller to point to a target position and a button press or release to immediately teleport the user to that position. Since the teleportation does not require physical movement, the user can explore the entire virtual environment. However, as this is supernatural and can lead to momentary disorientation, it can break the sense of presence and thus degrade the overall virtual reality experience. To compensate for the downside of this technique, we explore the effects of a jumping gesture as a teleportation trigger. We conducted a study with two factors: (1) triggering method (Jumping and Standing), and (2) targeting method (Head-direction and Controller). We found that the conventional way of using a controller while standing showed better efficiency, the highest usability, and lower cybersickness. Nevertheless, Jumping + Controller invoked a high sense of engagement and fun, and therefore provides an interesting new technique, especially for VR games., ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Navigation in Virtual Environments, 113, 120, Lucie Kruse, Sungchul Jung, Richard Li, and Robert Lindeman, Keywords: Teleportation, Virtual Reality, Locomotion Techniques, Jumping, Presence, Cybersickness, Usability, Gaze-input, Controller-input; CCS Concepts: Human-centered computing --> User studies; Virtual reality; Mixed / augmented reality
- Published
- 2020
- Full Text
- View/download PDF
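A jump trigger like entry 48's can be approximated by watching the HMD's vertical velocity: arm on a fast upward motion, fire the teleport at the apex. The thresholds below are illustrative guesses, not the study's calibrated values:

    def jump_teleport(head_y, prev_head_y, dt, target, state):
        vy = (head_y - prev_head_y) / dt          # vertical velocity, m/s
        if not state.get('airborne') and vy > 1.2:
            state['airborne'] = True              # take-off detected
        elif state.get('airborne') and vy <= 0.0:
            state['airborne'] = False             # apex: fire the trigger
            return target                         # teleport destination
        return None                               # no teleport this frame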
49. Breathing Life into Statues Using Augmented Reality
- Author
-
Ioannou, Eleftherios and Maddock, Steve
- Subjects
Fine arts ,Image processing ,Applied computing ,Mixed / augmented reality ,Animation ,Computing methodologies ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
AR art is a relatively recent phenomenon, one that brings innovation in the way that artworks can be produced and presented in real-world locations and environments. We present an AR art app, running in real time on a smartphone, that can be used to bring to life inanimate objects such as statues. The work relies on a virtual copy of the real object, which is produced using photogrammetry, as well as a skeleton rig for subsequent animation. As part of the work, we present a new diminishing reality technique, based on the use of particle systems, to make the real object 'disappear' and be replaced by the animating virtual copy, effectively animating the inanimate. The approach is demonstrated on two objects: a juice carton and a small giraffe sculpture., Computer Graphics and Visual Computing (CGVC), AR and VR, 71, 78, Eleftherios Ioannou and Steve Maddock, CCS Concepts: Computing methodologies --> Mixed / augmented reality; Animation; Image processing; Applied computing --> Fine arts
- Published
- 2020
- Full Text
- View/download PDF
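Entry 49's diminishing-reality effect emits short-lived particles from the photogrammetry mesh so the real statue appears to dissolve while the rigged virtual copy takes over. A toy emitter under our own field-name assumptions:

    import random

    def spawn_dissolve_particles(surface_points, n=500, max_life_s=1.5):
        # surface_points: (x, y, z) samples of the scanned surface.
        # Each particle drifts away and expires, visually "eroding"
        # the real object while the animated copy is revealed.
        return [{
            'pos': random.choice(surface_points),
            'vel': (random.uniform(-0.1, 0.1),   # drift sideways
                    random.uniform(0.2, 0.5),    # rise upwards
                    random.uniform(-0.1, 0.1)),
            'life': random.uniform(0.5, max_life_s),
        } for _ in range(n)]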
50. First Contact - Take 2 Using XR to Overcome Intercultural Discomfort (racism)
- Author
-
Gunn, M. J., Sasikumar, P., and Bai, H.
- Subjects
Hardware ,Sensors ,Collaborative interaction ,Mixed / augmented reality ,Computer supported cooperative work ,Human-centered computing methodologies - Abstract
Digital/human encounter is explored through a suite of experiences that comprises common/room, an art installation that uses various modes of Extended Reality (XR) to support social engagement and generate connections with a view to reversing processes of societal atomisation and intercultural discomfort (racism). common/room is exhibited as an informal dining room where each table hosts a distinct experience, designed to bring people together for discussion, active listening and considered response. Although XR is my tool, real encounter is my aim. Each experience, be it using 360 3D video in a headset or projection, or AR, simulates informal face-to-face encounters in the setting of a domestic kitchen or dining room. Visitors to one such experience, First Contact - Take 2, are invited to sit at a dining table where, by wearing a head-mounted Augmented Reality display, they encounter a volumetric representation of an indigenous Maori woman seated opposite. She speaks out of her culture that has refined collective endeavour and relational psychology over millennia., ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments - Posters and Demos, Demos, 25, 26, M. J. Gunn, P. Sasikumar, and H. Bai, CCS Concepts: Human-centered computing methodologies --> Mixed / augmented reality; Collaborative interaction; Computer supported cooperative work; Hardware --> Sensors
- Published
- 2020
- Full Text
- View/download PDF