540 results
Search Results
2. IEEE Transactions on Visualization and Computer Graphics. Message from the paper chairs and guest editors.
- Author
-
Coquillart S, LaViola JJ, Pan Z, and Schmalstieg D
- Subjects
- Computer Graphics, User-Computer Interface
- Published
- 2013
- Full Text
- View/download PDF
3. Message from the paper chairs and guest editors. Conference proceedings.
- Author
-
van Ham F, Machiraju R, Mueller K, Scheuermann G, and Weaver C
- Subjects
- Computer Graphics
- Published
- 2011
- Full Text
- View/download PDF
4. Image-based color ink diffusion rendering.
- Author
-
Wang CM and Wang RJ
- Subjects
- Algorithms, Computer Simulation, Diffusion, Paintings, Paper, Color, Coloring Agents chemistry, Computer Graphics, Image Interpretation, Computer-Assisted methods, Ink, Models, Chemical, User-Computer Interface
- Abstract
This paper proposes an image-based painterly rendering algorithm for automatically synthesizing an image with color ink diffusion. We suggest a mathematical model with a physical base to simulate the phenomenon of color colloidal ink diffusing into absorbent paper. Our algorithm contains three main parts: a feature extraction phase, a Kubelka-Munk (KM) color mixing phase, and a color ink diffusion synthesis phase. In the feature extraction phase, the information of the reference image is simplified by luminance division and color segmentation. In the color mixing phase, the KM theory is employed to approximate the result when one pigment is painted upon another pigment layer. Then, in the color ink diffusion synthesis phase, the physically-based model that we propose is employed to simulate the result of color ink diffusion in absorbent paper using a texture synthesis technique. Our image-based color ink diffusion rendering (IBCIDR) algorithm eliminates the drawback of conventional Chinese ink simulations, which are limited to the black ink domain, and our approach demonstrates that, without using any strokes, a color image can be automatically converted to the diffused ink style with a visually pleasing appearance.
- Published
- 2007
- Full Text
- View/download PDF
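The KM color mixing phase referenced in the abstract above has a compact closed form. As an illustrative sketch (not the authors' code), the standard Kubelka-Munk equations give the reflectance of a pigment layer of optical thickness S·d painted over a background; `k_over_s` (absorption-to-scattering ratio, assumed positive here) and `r_bg` would be evaluated per wavelength or per color channel:

```python
import math

def km_r_inf(k_over_s):
    # Reflectance of an infinitely thick (opaque) pigment layer.
    return 1.0 + k_over_s - math.sqrt(k_over_s * (k_over_s + 2.0))

def km_reflectance(k_over_s, s_d, r_bg):
    # Kubelka-Munk reflectance of a layer with optical thickness
    # s_d = S * d (scattering coefficient times physical thickness),
    # painted over a background of reflectance r_bg.
    a = 1.0 + k_over_s
    b = math.sqrt(a * a - 1.0)
    coth = 1.0 / math.tanh(b * s_d)
    return (1.0 - r_bg * (a - b * coth)) / (a - r_bg + b * coth)
```

As the layer thickens, the result converges to the opaque-layer reflectance, which is how a pigment painted over another gradually hides it; a vanishingly thin layer leaves the background reflectance unchanged.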
5. Efficient and Accurate Semi-Automatic Neuron Tracing with Extended Reality.
- Author
-
Chen J, Yuan Z, Xi J, Gao Z, Li Y, Zhu X, Shi YS, Guan F, and Wang Y
- Subjects
- Animals, Humans, Brain cytology, Brain diagnostic imaging, Mice, Neurons cytology, Algorithms, Imaging, Three-Dimensional methods, Computer Graphics
- Abstract
Neuron tracing, alternately referred to as neuron reconstruction, is the procedure for extracting the digital representation of the three-dimensional neuronal morphology from stacks of microscopic images. Achieving accurate neuron tracing is critical for profiling the neuroanatomical structure at single-cell level and analyzing the neuronal circuits and projections at whole-brain scale. However, the process often demands substantial human involvement and represents a nontrivial task. Conventional solutions towards neuron tracing often contend with challenges such as non-intuitive user interactions, suboptimal data generation throughput, and ambiguous visualization. In this paper, we introduce a novel method that leverages the power of extended reality (XR) for intuitive and progressive semi-automatic neuron tracing in real time. In our method, we have defined a set of interactors for controllable and efficient interactions for neuron tracing in an immersive environment. We have also developed a GPU-accelerated automatic tracing algorithm that can generate updated neuron reconstruction in real time. In addition, we have built a visualizer for fast and improved visual experience, particularly when working with both volumetric images and 3D objects. Our method has been successfully implemented with one virtual reality (VR) headset and one augmented reality (AR) headset, with satisfactory results. We also conducted two user studies and proved the effectiveness of the interactors and the efficiency of our method in comparison with other approaches for neuron tracing.
- Published
- 2024
- Full Text
- View/download PDF
6. SmoothRide: A Versatile Solution to Combat Cybersickness in Elevation-Altering Environments.
- Author
-
Ang S and Quarles J
- Subjects
- Humans, Male, Female, Adult, Young Adult, User-Computer Interface, Galvanic Skin Response physiology, Virtual Reality, Motion Sickness physiopathology, Computer Graphics
- Abstract
Cybersickness continues to bar many individuals from taking full advantage of virtual reality (VR) technology. Previous work has established that navigating virtual terrain with elevation changes poses a significant risk in this regard. In this paper, we investigate the effectiveness of three cybersickness reduction strategies on users performing a navigation task across virtual elevation-altering terrain. These strategies include static field of view (FOV) reduction, a flat surface approach that disables terrain collision and maintains constant elevation for users, and SmoothRide, a novel technique designed to dampen a user's perception of vertical motion as they travel. To assess the impact of these strategies, we conducted a within-subjects study involving 61 participants. Each strategy was compared against a control condition, where users navigated across terrain without any cybersickness reduction measures in place. Cybersickness data were collected using the Fast Motion Sickness Scale (FMS) and Simulator Sickness Questionnaire (SSQ), along with galvanic skin response (GSR) data. We measured user presence using the IGroup Presence questionnaire (IPQ) and a Single-Item Presence Scale (SIP). Our findings reveal that users experienced significantly lower levels of cybersickness using SmoothRide or FOV reduction. Presence scores reported on the IPQ were statistically similar between SmoothRide and the control condition. Conversely, terrain flattening had adverse effects on user presence scores, and we could not identify a significant effect on cybersickness compared to the control. We demonstrate that SmoothRide is an effective, lightweight, configurable, and easy-to-integrate tool for reducing cybersickness in simulations featuring elevation-altering terrain.
- Published
- 2024
- Full Text
- View/download PDF
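The abstract above describes SmoothRide only as dampening perceived vertical motion. As a rough illustration of that general idea (an assumption for illustration, not the authors' algorithm), one could low-pass filter the camera's height toward the terrain height, so abrupt elevation changes are eased out; the time constant `tau` is a hypothetical tuning parameter:

```python
import math

def smooth_height(current, target, dt, tau=0.8):
    # Exponentially ease the camera's vertical position toward the
    # terrain height with time constant tau (seconds), damping the
    # perceived vertical motion while preserving the destination.
    alpha = 1.0 - math.exp(-dt / tau)
    return current + alpha * (target - current)
```

Called once per frame with the frame time `dt`, the camera height converges to the terrain height without ever jumping the full step in a single frame.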
7. Frankenstein's Monster in the Metaverse: User Interaction With Customized Virtual Agents.
- Author
-
Schmidt S, Koysurenbars I, and Steinicke F
- Subjects
- Humans, Female, Male, Adult, Young Adult, Augmented Reality, Computer Graphics, User-Computer Interface, Virtual Reality
- Abstract
Enabled by the latest achievements in artificial intelligence (AI), computer graphics as well as virtual, augmented, and mixed reality (VR/AR/MR), virtual agents are increasingly resembling humans in both their appearance and intelligent behavior. This results in enormous potential for agents to support users in their daily lives, for example in customer service, healthcare, education or the envisioned all-encompassing metaverse. Today's technology would allow users to customize their conversation partners in the metaverse - as opposed to reality - according to their preferences, potentially improving the user experience. On the other hand, there is little research on how reshaping the head of a communication partner might affect the immediate interaction with them. In this paper, we investigate the user requirements for and the effects of agent customization. In a two-stage user study (N=30), we collected both self-reported evaluations (e.g., intrinsic motivation) and interaction metrics (e.g., interaction duration and number of tried out items) for the process of agent customization itself as well as data on how users perceived the subsequent human-agent interaction in VR. Our results indicate that users only wish to have full customization for agents in their personal social circle, while for general services, a selection or even a definite assignment of pre-configured agents is sufficient. When customization is offered, attributes such as gender, clothing or hair are subjectively more relevant to users than facial features such as skin or eye color. Although the customization of human interaction partners is beyond our control, customization of virtual agents significantly increases perceived social presence as well as rapport and trust. Further findings on user motivation and agent diversity are discussed in the paper.
- Published
- 2024
- Full Text
- View/download PDF
8. Immersive Study Analyzer: Collaborative Immersive Analysis of Recorded Social VR Studies.
- Author
-
Lammert A, Rendle G, Immohr F, Neidhardt A, Brandenburg K, Raake A, and Froehlich B
- Subjects
- Humans, Male, Female, Virtual Reality, Computer Graphics, User-Computer Interface
- Abstract
Virtual Reality (VR) has become an important tool for conducting behavioral studies in realistic, reproducible environments. In this paper, we present ISA, an Immersive Study Analyzer system designed for the comprehensive analysis of social VR studies. For in-depth analysis of participant behavior, ISA records all user actions, speech, and the contextual environment of social VR studies. A key feature is the ability to review and analyze such immersive recordings collaboratively in VR, through support of behavioral coding and user-defined analysis queries for efficient identification of complex behavior. Respatialization of the recorded audio streams enables analysts to follow study participants' conversations in a natural and intuitive way. To support phases of close and loosely coupled collaboration, ISA allows joint and individual temporal navigation, and provides tools to facilitate collaboration among users at different temporal positions. An expert review confirms that ISA effectively supports collaborative immersive analysis, providing a novel and effective tool for nuanced understanding of user behavior in social VR studies.
- Published
- 2024
- Full Text
- View/download PDF
9. A Survey of Medical Visualization Through the Lens of Metaphors.
- Author
-
Preim B, Meuschke M, and Weis V
- Subjects
- Humans, Terminology as Topic, Computer Graphics, Metaphor, User-Computer Interface
- Abstract
We provide an overview of metaphors that were used in medical visualization and related user interfaces. Metaphors are employed to translate concepts from a source domain to a target domain. The survey is grounded in a discussion of metaphor-based design involving the identification and reflection of candidate metaphors. We consider metaphors that have a source domain in one branch of medicine, e.g., the virtual mirror that solves problems in orthopedics and laparoscopy with a mirror that resembles the dentist's mirror. Other metaphors employ the physical world as the source domain, such as crepuscular rays that inspire a solution for access planning in tumor therapy. Aviation is another source of inspiration, leading to metaphors, such as surgical cockpits, surgical control towers, and surgery navigation according to an instrument flight. This paper should raise awareness for metaphors and their potential to focus the design of computer-assisted systems on useful features and a positive user experience. Limitations and potential drawbacks of a metaphor-based user interface design for medical applications are also considered.
- Published
- 2024
- Full Text
- View/download PDF
10. Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations.
- Author
-
Wimer BL, South L, Wu K, Szafir DA, Borkin MA, and Metoyer RA
- Subjects
- Humans, Disabled Persons, User-Computer Interface, Databases, Factual, Vision Disorders, Computer Graphics
- Abstract
The increasing ubiquity of data in everyday life has elevated the importance of data literacy and accessible data representations, particularly for individuals with disabilities. While prior research predominantly focuses on the needs of the visually impaired, our survey aims to broaden this scope by investigating accessible data representations across a more inclusive spectrum of disabilities. After conducting a systematic review of 152 accessible data representation papers from ACM and IEEE databases, we found that roughly 78% of existing articles center on vision impairments. In this article, we conduct a comprehensive review of the remaining 22% of papers focused on underrepresented disability communities. We developed categorical dimensions based on accessibility, visualization, and human-computer interaction to classify the papers. These dimensions include the community of focus, issues addressed, contribution type, study methods, participants involved, data type, visualization type, and data domain. Our work redefines accessible data representations by illustrating their application for disabilities beyond those related to vision. Building on our literature review, we identify and discuss opportunities for future research in accessible data representations.
- Published
- 2024
- Full Text
- View/download PDF
11. Machine Learning Approaches for 3D Motion Synthesis and Musculoskeletal Dynamics Estimation: A Survey.
- Author
-
Loi I, Zacharaki EI, and Moustakas K
- Subjects
- Humans, Movement physiology, Musculoskeletal System diagnostic imaging, Machine Learning, Computer Graphics, Imaging, Three-Dimensional methods
- Abstract
The inference of 3D motion and dynamics of the human musculoskeletal system has traditionally been solved using physics-based methods that exploit physical parameters to provide realistic simulations. Yet, such methods suffer from computational complexity and reduced stability, hindering their use in computer graphics applications that require real-time performance. With the recent explosion of data capture (mocap, video), machine learning (ML) has started to become popular as it is able to create surrogate models harnessing the huge amount of data stemming from various sources, minimizing computational time (instead of resource usage), and most importantly, approximate real-time solutions. The main purpose of this paper is to provide a review and classification of the most recent works regarding motion prediction, motion synthesis as well as musculoskeletal dynamics estimation problems using ML techniques, in order to offer sufficient insight into the state-of-the-art and draw new research directions. While the study of motion may appear distinct from musculoskeletal dynamics, these application domains provide jointly the link for more natural computer graphics character animation, since ML-based musculoskeletal dynamics estimation enables modeling of more long-term, temporally evolving, ergonomic effects, while offering automated and fast solutions. Overall, our review offers an in-depth presentation and classification of ML applications in human motion analysis, unlike previous survey articles focusing on specific aspects of motion prediction.
- Published
- 2024
- Full Text
- View/download PDF
12. HDhuman: High-Quality Human Novel-View Rendering From Sparse Views.
- Author
-
Zhou T, Huang J, Yu T, Shao R, and Li K
- Subjects
- Humans, Algorithms, Image Processing, Computer-Assisted methods, Imaging, Three-Dimensional methods, Computer Graphics
- Abstract
In this paper, we aim to address the challenge of novel view rendering of human performers that wear clothes with complex texture patterns using a sparse set of camera views. Although some recent works have achieved remarkable rendering quality on humans with relatively uniform textures using sparse views, the rendering quality remains limited when dealing with complex texture patterns as they are unable to recover the high-frequency geometry details that are observed in the input views. To this end, we propose HDhuman, which uses a human reconstruction network with a pixel-aligned spatial transformer and a rendering network with geometry-guided pixel-wise feature integration to achieve high-quality human reconstruction and rendering. The designed pixel-aligned spatial transformer calculates the correlations between the input views and generates human reconstruction results with high-frequency details. Based on the surface reconstruction results, the geometry-guided pixel-wise visibility reasoning provides guidance for multi-view feature integration, enabling the rendering network to render high-quality images at 2k resolution on novel views. Unlike previous neural rendering works that always need to train or fine-tune an independent network for a different scene, our method is a general framework that is able to generalize to novel subjects. Experiments show that our approach outperforms all the prior generic or specific methods on both synthetic data and real-world data. Source code and test data will be made publicly available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/HDhuman/index.html.
- Published
- 2024
- Full Text
- View/download PDF
13. LoCoMoTe - A Framework for Classification of Natural Locomotion in VR by Task, Technique and Modality.
- Author
-
Croucher C, Powell W, Stevens B, Miller-Dicks M, Powell V, Wiltshire TJ, and Spronck P
- Subjects
- Humans, Algorithms, Machine Learning, User-Computer Interface, Walking physiology, Virtual Reality, Locomotion physiology, Computer Graphics
- Abstract
Virtual reality (VR) research has provided overviews of locomotion techniques, how they work, their strengths and overall user experience. Considerable research has investigated new methodologies, particularly machine learning to develop redirection algorithms. To best support the development of redirection algorithms through machine learning, we must understand how best to replicate human navigation and behaviour in VR, which can be supported by the accumulation of results produced through live-user experiments. However, it can be difficult to identify, select and compare relevant research without a pre-existing framework in an ever-growing research field. Therefore, this work aimed to facilitate the ongoing structuring and comparison of the VR-based natural walking literature by providing a standardised framework for researchers to utilise. We applied thematic analysis to study methodology descriptions from 140 VR-based papers that contained live-user experiments. From this analysis, we developed the LoCoMoTe framework with three themes: navigational decisions, technique implementation, and modalities. The LoCoMoTe framework provides a standardised approach to structuring and comparing experimental conditions. The framework should be continually updated to categorise and systematise knowledge and aid in identifying research gaps and discussions.
- Published
- 2024
- Full Text
- View/download PDF
14. To Stick or Not to Stick? Studying the Impact of Offset Recovery Techniques During Mid-Air Interactions.
- Author
-
Mavromatis M, Hoyet L, Lecuyer A, Dewez D, and Argelaguet F
- Subjects
- Humans, Male, Female, Adult, Young Adult, Hand physiology, Computer Graphics, Virtual Reality, User-Computer Interface
- Abstract
During mid-air interactions, common approaches (such as the god-object method) typically rely on visually constraining the user's avatar to avoid visual interpenetrations with the virtual environment in the absence of kinesthetic feedback. This paper explores two methods which influence how the position mismatch (positional offset) between users' real and virtual hands is recovered when releasing the contact with virtual objects. The first method (sticky) constrains the user's virtual hand until the mismatch is recovered, while the second method (unsticky) employs an adaptive offset recovery method. In the first study, we explored the effect of positional offset and of motion alteration on users' behavioral adjustments and users' perception. In a second study, we evaluated variations in the sense of embodiment and the preference between the two control laws. Overall, both methods presented similar results in terms of performance and accuracy, yet, positional offsets strongly impacted motion profiles and users' performance. Both methods also resulted in comparable levels of embodiment. Finally, participants usually expressed strong preferences toward one of the two methods, but these choices were individual-specific and did not appear to be correlated solely with characteristics external to the individuals. Taken together, these results highlight the relevance of exploring the customization of motion control algorithms for avatars.
- Published
- 2024
- Full Text
- View/download PDF
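The "unsticky" condition in the abstract above adapts offset recovery to the user's motion. As a toy scalar sketch of such an adaptive scheme (the proportional-to-speed rule and the `rate` constant are assumptions for illustration, not the paper's control law), the real/virtual hand offset can be shrunk faster the faster the hand moves, so the correction is masked by the motion itself:

```python
import math

def recover_offset(offset, hand_speed, dt, rate=0.6):
    # Shrink the positional offset between real and virtual hand in
    # proportion to hand speed; a hand at rest keeps its offset so
    # no correction is visible while the user holds still.
    shrink = rate * hand_speed * dt
    if shrink >= abs(offset):
        return 0.0
    return offset - math.copysign(shrink, offset)
```

The "sticky" alternative described in the abstract would instead hold the virtual hand constrained until the full mismatch is recovered.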
15. Measuring Embodiment: Movement Complexity and the Impact of Personal Characteristics.
- Author
-
Peck TC and Good JJ
- Subjects
- Humans, Male, Female, Adult, Young Adult, Virtual Reality, Movement physiology, Adolescent, Middle Aged, Video Games, User-Computer Interface, Computer Graphics
- Abstract
A user's personal experiences and characteristics may impact the strength of an embodiment illusion and affect resulting behavioral changes in unknown ways. This paper presents a novel re-analysis of two fully-immersive embodiment user-studies (n = 189 and n = 99) using structural equation modeling to test the effects of personal characteristics on subjective embodiment. Results demonstrate that individual characteristics (Experiment 1: gender, participation in science, technology, engineering, or math; Experiment 2: age, video gaming experience) predicted differing self-reported experiences of embodiment. Results also indicate that increased self-reported embodiment predicts environmental response, in this case faster and more accurate responses within the virtual environment. Importantly, head-tracking data is shown to be an effective objective measure for predicting embodiment, without requiring researchers to utilize additional equipment.
- Published
- 2024
- Full Text
- View/download PDF
16. Are Metrics Enough? Guidelines for Communicating and Visualizing Predictive Models to Subject Matter Experts.
- Author
-
Suh A, Appleby G, Anderson EW, Finelli L, Chang R, and Cashman D
- Subjects
- Humans, Communication, Guidelines as Topic, Computer Graphics
- Abstract
Presenting a predictive model's performance is a communication bottleneck that threatens collaborations between data scientists and subject matter experts. Accuracy and error metrics alone fail to tell the whole story of a model - its risks, strengths, and limitations - making it difficult for subject matter experts to feel confident in their decision to use a model. As a result, models may fail in unexpected ways or go entirely unused, as subject matter experts disregard poorly presented models in favor of familiar, yet arguably substandard methods. In this paper, we describe an iterative study conducted with both subject matter experts and data scientists to understand the gaps in communication between these two groups. We find that, while the two groups share common goals of understanding the data and predictions of the model, friction can stem from unfamiliar terms, metrics, and visualizations - limiting the transfer of knowledge to SMEs and discouraging clarifying questions being asked during presentations. Based on our findings, we derive a set of communication guidelines that use visualization as a common medium for communicating the strengths and weaknesses of a model. We provide a demonstration of our guidelines in a regression modeling scenario and elicit feedback on their use from subject matter experts. From our demonstration, subject matter experts were more comfortable discussing a model's performance, more aware of the trade-offs for the presented model, and better equipped to assess the model's risks - ultimately informing and contextualizing the model's use beyond text and numbers.
- Published
- 2024
- Full Text
- View/download PDF
17. PalmEx: Adding Palmar Force-Feedback for 3D Manipulation With Haptic Exoskeleton Gloves.
- Author
-
Bouzbib E, Teyssier M, Howard T, Pacchierotti C, and Lecuyer A
- Subjects
- Humans, Adult, Male, Female, Imaging, Three-Dimensional methods, Young Adult, User-Computer Interface, Equipment Design, Feedback, Sensory physiology, Exoskeleton Device, Virtual Reality, Hand physiology, Computer Graphics
- Abstract
Haptic exoskeleton gloves are a widespread solution for providing force-feedback in Virtual Reality (VR), especially for 3D object manipulations. However, they are still lacking an important feature regarding in-hand haptic sensations: the palmar contact. In this paper, we present PalmEx, a novel approach which incorporates palmar force-feedback into exoskeleton gloves to improve the overall grasping sensations and manual haptic interactions in VR. PalmEx's concept is demonstrated through a self-contained hardware system augmenting a hand exoskeleton with an encountered palmar contact interface - physically encountering the users' palm. We build upon current taxonomies to elicit PalmEx's capabilities for both the exploration and manipulation of virtual objects. We first conduct a technical evaluation optimising the delay between the virtual interactions and their physical counterparts. We then empirically evaluate PalmEx's proposed design space in a user study (n=12) to assess the potential of a palmar contact for augmenting an exoskeleton. Results show that PalmEx offers the best rendering capabilities to perform believable grasps in VR. PalmEx highlights the importance of the palmar stimulation, and provides a low-cost solution to augment existing high-end consumer hand exoskeletons.
- Published
- 2024
- Full Text
- View/download PDF
18. BiRD: Using Bidirectional Rotation Gain Differences to Redirect Users during Back-and-forth Head Turns in Walking.
- Author
-
Xu SZ, Chen FXY, Gong R, Zhang FL, and Zhang SH
- Subjects
- Humans, Animals, Orientation, Environment, Birds, Computer Graphics, Walking
- Abstract
Redirected walking (RDW) facilitates user navigation within expansive virtual spaces despite the constraints of limited physical spaces. It employs discrepancies between human visual-proprioceptive sensations, known as gains, to enable the remapping of virtual and physical environments. In this paper, we explore how to apply rotation gain while the user is walking. We propose to apply a rotation gain to let the user rotate by a different angle when reciprocating from a previous head rotation, to achieve the aim of steering the user to a desired direction. To apply the gains imperceptibly based on such a Bidirectional Rotation gain Difference (BiRD), we conduct both measurement and verification experiments on the detection thresholds of the rotation gain for reciprocating head rotations during walking. Unlike previous rotation gains which are measured when users are turning around in place (standing or sitting), BiRD is measured during users' walking. Our study offers a critical assessment of the acceptable range of rotational mapping differences for different rotational orientations across the user's walking experience, contributing to an effective tool for redirecting users in virtual environments.
- Published
- 2024
- Full Text
- View/download PDF
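The core mechanism described in the abstract above, applying a different rotation gain on the reciprocating half of a back-and-forth head turn so that the virtual heading drifts while the physical turn cancels out, can be sketched in a few lines. This is a toy model of the idea, not the authors' implementation; the gain values are hypothetical and would in practice come from the measured detection thresholds:

```python
class BiRDGainSketch:
    # Toy model: an outbound head turn uses gain_out, a reciprocating
    # (direction-reversing) turn uses gain_back; the difference between
    # the two injects an imperceptible virtual/physical yaw offset.
    def __init__(self, gain_out=1.0, gain_back=1.25):
        self.gain_out = gain_out
        self.gain_back = gain_back
        self.virtual_yaw = 0.0
        self.physical_yaw = 0.0
        self._last_sign = 0

    def update(self, physical_delta_deg):
        sign = 1 if physical_delta_deg >= 0 else -1
        # A reciprocating turn reverses the previous turn direction.
        reciprocating = self._last_sign != 0 and sign != self._last_sign
        gain = self.gain_back if reciprocating else self.gain_out
        self.physical_yaw += physical_delta_deg
        self.virtual_yaw += gain * physical_delta_deg
        self._last_sign = sign
        return self.virtual_yaw - self.physical_yaw  # injected offset
```

After a +60°/-60° physical head turn the user is physically back where they started, yet the virtual heading has drifted, which steers the direction they subsequently walk.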
19. Who says you are so sick? An investigation on individual susceptibility to cybersickness triggers using EEG, EGG and ECG.
- Author
-
Tian N and Boulic R
- Subjects
- Humans, Fixation, Ocular, Electroencephalography, Electrocardiography adverse effects, Computer Graphics, Motion Sickness diagnosis
- Abstract
In this research paper, we conducted a study to investigate the connection between three objective measures: Electrocardiogram (ECG), Electrogastrogram (EGG), and Electroencephalogram (EEG), and individuals' susceptibility to cybersickness. Our primary objective was to identify which of these factors plays a central role in causing discomfort when experiencing rotations along three different axes: Roll, Pitch, and Yaw. This study involved 35 participants who were tasked with destroying asteroids using their eye gaze while undergoing passive rotations in four separate sessions. The results, when combined with subjective measurements (specifically, Fast motion sickness questionnaire (FMS) and Simulator sickness questionnaire (SSQ) scores), demonstrated that EGG measurements were superior in detecting symptoms associated with nausea. As for ECG measurements, our observations did reveal significant changes in Heart Rate Variability (HRV) parameters. However, we caution against relying solely on ECG as a dependable indicator for assessing the extent of cybersickness. Most notably, EEG signals emerged as a crucial resource for discerning individual differences related to these rotational axes. Our findings were significant not only in the context of periodic activities but also underscored the potential of aperiodic activities in detecting the severity of cybersickness and an individual's susceptibility to rotational triggers.
- Published
- 2024
- Full Text
- View/download PDF
20. PetPresence: Investigating the Integration of Real-World Pet Activities in Virtual Reality.
- Author
-
Xiong N, Liu Q, and Zhu K
- Subjects
- Humans, Animals, Dogs, Pilot Projects, Movement, Computer Graphics, Virtual Reality
- Abstract
For VR interaction, the home environment with complicated spatial setup and dynamics may hinder the VR user experience. In particular, pets' movement may be more unpredictable. In this paper, we investigate the integration of real-world pet activities into immersive VR interaction. Our pilot study showed that active pet movements, especially those of dogs, could negatively impact users' performance and experience in immersive VR. We proposed three different types of pet integration, namely semitransparent real-world portal, non-interactive object in VR, and interactive object in VR. We conducted the user study with 16 pet owners and their pets. The results showed that compared to the baseline condition without any pet-integration technique, the approach of integrating the pet as interactive objects in VR yielded significantly higher participant ratings in perceived realism, joy, multisensory engagement, and connection with their pets in VR.
- Published
- 2024
- Full Text
- View/download PDF
21. My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning.
- Author
-
Gaba A, Kaufman Z, Cheung J, Shvakel M, Hall KW, Brun Y, and Bearfield CX
- Subjects
- Male, Humans, Female, Machine Learning, Bias, Surveys and Questionnaires, Trust psychology, Computer Graphics
- Abstract
Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer "Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?" Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.
- Published
- 2024
- Full Text
- View/download PDF
22. Enthusiastic and Grounded, Avoidant and Cautious: Understanding Public Receptivity to Data and Visualizations.
- Author
-
He HA, Walny J, Thoma S, Carpendale S, and Willett W
- Subjects
- Humans, Computer Graphics, Data Visualization
- Abstract
Despite an abundance of open data initiatives aimed to inform and empower "general" audiences, we still know little about the ways people outside of traditional data analysis communities experience and engage with public data and visualizations. To investigate this gap, we present results from an in-depth qualitative interview study with 19 participants from diverse ethnic, occupational, and demographic backgrounds. Our findings characterize a set of lived experiences with open data and visualizations in the domain of energy consumption, production, and transmission. This work exposes information receptivity - an individual's transient state of willingness or openness to receive information - as a blind spot for the data visualization community, complementary to but distinct from previous notions of data visualization literacy and engagement. We observed four clusters of receptivity responses to data- and visualization-based rhetoric: Information-Avoidant, Data-Cautious, Data-Enthusiastic, and Domain-Grounded. Based on our findings, we highlight research opportunities for the visualization community. This exploratory work identifies the existence of diverse receptivity responses, highlighting the need to consider audiences with varying levels of openness to new information. Our findings also suggest new approaches for improving the accessibility and inclusivity of open data and visualization initiatives targeted at broad audiences. A free copy of this paper and all supplemental materials are available at https://OSF.IO/MPQ32.
- Published
- 2024
- Full Text
- View/download PDF
23. Eleven Years of Gender Data Visualization: A Step Towards More Inclusive Gender Representation.
- Author
-
Cabric F, Bjarnadottir MV, Ling M, Rafnsdottir GL, and Isenberg P
- Subjects
- Humans, Surveys and Questionnaires, Data Visualization, Computer Graphics
- Abstract
We present an analysis of the representation of gender as a data dimension in data visualizations and propose a set of considerations around visual variables and annotations for gender-related data. Gender is a common demographic dimension of data collected from study or survey participants, passengers, or customers, as well as across academic studies, especially in certain disciplines like sociology. Our work contributes to multiple ongoing discussions on the ethical implications of data visualizations. By choosing specific data, visual variables, and text labels, visualization designers may, inadvertently or not, perpetuate stereotypes and biases. Here, our goal is to start an evolving discussion on how to represent data on gender in data visualizations and raise awareness of the subtleties of choosing visual variables and words in gender visualizations. In order to ground this discussion, we collected and coded gender visualizations and their captions from five different scientific communities (Biology, Politics, Social Studies, Visualisation, and Human-Computer Interaction), in addition to images from Tableau Public and the Information Is Beautiful awards showcase. Overall, we found that representation types are community-specific, color hue is the dominant visual channel for gender data, and nonconforming gender is under-represented. We end our paper with a discussion of considerations for gender visualization derived from our coding and the literature and recommendations for large data collection bodies. A free copy of this paper and all supplemental materials are available at https://osf.io/v9ams/.
- Published
- 2024
- Full Text
- View/download PDF
24. Affective Visualization Design: Leveraging the Emotional Impact of Data.
- Author
-
Lan X, Wu Y, and Cao N
- Subjects
- Humans, Computer Graphics, Emotions
- Abstract
In recent years, more and more researchers have reflected on the undervaluation of emotion in data visualization and highlighted the importance of considering human emotion in visualization design. Meanwhile, an increasing number of studies have been conducted to explore emotion-related factors. However, so far, this research area is still in its early stages and faces a set of challenges, such as the unclear definition of key concepts, the insufficient justification of why emotion is important in visualization design, and the lack of characterization of the design space of affective visualization design. To address these challenges, first, we conducted a literature review and identified three research lines that examined both emotion and data visualization. We clarified the differences between these research lines and kept 109 papers that studied or discussed how data visualization communicates and influences emotion. Then, we coded the 109 papers in terms of how they justified the legitimacy of considering emotion in visualization design (i.e., why emotion is important) and identified five argumentative perspectives. Based on these papers, we also identified 61 projects that practiced affective visualization design. We coded these design projects in three dimensions, including design fields (where), design tasks (what), and design methods (how), to explore the design space of affective visualization design.
- Published
- 2024
- Full Text
- View/download PDF
25. SpeechMirror: A Multimodal Visual Analytics System for Personalized Reflection of Online Public Speaking Effectiveness.
- Author
-
Huang Z, He Q, Maher K, Deng X, Lai YK, Ma C, Qin SF, Liu YJ, and Wang H
- Subjects
- Humans, Communication, Speech, Computer Graphics
- Abstract
As communications are increasingly taking place virtually, the ability to present well online is becoming an indispensable skill. Online speakers are facing unique challenges in engaging with remote audiences. However, there has been a lack of evidence-based analytical systems for people to comprehensively evaluate online speeches and further discover possibilities for improvement. This paper introduces SpeechMirror, a visual analytics system facilitating reflection on a speech based on insights from a collection of online speeches. The system estimates the impact of different speech techniques on effectiveness and applies them to a speech to give users awareness of the performance of speech techniques. A similarity recommendation approach based on speech factors or script content supports guided exploration to expand knowledge of presentation evidence and accelerate the discovery of speech delivery possibilities. SpeechMirror provides intuitive visualizations and interactions for users to understand speech factors. Among them, SpeechTwin, a novel multimodal visual summary of speech, supports rapid understanding of critical speech factors and comparison of different speech samples, and SpeechPlayer augments the speech video by integrating visualization of the speaker's body language with interaction, for focused analysis. The system utilizes visualizations suited to the distinct nature of different speech factors for user comprehension. The proposed system and visualization techniques were evaluated with domain experts and amateurs, demonstrating usability for users with low visualization literacy and its efficacy in assisting users to develop insights for potential improvement.
- Published
- 2024
- Full Text
- View/download PDF
26. I am a Genius! Influence of Virtually Embodying Leonardo da Vinci on Creative Performance.
- Author
-
Gorisse G, Wellenreiter S, Fleury S, Lecuyer A, Richir S, and Christmann O
- Subjects
- Humans, Cognition, Creativity, Computer Graphics, Virtual Reality
- Abstract
Virtual reality (VR) provides users with the ability to substitute their physical appearance by embodying virtual characters (avatars) using head-mounted displays and motion-capture technologies. Previous research demonstrated that the sense of embodiment toward an avatar can impact user behavior and cognition. In this paper, we present an experiment designed to investigate whether embodying a well-known creative genius could enhance participants' creative performance. Following a preliminary online survey ( N = 157) to select a famous character suited to the purpose of this study, we developed a VR application allowing participants to embody Leonardo da Vinci or a self-avatar. Self-avatars were approximately matched with participants in terms of skin tone and morphology. Forty participants took part in three tasks seamlessly integrated into a virtual workshop. The first task was based on Guilford's Alternate Uses test (GAU) to assess participants' divergent abilities in terms of fluency and originality. The second task was based on a Remote Associates Test (RAT) to evaluate convergent abilities. Lastly, the third task consisted of designing potential alternative uses of an object displayed in the virtual environment using a 3D sketching tool. Participants embodying Leonardo da Vinci demonstrated significantly higher divergent thinking abilities, with a substantial difference in fluency between the groups. Conversely, participants embodying a self-avatar performed significantly better in the convergent thinking task. Taken together, these results promote the use of our virtual embodiment approach, especially in applications where divergent creativity plays an important role, such as design and innovation.
- Published
- 2023
- Full Text
- View/download PDF
27. Virtual Reality Sickness Reduces Attention During Immersive Experiences.
- Author
-
Mimnaugh KJ, Center EG, Suomalainen M, Becerra I, Lozano E, Murrieta-Cid R, Ojala T, LaValle SM, and Federmeier KD
- Subjects
- Humans, Electroencephalography, Task Performance and Analysis, Surveys and Questionnaires, Computer Graphics, Virtual Reality
- Abstract
In this paper, we show that Virtual Reality (VR) sickness is associated with a reduction in attention, which was detected with the P3b Event-Related Potential (ERP) component from electroencephalography (EEG) measurements collected in a dual-task paradigm. We hypothesized that sickness symptoms such as nausea, eyestrain, and fatigue would reduce the users' capacity to pay attention to tasks completed in a virtual environment, and that this reduction in attention would be dynamically reflected in a decrease of the P3b amplitude while VR sickness was experienced. In a user study, participants were taken on a tour through a museum in VR along paths with varying amounts of rotation, shown previously to cause different levels of VR sickness. While paying attention to the virtual museum (the primary task), participants were asked to silently count tones of a different frequency (the secondary task). Control measurements for comparison against the VR sickness conditions were taken when the users were not wearing the Head-Mounted Display (HMD) and while they were immersed in VR but not moving through the environment. This exploratory study shows, across multiple analyses, that the mean amplitude of the P3b collected during the task is associated with both sickness severity measured after the task with a questionnaire (SSQ) and with the number of counting errors on the secondary task. Thus, VR sickness may impair attention and task performance, and these changes in attention can be tracked with ERP measures as they happen, without asking participants to assess their sickness symptoms in the moment.
- Published
- 2023
- Full Text
- View/download PDF
28. Influence of User Posture and Virtual Exercise on Impression of Locomotion During VR Observation.
- Author
-
Saint-Aubert J, Cogne M, Bonan I, Launey Y, and Lecuyer A
- Subjects
- Humans, Locomotion, Walking, Posture, Computer Graphics, Virtual Reality
- Abstract
A seated user watching their avatar walking in Virtual Reality (VR) may have an impression of walking. In this paper, we show that such an impression can be extended to other postures and other locomotion exercises. We present two user studies in which participants wore a VR headset and observed a first-person avatar performing virtual exercises. In the first experiment, the avatar walked and the participants (n=36) tested the simulation in 3 different postures (standing, sitting, and Fowler's posture). In the second experiment, other participants (n=18) were sitting and observed the avatar walking, jogging, or stepping over virtual obstacles. We evaluated the impression of locomotion by measuring the impression of walking (respectively jogging or stepping) and embodiment in both experiments. The results show that participants had the impression of locomotion in the sitting, standing, and Fowler's postures. However, Fowler's posture significantly decreased both the level of embodiment and the impression of locomotion. The sitting posture seems to decrease the sense of agency compared to the standing posture. Results also show that the majority of the participants experienced an impression of locomotion during the virtual walking, jogging, and stepping exercises. The embodiment was not influenced by the type of virtual exercise. Overall, our results suggest that an impression of locomotion can be elicited in different user postures and during different virtual locomotion exercises. They provide valuable insight for numerous VR applications in which the user observes a self-avatar moving, such as video games, gait rehabilitation, training, etc.
- Published
- 2023
- Full Text
- View/download PDF
29. Probablement, Wahrscheinlich, Likely? A Cross-Language Study of How People Verbalize Probabilities in Icon Array Visualizations.
- Author
-
Rakotondravony N, Ding Y, and Harrison L
- Subjects
- Humans, Software, Computer Graphics, Language
- Abstract
Visualizations today are used across a wide range of languages and cultures. Yet the extent to which language impacts how we reason about data and visualizations remains unclear. In this paper, we explore the intersection of visualization and language through a cross-language study on estimative probability tasks with icon-array visualizations. Across Arabic, English, French, German, and Mandarin, n=50 participants per language both chose probability expressions - e.g. likely, probable - to describe icon-array visualizations (Vis-to-Expression), and drew icon-array visualizations to match a given expression (Expression-to-Vis). Results suggest that there is no clear one-to-one mapping of probability expressions and associated visual ranges between languages. Several translated expressions fell significantly above or below the range of the corresponding English expressions. Compared to other languages, French and German respondents appear to exhibit high levels of consistency between the visualizations they drew and the words they chose. Participants across languages used similar words when describing scenarios above 80% chance, with more variance in expressions targeting mid-range and lower values. We discuss how these results suggest potential differences in the expressiveness of language as it relates to visualization interpretation and design goals, as well as practical implications for translation efforts and future studies at the intersection of languages, culture, and visualization. Experiment data, source code, and analysis scripts are available at the following repository: https://osf.io/g5d4r/.
- Published
- 2023
- Full Text
- View/download PDF
30. Visual Comparison of Language Model Adaptation.
- Author
-
Sevastjanova R, Cakmak E, Ravfogel S, Cotterell R, and El-Assady M
- Subjects
- Male, Female, Humans, Language, Software, Artifacts, Natural Language Processing, Computer Graphics
- Abstract
Neural language models are widely used; however, their model parameters often need to be adapted to the specific domains and tasks of an application, which is time- and resource-consuming. Thus, adapters have recently been introduced as a lightweight alternative for model adaptation. They consist of a small set of task-specific parameters with a reduced training time and simple parameter composition. The simplicity of adapter training and composition comes along with new challenges, such as maintaining an overview of adapter properties and effectively comparing their produced embedding spaces. To help developers overcome these challenges, we provide a twofold contribution. First, in close collaboration with NLP researchers, we conducted a requirement analysis for an approach supporting adapter evaluation and detected, among others, the need for both intrinsic (i.e., embedding similarity-based) and extrinsic (i.e., prediction-based) explanation methods. Second, motivated by the gathered requirements, we designed a flexible visual analytics workspace that enables the comparison of adapter properties. In this paper, we discuss several design iterations and alternatives for interactive, comparative visual explanation methods. Our comparative visualizations show the differences in the adapted embedding vectors and prediction outcomes for diverse human-interpretable concepts (e.g., person names, human qualities). We evaluate our workspace through case studies and show that, for instance, an adapter trained on the language debiasing task according to context-0 (decontextualized) embeddings introduces a new type of bias where words (even gender-independent words such as countries) become more similar to female than to male pronouns. We demonstrate that these are artifacts of context-0 embeddings, and the adapter effectively eliminates the gender information from the contextualized word representations.
- Published
- 2023
- Full Text
- View/download PDF
31. ErgoExplorer: Interactive Ergonomic Risk Assessment from Video Collections.
- Author
-
Fernandez MM, Rados S, Matkovic K, Groller ME, and Delrieux C
- Subjects
- Humans, Computer Graphics, Ergonomics
- Abstract
Due to increased awareness, ergonomic risk assessment is now carried out more often than in the past. The conventional risk assessment procedure, based on expert-assisted observation of workplaces and manually filled-in score tables, is still predominant. Data analysis is usually done with a focus on critical moments, although without the support of contextual information and changes over time. In this paper we introduce ErgoExplorer, a system for the interactive visual analysis of risk assessment data. In contrast to current practice, we focus on data that span multiple actions and multiple workers while keeping all contextual information. Data is automatically extracted from video streams. Based on carefully investigated analysis tasks, we introduce new views and their corresponding interactions. These views also incorporate domain-specific score tables to guarantee easy adoption by domain experts. All views are integrated into ErgoExplorer, which relies on coordinated multiple views to facilitate analysis through interaction. ErgoExplorer makes it possible for the first time to examine complex relationships between risk assessments of individual body parts over long sessions that span multiple operations. The newly introduced approach supports analysis and exploration at several levels of detail, ranging from a general overview down to inspecting individual frames in the video stream, if necessary. We illustrate the usefulness of the newly proposed approach by applying it to several datasets.
- Published
- 2023
- Full Text
- View/download PDF
32. MedChemLens: An Interactive Visual Tool to Support Direction Selection in Interdisciplinary Experimental Research of Medicinal Chemistry.
- Author
-
Shi C, Nie F, Hu Y, Xu Y, Chen L, Ma X, and Luo Q
- Subjects
- Humans, Chemistry, Pharmaceutical, Computer Graphics
- Abstract
Interdisciplinary experimental science (e.g., medicinal chemistry) refers to the disciplines that integrate knowledge from different scientific backgrounds and involve experiments in the research process. Deciding "in what direction to proceed" is critical for the success of the research in such disciplines, since the time, money, and resource costs of the subsequent research steps depend largely on this decision. However, such a direction identification task is challenging in that researchers need to integrate information from large-scale, heterogeneous materials from all associated disciplines and summarize the related publications of which the core contributions are often showcased in diverse formats. The task also requires researchers to estimate the feasibility and potential in future experiments in the selected directions. In this work, we selected medicinal chemistry as a case and presented an interactive visual tool, MedChemLens, to assist medicinal chemists in choosing their intended directions of research. This task is also known as drug target (i.e., disease-linked proteins) selection. Given a candidate target name, MedChemLens automatically extracts the molecular features of drug compounds from chemical papers and clinical trial records, organizes them based on the drug structures, and interactively visualizes factors concerning subsequent experiments. We evaluated MedChemLens through a within-subjects study (N=16). Compared with the control condition (i.e., unrestricted online search without using our tool), participants who only used MedChemLens reported faster search, better-informed selections, higher confidence in their selections, and lower cognitive load.
- Published
- 2023
- Full Text
- View/download PDF
33. Extending the Nested Model for User-Centric XAI: A Design Study on GNN-based Drug Repurposing.
- Author
-
Wang Q, Huang K, Chandak P, Zitnik M, and Gehlenborg N
- Subjects
- Neural Networks, Computer, Algorithms, Drug Repositioning, Computer Graphics
- Abstract
Whether AI explanations can help users achieve specific tasks efficiently (i.e., usable explanations) is significantly influenced by their visual presentation. While many techniques exist to generate explanations, it remains unclear how to select and visually present AI explanations based on the characteristics of domain users. This paper aims to understand this question through a multidisciplinary design study for a specific problem: explaining graph neural network (GNN) predictions to domain experts in drug repurposing, i.e., reuse of existing drugs for new diseases. Building on the nested design model of visualization, we incorporate XAI design considerations from a literature review and from our collaborators' feedback into the design process. Specifically, we discuss XAI-related design considerations for usable visual explanations at each design layer: target user, usage context, domain explanation, and XAI goal at the domain layer; format, granularity, and operation of explanations at the abstraction layer; encodings and interactions at the visualization layer; and XAI and rendering algorithm at the algorithm layer. We present how the extended nested model motivates and informs the design of DrugExplorer, an XAI tool for drug repurposing. Based on our domain characterization, DrugExplorer provides path-based explanations and presents them both as individual paths and meta-paths for two key XAI operations, why and what else. DrugExplorer offers a novel visualization design called MetaMatrix with a set of interactions to help domain users organize and compare explanation paths at different levels of granularity to generate domain-meaningful insights. We demonstrate the effectiveness of the selected visual presentation and DrugExplorer as a whole via a usage scenario, a user study, and expert interviews. From these evaluations, we derive insightful observations and reflections that can inform the design of XAI visualizations for other scientific applications.
- Published
- 2023
- Full Text
- View/download PDF
34. Survey on Visual Analysis of Event Sequence Data.
- Author
-
Guo Y, Guo S, Jin Z, Kaul S, Gotz D, and Cao N
- Subjects
- Computer Graphics, Electronic Health Records
- Abstract
Event sequence data record series of discrete events in the time order of occurrence. They are commonly observed in a variety of applications ranging from electronic health records to network logs, and are typically large-scale, high-dimensional, and heterogeneous. This high complexity of event sequence data makes it difficult for analysts to manually explore and find patterns, resulting in ever-increasing needs for computational and perceptual aids from visual analytics techniques to extract and communicate insights from event sequence datasets. In this paper, we review the state-of-the-art visual analytics approaches, characterize them with our proposed design space, and categorize them based on analytical tasks and applications. From our review of relevant literature, we have also identified several remaining research challenges and future research opportunities.
- Published
- 2022
- Full Text
- View/download PDF
35. Show Me Your Face: Towards an Automated Method to Provide Timely Guidance in Visual Analytics.
- Author
-
Ceneda D, Arleo A, Gschwandtner T, and Miksch S
- Subjects
- Machine Learning, Software, Facial Expression, Computer Graphics, Algorithms
- Abstract
Providing guidance during a Visual Analytics session can support analysts in pursuing their goals more efficiently. However, the effectiveness of guidance depends on many factors: determining the right timing to provide it is one of them. Although in complex analysis scenarios choosing the right timing could make the difference between dependable and superfluous guidance, an analysis of the literature suggests that this problem has not received enough attention. In this paper, we describe a methodology to determine moments in which guidance is needed. Our assumption is that the need for guidance influences the user's state of mind, as in distress situations during the analytical process, and we hypothesize that such moments could be identified by analyzing the user's facial expressions. We propose a framework composed of facial recognition software and a machine learning model trained to detect when to provide guidance according to changes in the user's facial expressions. We trained the model by interviewing eight analysts during their work and ranked multiple facial features based on their relative importance in determining the need for guidance. Finally, we show that by applying only minor modifications to its architecture, our prototype was able to detect a need for guidance on the fly, making our methodology well suited also for real-time analysis sessions. The results of our evaluations show that our methodology is indeed effective in determining when a need for guidance is present, which constitutes a prerequisite to providing timely and effective guidance in VA.
- Published
- 2022
- Full Text
- View/download PDF
36. Design of a Pupil-Matched Occlusion-Capable Optical See-Through Wearable Display.
- Author
-
Wilson A and Hua H
- Subjects
- User-Computer Interface, Pupil, Equipment Design, Head, Computer Graphics, Wearable Electronic Devices
- Abstract
State-of-the-art optical see-through head-mounted displays (OST-HMD) for augmented reality applications lack the ability to correctly render light blocking behavior between digital and physical objects, known as mutual occlusion capability. In this article, we present a novel optical architecture for enabling a high performance, occlusion-capable optical see-through head-mounted display (OCOST-HMD). The design utilizes a single-layer, double-pass architecture, creating a compact OCOST-HMD that is capable of rendering per-pixel mutual occlusion, correctly pupil-matched viewing perspective between virtual and real scenes, and a wide see-through field of view (FOV). Based on this architecture, we present a design embodiment and a compact prototype implementation. The prototype demonstrates a virtual display with an FOV of 34° by 22°, an angular resolution of 1.06 arc minutes per pixel, and an average image contrast greater than 40 percent at the Nyquist frequency of 53 cycles/mm. Furthermore, the device achieves a see-through FOV of 90° by 50°, within which about 40° diagonally is occlusion-enabled, and has an angular resolution of 1.0 arc minutes (comparable to a 20/20 vision) and a dynamic range greater than 100:1. We conclude the paper with a quantitative comparison of the key optical performance such as modulation transfer function, image contrast, and color rendering accuracy of our OCOST-HMD system with and without occlusion enabled for various lighting environments.
- Published
- 2022
- Full Text
- View/download PDF
37. Use Virtual Reality to Enhance Intercultural Sensitivity: A Randomised Parallel Longitudinal Study.
- Author
-
Li C, Lo Kon AL, and Shing Ip HH
- Subjects
- Male, Female, Humans, Longitudinal Studies, Emotions, Feedback, Computer Graphics, Virtual Reality
- Abstract
Prior studies suggest that emotional empathy is one of the components of intercultural sensitivity - the affective dimension under the concept of intercultural communication competence. Based on existing theories and findings, this paper reports a randomised parallel longitudinal study investigating the use of virtual reality (VR) exposure to enhance intercultural sensitivity. A total of 80 participants (36 females and 44 males) joined the study and were included in the data analysis. The participants were randomly assigned to the VR group, the video group, and the control group. Their intercultural sensitivity was measured three times: one week before the exposure (T1), right after the exposure (T2), and three weeks after the exposure (T3). The results suggested that (1) the intercultural sensitivity of the VR group was significantly enhanced in both within-subject comparisons and between-subject comparisons, (2) there were no significant differences in intercultural sensitivity between the VR group and the video group at T2, but the VR group retained the enhancement better at T3, and (3) the sense of presence and emotional empathy well predicted the change in intercultural sensitivity of the VR group. The results, together with the participants' feedback and comments, provide new insights into the practice of using VR for intercultural sensitivity training and encourage future research on exploring the contributing factors of the results.
- Published
- 2022
- Full Text
- View/download PDF
38. Making Resets away from Targets: POI aware Redirected Walking.
- Author
-
Xu SZ, Liu TQ, Liu JH, Zollmann S, and Zhang SH
- Subjects
- Walking, Computer Simulation, Environment, Computer Graphics, User-Computer Interface
- Abstract
Rapidly developing Redirected Walking (RDW) technologies have enabled VR applications to immerse users in large virtual environments (VE) while actually walking in relatively small physical environments (PE). When an unavoidable collision emerges in a PE, the RDW controller suspends the user's immersive experience and resets the user to a new direction in the PE. Existing RDW methods mainly aim to reduce the number of resets. However, from the perspective of the user experience, when users are about to reach a point of interest (POI) in a VE, reset interruptions are more likely to have an impact on user experience. In this paper, we propose a new RDW method, aiming to keep resets occurring at a longer distance from the virtual target, as well as to reduce the number of resets. Simulation experiments and real user studies demonstrate that our method outperforms state-of-the-art RDW methods in the number of resets and dramatically increases the distance between the reset locations and the virtual targets.
- Published
- 2022
- Full Text
- View/download PDF
39. Quantifying the Effects of Working in VR for One Week.
- Author
-
Biener V, Kalamkar S, Nouri N, Ofek E, Pahud M, Dudley JJ, Hu J, Kristensson PO, Weerasinghe M, Pucihar KC, Kljun M, Streuber S, and Grubert J
- Subjects
- Humans, User-Computer Interface, Computer Graphics, Virtual Reality
- Abstract
Virtual Reality (VR) provides new possibilities for modern knowledge work. However, the potential advantages of virtual work environments can only be used if it is feasible to work in them for an extended period of time. Until now, there have been limited studies of long-term effects when working in VR. This paper addresses the need for understanding such long-term effects. Specifically, we report on a comparative study in which participants were working in VR for an entire week (five days, eight hours each day) as well as in a baseline physical desktop environment. This study aims to quantify the effects of exchanging a desktop-based work environment with a VR-based environment. Hence, during this study, we do not present the participants with the best possible VR system but rather a setup delivering a comparable experience to working in the physical desktop environment. The study reveals that, as expected, VR results in significantly worse ratings across most measures. Among other results, we found concerning levels of simulator sickness and below-average usability ratings, and two participants dropped out on the first day of using VR due to migraine, nausea, and anxiety. Nevertheless, there is some indication that participants gradually overcame negative first impressions and initial discomfort. Overall, this study helps lay the groundwork for subsequent research by clearly highlighting current shortcomings and identifying opportunities for improving the experience of working in VR.
- Published
- 2022
- Full Text
- View/download PDF
40. Traces in Virtual Environments: A Framework and Exploration to Conceptualize the Design of Social Virtual Environments.
- Author
-
Hirsch L, George C, and Butz A
- Subjects
- Humans, Environment, User-Computer Interface, Computer Graphics, Emotions
- Abstract
Creating social Virtual Environments (VEs) is an ongoing challenge. Traces of prior human interactions, or traces of use, are used in Physical Environments (PEs) to create more meaningful relationships with the PE and the people within it. In this paper, we explore how the concept of traces of use can be transferred from PEs to VEs to increase known success factors for social VEs, such as increased social presence. First, we introduce a conceptualization and discussion ($N=4$ expert interviews) of a "Traces in VEs" framework. Second, we evaluate the framework in two lab studies ($N=46$ in total), exploring the effect of traces (i) in VE vs. PE and (ii) on social presence. Our findings confirm that traces increase the feeling of social presence. However, their meaning may differ depending on the environment. Our framework offers a structured overview of relevant components and relationships that need to be considered when designing meaningful user experiences in VEs using traces. Thus, our work is valuable for practitioners and researchers who want to systematically create social VEs.
- Published
- 2022
- Full Text
- View/download PDF
41. PowerNet: Learning-Based Real-Time Power-Budget Rendering.
- Author
-
Zhang Y, Wang R, Huo Y, Hua W, and Bao H
- Subjects
- Neural Networks, Computer, Smartphone, Algorithms, Computer Graphics
- Abstract
With the prevalence of embedded GPUs on mobile devices, power-efficient rendering has become a widespread concern for graphics applications. Reducing the power consumption of rendering applications is critical for extending battery life. In this paper, we present a new real-time power-budget rendering system to meet this need by selecting the optimal rendering settings that maximize visual quality for each frame under a given power budget. Our method utilizes two independent neural networks, trained entirely on synthesized datasets, to predict power consumption and image quality under various workloads. This approach avoids time-consuming precomputation, periodic runtime refitting, and additional error computation. We evaluate the performance of the proposed framework on different platforms: two desktop PCs and two smartphones. Results show that compared to the previous state of the art, our system has less overhead and better flexibility. Existing rendering engines can integrate our system with negligible costs.
- Published
- 2022
- Full Text
- View/download PDF
42. Influence Maximization With Visual Analytics.
- Author
-
Arleo A, Didimo W, Liotta G, Miksch S, and Montecchiani F
- Subjects
- Algorithms, Humans, Social Networking, Stochastic Processes, Computer Graphics, Models, Theoretical
- Abstract
In social networks, individuals' decisions are strongly influenced by recommendations from their friends, acquaintances, and favorite renowned personalities. The popularity of online social networking platforms makes them the prime venues to advertise products and promote opinions. The Influence Maximization (IM) problem entails selecting a seed set of users that maximizes the influence spread, i.e., the expected number of users positively influenced by a stochastic diffusion process triggered by the seeds. Engineering and analyzing IM algorithms remains a difficult and demanding task due to the NP-hardness of the problem and the stochastic nature of the diffusion processes. Although several heuristics have been introduced, they often fail to provide enough information on how the network topology affects the diffusion process, precious insights that could help researchers improve their seed set selection. In this paper, we present VAIM, a visual analytics system that supports users in analyzing, evaluating, and comparing information diffusion processes determined by different IM algorithms. Furthermore, VAIM provides useful insights that the analyst can use to modify the seed set of an IM algorithm, so as to improve its influence spread. We assess our system by: (i) a qualitative evaluation based on a guided experiment with two domain experts on two different data sets; (ii) a quantitative estimation of the value of the proposed visualization through the ICE-T methodology by Wall et al. (IEEE TVCG - 2018). The twofold assessment indicates that VAIM effectively supports our target users in the visual analysis of the performance of IM algorithms.
- Published
- 2022
- Full Text
- View/download PDF
43. VAC-CNN: A Visual Analytics System for Comparative Studies of Deep Convolutional Neural Networks.
- Author
-
Xuan X, Zhang X, Kwon OH, and Ma KL
- Subjects
- Machine Learning, Computer Graphics, Neural Networks, Computer
- Abstract
The rapid development of Convolutional Neural Networks (CNNs) in recent years has triggered significant breakthroughs in many machine learning (ML) applications. The ability to understand and compare the various CNN models available is thus essential. The conventional approach of visualizing each model's quantitative features, such as classification accuracy and computational complexity, is not sufficient for a deeper understanding and comparison of the behaviors of different models. Moreover, most of the existing tools for assessing CNN behaviors only support comparison between two models and lack the flexibility of customizing the analysis tasks according to user needs. This paper presents a visual analytics system, VAC-CNN (Visual Analytics for Comparing CNNs), that supports the in-depth inspection of a single CNN model as well as comparative studies of two or more models. The ability to compare a larger number of (e.g., tens of) models especially distinguishes our system from previous ones. With carefully designed model visualization and explanation support, VAC-CNN facilitates a highly interactive workflow that promptly presents both quantitative and qualitative information at each analysis stage. We demonstrate VAC-CNN's effectiveness for assisting novice ML practitioners in evaluating and comparing multiple CNN models through two use cases and one preliminary evaluation study using the image classification tasks on the ImageNet dataset.
- Published
- 2022
- Full Text
- View/download PDF
44. The Impact of Embodiment and Avatar Sizing on Personal Space in Immersive Virtual Environments.
- Author
-
Buck LE, Chakraborty S, and Bodenheimer B
- Subjects
- Judgment, Computer Graphics, Personal Space
- Abstract
In this paper, we examine how embodiment and manipulation of a self-avatar's dimensions, specifically the arm length, affect users' judgments of the personal space around them in an immersive virtual environment. In the real world, personal space is the immediate space around the body in which physical interactions are possible. Personal space is increasingly studied in virtual environments because of its importance to social interactions. Here, we specifically look at two components of personal space, interpersonal and peripersonal space, and how they are affected by embodiment and the sizing of a self-avatar. We manipulated embodiment, hypothesizing that higher levels of embodiment would result in larger measures of interpersonal space and smaller measures of peripersonal space. Likewise, we manipulated the arm length of a self-avatar, hypothesizing that while interpersonal space would change with changing arm length, peripersonal space would not. We found that the representations of both interpersonal and peripersonal space change when the user experiences differing levels of embodiment, in accordance with our hypotheses, and that only interpersonal space was sensitive to changes in the dimensions of a self-avatar's arms. These findings provide increased understanding of the role of embodiment and self-avatars in the regulation of personal space, and provide foundations for improved design of social interaction in virtual environments.
- Published
- 2022
- Full Text
- View/download PDF
45. Validating Simulation-Based Evaluation of Redirected Walking Systems.
- Author
-
Azmandian M, Yahata R, Grechkin T, Thomas J, and Rosenberg ES
- Subjects
- Algorithms, Computer Simulation, Walking, Computer Graphics, User-Computer Interface
- Abstract
Developing effective strategies for redirected walking requires extensive evaluations across a variety of factors that influence performance. Because these large-scale experiments are often not practical with user studies, researchers have instead utilized simulations to systematically test different algorithm parameters, physical space configurations, and virtual walking paths. Although simulation offers an efficient way to evaluate redirected walking algorithms, it remains an open question whether this evaluation methodology is ecologically valid. In this paper, we investigate the interaction between locomotion behavior and redirection gains at a micro-level (across small path segments) and macro-level (across an entire experience). This examination involves analyzing data from real users and comparing algorithm performance metrics with a simulated user model. The results identify specific properties of user locomotion behavior that influence the application of redirected walking gains and resets. Overall, we found that the simulation provided a conservative estimate of the average performance with real users and observed that performance trends when comparing two redirected walking algorithms were preserved. In general, these results indicate that simulation is an empirically valid evaluation methodology for redirected walking algorithms.
- Published
- 2022
- Full Text
- View/download PDF
46. Online Projector Deblurring Using a Convolutional Neural Network.
- Author
-
Kageyama Y, Iwai D, and Sato K
- Subjects
- Artifacts, Computer Simulation, Humans, Neural Networks, Computer, Algorithms, Computer Graphics
- Abstract
Projector deblurring is an important technology for dynamic projection mapping (PM), where the distance between a projector and a projection surface changes in time. However, conventional projector deblurring techniques do not support dynamic PM because they need to project calibration patterns to estimate the amount of defocus blur each time the surface moves. We present a deep neural network that can compensate for defocus blur in dynamic PM. The primary contribution of this paper is a unique network structure that consists of an extractor and a generator. The extractor explicitly estimates a defocus blur map and a luminance attenuation map. These maps are then injected into the middle layers of the generator network that computes the compensation image. We also propose a pseudo-projection technique for synthesizing physically plausible training data, considering the geometric misregistration that potentially happens in actual PM systems. We conducted simulation and actual PM experiments and confirmed that: (1) the proposed network structure is more suitable than a simple, more general structure for projector deblurring; (2) the network trained with the proposed pseudo-projection technique can compensate projection images for defocus blur artifacts in dynamic PM; and (3) the network supports the translation speed of the surface movement within a certain range that covers normal human motions.
- Published
- 2022
- Full Text
- View/download PDF
47. Adaptive Redirection: A Context-Aware Redirected Walking Meta-Strategy.
- Author
-
Azmandian M, Yahata R, Grechkin T, and Rosenberg ES
- Subjects
- Computer Simulation, Humans, Locomotion, Walking, Computer Graphics, User-Computer Interface
- Abstract
Previous research has established redirected walking as a potential answer to exploring large virtual environments via natural locomotion within a limited physical space. However, much of the previous work has either focused on investigating human perception of redirected walking illusions or developing novel redirection techniques. In this paper, we take a broader look at the problem and formalize the concept of a complete redirected walking system. This work establishes the theoretical foundations for combining multiple redirection strategies into a unified framework known as adaptive redirection. This meta-strategy adapts based on the context, switching between a suite of strategies with a priori knowledge of their performance under the various circumstances. This paper also introduces a novel static planning strategy that optimizes gain parameters for a predetermined virtual path, known as the Combinatorially Optimized Pre-Planned Exploration Redirector (COPPER). We conducted a simulation-based experiment that demonstrates how adaptation rules can be determined empirically using machine learning, which involves partitioning the spectrum of contexts into regions according to the redirection strategy that performs best. Adaptive redirection provides a foundation for making redirected walking work in practice and can be extended to improve performance in the future as new techniques are integrated into the framework.
- Published
- 2022
- Full Text
- View/download PDF
48. Remote research on locomotion interfaces for virtual reality: Replication of a lab-based study on teleporting interfaces.
- Author
-
Kelly JW, Hoover M, Doty TA, Renner A, Zimmerman M, Knuth K, Cherep LA, and Gilbert SB
- Subjects
- Cues, Humans, Locomotion, Motion, Computer Graphics, Virtual Reality
- Abstract
The wide availability of consumer-oriented virtual reality (VR) equipment has enabled researchers to recruit existing VR owners to participate remotely using their own equipment. Yet, there are many differences between lab environments and home environments, as well as differences between participant samples recruited for lab studies and remote studies. This paper replicates a lab-based experiment on VR locomotion interfaces using a remote sample. Participants completed a triangle-completion task (travel two path legs, then point to the path origin) using their own VR equipment in a remote, unsupervised setting. Locomotion was accomplished using two versions of the teleporting interface varying in availability of rotational self-motion cues. The size of the traveled path and the size of the surrounding virtual environment were also manipulated. Results from remote participants largely mirrored lab results, with overall better performance when rotational self-motion cues were available. Some differences also occurred, including a tendency for remote participants to rely less on nearby landmarks, perhaps due to increased competence with using the teleporting interface to update self-location. This replication study provides insight for VR researchers on aspects of lab studies that may or may not replicate remotely.
- Published
- 2022
- Full Text
- View/download PDF
49. FeatureEnVi: Visual Analytics for Feature Engineering Using Stepwise Selection and Semi-Automatic Extraction Approaches.
- Author
-
Chatzimparmpas A, Martins RM, Kucher K, and Kerren A
- Subjects
- Algorithms, Computer Graphics, Machine Learning
- Abstract
The machine learning (ML) life cycle involves a series of iterative steps, from the effective gathering and preparation of the data-including complex feature engineering processes-to the presentation and improvement of results, with various algorithms to choose from in every step. Feature engineering in particular can be very beneficial for ML, leading to numerous improvements such as boosting the predictive results, decreasing computational times, reducing excessive noise, and increasing the transparency behind the decisions taken during the training. Despite that, while several visual analytics tools exist to monitor and control the different stages of the ML life cycle (especially those related to data and algorithms), feature engineering support remains inadequate. In this paper, we present FeatureEnVi, a visual analytics system specifically designed to assist with the feature engineering process. Our proposed system helps users to choose the most important feature, to transform the original features into powerful alternatives, and to experiment with different feature generation combinations. Additionally, data space slicing allows users to explore the impact of features on both local and global scales. FeatureEnVi utilizes multiple automatic feature selection techniques; furthermore, it visually guides users with statistical evidence about the influence of each feature (or subsets of features). The final outcome is the extraction of heavily engineered features, evaluated by multiple validation metrics. The usefulness and applicability of FeatureEnVi are demonstrated with two use cases and a case study. We also report feedback from interviews with two ML experts and a visualization researcher who assessed the effectiveness of our system.
- Published
- 2022
- Full Text
- View/download PDF
50. E-ffective: A Visual Analytic System for Exploring the Emotion and Effectiveness of Inspirational Speeches.
- Author
-
Maher K, Huang Z, Song J, Deng X, Lai YK, Ma C, Wang H, Liu YJ, and Wang H
- Subjects
- Emotions, Computer Graphics, Speech
- Abstract
What makes speeches effective has long been a subject for debate, and to this day there is broad controversy among public speaking experts about what factors make a speech effective as well as the roles of these factors in speeches. Moreover, there is a lack of quantitative analysis methods to help understand effective speaking strategies. In this paper, we propose E-ffective, a visual analytic system allowing speaking experts and novices to analyze both the role of speech factors and their contribution to effective speeches. From interviews with domain experts and investigation of the existing literature, we identified important factors to consider in inspirational speeches. We obtained the generated factors from multi-modal data and then related them to effectiveness data. Our system supports rapid understanding of critical factors in inspirational speeches, including the influence of emotions, by means of novel visualization methods and interaction. Two novel visualizations include E-spiral (which shows the emotional shifts in speeches in a visually compact way) and E-script (which connects speech content with key speech delivery information). In our evaluation, we studied the influence of our system on experts' domain knowledge about speech factors. We further studied the usability of the system with speaking novices and experts in assisting the analysis of inspirational speech effectiveness.
- Published
- 2022
- Full Text
- View/download PDF