7 results for "Zielasko D"
Search Results
2. Interactive 3D Force‐Directed Edge Bundling
- Author: Zielasko, D., Weyers, B., Hentschel, B., and Kuhlen, T. W.
- Published: 2016
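This entry lists no abstract, but the title names force-directed edge bundling, a well-known graph-drawing technique in which each edge is subdivided into control points that are iteratively pulled both toward their neighbours on the same edge and toward the corresponding points of other (compatible) edges. The Python sketch below illustrates that generic idea under simplifying assumptions; it is not the authors' interactive 3D implementation, and all names and parameter values are illustrative.

```python
import numpy as np

def bundle_edges(edges, subdivisions=16, iterations=50, step=0.02, stiffness=0.1):
    """Generic force-directed edge bundling sketch (not the paper's 3D variant).

    edges -- array of shape (n_edges, 2, 3): start and end point of each edge.
    Returns an array of shape (n_edges, subdivisions + 2, 3) of bundled polylines.
    """
    n = len(edges)
    t = np.linspace(0.0, 1.0, subdivisions + 2)  # parameter values, endpoints included
    # Initialise each edge as a straight polyline of control points.
    pts = (edges[:, 0, None, :] * (1 - t)[None, :, None]
           + edges[:, 1, None, :] * t[None, :, None])

    for _ in range(iterations):
        forces = np.zeros_like(pts)
        # Spring force pulling interior points toward their neighbours on the same edge.
        forces[:, 1:-1] += stiffness * (pts[:, :-2] + pts[:, 2:] - 2.0 * pts[:, 1:-1])
        # Attraction between corresponding control points of different edges
        # (simplified to the mean; a full implementation weights this by edge compatibility).
        mean_pts = pts.mean(axis=0, keepdims=True)
        forces[:, 1:-1] += (mean_pts[:, 1:-1] - pts[:, 1:-1]) / n
        pts[:, 1:-1] += step * forces[:, 1:-1]  # endpoints stay fixed

    return pts


if __name__ == "__main__":
    demo_edges = np.array([[[0, 0, 0], [10, 0, 0]],
                           [[0, 1, 0], [10, 1, 0]]], dtype=float)
    bundled = bundle_edges(demo_edges)
    print(bundled.shape)  # (2, 18, 3)
```

A full implementation would also weight the inter-edge attraction by an edge-compatibility measure and refine the subdivision over several cycles; the paper listed above additionally makes the bundling interactive and three-dimensional.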
3. Simple and Efficient? Evaluation of Transitions for Task-Driven Cross-Reality Experiences.
- Author: Feld N, Bimberg P, Weyers B, and Zielasko D
- Abstract
The inquiry into the impact of diverse transitions between cross-reality environments on user experience remains a compelling research endeavor. Existing work often offers fragmented perspectives on various techniques or confines itself to a singular segment of the reality-virtuality spectrum, be it virtual reality or augmented reality. This study embarks on bridging this knowledge gap by systematically assessing the effects of six prevalent transitions while users remain immersed in tasks spanning both virtual and physical domains. In particular, we investigate the effect of different transitions while the user is continuously engaged in a demanding task instead of purely focusing on a given transition. As a preliminary step, we evaluate these six transitions within the realm of pure virtual reality to establish a baseline. Our findings reveal a clear preference among participants for brief and efficient transitions in a task-driven experience, instead of transitions that prioritize interactivity and continuity. Subsequently, we extend our investigation into a cross-reality context, encompassing transitions between virtual and physical environments. Once again, our results underscore the prevailing preference for concise and effective transitions. Furthermore, our research offers intriguing insights about the potential mitigation of visual incoherence between virtual and augmented reality environments by utilizing different transitions.
- Published: 2024
4. Come Look at This: Supporting Fluent Transitions between Tightly and Loosely Coupled Collaboration in Social Virtual Reality.
- Author: Bimberg P, Zielasko D, Weyers B, Froehlich B, and Weissker T
- Abstract
Collaborative work in social virtual reality often requires an interplay of loosely coupled collaboration from different virtual locations and tightly coupled face-to-face collaboration. Without appropriate system mediation, however, transitioning between these phases requires high navigation and coordination efforts. In this paper, we present an interaction system that allows collaborators in virtual reality to seamlessly switch between different collaboration models known from related work. To this end, we present collaborators with functionalities that let them work on individual sub-tasks in different virtual locations, consult each other using asymmetric interaction patterns while keeping their current location, and temporarily or permanently join each other for face-to-face interaction. We evaluated our methods in a user study with 32 participants working in teams of two. Our quantitative results indicate that delegating the target selection process for a long-distance teleport significantly improves placement accuracy and decreases task load within the team. Our qualitative user feedback shows that our system can be applied to support flexible collaboration. In addition, the proposed interaction sequence received positive evaluations from teams with varying VR experiences.
- Published: 2024
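The abstract above describes an interplay of loosely and tightly coupled collaboration phases and, in particular, delegating the target selection for a long-distance teleport to a partner. The following is a minimal, hypothetical Python sketch of such a delegation handshake; the class, state, and method names are illustrative assumptions and not the authors' implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]


class DelegationState(Enum):
    IDLE = auto()
    AWAITING_TARGET = auto()   # navigator asked a partner to pick a spot
    AWAITING_CONFIRM = auto()  # partner proposed a target, navigator must confirm


@dataclass
class DelegatedTeleport:
    """Hypothetical handshake for delegating teleport target selection."""
    state: DelegationState = DelegationState.IDLE
    proposed_target: Optional[Vec3] = None

    def request_delegation(self) -> None:
        # Navigator hands target selection over to the partner.
        assert self.state is DelegationState.IDLE
        self.state = DelegationState.AWAITING_TARGET

    def propose_target(self, target: Vec3) -> None:
        # Partner points at a location; nothing moves yet.
        assert self.state is DelegationState.AWAITING_TARGET
        self.proposed_target = target
        self.state = DelegationState.AWAITING_CONFIRM

    def confirm(self) -> Vec3:
        # Navigator accepts; the group teleports to the proposed target.
        assert self.state is DelegationState.AWAITING_CONFIRM and self.proposed_target is not None
        target, self.proposed_target = self.proposed_target, None
        self.state = DelegationState.IDLE
        return target


if __name__ == "__main__":
    session = DelegatedTeleport()
    session.request_delegation()
    session.propose_target((12.0, 0.0, -3.5))
    print("Jump to:", session.confirm())
```

In the study summarized above, delegating the selection in this way improved placement accuracy and reduced task load within the team; the sketch only illustrates the handshake, not the networking or rendering involved.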
5. Sitting or Standing in VR: About Comfort, Conflicts, and Hazards.
- Author: Zielasko D, Riecke BE, Billinghurst M, Fiorentino M, and Johnsen K
- Abstract
This article examines the choices between sitting and standing in virtual reality (VR) experiences, addressing conflicts, challenges, and opportunities. It explores issues such as the risk of motion sickness in stationary users and virtual rotations, the formation of mental models, consistent authoring, affordances, and the integration of embodied interfaces for enhanced interactions. Furthermore, it delves into the significance of multisensory integration and the impact of postural mismatches on immersion and acceptance in VR. Ultimately, the article underscores the importance of aligning postural choices and embodied interfaces with the goals of VR applications, be it for entertainment or simulation, to enhance user experiences.
- Published: 2024
6. Integrating Continuous and Teleporting VR Locomotion into a Seamless 'HyperJump' Paradigm.
- Author: Adhikari A, Zielasko D, Aguilar I, Bretin A, Kruijff E, Heyde MV, and Riecke BE
- Abstract
Continuous locomotion in VR provides uninterrupted optical flow, which mimics real-world locomotion and supports path integration. However, optical flow limits the maximum speed and acceleration that can be used effectively without inducing cybersickness. In contrast, teleportation provides neither optical flow nor acceleration cues, and users can jump any distance without increasing cybersickness; however, teleportation cannot support continuous spatial updating and can increase disorientation. Thus, we designed 'HyperJump' in an attempt to merge the benefits of continuous locomotion and teleportation. HyperJump adds iterative jumps every half a second on top of the continuous movement and was hypothesized to facilitate faster travel without compromising spatial awareness/orientation (a minimal illustrative sketch follows this entry). In a user study, participants travelled around a naturalistic virtual city with and without HyperJump (equivalent maximum speed). They followed waypoints to new landmarks, stopped near them, and pointed back to all previously visited landmarks in random order. HyperJump was added to two continuous locomotion interfaces (controller- and leaning-based). Participants had better spatial awareness/orientation with the leaning-based interfaces than with the controller-based ones (assessed via rapid pointing). With HyperJump, participants travelled significantly faster while staying on the desired course and without impairing their spatial knowledge. This provides evidence that optical flow can be effectively limited such that it facilitates faster travel without compromising spatial orientation. In future design iterations, we plan to utilize audio-visual effects to support jumping metaphors that help users better anticipate and interpret jumps, and to use much larger virtual environments requiring faster speeds, where cybersickness will become increasingly prevalent and teleporting will thus become more important.
- Published: 2023
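The abstract above characterizes HyperJump as continuous locomotion with an additional discrete jump roughly every half second. Below is a minimal Python sketch of that idea under stated assumptions (fixed frame time, straight-line travel, assumed speed and jump-distance values); the function and parameter names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

JUMP_INTERVAL = 0.5      # seconds between discrete jumps (as described in the abstract)
CONTINUOUS_SPEED = 5.0   # m/s of smooth optical-flow motion (assumed value)
JUMP_DISTANCE = 10.0     # metres covered instantly by each jump (assumed value)


def hyperjump_step(position, direction, dt, time_since_jump):
    """Advance one frame: smooth motion plus a periodic instantaneous jump.

    position        -- current position as a length-3 array
    direction       -- unit travel direction
    dt              -- frame time in seconds
    time_since_jump -- seconds elapsed since the last discrete jump
    """
    # Continuous component: ordinary optical-flow motion, kept slow enough
    # to limit cybersickness.
    position = position + direction * CONTINUOUS_SPEED * dt

    # Discrete component: every JUMP_INTERVAL seconds, teleport forward.
    time_since_jump += dt
    if time_since_jump >= JUMP_INTERVAL:
        position = position + direction * JUMP_DISTANCE
        time_since_jump -= JUMP_INTERVAL

    return position, time_since_jump


if __name__ == "__main__":
    pos = np.zeros(3)
    fwd = np.array([0.0, 0.0, 1.0])
    t_since = 0.0
    for frame in range(90):                       # roughly one second at 90 fps
        pos, t_since = hyperjump_step(pos, fwd, 1.0 / 90.0, t_since)
    print("Position after ~1 s:", pos)            # smooth travel plus the periodic jumps
```

With these assumed numbers the effective travel speed is the continuous 5 m/s plus 10 m per 0.5 s jump, i.e. about 25 m/s, while the optical-flow component alone stays slow, which matches the rationale given in the abstract for faster travel without added cybersickness.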
7. Integrating Visualizations into Modeling NEST Simulations.
- Author: Nowke C, Zielasko D, Weyers B, Peyser A, Hentschel B, and Kuhlen TW
- Abstract
Modeling large-scale spiking neural networks that show realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks; however, solutions that help researchers understand the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge of integrating these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus, which requires switching the visualization tool that assists in the current problem domain; second, because simulations produce heterogeneous data, researchers want to relate these data in order to investigate them effectively. Since a monolithic application that processes and visualizes all data modalities and reflects all possible workflow combinations in a holistic way is most likely impossible to develop and maintain, a more feasible approach is a software architecture that offers specialized visualization tools which run simultaneously and can be linked together to reflect the current workflow. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into their modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow (a minimal illustrative sketch follows this entry). In this paper, we present this architecture and substantiate the usefulness of our approach with common use cases we encountered in our collaborative work.
- Published: 2015
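The abstract above argues for specialized visualization tools that run side by side and are linked semantically, rather than one monolithic application. The following is a minimal Python sketch of that linking idea, assuming a simple in-process publish/subscribe hub; all names are hypothetical and are not part of the authors' architecture or of NEST itself.

```python
from collections import defaultdict
from typing import Callable, Dict, List


class LinkHub:
    """Tiny publish/subscribe hub that semantically links visualization tools.

    Each tool subscribes to the event topics it understands (e.g. a neuron
    selection) and publishes events of its own, so independently running
    views stay in sync without being built into one monolithic application.
    """

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)


if __name__ == "__main__":
    hub = LinkHub()

    # A spike-raster view and a network-topology view both react to the
    # same "neuron_selected" event, so selecting in one highlights the other.
    hub.subscribe("neuron_selected",
                  lambda e: print(f"raster view: highlight neuron {e['gid']}"))
    hub.subscribe("neuron_selected",
                  lambda e: print(f"topology view: focus on neuron {e['gid']}"))

    hub.publish("neuron_selected", {"gid": 4711})
```

The point of the pattern, as argued in the abstract, is that each tool stays small and specialized, and new views can join the current workflow simply by subscribing to the topics they understand.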