2,436 results on '"Headset"'
Search Results
52. MIXED REALITY IN MEDICAL SIMULATION: A COMPREHENSIVE DESIGN METHODOLOGY
- Author
-
Alessandra Papetti, Michele Germani, Agnese Brunzini, and Erica Adrario
- Subjects
medicine.medical_specialty ,Computer science ,business.industry ,Headset ,Medical simulation ,Rachicentesis ,Field (computer science) ,Mixed reality ,User experience design ,Human–computer interaction ,medicine ,business ,Scenario design ,Design methods - Abstract
In the medical education field, highly sophisticated simulators and extended reality (XR) simulations allow trainees to practice complex procedures and acquire new knowledge and attitudes. XR is considered useful for the enhancement of healthcare education; however, several issues need further research. The main aim of this study is to define a comprehensive method to design and optimize every kind of simulator and simulation, integrating all the relevant elements of scenario design and prototype development. A complete framework for the design of any kind of advanced clinical simulation is proposed and has been applied to realize a mixed reality (MR) prototype for the simulation of rachicentesis (lumbar puncture). The purpose of the MR application is to immerse the trainee in a more realistic environment and to put him/her under pressure during the simulation, as in real practice. The application was tested with two different devices: the smartphone-based Vox Gear Plus headset and the Microsoft HoloLens. Eighteen sixth-year students of the Medicine and Surgery course were enrolled in the study. Results compare the user experience across the two devices and the simulation performance using the HoloLens.
- Published
- 2021
53. Evaluation of a Wearable AR Platform for Guiding Complex Craniotomies in Neurosurgery
- Author
-
Nicola Montemurro, Vincenzo Ferrari, Sara Condino, Ulrich W. Thomale, Nadia Cattari, Fabrizio Cutolo, and Renzo D'Amato
- Subjects
Adult ,Male ,Neuronavigation ,Computer science ,medicine.medical_treatment ,Headset ,0206 medical engineering ,Neurosurgery ,Biomedical Engineering ,Margin of error ,Wearable computer ,Optical head-mounted display ,02 engineering and technology ,Augmented reality ,Augmented reality accuracy ,Computer assisted surgery ,Craniotomy ,Head mounted display ,Wearable Electronic Devices ,03 medical and health sciences ,0302 clinical medicine ,medicine ,Humans ,Computer vision ,Computer-assisted surgery ,Augmented Reality ,Phantoms, Imaging ,business.industry ,Skull ,Middle Aged ,Magnetic Resonance Imaging ,020601 biomedical engineering ,Visualization ,Female ,Artificial intelligence ,business ,030217 neurology & neurosurgery - Abstract
Today, neuronavigation is widely used in daily clinical routine to perform safe and efficient surgery. Augmented reality (AR) interfaces can provide anatomical models and preoperative planning contextually blended with the real surgical scenario, overcoming the limitations of traditional neuronavigators. This study aims to demonstrate the reliability of a new-concept AR headset in navigating complex craniotomies, and to prove the efficacy of a patient-specific, template-based methodology for fast, non-invasive, and fully automatic planning-to-patient registration. The AR platform's navigation performance was assessed with an in-vitro study whose goal was twofold: to measure the real-to-virtual 3D target visualization error (TVE), and to assess navigation accuracy through a user study in which 10 subjects traced a complex craniotomy. The feasibility of the template-based registration was preliminarily tested on a volunteer. The TVE mean and standard deviation were 1.3 and 0.6 mm. The results of the user study, over 30 traced craniotomies, showed that 97% of the trajectory length was traced within an error margin of 1.5 mm, and 92% within a margin of 1 mm. The in-vivo test confirmed the feasibility and reliability of the patient-specific registration template. The proposed AR headset allows ergonomic and intuitive use of the preoperative plan, and it can represent a valid option to support neurosurgical tasks.
- Published
- 2021
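The trajectory-accuracy metric reported above (the share of traced craniotomy length falling within a 1.5 mm or 1 mm margin of the planned line) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the polyline representation and all function names are assumptions.

```python
import numpy as np

def point_to_polyline(p, poly):
    """Shortest distance from point p to a polyline given as an (M, D) array."""
    best = np.inf
    for a, b in zip(poly[:-1], poly[1:]):
        ab = b - a
        # Project p onto segment ab, clamping to the segment's endpoints.
        t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
        best = min(best, np.linalg.norm(p - (a + t * ab)))
    return best

def fraction_within_margin(trace, plan, margin):
    """Fraction of the traced length lying within `margin` of the planned
    line: a trace segment counts only if both of its endpoints are within."""
    trace, plan = np.asarray(trace, float), np.asarray(plan, float)
    d = np.array([point_to_polyline(p, plan) for p in trace])
    seg_len = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    ok = (d[:-1] <= margin) & (d[1:] <= margin)
    total = seg_len.sum()
    return float(seg_len[ok].sum() / total) if total else 0.0
```

With traced and planned polylines expressed in millimetres, `fraction_within_margin(trace, plan, 1.5)` yields the kind of percentage the study reports.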
54. Haptic-enabled collaborative learning in virtual reality for schools
- Author
-
Mary Webb, Megan Tracey, Natasha E. Barrett, Ozan Tokatli, Faustina Hwang, Chris I. Jones, William S. Harwin, and Ros Johnson
- Subjects
Virtual model ,Computer science ,Headset ,05 social sciences ,Educational technology ,050301 education ,Collaborative learning ,Library and Information Sciences ,Virtual reality ,050105 experimental psychology ,Education ,Identification (information) ,Human–computer interaction ,Immersion (virtual reality) ,0501 psychology and cognitive sciences ,0503 education ,Haptic technology - Abstract
This paper reports on a study which designed and developed a multi-fingered haptic interface in conjunction with a three-dimensional (3D) virtual model of a section of the cell membrane in order to enable students to work collaboratively to learn cell biology. Furthermore, the study investigated whether the addition of haptic feedback to the 3D virtual reality (VR) simulation affected learning of key concepts in nanoscale cell biology for students aged 12 to 13. The haptic interface was designed so that the haptic feedback could be turned on or switched off. Students (N = 64), in two secondary schools, worked in pairs on activities designed to support learning of specific difficult concepts. Findings from observation of the activities and from interviews revealed that students believed that being immersed in the 3D VR environment, being able to feel structures and movements within the model, and working collaboratively assisted their learning. More specifically, the pilot/co-pilot model that we developed was successful in enabling collaborative learning and reducing the isolating effects of immersion with a 3D headset. Results of pre- and post-tests of conceptual knowledge showed significant knowledge gains, but the addition of haptic feedback did not affect the knowledge gains significantly. The study enabled identification of important issues to consider when designing and using haptic-enabled 3D VR environments for collaborative learning.
- Published
- 2021
55. Psychophysiological changes during workspace virtualization
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,Brain activity and meditation ,Headset ,05 social sciences ,Neurological examination ,Electroencephalography ,Virtual reality ,Audiology ,Attention span ,050105 experimental psychology ,03 medical and health sciences ,0302 clinical medicine ,Delta Rhythm ,medicine ,0501 psychology and cognitive sciences ,Psychology ,030217 neurology & neurosurgery ,Cognitive load - Abstract
Aim. To estimate psychophysiological changes during workspace virtualization. Materials and Methods. We evaluated the psychophysiological profile of 10 healthy right-handed males aged 25 to 45 years before, during, and after working in a virtual reality (VR) headset. All participants had higher education, normal or corrected-to-normal vision, and were experienced computer users. Psychometric testing included a neurological examination, assessment of functional and feedback-related brain activity (reaction time, errors, and missed signals) and attention span, quantification of processed symbols in the 1st and 4th minutes of the Bourdon test, analysis of short-term memory (memorization of 10 words, 10 numbers, and 10 meaningless syllables) and spatial perception, and multi-channel electroencephalography recording at rest. Results. Deterioration of psychometric indicators after a cognitive load in a VR headset was documented only in the most difficult tasks: the number of errors increased by 93% in the brain performance test and by 65% in the attention distribution test. The analysis of electroencephalography data showed that delta rhythm and theta1 rhythm activity decreased by 28% and 13%, respectively, after working in a VR headset as compared to baseline values, while alpha1 rhythm activity increased by 96%. The observed electroencephalography changes probably correspond to patterns of brain activation associated with cognitive load and the resulting fatigue. Conclusions. We developed a suitable approach to psychometric testing before and after working in a VR headset, which demonstrated general tolerance of, and acceptable subjective difficulty with, the VR load.
- Published
- 2021
56. Virtual reality-based simulation improves gynecologic brachytherapy proficiency, engagement, and trainee self-confidence
- Author
-
Jacob W. Trotter, Nishant K. Shah, Neil K. Taunk, Shibu Anamalayil, Taoran Li, and Emily Hubley
- Subjects
medicine.medical_specialty ,Headset ,medicine.medical_treatment ,media_common.quotation_subject ,Brachytherapy ,Virtual reality ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Humans ,Medicine ,Computer Simulation ,Radiology, Nuclear Medicine and imaging ,Medical physics ,Curriculum ,Radiation oncologist ,media_common ,Cervical cancer ,business.industry ,Virtual Reality ,Internship and Residency ,Procedural knowledge ,medicine.disease ,Self-confidence ,Oncology ,030220 oncology & carcinogenesis ,Female ,Clinical Competence ,business - Abstract
PURPOSE Intracavitary brachytherapy is critical in the treatment of cervical cancer, providing the highest rates of local control and survival. Only about 50% of graduating residents express confidence in their ability to develop a brachytherapy practice, citing caseload as the greatest barrier. We hypothesize that virtual reality (VR)-based intracavitary brachytherapy simulation will improve resident confidence, engagement, and proficiency. METHODS We created a VR training video of an intracavitary brachytherapy case performed by a board-certified gynecologic radiation oncologist and a medical physicist. Residents performed a timed intracavitary procedure on a pelvic simulator before and after viewing the VR simulation module on a commercially available VR headset, while five objective measures of implant quality were recorded. The residents completed pre- and postsimulation questionnaires assessing self-confidence, procedural knowledge, and perceived usefulness of the session. RESULTS Fourteen residents, including five postgraduate year (PGY)-2, three PGY-3, four PGY-4, and two PGY-5, participated in the VR curriculum. There were improvements in resident confidence (1.43 to 3.36) and in subjective technical skill in assembly (1.57 to 3.50) and insertion (1.64 to 3.21) after the simulation. Average implant time decreased from 5:51 to 3:34 (p = 0.0016). Median technical proficiency increased from 4/5 to 5/5. Overall, the residents found VR to be a useful learning tool and indicated increased willingness to perform the procedure again. CONCLUSIONS VR intracavitary brachytherapy simulation improves residents' self-confidence, subjective and objective technical skills, and willingness to perform brachytherapy. Furthermore, VR is an immersive, engaging, time-efficient, inexpensive, and enjoyable tool that promotes residents' interest in brachytherapy.
- Published
- 2021
57. Aplikasi Tes Buta Warna Berbasis Virtual Reality (A Virtual Reality-Based Color Blindness Test Application)
- Author
-
Subari Subari and Moh Alek Mustofa
- Subjects
Color vision ,Computer science ,Human–computer interaction ,Headset ,Table (database) ,Virtual reality ,Android (operating system) ,Popularity ,Method test ,Test (assessment) - Abstract
Health is a precious gift from God, especially the health of the eyes. One of the most common conditions in the population is color blindness, in which a person's eyes cannot distinguish certain colors that normal eyes can. Several test methods are used to detect color blindness, among them the Ishihara method. The Ishihara test detects color-perception deficiencies using special plates (pseudoisochromatic sheets) composed of dots of varying color density that can be seen by normal eyes but not by eyes with a partial color deficiency; the method can also determine the type of color blindness, which falls into two categories: total and partial. Technological development in the current era is unstoppable, and many new technologies have emerged, one of which is virtual reality. Virtual reality technology provides a distinct experience by bringing a person into a virtual world that resembles the real one. It is already widely used in areas ranging from healthcare and regional mapping to building architecture, and many smartphone games support it. Given the growing popularity of smartphones, a virtual reality-based color blindness test application is expected to help everyone who needs it. Operating the application requires an additional tool in the form of a virtual reality headset. The application can display 3D objects, which makes its use more engaging.
- Published
- 2021
58. Eye Tracking Interaction on Unmodified Mobile VR Headsets Using the Selfie Camera
- Author
-
George Alex Koulieris, Katerina Mania, and Panagiotis Drakopoulos
- Subjects
General Computer Science ,Panorama ,Computer science ,Headset ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Experimental and Cognitive Psychology ,02 engineering and technology ,Theoretical Computer Science ,law.invention ,law ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,0501 psychology and cognitive sciences ,Computer vision ,Input method ,Iris (anatomy) ,050107 human factors ,Eye tracking ,Pixel ,business.industry ,05 social sciences ,020207 software engineering ,Mobile VR ,Lens (optics) ,medicine.anatomical_structure ,Artificial intelligence ,business ,Mobile device - Abstract
Input methods for interaction in smartphone-based virtual and mixed reality (VR/MR) are currently based on uncomfortable head tracking controlling a pointer on the screen. User fixations are a fast and natural input method for VR/MR interaction. Previously, eye tracking in mobile VR suffered from low accuracy, long processing time, and the need for hardware add-ons such as anti-reflective lens coating and infrared emitters. We present an innovative mobile VR eye tracking methodology utilizing only the eye images from the front-facing (selfie) camera through the headset's lens, without any modifications. Our system first enhances the low-contrast, poorly lit eye images by applying a pipeline of customised low-level image enhancements suppressing obtrusive lens reflections. We then propose an iris region-of-interest detection algorithm that is run only once; this increases the iris tracking speed by reducing the iris search space on mobile devices. We iteratively fit a customised geometric model to the iris to refine its coordinates. We display a thin bezel of light at the top edge of the screen for constant illumination. A confidence metric calculates the probability of successful iris detection. Calibration and linear gaze mapping between the estimated iris centroid and physical pixels on the screen result in low-latency, real-time iris tracking. A formal study confirmed that our system's accuracy is similar to eye trackers in commercial VR headsets in the central part of the headset's field-of-view. In a VR game, gaze-driven user completion time was as fast as with head-tracked interaction, without the need for consecutive head motions. In a VR panorama viewer, users could successfully switch between panoramas using gaze. Published in: ACM Transactions on Applied Perception
- Published
- 2021
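The final calibration step described above, a linear gaze mapping from the estimated iris centroid to physical screen pixels, can be sketched as a least-squares affine fit. This is a minimal illustration of the idea, not the authors' implementation; the function names and the exact affine form are assumptions.

```python
import numpy as np

def fit_gaze_mapping(iris_xy, screen_uv):
    """Fit an affine map [u, v] = A @ [x, y, 1] by least squares.

    iris_xy:   (N, 2) iris-centroid coordinates from calibration frames
    screen_uv: (N, 2) known on-screen target positions (pixels)
    Returns a (2, 3) matrix A.
    """
    X = np.hstack([iris_xy, np.ones((len(iris_xy), 1))])  # (N, 3) homogeneous
    A, *_ = np.linalg.lstsq(X, screen_uv, rcond=None)     # solves X @ A = uv
    return A.T                                            # (2, 3)

def map_gaze(A, iris_xy):
    """Map one iris centroid to a screen pixel with the fitted model."""
    x, y = iris_xy
    return A @ np.array([x, y, 1.0])
```

In use, the wearer would fixate a handful of known on-screen targets to collect `(iris_xy, screen_uv)` pairs; each subsequent iris centroid is then mapped to a gaze pixel with a single matrix-vector product, which keeps the per-frame cost low.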
59. Evaluation of the impact of different levels of self-representation and body tracking on the sense of presence and embodiment in immersive VR
- Author
-
Guilherme Goncalves, Luís Barbosa, Maximino Bessa, Miguel Melo, and José Vasconcelos-Raposo
- Subjects
Computer science ,media_common.quotation_subject ,Headset ,Sense of presence ,Fidelity ,02 engineering and technology ,Virtual reality ,computer.software_genre ,Tracking (particle physics) ,0202 electrical engineering, electronic engineering, information engineering ,Overhead (computing) ,0501 psychology and cognitive sciences ,Computer vision ,050107 human factors ,media_common ,Avatar ,business.industry ,05 social sciences ,020207 software engineering ,Computer Graphics and Computer-Aided Design ,Human-Computer Interaction ,Virtual machine ,Artificial intelligence ,business ,human activities ,computer ,Software - Abstract
The main goal of this paper is to investigate the effect of different types of self-representation through floating members (hands vs. hands + feet), virtual full body (hands + feet vs. full-body avatar), walking fidelity (static feet, simulated walking, real walking), and number of tracking points used (head + hands, head + hands + feet, head + hands + feet + hip) on the sense of presence and embodiment, measured through questionnaires. The sample consisted of 98 participants divided into a total of six conditions in a between-subjects design. The HTC Vive headset, controllers, and trackers were used to perform the experiment. Users were tasked to find a series of hidden objects in a virtual environment and place them in a travel bag. We concluded that (1) the addition of feet to floating hands can impair the experienced realism (p = 0.039), (2) both floating members and full-body avatars can be used without affecting presence and embodiment (p > 0.05) as long as there is the same level of control over the self-representation, (3) simulated-walking scores of presence and embodiment were similar to those for static feet and real walking tracking data (p > 0.05), and (4) adding hip tracking on top of head, hand, and feet tracking (when using a full-body avatar) allows for a more realistic response to stimuli (p = 0.002) and a higher overall feeling of embodiment (p = 0.023).
- Published
- 2021
60. GestOnHMD: Enabling Gesture-based Interaction on Low-cost VR Head-Mounted Display
- Author
-
Taizhou Chen, Kening Zhu, Lantian Xu, and Xianshan Xu
- Subjects
Adult ,Male ,business.product_category ,Computer science ,Headset ,Optical head-mounted display ,02 engineering and technology ,Virtual reality ,Young Adult ,Deep Learning ,Human–computer interaction ,Computer Graphics ,0202 electrical engineering, electronic engineering, information engineering ,Humans ,Google Cardboard ,Headphones ,Gestures ,Virtual Reality ,Signal Processing, Computer-Assisted ,020207 software engineering ,Acoustics ,Equipment Design ,Interaction technique ,Hand ,Computer Graphics and Computer-Aided Design ,Gesture recognition ,Signal Processing ,Female ,Smart Glasses ,Smartphone ,Computer Vision and Pattern Recognition ,business ,Software ,Gesture - Abstract
Low-cost virtual-reality (VR) head-mounted displays (HMDs) that integrate smartphones have brought immersive VR to the masses and increased the ubiquity of VR. However, these systems are often limited by their poor interactivity. In this paper, we present GestOnHMD, a gesture-based interaction technique and gesture-classification pipeline that leverages the stereo microphones in a commodity smartphone to detect tapping and scratching gestures on the front, left, and right surfaces of a mobile VR headset. Taking the Google Cardboard as our target headset, we first conducted a gesture-elicitation study to generate 150 user-defined gestures, 50 on each surface. We then selected 15, 9, and 9 gestures for the front, left, and right surfaces, respectively, based on user preferences and signal detectability. We constructed a data set containing the acoustic signals of 18 users performing these on-surface gestures and trained a deep-learning classification pipeline for gesture detection and recognition. Lastly, with a real-time demonstration of GestOnHMD, we conducted a series of online participatory-design sessions to collect a set of user-defined gesture-referent mappings that could potentially benefit from GestOnHMD.
- Published
- 2021
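A pipeline of this shape, stereo acoustic features followed by a classifier over on-surface gestures, can be sketched in miniature as below. This is not the paper's deep-learning pipeline: the hand-rolled band-energy features and nearest-centroid classifier are simplified stand-ins chosen to keep the sketch self-contained, and all names are assumptions. The key intuition carried over is that the left/right level difference between the two microphones helps separate gestures on the left surface from those on the right.

```python
import numpy as np

def band_energies(signal, n_fft=512, hop=256, n_bands=8):
    """Crude spectral feature: mean log energy in n_bands frequency bands."""
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))          # (frames, bins)
    bands = np.array_split(spec, n_bands, axis=1)
    return np.array([np.log(b.mean() + 1e-9) for b in bands])

def stereo_features(left, right):
    """Concatenate per-channel band energies; the inter-channel level
    difference is what separates left- from right-surface gestures."""
    return np.concatenate([band_energies(left), band_energies(right)])

class NearestCentroid:
    """Tiny classifier: assign to the class with the closest mean feature."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: X[np.array(y) == c].mean(axis=0)
                          for c in self.labels}
        return self
    def predict(self, x):
        return min(self.labels,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))
```

A real system would replace the centroid classifier with the trained deep network and add an explicit gesture-detection stage before recognition.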
61. Augmented reality in the operating room: a clinical feasibility study
- Author
-
Anne-Gita Scheibler, David E. Bauer, Cyrill Dennler, Philipp Fürnstahl, Mazda Farshad, José Miguel Spirig, Tobias Götschi, University of Zurich, and Bauer, David E
- Subjects
Operating Rooms ,medicine.medical_specialty ,Hololens ,Image quality ,2745 Rheumatology ,Headset ,610 Medicine & health ,Voice command device ,Augmented reality ,Diseases of the musculoskeletal system ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,2732 Orthopedics and Sports Medicine ,0302 clinical medicine ,Rheumatology ,medicine ,Humans ,Orthopedics and Sports Medicine ,Medical physics ,Point (typography) ,business.industry ,Orthopedic Surgical Procedure ,Navigation ,Osteotomy ,Orthopedics ,Surgery, Computer-Assisted ,RC925-935 ,Proof of concept ,Orthopedic surgery ,Feasibility Studies ,10046 Balgrist University Hospital, Swiss Spinal Cord Injury Center ,business ,030217 neurology & neurosurgery ,Research Article - Abstract
Background Augmented reality (AR) is a rapidly emerging technology finding growing acceptance and application in different fields of surgery. Various studies have evaluated the precision and accuracy of AR-guided navigation. This study investigates the feasibility of a commercially available AR head-mounted device during orthopedic surgery. Methods Thirteen orthopedic surgeons from a Swiss university clinic performed 25 orthopedic surgical procedures wearing a holographic AR headset (HoloLens, Microsoft, Redmond, WA, USA) providing complementary three-dimensional, patient-specific anatomic information. The surgeons' experience of using the device during surgery was recorded using a standardized 58-item questionnaire grading different aspects on a 100-point scale with anchor statements. Results Surgeons were generally satisfied with image quality (85 ± 17 points) and the accuracy of the virtual objects (84 ± 19 points). Wearing the AR device was rated as fairly comfortable (79 ± 13 points). Functionality of voice commands (68 ± 20 points) and gestures (66 ± 20 points) was rated less favorably. The greatest potential of the AR device was seen in the surgical correction of deformities (87 ± 15 points). Overall, surgeons were satisfied with the application of this novel technology (78 ± 20 points) and requested future access to it (75 ± 22 points). Conclusion AR is a rapidly evolving technology with large potential in different surgical settings, offering the opportunity to provide a compact, low-cost alternative requiring a minimum of infrastructure compared to conventional navigation systems. While surgeons were generally satisfied with the image quality of the head-mounted AR device tested here, some technical and ergonomic shortcomings were pointed out. This study serves as a proof of concept for the use of an AR head-mounted device in a real-world sterile setting in orthopedic surgery.
- Published
- 2021
62. Quaint Devices: A Map of Headphone and Headset Plays∗
- Author
-
Robert Quillen Camp
- Subjects
business.product_category ,Visual Arts and Performing Arts ,InformationSystems_INFORMATIONINTERFACESANDPRESENTATION(e.g.,HCI) ,Computer science ,Social distance ,Headset ,Social proximity ,Window (computing) ,Virtual reality ,Popularity ,Human–computer interaction ,Zoom ,business ,Music ,Headphones - Abstract
The article provides a guide to using headphones and virtual reality headsets to locate and dislocate theatre audiences, in times of social proximity as well as social distance. It discusses the popularity of the Zoom chat window and the exuberance with which audience members sometimes participate in discussion simultaneous with a live performance.
- Published
- 2021
63. May I Remain Seated: A Pilot Study on the Impact of Reducing Room-Scale Trainings to Seated Conditions for Long Procedural Virtual Reality Trainings
- Author
-
Tehreem, Yusra, Fracaro, Sofia Garcia, Gallagher, Timothy, Toyoda, Ryo, Bernaerts, Kristel, Glassey, Jarka, Abegao, Fernando Russo, Wachsmuth, Sven, Wilk, Michael, Pfeiffer, Thies, Leerstoel Kester, Education and Learning: Development in Interaction, Leerstoel Kester, and Education and Learning: Development in Interaction
- Subjects
headset ,operator training ,Computer Networks and Communications ,virtual reality ,cybersickness ,procedural skills ,Electrical and Electronic Engineering ,chemical industry ,seated VR - Abstract
Although modern consumer-level head-mounted displays provide high-quality room-scale tracking, and thus support a high level of immersion and presence, there are application contexts in which constraining oneself to seated set-ups is necessary; classroom-sized training groups are one highly relevant example. However, what is lost when constraining cybernauts to a stationary seated physical space? What is the impact on immersion, presence, and cybersickness, and what implications does this have for training success? Can a careful design for seated virtual reality (VR) amend some of these aspects? In this line of research, the study provides data on a comparison between standing and seated long (50–60 min) procedural VR training sessions of chemical operators in a realistic and lengthy chemical procedure (a combination of digital and physical actions) inside a large three-floor virtual chemical plant. In addition, a VR training framework based on Maslow's hierarchy of needs (MHN) is proposed to systematically analyze the needs in VR environments. In the first of a series of studies, the physiological and safety needs of the MHN are evaluated in the seated and standing groups in the form of cybersickness, usability, and user experience. The results (n = 32, real personnel of a chemical plant) show no statistically significant differences between the seated and standing groups. There were low levels of cybersickness along with good scores for usability and user experience in both conditions. These results imply that the seated condition does not impose significant problems that might hinder its application in classroom training. A follow-up study with a larger sample will provide a more detailed analysis of differences in experienced presence and learning success.
- Published
- 2022
64. VREDNOVANJE PRIKAZA ENTERIJERSKIH SCENA PUTEM MOBILNIH UREĐAJA (Evaluation of the Display of Interior Scenes via Mobile Devices)
- Author
-
Nikola Veselinović and Marko Jovanović
- Subjects
Human–computer interaction ,Computer science ,Headset ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Relevance (information retrieval) ,Mobile device - Abstract
The aim of the research is to examine the relevance of a headset into which a mobile device is integrated for displaying interior scenes. The research was conducted through a direct comparison of the results obtained by displaying the scenes on the headset, on mobile devices, and on a computer. The scenes were successfully displayed and realized in all three ways. The conclusion is that each approach has certain advantages and disadvantages; however, as the goal was to examine the relevance of the headset, the results are satisfactory and this way of presenting scenes is highly relevant.
- Published
- 2021
65. Correlation between Situational Awareness and EEG signals
- Author
-
Bani Anvari, Helge A. Wurdemann, Jan Luca Kästle, and Jakub Krol
- Subjects
0209 industrial biotechnology ,medicine.diagnostic_test ,Computer science ,Cognitive Neuroscience ,Speech recognition ,Headset ,Cognition ,02 engineering and technology ,Electroencephalography ,Computer Science Applications ,Temporal lobe ,Data set ,Correlation ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,020201 artificial intelligence & image processing ,Brain–computer interface ,Test data - Abstract
An important aspect in safety-critical domains is Situational Awareness (SA), where operators consolidate data into an understanding of the situation that must be updated dynamically as the situation changes over time. Among existing measures of SA, only physiological measures can assess the cognitive processes associated with SA in real time. Some studies have shown promise in detecting cognitive states associated with SA in complex tasks using brain signals (e.g., the electroencephalogram, EEG). In this paper, an analytical methodology is proposed to identify EEG signatures associated with SA in various regions of the brain. A new data set from 32 participants completing the SA test in PEBL was collected using a 32-channel dry-EEG headset. The proposed method was tested on the new data set, and a correlation was identified between SA and the frequency bands β (12–30 Hz) and γ (30–45 Hz). Activation of neurons in the left and right hemispheres of the parietal and temporal lobes was also observed; these regions are responsible for visuo-spatial ability and for memory and reasoning tasks. Among the presented results, the highest accuracy achieved on test data was 67%.
- Published
- 2021
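The band-based analysis described above rests on computing signal power within the β (12–30 Hz) and γ (30–45 Hz) ranges of an EEG channel. A minimal sketch using a plain periodogram follows; real EEG pipelines typically add Welch averaging and artifact rejection, and all names here are assumptions.

```python
import numpy as np

BANDS = {"beta": (12.0, 30.0), "gamma": (30.0, 45.0)}

def band_power(signal, sr, band):
    """Power of `signal` (1-D array, sample rate `sr` Hz) inside `band`."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    psd = np.abs(np.fft.rfft(signal)) ** 2          # plain periodogram
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

def relative_band_powers(signal, sr):
    """Band power normalised by total non-DC power, per named band."""
    total = (np.abs(np.fft.rfft(signal))[1:] ** 2).sum()  # drop DC bin
    return {name: band_power(signal, sr, b) / total
            for name, b in BANDS.items()}
```

Per-channel relative band powers of this kind are the sort of feature that can then be correlated against an SA score across participants.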
66. Effect of marker position and size on the registration accuracy of HoloLens in a non-clinical setting with implications for high-precision surgical tasks
- Author
-
Matthieu Poyade, Laura Pérez-Pachón, Jennifer S. Gregory, Parivrudh Sharma, Helena Brech, Terry Lowe, and Flora Gröning
- Subjects
Computer science ,Headset ,Biomedical Engineering ,Health Informatics ,Overlay ,Augmented reality ,030218 nuclear medicine & medical imaging ,Rendering (computer graphics) ,03 medical and health sciences ,Holographic headsets’ registration error ,0302 clinical medicine ,Imaging, Three-Dimensional ,Position (vector) ,Medical imaging ,Image-guided surgery ,Humans ,Radiology, Nuclear Medicine and imaging ,Computer vision ,business.industry ,Reproducibility of Results ,030206 dentistry ,General Medicine ,Image marker ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,Identification (information) ,Surgery, Computer-Assisted ,Surgery ,Original Article ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business - Abstract
Purpose Emerging holographic headsets can be used to register patient-specific virtual models obtained from medical scans with the patient’s body. Maximising accuracy of the virtual models’ inclination angle and position (ideally, ≤ 2° and ≤ 2 mm, respectively, as in currently approved navigation systems) is vital for this application to be useful. This study investigated the accuracy with which a holographic headset registers virtual models with real-world features based on the position and size of image markers. Methods HoloLens® and the image-pattern-recognition tool Vuforia Engine™ were used to overlay a 5-cm-radius virtual hexagon on a monitor’s surface in a predefined position. The headset’s camera detection of an image marker (displayed on the monitor) triggered the rendering of the virtual hexagon on the headset’s lenses. 4 × 4, 8 × 8 and 12 × 12 cm image markers displayed at nine different positions were used. In total, the position and dimensions of 114 virtual hexagons were measured on photographs captured by the headset’s camera. Results Some image marker positions and the smallest image marker (4 × 4 cm) led to larger errors in the perceived dimensions of the virtual models than other image marker positions and larger markers (8 × 8 and 12 × 12 cm). ≤ 2° and ≤ 2 mm errors were found in 70.7% and 76% of cases, respectively. Conclusion Errors obtained in a non-negligible percentage of cases are not acceptable for certain surgical tasks (e.g. the identification of correct trajectories of surgical instruments). Achieving sufficient accuracy with image marker sizes that meet surgical needs and regardless of image marker position remains a challenge.
- Published
- 2021
67. A virtual reality interface for the immersive manipulation of live microscopic systems
- Author
-
Stefano Ferretti, Roberto Di Leonardo, Silvio Bianchi, and Giacomo Frangipane
- Subjects
Microscope ,Computer science ,Headset ,Interface (computing) ,Science ,Holography ,02 engineering and technology ,Virtual reality ,01 natural sciences ,Article ,law.invention ,Rendering (computer graphics) ,010309 optics ,law ,Computer graphics (images) ,0103 physical sciences ,Microscopy ,Multidisciplinary ,Virtual world ,Replica ,021001 nanoscience & nanotechnology ,Optical manipulation and tweezers ,Medicine ,0210 nano-technology - Abstract
For more than three centuries we have been watching and studying microscopic phenomena behind a microscope. We discovered that cells live in a physical environment whose predominant factors are no longer those of our scale and for which we lack a direct experience and consequently a deep intuition. Here we demonstrate a new instrument which, by integrating holographic and virtual reality technologies, allows the user to be completely immersed in a dynamic virtual world which is a simultaneous replica of a real system under the microscope. We use holographic microscopy for fast 3D imaging and real-time rendering on a virtual reality headset. At the same time, hand tracking data is used to dynamically generate holographic optical traps that can be used as virtual projections of the user hands to interactively grab and manipulate ensembles of microparticles or living motile cells.
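The coupling between hand tracking and trap generation described above can be sketched as a simple coordinate mapping. The scale and offset below are purely illustrative assumptions, not the instrument's actual calibration.

```python
def hand_to_trap(hand_xyz_m, scale_um_per_m=100.0, offset_um=(0.0, 0.0, 0.0)):
    """Map a tracked hand point (metres, VR frame) to a trap target
    (micrometres, sample plane) via an assumed linear calibration."""
    return tuple(h * scale_um_per_m + o for h, o in zip(hand_xyz_m, offset_um))

def traps_for_fingertips(fingertips_m):
    # one holographic trap per tracked fingertip, regenerated every frame
    return [hand_to_trap(p) for p in fingertips_m]

traps = traps_for_fingertips([(0.10, 0.20, 0.05), (0.12, 0.18, 0.05)])
```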
- Published
- 2021
68. Meditating in Virtual Reality 3: 360° Video of Perceptual Presence of Instructor
- Author
-
Divya Mistry, Madison Waller, Paul A. Frewen, and Rakesh Jetly
- Subjects
050103 clinical psychology ,Health (social science) ,Mindfulness ,Social Psychology ,media_common.quotation_subject ,Headset ,Applied psychology ,Psychological intervention ,Experimental and Cognitive Psychology ,Virtual reality ,Experiential learning ,050105 experimental psychology ,Perception ,Presence ,Developmental and Educational Psychology ,0501 psychology and cognitive sciences ,Meditation ,360°-video ,Applied Psychology ,media_common ,Original Paper ,Virtual reality (VR) ,Relaxation (psychology) ,05 social sciences ,COVID-19 ,Psychology - Abstract
Objectives The need for remote delivery of mental health interventions including instruction in meditation has become paramount in the wake of the current global pandemic. However, the support one may usually feel within the physical presence of an instructor may be weakened when interventions are delivered remotely, potentially impacting one’s meditative experiences. Use of head-mounted displays (HMD) to display video-recorded instruction may increase one’s sense of psychological presence with the instructor as compared to presentation via regular flatscreen (e.g., laptop) monitor. This research therefore evaluated a didactic, trauma-informed care approach to instruction in mindfulness meditation by comparing meditative responses to an instructor-guided meditation when delivered face-to-face vs. by pre-recorded 360° videos viewed either on a standard flatscreen monitor (2D format) or via HMD (i.e., virtual reality [VR] headset; 3D format). Methods Young adults (n = 82) were recruited from a university introductory course and experienced a 360° video-guided meditation via HMD (VR condition, 3D format). They were also randomly assigned to practice the same meditation either via scripted face-to-face instruction (in vivo [IV] format) or when viewed on a standard laptop display (non-VR condition, 2D format). Positive and negative affect and meditative experience ratings were self-reported and participants’ maintenance of focused attention to breathing (i.e., meditation breath attention scores [MBAS]) were recorded during each meditation. Results Meditating in VR (3D format) was associated with a heightened experience of awe overall. When compared to face-to-face instruction (IV format), VR meditation was rated as less embarrassing but also less enjoyable and more tiring. When compared to 2D format, VR meditations were associated with greater experiences of relaxation, less distractibility from the process of breathing, and less fatigue. 
No differences were found between VR and non-VR meditation in concentration (MBAS). Baseline posttraumatic stress symptoms were risk factors for experiencing distress while meditating in either (VR and non-VR) instructional format. Of those who reported a preference for one format, approximately half preferred the VR format and approximately half preferred the IV format. Conclusions Recorded 360° video instruction in meditation viewed with a HMD (i.e., VR/3D format) appears to offer some experiential advantage over instructions given in 2D format and may offer a safe—and for some even preferred—alternative to teaching meditation face-to-face. Supplementary Information The online version contains supplementary material available at 10.1007/s12671-021-01612-w.
- Published
- 2021
69. Using head mounted display virtual reality simulations in large engineering classes: Operating vs observing
- Author
-
Tom Van Der Veen, R. Nazim Khan, Patrick Kenworthy, Andrew Guzzomi, Andrew Valentine, Ghulam Mubashar Hassan, and Sally Male
- Subjects
Human–computer interaction ,Computer science ,Engineering education ,Headset ,05 social sciences ,Active learning ,050301 education ,Optical head-mounted display ,Virtual reality ,Set (psychology) ,0503 education ,Education ,Student group - Abstract
A barrier to using head mounted display (HMD) virtual reality (VR) in education is access to hardware for large classes. This paper compares students’ learning when engaging with an HMD VR simulation as the operator and as the observer, to evaluate whether benefits of HMD VR can be achieved without requiring all students to operate the equipment. Postgraduate engineering students (N = 117) completed a safety hazard identification exercise in a workshop. The performance of students who operated and observed was compared. Results showed that students performed similarly in the exercise that followed the simulation whether they operated HMD VR (n = 33) or observed (n = 84). The finding suggests that educators may be able to use HMD VR simulations in classes with a large enrolment, by reducing the need for investment and management of a large number of sets of HMD VR equipment. Implications for practice or policy: Engineering educators can use HMD VR simulations to teach students about safety in design. Engineering students are able to identify safety hazards in a HMD VR simulation effectively whether they are operating the equipment or observing another student in their group operating the VR equipment. One HMD VR set per student group is sufficient. HMD VR simulations can be used inclusively, even when some students are unable or unwilling to wear the headset.
- Published
- 2021
70. Three-dimensional modeled environments versus 360 degree panoramas for mobile virtual reality training
- Author
-
Kenneth A. Ritter and Terrence L. Chambers
- Subjects
Human-Computer Interaction ,Computer graphics ,Panorama ,Human–computer interaction ,Computer science ,Headset ,Training (meteorology) ,3d model ,Degree (angle) ,Virtual field ,Virtual reality ,Computer Graphics and Computer-Aided Design ,Software - Abstract
A virtual field trip is a way of providing users with knowledge of and exposure to a facility without requiring them to physically visit the location. Due to the high computational costs necessary to produce virtual environments (VEs), the potential for photorealism is sacrificed. Often these three-dimensional (3D) modeled applications use an unrealistic VE and therefore do not provide a full depiction of real-world environments. Panoramas can be used to showcase complex scenarios that are difficult to model and computationally expensive to view in virtual reality (VR). Utilizing 360° panoramas can provide a low-cost, quick-to-capture alternative with photorealistic representations of the actual environment. However, the advantages of photorealism over 3D models for training and education are not clearly defined. This paper first summarizes the development of a VR training application and an initial pilot study. A quantitative and qualitative study was then conducted to compare the effectiveness of a 360° panorama VR training application and a 3D modeled one. Switching to a mobile VR headset saves money, increases mobility, decreases set-up and breakdown time, and requires less space. In testing, the 3D modeled VE group had an average normalized gain of 0.03 and the 360° panorama group, 0.43. Although the 3D modeled group scored slightly higher on realism in the presence questionnaire and had slightly higher averages in the comparative analysis questionnaire, the 360° panorama application proved the most effective for training and the quickest to develop.
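The abstract reports average normalized gains of 0.03 and 0.43 without restating the formula; the conventional definition (Hake's normalized gain over pre/post test percentages) is:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain g = (post - pre) / (100 - pre),
    with scores given as percentages."""
    if pre_pct >= 100:
        raise ValueError("pre-test score already at ceiling")
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# e.g. a hypothetical trainee improving from 50% to 75% scores g = 0.5
g = normalized_gain(50.0, 75.0)
```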
- Published
- 2021
71. Implementation Of Training Aid Tools Development (Remote Control and Headset) For Sprint Tunanetra Athletes
- Author
-
Novita Novita, Joni Tohap Maruli Nababan, and Albadi Sinulingga
- Subjects
lcsh:LC8-6691 ,biology ,lcsh:Special aspects of education ,Athletes ,Headset ,Applied psychology ,lcsh:Recreation. Leisure ,lcsh:GV1-1860 ,biology.organism_classification ,law.invention ,blind sprint athletes ,Exercise program ,Sprint ,law ,assistive tools ,Psychology ,Remote control - Abstract
The purpose of this study was to develop training aids (a remote control and headset) for sprint athletes with visual impairments, serving as a directional controller during sprint running exercise programs. The benefit of this research is to produce training aids for sprint athletes with visual impairments. The study uses the research and development (R&D) method. The research was carried out in two places: the SLB-A Yapentra school for small-scale trials, and SLB-A Karya Murni for large-scale trials, for the development of blind sprint athletes in the North Sumatra NPC. The analysis technique is quantitative descriptive. The conclusions of this study are: (1) when using the remote control and headset, athletes focus more on their personal abilities without having to think about collaboration, acceleration, and communication; (2) with the remote control and headset, athletes are more confident and more independent in sprinting activities. Using the remote control and headset, guides and coaches are more focused in conveying information and more efficient in training blind athletes in sprint events, attending to athletes' personal abilities and motivation relative to sprint running norms.
- Published
- 2021
72. A Comparison of Procedural Safety Training in Three Conditions: Virtual Reality Headset, Smartphone, and Printed Materials
- Author
-
Fabio Buttussi and Luca Chittaro
- Subjects
business.product_category ,non-immersive VR ,Computer science ,Headset ,VR headset ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,02 engineering and technology ,Virtual reality ,smartphone ,Education ,procedural training ,mobile devices ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,aviation safety ,Headphones ,virtual instructor ,05 social sciences ,Virtual reality, procedural training, virtual instructor, immersive VR, non-immersive VR, VR headset, smartphone, mobile devices, user study, aviation safety ,General Engineering ,Educational technology ,050301 education ,020207 software engineering ,immersive VR ,Procedural knowledge ,Computer Science Applications ,Visualization ,user study ,Task analysis ,business ,0503 education ,Mobile device - Abstract
Virtual reality (VR) experiences are receiving increasing attention in education and training. Some VR setups can deliver immersive VR training (e.g., on multiple projected screens), while others can deliver nonimmersive VR training (e.g., on standard desktop monitors). Recently, consumer VR headsets have made it possible to deliver immersive VR training with six-degrees-of-freedom tracking of trainees' heads as well as hand controllers, while most smartphones can deliver nonimmersive VR training without the need for additional hardware. Previous studies compared immersive and nonimmersive VR setups for training, highlighting effects on performance, learning, presence, and engagement, but no study focused on contrasting procedural training with (immersive) VR headsets and (nonimmersive) smartphones. This article presents a comparison of these two VR setups in the aviation safety domain. The considered training concerned door opening procedures in different aircraft and included a virtual instructor. In addition, we compared the two VR setups with the traditional printed materials used in the considered domain, i.e., safety cards. Results show that both VR setups allowed gaining and retaining more procedural knowledge than printed materials, and led to higher confidence in performing procedures. However, only the VR headset was considered to be significantly more usable than the printed materials, and presence was higher with the VR headset than the smartphone. The VR headset turned out to be important also for engagement and satisfaction, which were higher with the VR headset than both the printed materials and the smartphone. We discuss the implications of these results.
- Published
- 2021
73. Low-Cost Electroencephalography Device for Use in Biophysics Teaching
- Author
-
Jan Šlégr and Ivana Škraňková
- Subjects
medicine.diagnostic_test ,SIMPLE (military communications protocol) ,Computer science ,Headset ,Serial port ,02 engineering and technology ,Electroencephalography ,021001 nanoscience & nanotechnology ,01 natural sciences ,Agricultural and Biological Sciences (miscellaneous) ,Education ,law.invention ,010309 optics ,Bluetooth ,ComputingMethodologies_PATTERNRECOGNITION ,law ,0103 physical sciences ,ComputingMilieux_COMPUTERSANDEDUCATION ,medicine ,Biophysics ,0210 nano-technology ,General Agricultural and Biological Sciences - Abstract
This paper describes the possibilities of supporting the teaching of neural tissue biology and biophysics through experiments with a simple, commonly available electroencephalography headset. Data are transmitted over a Bluetooth virtual serial port and can be analyzed in several ways by students or used solely as a potential motivational factor for teaching otherwise challenging and abstract curriculum about the human brain.
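The data stream mentioned above can be decoded by students in a few lines. The sketch below assumes the ThinkGear-style framing used by NeuroSky-compatible headsets (two 0xAA sync bytes, a payload length, the payload, and a checksum equal to the bitwise inverse of the low byte of the payload sum), with codes 0x04 and 0x05 carrying the "attention" and "meditation" values; this framing is an assumption about the headset in question, not taken from the paper.

```python
def parse_packet(data):
    """Decode one ThinkGear-style packet into a dict of named values."""
    if len(data) < 4 or data[0] != 0xAA or data[1] != 0xAA:
        raise ValueError("bad sync bytes")
    plength = data[2]
    payload = data[3:3 + plength]
    if (~sum(payload)) & 0xFF != data[3 + plength]:
        raise ValueError("bad checksum")
    values, i = {}, 0
    while i < len(payload):
        code = payload[i]
        if code == 0x04:                 # attention (0-100)
            values["attention"] = payload[i + 1]; i += 2
        elif code == 0x05:               # meditation (0-100)
            values["meditation"] = payload[i + 1]; i += 2
        elif code >= 0x80:               # multi-byte code: length byte + data
            i += 2 + payload[i + 1]
        else:                            # other single-value codes, skipped
            i += 2
    return values

# build and parse a synthetic packet carrying attention=57, meditation=62
payload = bytes([0x04, 57, 0x05, 62])
packet = bytes([0xAA, 0xAA, len(payload)]) + payload + bytes([(~sum(payload)) & 0xFF])
values = parse_packet(packet)
```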
- Published
- 2021
74. Development of an EMG based SVM supported control solution for the PlatypOUs education mobile robot using MindRove headset
- Author
-
Gyorgy Eigner, Melinda Rácz, Peter Galambos, Erick Noboa, László Szűcs, and Gergely Márton
- Subjects
business.industry ,Computer science ,Headset ,Robotics ,Mobile robot ,law.invention ,Robot control ,Support vector machine ,Control and Systems Engineering ,law ,Control theory ,Human–computer interaction ,Component-based software engineering ,Artificial intelligence ,business ,Remote control - Abstract
This paper describes the development of PlatypOUs, an open-source electromyography (EMG)-controlled mobile robot platform that uses the MindRove brain-computer interface (BCI) headset as its signal acquisition unit, implementing remote control. Alongside the physical mobile robot, a simulation environment is also prepared using Gazebo within the Robot Operating System (ROS) framework, with the same capabilities as the physical device from the ROS point of view. The purpose of the PlatypOUs project is to create a tool for STEM-based education, and it involves two major disciplines, mobile robotics and machine learning, with several sub-areas included in each. The use of the platform and the simulation environment exposes students to hands-on laboratory sessions, which contribute to their progression as engineers. An important feature of the project is that the platform is made up of open-source and easily available commercial hardware and software components. In this paper, an EMG-based controller is developed that uses support vector machine (SVM) classification for robot control.
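To illustrate the kind of pipeline the abstract describes: standard EMG window features feed a classifier, and at run time a trained *linear* SVM evaluates sign(w·x + b). The weights, bias, and command meanings below are hypothetical placeholders, not the paper's trained model.

```python
import math

def emg_features(window):
    """Common EMG window features: RMS, mean absolute value, zero crossings."""
    n = len(window)
    rms = math.sqrt(sum(s * s for s in window) / n)
    mav = sum(abs(s) for s in window) / n
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
    return [rms, mav, float(zc)]

def svm_decide(features, w, b):
    """Decision function of a trained linear SVM: sign(w . x + b)."""
    score = sum(wi * xi for wi, xi in zip(w, features)) + b
    return 1 if score >= 0 else -1   # e.g. +1 = "drive", -1 = "stop" (assumed)

x = emg_features([0.2, -0.1, 0.3, -0.4, 0.25, -0.05])
cmd = svm_decide(x, w=[2.0, 1.0, 0.1], b=-0.5)   # hypothetical weights
```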
- Published
- 2021
75. Registration Techniques for Clinical Applications of Three-Dimensional Augmented Reality Devices
- Author
-
Ignacio M. Soriano, Alexander B. Henry, Jonathan R. Silva, Christopher M. Andrews, and Michael K. Southworth
- Subjects
lcsh:Medical technology ,Augmented reality (AR) ,Computer science ,Headset ,medical imaging ,Biomedical Engineering ,Image registration ,lcsh:Computer applications to medicine. Medical informatics ,Article ,030218 nuclear medicine & medical imaging ,surgery ,03 medical and health sciences ,Imaging, Three-Dimensional ,0302 clinical medicine ,Human–computer interaction ,Medical imaging ,Humans ,Augmented Reality ,business.industry ,Spatial intelligence ,Tracking system ,General Medicine ,Variety (cybernetics) ,Visualization ,image registration ,lcsh:R855-855.5 ,Surgery, Computer-Assisted ,HoloLens ,lcsh:R858-859.7 ,Augmented reality ,business ,030217 neurology & neurosurgery - Abstract
Many clinical procedures would benefit from direct and intuitive real-time visualization of anatomy, surgical plans, or other information crucial to the procedure. Three-dimensional augmented reality (3D-AR) is an emerging technology that has the potential to assist physicians with spatial reasoning during clinical interventions. The most intriguing applications of 3D-AR involve visualizations of anatomy or surgical plans that appear directly on the patient. However, commercially available 3D-AR devices have spatial localization errors that are too large for many clinical procedures. For this reason, a variety of approaches for improving 3D-AR registration accuracy have been explored. The focus of this review is on the methods, accuracy, and clinical applications of registering 3D-AR devices with the clinical environment. The works cited represent a variety of approaches for registering holograms to patients, including manual registration, computer vision-based registration, and registrations that incorporate external tracking systems. Evaluations of user accuracy when performing clinically relevant tasks suggest that accuracies of approximately 2 mm are feasible. 3D-AR device limitations due to the vergence-accommodation conflict or other factors attributable to the headset hardware add on the order of 1.5 mm of error compared to conventional guidance. Continued improvements to 3D-AR hardware will decrease these sources of error.
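The simplest of the registration approaches surveyed above is a point-based fit between corresponding fiducials. As an illustration (2-D only, known correspondences, no relation to any specific device in the review), the closed-form rigid fit is:

```python
import math

def rigid_fit_2d(src, dst):
    """Closed-form 2-D rigid registration (rotation theta + translation t)
    between corresponding point lists, by centring both sets and taking
    theta = atan2(sum of cross products, sum of dot products)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy      # centred source point
        bx, by = dx - cdx, dy - cdy      # centred destination point
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return theta, (cdx - (c * csx - s * csy), cdy - (s * csx + c * csy))

# recover a known 90-degree rotation about the origin from three landmarks
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(0.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
theta, t = rigid_fit_2d(src, dst)
```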
- Published
- 2021
76. Autonomous Endoscope Robot Positioning Using Instrument Segmentation With Virtual Reality Visualization
- Author
-
Kateryna Zinchenko and Kai-Tai Song
- Subjects
human-robot cooperation ,0209 industrial biotechnology ,General Computer Science ,Endoscope ,Computer science ,Headset ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,object segmentation ,Virtual reality ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,020901 industrial engineering & automation ,0302 clinical medicine ,General Materials Science ,Computer vision ,minimally invasive surgery ,business.industry ,Autonomous robot control ,General Engineering ,Visualization ,TK1-9971 ,Surgical instrument ,Microsoft Windows ,Robot ,Artificial intelligence ,Electrical engineering. Electronics. Nuclear engineering ,business ,Robotic arm ,artificial neural network - Abstract
This paper presents a method for autonomous endoscope positioning by a robotic endoscope holder for minimally invasive surgery. The method improves human-robot cooperation in robot-assisted surgery by allowing the endoscope holder to acknowledge the surgeon's view projection and navigate the camera without manual control. The next desired camera location is predicted in real time from the segmented instrument tip locations in the endoscope video and the surgeon's attention focus, given by a tracked virtual reality headset. To tackle the issue of real-time surgical instrument segmentation for more precise instrument tip localization, we propose the YOLOv3 and ResNet Combined Neural Network. The method achieved an 86.6% IoU across the MICCAI'17 EndoVis datasets at a processing speed of 30 frames per second. The proposed pipeline was implemented in ROS on Ubuntu, with visualization running under the Windows operating system in Unity3D. The simulation demonstrates the robotic arm, endoscope, and surgical environment visualized in 3D in the virtual reality headset, providing a stable view of the endoscope and improving the surgeon's perception of the operating environment.
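The 86.6% figure quoted above is an intersection-over-union (IoU) score. The paper evaluates segmentation masks; the sketch below shows the same metric in its simplest form, for axis-aligned boxes (x1, y1, x2, y2), purely as an illustration of the definition.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# half-overlapping 10x10 boxes: intersection 50, union 150, IoU = 1/3
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```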
- Published
- 2021
77. Performance comparison of BCI speller stimuli design
- Author
-
Sunaina Maryam Cherian, N. M. Masoodhu Banu, and T. Sujithra
- Subjects
010302 applied physics ,Information transfer ,medicine.diagnostic_test ,Brain activity and meditation ,Computer science ,Headset ,02 engineering and technology ,Stimulus (physiology) ,Electroencephalography ,021001 nanoscience & nanotechnology ,01 natural sciences ,Field (computer science) ,Event-related potential ,Human–computer interaction ,0103 physical sciences ,medicine ,0210 nano-technology ,Brain–computer interface - Abstract
Brain-computer interfaces (BCIs) allow a user to control a computer application by acquiring brain activity using the electroencephalogram (EEG). Practical, non-invasive EEG headsets available on the market are bringing more and more research to the field of BCI. BCI applications such as speller devices are used both in entertainment and in the health industry for typing with the brain, without using the hands. A variety of paradigms is available for designing a BCI-based speller device. Here the primary design goals are to analyze the following parameters: speed, in terms of the information transfer rate (ITR); accuracy of word selection; and user friendliness. The common requirement for each method is stimulus design, and in most cases the stimulus design is responsible for the performance factors. This paper analyses the design of the stimuli used in speller devices and the associated event-related potential (ERP). The design parameters common to all the methodologies and the performance parameters are also systematically discussed. The character selection time for each type of stimulus has also been compared.
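The ITR mentioned above is conventionally the Wolpaw rate: with N selectable symbols, selection accuracy P, and T seconds per selection, bits per selection B = log2 N + P log2 P + (1 − P) log2((1 − P)/(N − 1)), reported in bits per minute. The speller parameters below are hypothetical examples.

```python
import math

def itr_bits_per_min(n, p, t_sec):
    """Wolpaw information transfer rate in bits/min for an n-choice speller
    with accuracy p and t_sec seconds per selection."""
    if not (0 < p <= 1) or n < 2:
        raise ValueError("need 0 < p <= 1 and n >= 2")
    b = math.log2(n)
    if p < 1:  # the error terms vanish at p == 1
        b += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return b * 60.0 / t_sec

# e.g. a 6x6 matrix speller (36 symbols), 90% accuracy, 10 s per selection
rate = itr_bits_per_min(36, 0.9, 10.0)
```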
- Published
- 2021
78. Building Social Science Simulations for College Students Using VR Headsets
- Author
-
Brendan G. Beal
- Subjects
Class (computer programming) ,Multimedia ,Computer science ,Process (engineering) ,Headset ,Component (UML) ,Debriefing ,Oculus ,Virtual reality ,Set (psychology) ,computer.software_genre ,computer - Abstract
This article outlines the original design and implementation of a virtual reality simulation for use in a college practice class. The simulation functions as a training opportunity and a quick assessment of group leadership skills. VR headsets from makers like HTC and Oculus, as well as smartphone set-ups, provide high-fidelity, immersive experiences, and this simulation was designed from the ground up to provide a unique set of situations for a specific class. In combination with the VR component, an immediate debrief with students afterwards connects the process in order to link theory and practice. The simulation was prototyped and built in five phases. This article covers lessons learned from each of the five phases: starting costs and equipment, partnerships, basic concept design, freelance developer hiring, and recommendations for course implementation.
- Published
- 2021
79. Is There a Place in Human Consciousness Where Surveillance Cannot Go?Noor: A Brain Opera
- Author
-
Ellen Pearlman
- Subjects
Visual Arts and Performing Arts ,Wireless eeg ,Opera ,media_common.quotation_subject ,Headset ,05 social sciences ,050801 communication & media studies ,06 humanities and the arts ,Visual arts ,Computer Science Applications ,0508 media and communications ,060402 drama & theater ,Performing arts ,Consciousness ,Psychology ,Engineering (miscellaneous) ,0604 arts ,Music ,media_common - Abstract
Noor: A Brain Opera is the first fully interactive, immersive brainwave opera, in which a performer wearing a wireless EEG brainwave headset touches, gazes and walks around audience members in a 360° theater while a story is narrated. Her measured emotional states trigger videos, sound and a prerecorded libretto as her emotions are displayed as live time-colored bubbles. The opera rhetorically asks: “Is there a place in human consciousness where surveillance cannot go?” This article discusses the rationale and implementation of the brainwave opera.
- Published
- 2021
80. Augmented Reality (AR) based framework for supporting human workers in flexible manufacturing
- Author
-
Nikos Fousekis, Niki Kousi, George Michalos, Sotiris Makris, Spyridon Koukas, Sotiris Aivaliotis, and Konstantinos Lotsaris
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,Headset ,Automotive industry ,Mobile robot ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,020901 industrial engineering & automation ,Operator (computer programming) ,Human–computer interaction ,General Earth and Planetary Sciences ,Robot ,Augmented reality ,State (computer science) ,business ,0105 earth and related environmental sciences ,General Environmental Science - Abstract
This paper presents an Augmented Reality (AR) application that aims to facilitate the operator's work in an industrial human-robot collaboration environment with mobile robots. In such a flexible environment, with robots and humans working and moving in the same area, ease of communication between the two sides is a critical prerequisite. The developed application provides the user with handy tools to interact with the mobile platform, give direct instructions to it, and receive information about the robot's and the broader system's state through an AR headset. The communication between the headset and the robot is achieved through a ROS-based system that interconnects the resources. The discussed tool has been deployed and tested in a use case inspired by the automotive industry, assisting operators during collaborative assembly tasks.
- Published
- 2021
81. Adaptive Thresholds of EEG Brain Signals for IoT Devices Authentication
- Author
-
Abdelghafar R. Elshenaway and Shawkat K. Guirguis
- Subjects
Authentication ,business.product_category ,convolutional neural networks (CNN) ,hand gestures recognition ,General Computer Science ,business.industry ,Computer science ,Headset ,General Engineering ,human computer interaction (HCI) ,Usability ,Electroencephalography ,NeuroSky ,Password strength ,TK1-9971 ,brain computer interface (BCI) ,Mode (computer interface) ,Human–computer interaction ,Key (cryptography) ,General Materials Science ,Electrical engineering. Electronics. Nuclear engineering ,Electrical and Electronic Engineering ,business ,Headphones ,Gesture - Abstract
In this paper, a new authentication method is proposed for Internet of Things (IoT) devices, based on electroencephalography (EEG) signals and hand gestures. The proposed EEG authentication method uses a low-priced NeuroSky MindWave headset and is based on choosing adaptive thresholds of the attention and meditation modes for the authentication key; hand gestures, captured with a general-purpose camera, control the authentication process. For a new authentication method to be widely accepted, it must meet two main conditions: security and usability. The evaluation of the prototype's usability was based on the ISO 9241-11:2018 usability model. Results revealed that the proposed method demonstrated the usability of EEG-based authentication with an accuracy of 92% and an efficiency of 93%, and user satisfaction was acceptable. To evaluate the security of the prototype, we consider the three most important threats related to IoT devices: guessing, physical observation, and targeted impersonation. The results showed that password strength using the proposed system is greater than with a traditional keyboard. The proposed authentication method is also resistant to targeted impersonation and physical observation.
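The adaptive-threshold idea can be sketched as follows. The calibration rule here (mean minus one sample standard deviation of enrollment readings) and all sample values are illustrative assumptions, not the paper's exact formula.

```python
import statistics

def calibrate(samples):
    """Per-user adaptive threshold from enrollment readings (0-100 scale):
    mean minus one sample standard deviation (assumed rule)."""
    return statistics.mean(samples) - statistics.stdev(samples)

def authenticate(attention, meditation, att_threshold, med_threshold):
    """Accept only if both eSense readings clear their calibrated thresholds."""
    return attention >= att_threshold and meditation >= med_threshold

# hypothetical enrollment readings for one user
att_thr = calibrate([70, 75, 80, 72, 78])
med_thr = calibrate([60, 65, 58, 62, 64])
granted = authenticate(attention=76, meditation=63,
                       att_threshold=att_thr, med_threshold=med_thr)
```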
- Published
- 2021
82. Cognitive Skill Enhancement System Using Neuro-Feedback for ADHD Patients
- Author
-
Zubaira Naz, Javeria Khan, Amjad Rehman, Muhammad Usman Ghani Khan, Usman Tariq Tariq Masood, Tanzila Saba, and Ibrahim Abunadi
- Subjects
medicine.medical_specialty ,medicine.diagnostic_test ,Brain activity and meditation ,Headset ,medicine.medical_treatment ,Audiology ,Electroencephalography ,medicine.disease ,Computer Science Applications ,Biomaterials ,Mechanics of Materials ,Modeling and Simulation ,medicine ,Cognitive therapy ,Attention deficit hyperactivity disorder ,Cognitive skill ,Electrical and Electronic Engineering ,Neurofeedback ,Psychology ,Brain–computer interface - Abstract
The National Health Interview Survey (NHIS) shows that there are 13.2% of children at the age of 11 to 17 who are suffering from Attention Deficit Hyperactivity Disorder (ADHD), globally. The treatment methods for ADHD are either psycho-stimulant medications or cognitive therapy. These traditional methods, namely therapy, need a large number of visits to hospitals and include medication. Neurogames could be used for the effective treatment of ADHD. It could be a helpful tool in improving children and ADHD patients’ cognitive skills by using Brain–Computer Interfaces (BCI). BCI enables the user to interact with the computer through brain activity using Electroencephalography (EEG), which can be used to control different computer applications by processing acquired brain signals. This paper proposes a system based on neurofeedback that can improve cognitive skills such as attention level, mediation level, and spatial memory. The proposed system consists of a puzzle game where its complexity increases with each level. EEG signals were acquired using the Neurosky headset; then sent the signals to the designed gaming environment. This neurofeedback system was tested on 10 different subjects, and their performance was calculated using different evaluation measures. The results show that this game improves player overall performance from 74% to 98% by playing each game level.
- Published
- 2021
83. Comparison of Conventional and Virtual Reality Box and Blocks Tests in Upper Limb Amputees: A Case-Control Study
- Author
-
Nasrul Anuar Abd Razak, Nur Afiqah Hashim, and Noor Azuan Abu Osman
- Subjects
030506 rehabilitation ,General Computer Science ,myoelectric ,Computer science ,Headset ,Virtual reality ,computer.software_genre ,Session (web analytics) ,Correlation ,03 medical and health sciences ,0302 clinical medicine ,prosthetic training ,General Materials Science ,Simulation ,General Engineering ,TK1-9971 ,Test (assessment) ,Box and blocks test ,Virtual machine ,Test score ,Task analysis ,virtual reality ,Electrical engineering. Electronics. Nuclear engineering ,0305 other medical science ,computer ,030217 neurology & neurosurgery - Abstract
Previous studies have demonstrated the potential of virtual reality as an effective teaching tool for motor training. Despite the growing interest in this field, only a few studies have determined the validity of tasks performed in virtual environments relative to real physical environments. This case-control study compares the scores of a pick-and-place activity in the real and virtual environments using the Box and Blocks Test setup on 4 able-bodied participants and 4 transradial amputees (2 myoelectric prosthesis users and 2 non-prosthesis users). The study integrates the traditional Box and Blocks Test mechanics into gameplay using the Leap Motion controller and Oculus Rift headset. The participants were instructed to complete the test in both environments, in random order, for 10 sessions, with 30 minutes of training in each session. Pearson correlation analysis was conducted to investigate the relationship between the test score and training duration, as well as between the scores obtained in the real and virtual environments. An independent-samples t-test was also carried out to compare the scores from the two test environments. All participants showed a greater percentage change of test score in the virtual version, and better performance was achieved with increasing training duration. The scores in the two environments were positively correlated. However, there was a significant difference between the real and virtual test scores for the able-bodied group, t(9) = 18.19, p < 0.05, and the myoelectric prosthesis users, t(9) = 4.51, p < 0.05, but not for the non-prosthesis users. This study demonstrates that the two environments yield comparable results in pick-and-place tasks performed by individuals with different abilities.
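The correlation analysis above uses Pearson's r; a minimal stdlib implementation, with a hypothetical session-vs-score series as the usage example:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical block-transfer scores over 10 training sessions
sessions = list(range(1, 11))
scores = [20, 22, 25, 24, 27, 29, 30, 33, 34, 36]
r = pearson_r(sessions, scores)
```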
- Published
- 2021
84. Recognizing Emotional States With Wearables While Playing a Serious Game
- Author
-
Dillam Diaz-Romero, Antonio Miguel-Cruz, Nicholas Yee, Adriana Maria Rios Rincon, and Eleni Stroulia
- Subjects
medicine.diagnostic_test ,Computer science ,Speech recognition ,Headset ,020208 electrical & electronic engineering ,Wearable computer ,02 engineering and technology ,Electrooculography ,Computer game ,Random forest ,Support vector machine ,Classifier (linguistics) ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Electrical and Electronic Engineering ,Instrumentation ,International Affective Picture System - Abstract
In this study, we propose the use of electroencephalography (EEG), electrooculography (EOG), and kinematic motion data captured through wearable sensors to classify emotional states while individuals play a serious computer game (Whack-a-Mole). Twenty-one participants wore an OpenBCI headset and JINS MEME eyewear while playing the Whack-a-Mole game at three levels of difficulty. We used a variety of classifiers [i.e., a support vector machine (SVM), logistic regression (LR), random forest (RF), and an ensemble classifier (EC)] to classify the participants’ emotional states based on their EEG, EOG, and kinematic motion data. The classifiers were trained using the International Affective Picture System (IAPS). The EC and RF showed the best overall performance. Using tenfold cross-validation across all subjects, the accuracies obtained were 73% for arousal and 80% for valence. Our results suggest that EEG and EOG biosignals, together with kinematic motion data acquired using off-the-shelf wearable sensors and machine-learning techniques such as the EC, can be used to classify emotional states while individuals play the Whack-a-Mole game.
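The tenfold cross-validation protocol reported above can be sketched as follows. The nearest-centroid model and the toy two-class features are our illustrative stand-ins for the paper's SVM/RF/ensemble classifiers and its real EEG/EOG features:

```python
import random

def nearest_centroid_fit(X, y):
    """Per-class feature means (a toy stand-in for the paper's classifiers)."""
    cents = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        cents[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def nearest_centroid_predict(cents, x):
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(cents, key=lambda lab: dist(cents[lab], x))

def kfold_accuracy(X, y, k=10, seed=0):
    """k-fold cross-validated accuracy, as in the tenfold protocol above."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    correct = 0
    for fold in folds:
        train = [i for i in idx if i not in fold]
        cents = nearest_centroid_fit([X[i] for i in train],
                                     [y[i] for i in train])
        correct += sum(nearest_centroid_predict(cents, X[i]) == y[i]
                       for i in fold)
    return correct / len(X)

# Toy 2-D "arousal features": class 0 near (0, 0), class 1 near (3, 3)
rng = random.Random(1)
X = ([[rng.gauss(0, .5), rng.gauss(0, .5)] for _ in range(30)]
     + [[rng.gauss(3, .5), rng.gauss(3, .5)] for _ in range(30)])
y = [0] * 30 + [1] * 30
print(f"tenfold accuracy: {kfold_accuracy(X, y):.2f}")
```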
- Published
- 2021
85. BIOMEX-DB: A Cognitive Audiovisual Dataset for Unimodal and Multimodal Biometric Systems
- Author
-
Rigoberto Fonseca-Delgado, Juan Carlos Moreno-Rodriguez, Juan Manuel Ramirez-Cortes, Pilar Gomez-Gil, Rene Arechiga-Martinez, and Juan Carlos Atenco-Vazquez
- Subjects
General Computer Science ,Biometrics ,speaker recognition ,Computer science ,Headset ,Speech recognition ,General Engineering ,Word error rate ,Facial recognition system ,TK1-9971 ,Identifier ,Identification (information) ,General Materials Science ,brain–computer ,Electrical engineering. Electronics. Nuclear engineering ,Set (psychology) ,electroencephalography ,face recognition ,image classification ,Brain–computer interface - Abstract
Multimodal biometric schemes arise as an interesting solution to the multidimensional reinforcement problem for biometric security systems. Along with performance, these systems should also meet required levels for other criteria such as permanence, collectability, and circumvention. In response to the demand for a multimodal and synchronous dataset, we introduce in this paper an open-access database of synchronously recorded electroencephalogram (EEG) signals, voice signals, and video feed from 51 volunteers (25 female, 26 male), captured for, but not limited to, biometric purposes. A total of 140 samples were collected from each user pronouncing single digits in Spanish, giving a total of 7140 instances. EEG signals were captured using a 14-channel Emotiv™ Epoc headset. The resulting set is a valuable resource for work on unimodal biometric systems, and even more so for the evaluation of multimodal variants. Furthermore, the collected signals can also be exploited by projects in brain-computer interfaces and face recognition, to name just a few. As an initial report on the separability of the data, five user recognition experiments are presented: a face recognition identifier with an accuracy of 99%, a speaker identification system with an accuracy of 94.2%, a bimodal face-speech verification case with an equal error rate around 2.64, an EEG identification example, and a bimodal user identification exercise based on EEG and voice modalities with a registered accuracy of 97.6%.
- Published
- 2021
86. QwertyRing
- Author
-
Wei Xiaoying, Zhipeng Li, Yizheng Gu, Chun Yu, Yuanchun Shi, and Zhaoheng Li
- Subjects
Ring (mathematics) ,Focus (computing) ,business.product_category ,Computer Networks and Communications ,Computer science ,Orientation (computer vision) ,business.industry ,Headset ,05 social sciences ,020207 software engineering ,02 engineering and technology ,Index finger ,Human-Computer Interaction ,medicine.anatomical_structure ,Software ,Hardware and Architecture ,Human–computer interaction ,Inertial measurement unit ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,0501 psychology and cognitive sciences ,Computer monitor ,business ,050107 human factors - Abstract
The software keyboard is widely used on digital devices such as smartphones, computers, and tablets. It operates via touch, which is efficient, convenient, and familiar to users. However, some emerging devices such as AR/VR headsets and smart TVs do not support touch-based text entry. In this paper, we present QwertyRing, a technique that supports text entry on physical surfaces using an IMU (inertial measurement unit) ring. Users wear the ring on the middle phalanx of the index finger and type on any desk-like surface, as if a QWERTY keyboard were laid out on it. While typing, users do not need to monitor their hand motions; they receive text feedback on a separate screen, e.g., an AR/VR headset or a digital device display such as a computer monitor. The basic idea of QwertyRing is to detect touch events and predict users' desired words from the orientation of the IMU ring. We evaluate the performance of QwertyRing through a five-day user study. Participants achieved a speed of 13.74 WPM in the first 40 minutes and reached 20.59 WPM at the end. This speed outperforms other ring-based techniques [24, 30, 45, 68] and is 86.48% of the speed of typing on a smartphone with an index finger. The results show that QwertyRing enables efficient touch-based text entry on physical surfaces.
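The word-prediction step can be illustrated with a toy decoder: each touch yields a noisy 2-D landing estimate (in the real system this would be derived from the ring's IMU orientation), and the decoder picks the dictionary word whose key sequence best explains the touch sequence. The layout coordinates and the tiny lexicon below are our assumptions, not the paper's model:

```python
# Toy QWERTY decoder in the spirit of QwertyRing (illustrative, not the
# paper's algorithm). Key centers: x = column + half-row stagger, y = row.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEYS = {c: (x + 0.5 * r, r)
        for r, row in enumerate(ROWS) for x, c in enumerate(row)}

def word_cost(word, touches):
    """Sum of squared distances between touches and the word's key centers."""
    if len(word) != len(touches):
        return float("inf")
    return sum((KEYS[c][0] - tx) ** 2 + (KEYS[c][1] - ty) ** 2
               for c, (tx, ty) in zip(word, touches))

def decode(touches, lexicon):
    """Pick the lexicon word that best explains the touch sequence."""
    return min(lexicon, key=lambda w: word_cost(w, touches))

lexicon = ["ring", "rind", "king", "type", "tape"]
# Noisy landing estimates near the keys r-i-n-g
touches = [(3.1, 0.1), (7.2, -0.1), (5.4, 2.1), (4.6, 1.2)]
print(decode(touches, lexicon))
```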
- Published
- 2020
87. BlinKey
- Author
-
Ming Li, Huadi Zhu, Wenqiang Jin, Srinivasan Murali, and Mingyan Xiao
- Subjects
Password ,Authentication ,Computer Networks and Communications ,Computer science ,business.industry ,Headset ,05 social sciences ,020207 software engineering ,Usability ,02 engineering and technology ,Virtual reality ,Multi-factor authentication ,Login ,Human-Computer Interaction ,Hardware and Architecture ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,Eye tracking ,0501 psychology and cognitive sciences ,business ,050107 human factors - Abstract
Virtual Reality (VR) has shown promising potential in many applications, such as e-business, healthcare, and social networking. Rich information regarding users' activities and their online accounts is stored on VR devices. If a device is carelessly left unattended, attackers, including insiders, can use the stored information to, for example, perform in-app purchases at the legitimate owner's expense. Current solutions, mostly following schemes designed for general personal devices, have proved vulnerable to shoulder-surfing attacks due to the sight blocking caused by the headset. Although there have been efforts to fill this gap, they either rely on highly advanced equipment, such as electrodes that read brainwaves, or introduce a heavy cognitive load by having users perform a series of cumbersome authentication tasks. An authentication method for VR devices that is both robust and convenient is therefore sorely needed. In this paper, we present the design, implementation, and evaluation of a two-factor user authentication scheme, BlinKey, for VR devices equipped with an eye tracker. A user's secret passcode is a set of recorded rhythms produced when he/she blinks, together with the unique pupil-size variation pattern. We call this passcode a blinkey; it is jointly characterized by knowledge-based and biometric features. To examine its performance, BlinKey is implemented on an HTC Vive Pro with a Pupil Labs eye tracker. Through extensive experimental evaluations with 52 participants, we show that our scheme can achieve an average equal error rate (EER) as low as 4.0% with only 6 training samples. It is also robust against various types of attacks, and it exhibits satisfactory usability in terms of login attempts, memorability, and impact of user motions. We also carried out questionnaire-based pre-/post-studies; the survey results indicate that BlinKey is well accepted as a user authentication scheme for VR devices.
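The rhythm-matching idea can be sketched minimally: a passcode is a sequence of inter-blink intervals, and a login attempt is accepted if its intervals are close enough to the enrolled template. The distance measure, threshold, and interval values are our illustrative choices, not the paper's actual scheme (which also uses pupil-size biometrics):

```python
# Minimal rhythm-matching sketch in the spirit of BlinKey (illustrative only).
def interval_distance(template, attempt):
    """Mean absolute difference between interval sequences (s);
    infinity on a length mismatch."""
    if len(template) != len(attempt):
        return float("inf")
    return sum(abs(a - b) for a, b in zip(template, attempt)) / len(template)

def verify(template, attempt, threshold=0.15):
    """Accept the attempt if its rhythm is within the tolerance."""
    return interval_distance(template, attempt) <= threshold

enrolled = [0.4, 0.8, 0.4, 1.2]       # the user's blink rhythm (seconds)
genuine = [0.45, 0.75, 0.42, 1.15]    # same rhythm, natural variation
imposter = [0.9, 0.3, 1.1, 0.5]       # a different rhythm

print(verify(enrolled, genuine), verify(enrolled, imposter))
```

In the full scheme, the acceptance threshold would be tuned on training samples to balance false accepts against false rejects, which is what the reported EER summarizes.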
- Published
- 2020
88. Running virtual: The effect of virtual reality on exercise
- Author
-
Coleen McClure and Damian Schofield
- Subjects
Heart Rate (HR) ,Headset ,Applied psychology ,heart rate (hr) ,Physical Therapy, Sports Therapy and Rehabilitation ,010501 environmental sciences ,Virtual reality ,Bodily Sensations (BS) ,01 natural sciences ,Virtual Reality (VR) ,Ability to pay ,03 medical and health sciences ,0302 clinical medicine ,Educación Física y Deportiva ,030212 general & internal medicine ,lcsh:Sports medicine ,Exercise ,virtual reality (vr) ,0105 earth and related environmental sciences ,exercise ,Work (physics) ,beats per minute (bpm) ,Beats Per Minute (BPM) ,bodily sensations (bs) ,Psychology ,lcsh:RC1200-1245 - Abstract
Research has shown that exercise among college-aged persons has dropped in recent years (Lindahl, 2015; Sheppard, 2016). Many factors could be contributing to this reduction in exercise, including large workloads, the need to work during school, and increased technology use. A number of recent studies have shown the benefits of using virtual reality systems in exercise, demonstrating that such technology can increase the number of young adults engaging in exercise. This study focuses on the effects that virtual reality has on heart rate and other bodily sensations during a typical workout. It also analyses participants' ability to pay less attention to their bodily sensations during exercise when using a virtual reality system. During the experiment, participants were exposed to two conditions. The first was a traditional workout: riding an exercise bike at a medium tension level. The second was the same, except that the participant wore a virtual reality headset. The data collected led to the conclusion that working out while wearing a virtual reality headset leads to a higher heart rate, which in turn can burn more calories during a workout. The study also found that participants who wore the virtual reality headset were able to detach from their bodily sensations, allowing them to work out longer.
- Published
- 2020
89. Personal audio system for neckband headset with low computational complexity
- Author
-
Jung-Woo Choi and Se-Woon Jeon
- Subjects
Beamforming ,Acoustics and Ultrasonics ,Arts and Humanities (miscellaneous) ,Computational complexity theory ,Matching (graph theory) ,Filter (video) ,Computer science ,Headset ,Acoustics ,Loudspeaker ,Infinite impulse response ,Radiation pattern - Abstract
Personal audio systems have been developed using various approaches, with the goal of synthesizing an isolated sound zone that avoids disturbing others in different locations. In this work, a near-field solution for a neckband headset using three loudspeakers positioned close to each ear is proposed. In particular, the aim is to derive a simple multichannel filter that reduces the computational cost on mobile devices. Unlike super-directive beamforming techniques, the controlled radiation pattern is not highly directional but can boost the near-field sound, thereby providing an extra sound level difference between the listener's ear locations and far-field surrounding areas. For this purpose, a multichannel filter is designed using a conventional pressure-matching technique that reproduces a target signal at the ear location while suppressing sound radiation to the far field. It is shown that the optimal filter weights can be successfully approximated in the form of a simple broadside differential array pattern. The simplified filter structure can be realized using only two second-order infinite impulse response filters for driving the middle and two side loudspeakers. Through various simulations and experiments, it is demonstrated that the proposed solution can effectively realize a personal audio system with minimal loss of sound isolation performance.
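Pressure-matching designs of this kind are commonly posed as a regularized least-squares problem. In the hypothetical notation below (ours, not necessarily the paper's), G_b and G_d are the transfer matrices from the loudspeakers to the bright (ear) and dark (far-field) control points, p_t is the target pressure at the ears, and lambda weights far-field suppression:

```latex
\mathbf{w}_{\mathrm{opt}}
= \arg\min_{\mathbf{w}} \;
  \lVert \mathbf{G}_b \mathbf{w} - \mathbf{p}_t \rVert^2
  + \lambda \lVert \mathbf{G}_d \mathbf{w} \rVert^2
= \left( \mathbf{G}_b^{H}\mathbf{G}_b
  + \lambda\, \mathbf{G}_d^{H}\mathbf{G}_d \right)^{-1}
  \mathbf{G}_b^{H}\, \mathbf{p}_t
```

The paper's contribution is then to approximate these frequency-dependent optimal weights with a fixed broadside differential-array pattern realizable with only two second-order IIR filters.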
- Published
- 2020
90. 'We Didn’t Catch That!' Using Voice Text Input on a Mixed Reality Headset in Noisy Environments
- Author
-
Emily Rickel, Barbara S. Chaparro, Jade A. Lovell, Kelly J. Harris, and Jessyca L. Derby
- Subjects
Medical Terminology ,Human–computer interaction ,Computer science ,Order (business) ,020209 energy ,Headset ,05 social sciences ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,02 engineering and technology ,050107 human factors ,Mixed reality ,Medical Assisting and Transcription - Abstract
The Microsoft HoloLens, a mixed reality head-mounted display (HMD), has been demonstrated in domains such as medicine, engineering, and manufacturing. Interacting with the device may require voice input, so given this range of environments, it is necessary to understand the impact of noise on voice dictation speed and accuracy. In this study, we evaluated the dictation feature of the HoloLens in terms of speed (words per minute, WPM), accuracy (word error rate, WER), perceived workload, and perceived usability at three noise levels: 40 dB, 55 dB, and 70 dB. No differences were found across noise levels in speed (67-75 WPM) or perceived workload. Accuracy and perceived usability worsened in the 70 dB condition, where only 37.5% of participants were able to dictate successfully. This study shows that if the HoloLens is to be accepted in environments with high noise levels, improvements to dictation need to be made.
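Word error rate, the accuracy measure used above, is the word-level Levenshtein edit distance (substitutions, insertions, deletions) normalized by the reference length. A minimal implementation:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

# One deletion ("not") and one substitution ("that" -> "cat") over 5 words
print(wer("we did not catch that", "we did catch cat"))
```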
- Published
- 2020
91. Metodología de conexión utilizando NeuroSKY Mindwave MW003 con MATLAB
- Author
-
Bryan Quino Ortiz, Marcia Lorena Hernández Nieto, Aldo R. Sartorius Castellanos, Antonia Zamudio Radilla, and José de Jesús Moreno Vázquez
- Subjects
Process (engineering) ,business.industry ,Computer science ,Headset ,Principal (computer security) ,law.invention ,Bluetooth ,Explication ,law ,Human–computer interaction ,High-level programming language ,Wireless ,business ,MATLAB ,computer ,computer.programming_language - Abstract
Nowadays, the drive to understand how the brain works has motivated companies such as NeuroSky to create and refine headsets for acquiring encephalographic signals at low cost and with high accuracy, marketed to every kind of user. This paper presents the methodology for connecting the NeuroSky MindWave MW003 headset, covering reception, transmission, and wireless (Bluetooth) configuration with the computer using the Thinkgear.h library provided by NeuroSky. It gives a brief and concise explanation of how to use the device, establishing its characteristics, operating methods, and the main functions needed for its connection using MATLAB R2015B. The process is described systematically, focusing on resolving the doubts of inexperienced users, while also helping users experienced in high-level languages to create new applications.
- Published
- 2020
92. Mixed reality system for nondestructive evaluation training
- Author
-
Tam V. Nguyen, Thomas R. Boehnlein, Vamsi Adari, Somaraju Kamma, Tyler Lesthaeghe, and Victoria Kramb
- Subjects
Computer science ,Headset ,05 social sciences ,Wearable computer ,020207 software engineering ,02 engineering and technology ,Virtual reality ,Computer Graphics and Computer-Aided Design ,Mixed reality ,Session (web analytics) ,Human-Computer Interaction ,Computer graphics ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,Zoom ,050107 human factors ,Software ,Gesture - Abstract
Nondestructive evaluation (NDE) is an analysis technique used to evaluate the properties of a material, component, structure, or system without causing damage. In this paper, we introduce a novel mixed reality system for NDE training. In particular, we model and simulate the inspected object and the inspection probe. Operator trainees wearing the headsets are able to move and zoom the inspected object in a mixed environment, i.e., reality with virtual objects overlaid. In addition, trainees use gaze, gesture, and voice to control and interact with the virtual objects within the NDE training session, and they can access the help manual to follow the training instructions. The system runs on the Microsoft HoloLens, a state-of-the-art mixed reality headset. Evaluation results demonstrate that mixed reality training provides significant benefit for potential technician trainees.
- Published
- 2020
93. Deep Learning Based Pathology Detection for Smart Connected Healthcare
- Author
-
M. Shamim Hossain and Ghulam Muhammad
- Subjects
Pathology ,medicine.medical_specialty ,Authentication ,Mobile edge computing ,Computer Networks and Communications ,business.industry ,Computer science ,Deep learning ,Headset ,020206 networking & telecommunications ,Cloud computing ,02 engineering and technology ,Medical services ,Hardware and Architecture ,Server ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Artificial intelligence ,business ,Cloud server ,Software ,Information Systems - Abstract
New-generation communication technologies and advanced deep learning models present a tremendous opportunity to develop fast, accurate, and seamless distributed systems in different sectors, including healthcare. In this article, we propose a smart healthcare framework built around a pathology detection system developed using deep learning. The pathology can be detected from the electroencephalogram (EEG) signals of a subject. In the framework, a smart EEG headset captures EEG signals and sends them to a mobile edge computing server. The server preprocesses the signals and transmits them to a cloud server, which performs the main processing using deep learning and decides whether the subject has a pathology. Clients and stakeholders of the framework are connected via an authentication manager located in the cloud server. Experimental results on a publicly available database confirm the appropriateness of the proposed framework.
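The edge/cloud split described above can be sketched as two stages. The preprocessing steps, the clip level, the energy threshold, and the sample signals below are all illustrative assumptions; the trivial energy rule is only a stand-in for the paper's deep model:

```python
# Hypothetical sketch of the edge-preprocess / cloud-classify pipeline.
def edge_preprocess(signal):
    """Edge server: remove DC offset and clip artifacts before uplink."""
    mean = sum(signal) / len(signal)
    centered = [s - mean for s in signal]
    limit = 100.0                          # illustrative clip level (uV)
    return [max(-limit, min(limit, s)) for s in centered]

def cloud_classify(signal):
    """Cloud server: stand-in for the deep model, flagging high energy."""
    energy = sum(s * s for s in signal) / len(signal)
    return "pathology" if energy > 50.0 else "normal"

normal_eeg = [5, -3, 4, -6, 2, -4, 3, -2]
abnormal_eeg = [40, -35, 50, -45, 38, -42, 47, -39]
print(cloud_classify(edge_preprocess(normal_eeg)))
print(cloud_classify(edge_preprocess(abnormal_eeg)))
```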
- Published
- 2020
94. Remote damage inspection with AR custom headset
- Author
-
Stefano Cuomo, Gian Piero Malfense Fierro, Michele Meo, Meyendorf, Norbert G., Farhangdoust, Saman, and Niezrecki, Christopher
- Subjects
headset ,Applied Mathematics ,remote inspection ,BVID ,remote NDE ,SHM ,stereo vision ,Condensed Matter Physics ,damage detection ,laser ,Electronic, Optical and Magnetic Materials ,Computer Science Applications ,industry 4.0 ,Electrical and Electronic Engineering ,AR - Abstract
Nowadays, key factors in high-technology industries are digitalization, networking, data management, informatization, and automation. The mutual interactions between these factors have led to the quickly evolving industrial revolution called “Industry 4.0” (or IR4), defined as the integration of new technologies within the design, manufacturing, and maintenance processes. This dynamic scenario is made possible by the so-called Internet of Things (IoT). In this context, the inspection sector is rapidly upgrading to cope with industry's new technology paradigm. The application of new technologies in NDE can improve the effectiveness, in terms of time and cost, of many inspection processes and can incorporate far more data and detail by exploiting multiple devices and sensing systems (NDE 4.0). In this work, a remote damage inspection device is introduced, based on a stereo-laser depth-map system connected to a custom headset. A laser speckle pattern is projected onto the inspected component and acquired through a stereo camera system. Damage is detected as a change in the depth map. The detected damage is then superimposed on the structure and streamed to the headset. The proposed idea would be extremely beneficial during the inspection of large structures to assess whether damage is present. This, in turn, would make
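The "damage as a change in the depth map" check can be illustrated with a toy grid comparison: damage is flagged where the measured depth deviates from a reference map by more than a tolerance. The grids and the 2 mm tolerance are illustrative assumptions, not the system's actual parameters:

```python
# Toy depth-map change detection (illustrative sketch).
def damage_mask(reference, measured, tol=2.0):
    """Boolean mask of cells whose depth change exceeds the tolerance (mm)."""
    return [[abs(m - r) > tol for r, m in zip(rrow, mrow)]
            for rrow, mrow in zip(reference, measured)]

reference = [[10.0, 10.0, 10.0],
             [10.0, 10.0, 10.0],
             [10.0, 10.0, 10.0]]
measured = [[10.1, 9.9, 10.0],
            [10.0, 14.5, 10.2],        # dent in the middle cell
            [9.8, 10.1, 10.0]]

mask = damage_mask(reference, measured)
print(any(any(row) for row in mask))   # damage detected?
```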
- Published
- 2022
95. A Pipeline for the Implementation of Immersive Experience in Cultural Heritage Sites in Sicily
- Author
-
Barbera, Roberto, Condorelli, Francesca, Di Gregorio, Giuseppe, Di Piazza, Giuseppe, Farella, Mariella, Lo Bosco, Giosué, Megvinov, Andrey, Pirrone, Daniele, Schicchi, Daniele, and Zora, Antonino
- Subjects
CAVE ,Cultural Heritage ,Headset ,Virtual reality - Abstract
Modern digital technologies make it possible to explore Cultural Heritage sites in immersive virtual environments. This is surely an advantage for users, who can better experience and understand a specific site, even before a real visit. This approach gained increasing attention during the extreme conditions of the recent COVID-19 pandemic. In this work, we present the processes that led to the implementation of an immersive app for different kinds of low- and high-cost devices, carried out in the context of the 3DLab-Sicilia project. 3DLab-Sicilia's main objective is to sponsor the creation, development, and validation of a sustainable infrastructure that interconnects three main Sicilian centers specialized in augmented and virtual reality. The project gives great importance to cultural heritage, as well as to tourism-related areas. Although we present the case study of the Santa Maria La Vetere church, the app implementation process guided by the general pipeline presented here is general and can be applied to other cultural heritage sites.
- Published
- 2022
- Full Text
- View/download PDF
96. Can virtual reality headsets be used safely as a distraction method for paediatric orthopaedic patients? A feasibility study
- Author
-
O Kattan, K Shepherd, Y Shanmugharaj, and M Kokkinakis
- Subjects
Male ,medicine.medical_specialty ,Adolescent ,Headset ,Virtual reality ,Anxiety ,Likert scale ,Phlebotomy ,Distraction ,Medicine ,Humans ,Child ,Venipuncture ,business.industry ,Virtual Reality ,General Medicine ,Anxiety Disorders ,Casts, Surgical ,Orthopedics ,Child, Preschool ,Orthopedic surgery ,Physical therapy ,Feasibility Studies ,Surgery ,Female ,medicine.symptom ,business ,Risk assessment - Abstract
Introduction Virtual reality (VR) has been shown to decrease pain and anxiety in clinical areas. The purpose of this study was to assess the feasibility of the ‘Rescape DR.VR Junior’ headset as a distraction method for paediatric orthopaedic patients. Methods An internal risk assessment by medical engineers deemed the device safe for use only in the venepuncture and plaster rooms; further investigation is needed to establish its safety in the operating theatre/anaesthetic room. A total of 32 children (age range: 2–15 years) opted to use the device while they underwent venepuncture or a cast procedure. Anxiety scores, measured on a Likert scale, were collected pre- and post-procedure. Participants were asked if they would use the device again, and subjective feedback was also collated from the supervising clinical staff. Results A total of 66% (21) showed a reduction in anxiety scores and 28% (9) had no change in score, all such scores being mild (1–3 on the Likert scale); 6% (2) showed an increase in post-procedure score. All participants stated they would use the device again. One patient declined the device. Health professionals also gave positive subjective feedback, and all would use it again with their paediatric patients. No adverse events were recorded. Conclusion The ‘Rescape DR.VR Junior’ headset has been found to be a safe and feasible distraction method for use with children in the venepuncture and plaster rooms. Further research is required to assess its safety and effectiveness in other clinical areas, including the paediatric orthopaedic operating theatre.
- Published
- 2022
97. Life Cycle Assessment of a Virtual Reality Device.
- Author
-
Andrae, Anders S. G.
- Subjects
- *
VIRTUAL reality equipment , *PRODUCT life cycle assessment , *ELECTRIC power , *INTEGRATED circuits , *HOUSEHOLD electronics , *HEADSETS - Abstract
Virtual reality (VR) is one of the strongest trends for future communication systems. Considering the number of VR devices expected to be produced in the coming years, it is relevant to estimate their potential environmental impacts under certain conditions. For the first time, screening life cycle assessment (LCA) single-score results are presented for a contemporary VR headset. The weighted results depend largely on the source of the gold and on the electric power used in production. Theoretically, using recycled gold for the VR subparts would be very beneficial from an environmental damage cost standpoint. Using low-environmental-impact electric power in the final assembly of the VR headset, in the final assembly of integrated circuits, and in the preceding wafer processing would also be worthwhile. The contribution of final-product distribution is more pronounced than for other consumer electronics. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
99. Optical Wireless Channel Simulation for Communications Inside Aircraft Cockpits
- Author
-
Stephanie Sahuguede, Anne Julien-Vergonjanne, Lilian Aveneau, Damien Sauveron, Pierre Combeau, Hervé Boeglen, Steve Joumessi-Demeffo, XLIM (XLIM), Université de Limoges (UNILIM)-Centre National de la Recherche Scientifique (CNRS), Systèmes et Réseaux Intelligents (XLIM-SRI), Université de Limoges (UNILIM)-Centre National de la Recherche Scientifique (CNRS)-Université de Limoges (UNILIM)-Centre National de la Recherche Scientifique (CNRS), Synthèse et analyse d'images (XLIM-ASALI), and Mathématiques & Sécurité de l'information (XLIM-MATHIS)
- Subjects
business.product_category ,Headphones , Wireless communication , Aircraft , Optical receivers , Optical saturation , Atmospheric modeling , Optical sensors ,business.industry ,Computer science ,Headset ,Electrical engineering ,Optical power ,02 engineering and technology ,7. Clean energy ,Atomic and Molecular Physics, and Optics ,Cockpit ,020210 optoelectronics & photonics ,Robustness (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,Optical wireless ,Wireless ,[INFO]Computer Science [cs] ,Radio frequency ,[MATH]Mathematics [math] ,business ,ComputingMilieux_MISCELLANEOUS ,Headphones - Abstract
Communications inside an aircraft cockpit are currently based on wired connections, especially for the audio headsets used by the pilots. A wireless headset would be an advantage in terms of comfort and flexibility, but the use of classical radio frequencies is limited by interference and security issues. Optical wireless communication technology is an option for headset connectivity: as optical beams are confined, the technology is robust against the risk of hacking, thus increasing security, and the use of optical waves ensures the absence of radio-frequency disturbances. Using simulation, this article presents a thorough study of the optical wireless channel behavior inside an aircraft cockpit, considering a headset worn by a possibly moving pilot and an access point at the ceiling. The impact of the characteristics of the environment model, such as the level of geometric description, the reflectivity of materials, and, for the first time, the ambient noise induced by the sun, is highlighted. System performance is evaluated in terms of optimal half-power angles and the necessary optical power of the light sources.
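The role of the half-power angle can be illustrated with the standard Lambertian line-of-sight channel-gain model widely used in optical wireless link budgets. This is a textbook simplification, not the article's ray-traced cockpit simulation, and the parameter values below are illustrative:

```python
import math

def lambertian_order(half_power_angle_deg):
    """Lambertian mode number m from the source's half-power semi-angle."""
    return -math.log(2) / math.log(math.cos(math.radians(half_power_angle_deg)))

def los_gain(m, area_m2, d_m, irradiance_deg, incidence_deg, fov_deg=90.0):
    """DC channel gain H(0) of a line-of-sight optical link:
    H(0) = (m+1) A / (2 pi d^2) * cos^m(phi) * cos(psi), zero outside the FOV."""
    if incidence_deg > fov_deg:
        return 0.0
    phi = math.radians(irradiance_deg)
    psi = math.radians(incidence_deg)
    return ((m + 1) * area_m2 / (2 * math.pi * d_m ** 2)
            * math.cos(phi) ** m * math.cos(psi))

m = lambertian_order(60.0)             # a 60-degree half-power angle gives m = 1
gain = los_gain(m, 1e-4, 2.0, 30.0, 30.0)
print(f"m = {m:.2f}, H(0) = {gain:.2e}")
```

Widening the half-power angle lowers m and spreads the optical power, which is the trade-off behind optimizing the half-power angles against the required source power.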
- Published
- 2020
100. Accuracy, recording interference, and articulatory quality of headsets for ultrasound recordings
- Author
-
Michael Pucher, Lorenzo Spreafico, Jan Luttenberger, and Nicola Klingler
- Subjects
Linguistics and Language ,3d printed ,Computer science ,Headset ,Speech recognition ,02 engineering and technology ,Settore L-LIN/01 - Glottologia e Linguistica ,Interference (wave propagation) ,01 natural sciences ,Language and Linguistics ,Settore L-LIN/02 - Didattica delle Lingue Moderne ,Quality (physics) ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Articulatory phonetics ,Ultrasound tongue imaging ,010301 acoustics ,Visual marker ,business.industry ,Communication ,Ultrasound ,020206 networking & telecommunications ,Computer Science Applications ,Formant ,Modeling and Simulation ,Computer Vision and Pattern Recognition ,Objective evaluation ,business ,Software - Abstract
In this paper, we evaluate the accuracy, recording interference, and articulatory quality of two different ultrasound probe stabilization headsets: a metallic Ultrasound Stabilisation Headset (USH) and UltraFit, a recently developed headset 3D-printed in nylon. To evaluate accuracy, we recorded three native speakers of German with different head sizes using an optical marker tracking system that provides sub-millimeter tracking accuracy (NaturalPoint OptiTrack Expression). The speakers read C1V1C2V1/2 non-words (to diminish lexical influences) in three conditions: wearing the USH, wearing the UltraFit headset, and without a headset. To estimate relative headset movement, we measured the movement between tracked points on the probe, the headset, and the speaker's nose. By also tracking visual marker points on the speaker's lips and chin, we compared the movement of the outer articulators with and without a headset and thereby measured how the headsets interfere with the speaker's articulatory space. Additionally, we computed the differences in tongue profiles at the acoustic midpoint of V1 under the three conditions and evaluated the articulatory recording quality with a distance index and an area index. In the final evaluation, we also compared formant measurements of recordings with and without headsets. With this objective evaluation we provide a systematic analysis of different headsets for Ultrasound Tongue Imaging (UTI) and contribute to the discussion of using UTI stabilization headsets for recording natural speech. We show that both headsets have similar accuracy, with the USH performing slightly better overall but introducing the largest error for one speaker, and that the UltraFit headset shows more flexibility during recordings. Each headset influences the lip opening differently. Concerning tongue movement, there are no significant differences between sessions, showing the stability of both headsets during the recordings. Acoustic analysis of formant differences in vowels revealed that the USH has a larger influence on formant production than the UltraFit headset.
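A contour "distance index" of the kind mentioned above can be sketched as the mean nearest-neighbour distance from one tongue profile to another. This measure and the illustrative point lists below are our stand-ins, not necessarily the paper's exact index:

```python
import math

def distance_index(profile_a, profile_b):
    """Mean nearest-neighbour distance (mm) from contour A to contour B."""
    return sum(min(math.dist(p, q) for q in profile_b)
               for p in profile_a) / len(profile_a)

# Illustrative tongue-contour points (x, y) in mm
with_headset = [(0.0, 10.0), (5.0, 14.0), (10.0, 12.0)]
without_headset = [(0.0, 10.5), (5.0, 13.5), (10.0, 12.5)]
print(f"distance index: {distance_index(with_headset, without_headset):.2f} mm")
```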
- Published
- 2020