2,436 results on '"Headset"'
Search Results
152. Don’t Block the Ground: Reducing Discomfort in Virtual Reality with an Asymmetric Field-of-View Restrictor
- Author
-
Evan Suma Rosenberg, George S. Bailey, Thomas A. Stoffregen, and Fei Wu
- Subjects
Computer science, Headset, Controller (computing), Population, Visibility (geometry), Field of view, Virtual reality, Virtual machine, Simulation, Block (data storage)
- Abstract
Although virtual reality has been gaining in popularity, users continue to report discomfort during and after use of VR applications, and many experience symptoms associated with motion sickness. To mitigate this problem, dynamic field-of-view restriction is a common technique that has been widely implemented in commercial VR games. Although artificially reducing the field-of-view during movement can improve comfort, the standard restrictor is typically implemented using a symmetric circular mask that blocks imagery in the periphery of the visual field. This reduces users’ visibility of the virtual environment and can negatively impact their subjective experience. In this paper, we propose and evaluate a novel asymmetric field-of-view restrictor that maintains visibility of the ground plane during movement. We conducted a remote user study that sampled from the population of VR headset owners. The experiment used a within-subjects design that compared the ground-visible restrictor, the traditional symmetric restrictor, and a control condition without FOV restriction. Participation required navigating through a complex maze-like environment using a controller during three separate virtual reality sessions conducted at least 24 hours apart. Results showed that ground-visible FOV restriction offers benefits for user comfort, postural stability, and subjective sense of presence. Additionally, we found no evidence of negative drawbacks to maintaining visibility of the ground plane during FOV restriction, suggesting that the proposed technique is superior for experienced users compared to the widely used symmetric restrictor.
- Published
- 2021
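The asymmetric restrictor described in the abstract can be reduced to a per-pixel opacity function. The sketch below is a hypothetical minimal reconstruction, not the authors' implementation; the `fov_radius`, `feather`, and `horizon_y` parameters are illustrative:

```python
import math

def restrictor_alpha(x, y, fov_radius=0.6, feather=0.1,
                     ground_visible=True, horizon_y=0.0):
    """Restrictor opacity (0 = clear, 1 = fully blocked) at a point
    (x, y) in normalized view coordinates centered on the screen.

    With ground_visible=False this is the standard symmetric circular
    mask; with ground_visible=True, pixels below the horizon are never
    occluded, so the ground plane stays visible during movement."""
    if ground_visible and y < horizon_y:
        return 0.0                    # spare the ground region entirely
    r = math.hypot(x, y)
    if r <= fov_radius:
        return 0.0                    # inside the clear central circle
    # Feathered transition from clear to fully opaque.
    return min(1.0, (r - fov_radius) / feather)
```

A peripheral point above the horizon is occluded, while the mirror point below the horizon stays visible, which is exactly the asymmetry the study evaluates.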
153. An LSL-Middleware Prototype for VR/AR Data Collection
- Author
-
Kangsoo Kim, Qile Wang, Qinqi Zhang, Roghayeh Barmaki, and Vincent J. Beardsley
- Subjects
Task (computing), Data collection, Match moving, Computer science, Human–computer interaction, Middleware, Headset, Augmented reality, Virtual reality, Mixed reality
- Abstract
Multimodal data offers great opportunities for measuring human behavior in virtual/augmented reality (VR/AR) research and interaction. However, it is challenging to collect, coordinate, and synchronize high volumes of data while preserving high frame rates and quality. Lab Streaming Layer (LSL) is an open-source framework that allows various types of multimodal data to be synchronously collected. In this work, we push the boundaries of the LSL framework by introducing an open-Source MultimOdal framewOrk for Tracking Hardware (SMOOTH), an LSL-based middleware. Built on top of LSL, SMOOTH supports real-time data collection and streaming from VR/AR hardware that LSL does not currently support, such as the Microsoft Azure Kinect and Windows Mixed Reality headsets and controllers. We also conducted a preliminary qualitative evaluation of SMOOTH's effectiveness at collecting synchronized image, depth, infrared, audio, and 3D motion tracking data.
- Published
- 2021
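A central chore in multimodal collection of this kind is aligning samples from streams with different rates by timestamp. The sketch below is a generic nearest-timestamp alignment in plain Python, shown only to illustrate the idea; it is not SMOOTH's or LSL's actual API:

```python
import bisect

def align_nearest(ref_ts, other_ts):
    """For each timestamp in ref_ts, return the index of the nearest
    timestamp in other_ts (both lists sorted ascending). This mirrors
    the post-hoc alignment step applied to multimodal recordings in
    which each stream carries its own clock-corrected timestamps."""
    idx = []
    for t in ref_ts:
        i = bisect.bisect_left(other_ts, t)
        if i == 0:
            idx.append(0)
        elif i == len(other_ts):
            idx.append(len(other_ts) - 1)
        else:
            # Pick whichever neighbor is closer in time.
            idx.append(i if other_ts[i] - t < t - other_ts[i - 1] else i - 1)
    return idx

# Camera frames at ~10 Hz matched against slightly jittered IMU samples.
camera_ts = [0.00, 0.10, 0.20]
imu_ts = [0.01, 0.09, 0.22]
pairs = align_nearest(camera_ts, imu_ts)
```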
154. Validating the wearable MUSE headset for EEG spectral analysis and Frontal Alpha Asymmetry
- Author
-
Cedric Cannard, Arnaud Delorme, and Helané Wahbeh
- Subjects
Data collection, Computer science, Headset, Spectral density, Wearable computer, Pattern recognition, Alpha (navigation), Electroencephalography, Radio spectrum, Artificial intelligence, Neurofeedback
- Abstract
EEG power spectral density (PSD), the individual alpha frequency (IAF) and the frontal alpha asymmetry (FAA) are all EEG spectral measures that have been widely used to evaluate cognitive and attentional processes in experimental and clinical settings, and that can be used for real-world applications (e.g., remote EEG monitoring, brain-computer interfaces, neurofeedback, neuromodulation, etc.). Potential applications remain limited by the high cost, low mobility, and long preparation times associated with high-density EEG recording systems. Low-density wearable systems address these issues and can increase access to larger and diversified samples. The present study tested whether a low-cost, 4-channel wearable EEG system (the MUSE) could be used to quickly measure continuous EEG data, yielding frequency components similar to a research-grade EEG system (the 64-channel BIOSEMI Active Two). We compare the spectral measures from MUSE EEG data referenced to mastoids to those from BIOSEMI EEG data with two different references for validation. A minimal amount of data was deliberately collected to test the feasibility for real-world applications (EEG setup and data collection being completed in under 5 min). We show that the MUSE can be used to examine power spectral density (PSD) in all frequency bands, the individual alpha frequency (IAF; i.e., peak alpha frequency and alpha center of gravity), and frontal alpha asymmetry. Furthermore, we observed satisfactory internal-consistency reliability in alpha power and asymmetry measures recorded with the MUSE. Estimating asymmetry on PAF and CoG frequencies did not yield significant advantages relative to the traditional method (whole alpha band). These findings should advance human neurophysiological monitoring using wearable neurotechnologies in large participant samples and increase the feasibility of their implementation in real-world settings.
- Published
- 2021
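Frontal alpha asymmetry is conventionally the difference of log alpha powers between homologous right and left frontal electrodes. A minimal illustration on synthetic signals follows, using a naive DFT periodogram; a real pipeline such as the study's would use Welch's method and artifact cleaning:

```python
import cmath
import math

def band_power(sig, fs, f_lo, f_hi):
    """Total power in [f_lo, f_hi] Hz via a naive one-sided DFT
    periodogram (adequate for illustration only)."""
    n = len(sig)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            X = sum(sig[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += (abs(X) / n) ** 2
    return power

def frontal_alpha_asymmetry(left, right, fs, band=(8.0, 13.0)):
    """FAA = ln(alpha power, right frontal) - ln(alpha power, left frontal)."""
    return (math.log(band_power(right, fs, *band))
            - math.log(band_power(left, fs, *band)))

# Synthetic 2-second recording at 128 Hz: the right channel carries a
# 10 Hz alpha rhythm at twice the left channel's amplitude.
fs = 128
t = [i / fs for i in range(256)]
left = [0.5 * math.sin(2 * math.pi * 10 * x) for x in t]
right = [1.0 * math.sin(2 * math.pi * 10 * x) for x in t]
faa = frontal_alpha_asymmetry(left, right, fs)  # ln(4): power scales with amplitude squared
```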
155. Evaluation of photography using head-mounted display technology (ICAPS) for district Trachoma surveys
- Author
-
Fahd Naufal, Michael Saheb Kashaf, Christopher Bradley, Christopher J. Brady, Sheila K. West, Jeremiah Ngondi, Harran Mkocha, Robert W. Massof, George Kabona, and Meraf A. Wolle
- Subjects
Male, Rural Population, Bacterial Diseases, Eye Diseases, RC955-962, Surveys, Tanzania, Medical Conditions, Surveys and Questionnaires, Arctic medicine. Tropical medicine, Photography, Prevalence, Medicine and Health Sciences, Grading (education), Child, Diagnostic Techniques and Procedures, Skin, Software Engineering, Cameras, Test (assessment), Infectious Diseases, Trachoma, Optical Equipment, Research Design, Child, Preschool, Engineering and Technology, Female, Anatomy, Integumentary System, Public aspects of medicine, RA1-1270, Research Article, Neglected Tropical Diseases, Computer and Information Sciences, Headset, Optical head-mounted display, Equipment, Research and Analysis Methods, Computer Software, Ocular System, Humans, Communication Equipment, Survey Research, Field Tests, Public Health, Environmental and Occupational Health, Infant, Biology and Life Sciences, Eyelids, Tropical Diseases, Visualization, Ophthalmology, Optometry, Eyes, Observational study, Cell Phones, Head
- Abstract
Background As the prevalence of trachoma declines worldwide, it is becoming increasingly expensive and challenging to standardize graders in the field for surveys to document elimination. Photography of the tarsal conjunctiva and remote interpretation may help alleviate these challenges. The purpose of this study was to develop and field-test an Image Capture and Processing System (ICAPS) to acquire hands-free images of the tarsal conjunctiva for upload to a virtual reading center for remote grading. Methodology/Principal findings This observational study was conducted during a district-level prevalence survey for trachomatous inflammation—follicular (TF) in Chamwino, Tanzania. The ICAPS was developed using a Samsung Galaxy S8 smartphone, a Samsung Gear VR headset, a foot pedal trigger, and customized software allowing for hands-free photography. After a one-day training course, three trachoma graders used the ICAPS to collect images from 1305 children ages 1–9 years, which were expert-graded remotely for comparison with field grades. In our experience, the ICAPS was successful at scanning and assigning barcodes to images, focusing on the everted eyelid with adequate examiner hand visualization, and capturing images with sufficient detail to grade TF. The percentage of children with TF by photos and by field grade was 5%. Agreement between grading of the images compared to the field grades at the child level was kappa = 0.53 (95%CI = 0.40–0.66). There were ungradable images for at least one eye in 199 children (9.1%), with more occurring in children ages 1–3 (18.5%) than older children ages 4–9 (4.2%) (χ2 = 145.3, p,
Author summary Trachoma is the leading infectious cause of blindness worldwide, caused by the bacterium Chlamydia trachomatis. Programs targeting trachoma elimination in endemic regions largely rely on periodic prevalence surveys to monitor progress, but training field graders requires active cases, which is becoming challenging as prevalence declines. Photography of the tarsal conjunctiva with remote interpretation via telemedicine may serve as a more auditable, effective, and cost-efficient method for surveys. We developed and evaluated the Image Capture and Processing System (ICAPS), a smartphone-based, hands-free, head-mounted camera system (Samsung Galaxy S8 with custom app, Samsung Gear VR headset, and a Bluetooth-linked foot pedal trigger). The ICAPS was easy to use in challenging field conditions and was able to upload images from Tanzania and link them to field data. The percentage of TF was 5% by both field grade and photo grade, with agreement kappa = 0.53. Additional field training and enhanced certification of photographers may help reduce the proportion of ungradable images; further research on reasons for mismatch of grades between field and photo is needed.
- Published
- 2021
156. Detection of Mental Stress through EEG Signal in Virtual Reality Environment
- Author
-
Krzysztof Smółka, Grzegorz Zwoliński, and Dorota Kamińska
- Subjects
TK7800-8360, Computer Networks and Communications, Computer science, Speech recognition, Headset, Virtual reality, Electroencephalography, Convolutional neural network, eye movement desensitization and reprocessing (EMDR), mental stress detection, electroencephalography (EEG), Electrical and Electronic Engineering, affective computing, virtual reality (VR), Deep learning, Support vector machine, machine learning, Hardware and Architecture, Control and Systems Engineering, Multilayer perceptron, Signal Processing, Artificial intelligence, Electronics, Stroop effect
- Abstract
This paper investigates the use of an electroencephalogram (EEG) signal to classify a subject’s stress level while using virtual reality (VR). For this purpose, we designed an acquisition protocol based on alternating relaxing and stressful scenes in the form of a VR interactive simulation, accompanied by an EEG headset to monitor the subject’s psycho-physical condition. Relaxation scenes were developed based on scenarios created for psychotherapy treatment utilizing bilateral stimulation, while the Stroop test worked as a stressor. The experiment was conducted on a group of 28 healthy adult volunteers (office workers), participating in a VR session. Subjects’ EEG signal was continuously monitored using the EMOTIV EPOC Flex wireless EEG head cap system. After the session, volunteers were asked to fill in questionnaires again regarding their current stress level and mood. Then, we classified the stress level using a convolutional neural network (CNN) and compared the classification performance with conventional machine learning algorithms. The best results were obtained when considering all brain waves (96.42%) with multilayer perceptron (MLP) and support vector machine (SVM) classifiers.
- Published
- 2021
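As a toy stand-in for the paper's CNN/MLP/SVM models (which require the real training data), the sketch below builds a relative band-power feature vector and classifies it with a nearest-centroid rule; the band definitions are standard, but the centroid values are invented purely for illustration:

```python
import math

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def features(band_powers):
    """Normalize absolute band powers to relative powers summing to 1."""
    total = sum(band_powers[b] for b in BANDS)
    return [band_powers[b] / total for b in BANDS]

def nearest_centroid(x, centroids):
    """Return the label whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Hypothetical class centroids: relaxation dominated by alpha,
# stress dominated by beta/gamma.
centroids = {
    "relaxed":  features({"delta": 2, "theta": 3, "alpha": 6, "beta": 2, "gamma": 1}),
    "stressed": features({"delta": 2, "theta": 2, "alpha": 2, "beta": 6, "gamma": 3}),
}
sample = features({"delta": 2, "theta": 2, "alpha": 5, "beta": 3, "gamma": 1})
label = nearest_centroid(sample, centroids)
```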
157. A NEUROSCIENCE APPROACH REGARDING STUDENT ENGAGEMENT IN THE CLASSES OF MICROCONTROLLERS DURING THE COVID19 PANDEMIC
- Author
-
Iuliana Marin
- Subjects
FOS: Computer and information sciences, Computer science, Process (engineering), Headset, Mindset, Student engagement, Personalized learning, Test (assessment), Computer Science - Computers and Society, Software, Arduino, Computers and Society (cs.CY), Neuroscience
- Abstract
The process of teaching has been greatly changed by the COVID-19 pandemic. It is possible that studying will no longer resemble the process known to previous generations of students. As the current generations learn by doing and use their intuition, new platforms need to be involved in the teaching process. The current paper proposes a new method to keep the students engaged while learning by involving neuroscience during the classes of Microcontrollers. Arduino and Raspberry Pi boards are studied in the Microcontrollers course using online simulation environments. The Emotiv Insight headset is used by the professor during the theoretical and practical hours of the Microcontrollers course. The analysis performed on the brainwaves generated by the headset provides numerical values for the mood, focus, stress, relaxation, engagement, excitement and interest levels of the professor. The approaches used during teaching were inquiry-based learning, game-based learning and personalized learning. In this way, professors can determine how to improve the connection with their students based on the use of technology and virtual simulation platforms. The results of the test show that game-based learning was the best approach because students had to become problem solvers and start to use the software skills which they will need as future software engineers. The emphasis is put on mastering the mindset by having to choose their actions and to experiment along the way. According to their achievement, students receive experience points in a gamified environment. Professors need to adjust to a new era of teaching and refine their practices and learning philosophy. They need to be able to use virtual platforms with ease, as well as to engage with their students in order to determine and satisfy their needs., 14th International Conference of Education, Research and Innovation (ICERI2021)
- Published
- 2021
158. Face-Mic
- Author
-
Payton Walker, Yingying Chen, Jian Liu, Jiadi Yu, Xiangyu Xu, Nitesh Saxena, Yi Wu, Cong Shi, and Tianfang Zhang
- Subjects
Computer science, Headset, Deep learning, Eavesdropping, Virtual reality, Facial muscles, Dynamics (music), Human–computer interaction, Identity (object-oriented programming), Augmented reality, Artificial intelligence
- Abstract
Augmented reality/virtual reality (AR/VR) has extended beyond 3D immersive gaming to a broader array of applications, such as shopping, tourism, and education. Recently, there has been a large shift from handheld-controller dominated interactions to headset-dominated interactions via voice interfaces. In this work, we show a serious privacy risk of using voice interfaces while the user is wearing the face-mounted AR/VR devices. Specifically, we design an eavesdropping attack, Face-Mic, which leverages speech-associated subtle facial dynamics captured by zero-permission motion sensors in AR/VR headsets to infer highly sensitive information from live human speech, including speaker gender, identity, and speech content. Face-Mic is grounded on a key insight that AR/VR headsets are closely mounted on the user's face, allowing a potentially malicious app on the headset to capture underlying facial dynamics as the wearer speaks, including movements of facial muscles and bone-borne vibrations, which encode private biometrics and speech characteristics. To mitigate the impacts of body movements, we develop a signal source separation technique to identify and separate the speech-associated facial dynamics from other types of body movements. We further extract representative features with respect to the two types of facial dynamics. We successfully demonstrate the privacy leakage through AR/VR headsets by deriving the user's gender/identity and extracting speech information via the development of a deep learning-based framework. Extensive experiments using four mainstream VR headsets validate the generalizability, effectiveness, and high accuracy of Face-Mic.
- Published
- 2021
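A crude version of the separation idea is to split slow body motion from the faster residual that would carry speech-band vibration energy, here with a centered moving average; the paper's actual source-separation technique is more sophisticated, so this is only a conceptual sketch:

```python
def separate(signal, window=15):
    """Split a motion-sensor trace into a slow 'body movement' trend
    (centered moving average) and a fast residual that would carry
    speech-related vibration energy in the Face-Mic setting.
    A crude stand-in for a real source-separation step."""
    n = len(signal)
    half = window // 2
    trend = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        trend.append(sum(signal[lo:hi]) / (hi - lo))
    residual = [s - t for s, t in zip(signal, trend)]
    return trend, residual

# Synthetic trace: a slow ramp (posture drift) plus a fast alternation
# (vibration). The trend should track the ramp, the residual the alternation.
sig = [0.01 * i + (0.2 if i % 2 else -0.2) for i in range(100)]
trend, fast = separate(sig)
```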
159. Vision-Based Communication System for Patients with Amyotrophic Lateral Sclerosis
- Author
-
Neven Saleh and Aya Tarek
- Subjects
Computer science, Headset, Process (computing), Wearable computer, Communications system, Software, Human–computer interaction, User interface, Amyotrophic lateral sclerosis, Headphones
- Abstract
Patients with amyotrophic lateral sclerosis (ALS) require an external communication device. The system takes on different characteristics depending on the stage of the disease. This research provides a vision-based communication system for end-stage patients who can communicate with their eyes. Hardware and software were used to create the communication system. A headset with a wearable glass, sensitive camera, and near-infrared light was used to implement eye-tracking technology. The commands were displayed on a computer screen. The software for the system was created with the eye-tracking and calibration process in mind. Furthermore, smart software for the user interface was created, which included six critical needs to be reported. The system has been tested and has yielded satisfactory results. In this way, patients with ALS can have a dependable, safe, non-invasive, and low-cost communication device to help them with their daily activities.
- Published
- 2021
160. Driving-Induced Neurological Biomarkers in an Advanced Driver-Assistance System
- Author
-
Se Jin Park, Seo Young, and Iqram Hussain
- Subjects
Male, Automobile Driving, physiological biomarker, Computer science, Headset, TP1-1185, Workload, Electroencephalography, Biochemistry, Article, Analytical Chemistry, Physical medicine and rehabilitation, advanced driver assistance system (ADAS), Humans, Electrical and Electronic Engineering, Instrumentation, Resting state fMRI, Speed limit, Chemical technology, Driving simulator, Accidents, Traffic, Cognition, electroencephalogram, Atomic and Molecular Physics, and Optics, Delta wave, mental workload, Biomarkers
- Abstract
Physiological signals are immediate and sensitive to neurological changes resulting from the mental workload induced by various driving environments and are considered a quantifying tool for understanding the association between neurological outcomes and driving cognitive workloads. Neurological assessment, outside of a highly equipped clinical setting, requires an ambulatory electroencephalography (EEG) headset. This study aimed to quantify neurological biomarkers during a resting state and two different scenarios of driving states in a virtual driving environment. We investigated the neurological responses of seventeen healthy male drivers. EEG data were measured in an initial resting state, city-roadways driving state, and expressway driving state using a portable EEG headset in a driving simulator. During the experiment, the participants drove while experiencing cognitive workloads due to various driving environments, such as road traffic conditions, lane changes of surrounding vehicles, the speed limit, etc. The power of the beta and gamma bands decreased, and the power of the delta and theta bands and frontal theta asymmetry increased, in the driving state relative to the resting state. The delta-alpha ratio (DAR) and delta-theta ratio (DTR) showed a strong correlation with the resting state, city-roadways driving state, and expressway driving state. Binary machine-learning (ML) classification models showed near-perfect accuracy between the resting state and driving state. Moderate classification performances were observed between the resting state, city-roadways state, and expressway state in multi-class classification. An EEG-based neurological state prediction approach may be utilized in an advanced driver-assistance system (ADAS).
- Published
- 2021
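The two ratio biomarkers named in the abstract are simple quotients of absolute band powers. A minimal sketch follows; the numeric band powers are illustrative, not the study's measurements:

```python
def eeg_ratios(delta, theta, alpha):
    """Delta-alpha ratio (DAR) and delta-theta ratio (DTR) from absolute
    EEG band powers. A rising DAR is commonly read as cortical slowing
    relative to alpha activity; exact thresholds are study-specific."""
    return {"DAR": delta / alpha, "DTR": delta / theta}

# Illustrative band powers only: delta rises and alpha falls when moving
# from rest to an attention-demanding driving state.
resting = eeg_ratios(delta=4.0, theta=2.0, alpha=8.0)
driving = eeg_ratios(delta=9.0, theta=4.5, alpha=4.5)
```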
161. A2W: Context-Aware Recommendation System for Mobile Augmented Reality Web Browser
- Author
-
Kit Yung Lam, Pan Hui, Lik Hang Lee, and Department of Computer Science
- Subjects
Workstation, Computer science, Headset, Context (language use), Recommender system, Mixed reality, Digital media, World Wide Web, Augmented reality, Web navigation
- Abstract
Augmented Reality (AR) offers new capabilities for blurring the boundaries between physical reality and digital media. However, the capabilities of integrating web contents and AR remain underexplored. This paper presents an AR web browser with an integrated context-aware AR-to-Web content recommendation service, named the A2W browser, to provide continuously user-centric web browsing experiences driven by AR headsets. We implement the A2W browser on an AR headset as our demonstration application, demonstrating the features and performance of the A2W framework. The A2W browser visualizes AR-driven web contents to the user, as suggested by the content-based filtering model in our recommendation system. In our experiments, 20 participants using the adaptive UIs and recommendation system in the A2W browser achieved up to 30.69% time savings compared to smartphone conditions. Accordingly, A2W-supported web browsing on workstations surfaces the recommended information, leading to 41.67% faster access to the target information than typical web browsing.
- Published
- 2021
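Content-based filtering of the kind the abstract mentions can be reduced to cosine similarity between a vector describing the user's current AR context and each candidate page's content vector. The feature space and page names below are hypothetical, since the abstract does not specify A2W's features:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(context_vec, pages, top_k=2):
    """Rank candidate web pages by similarity to the AR context vector."""
    ranked = sorted(pages, key=lambda p: cosine(context_vec, pages[p]),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical 3-term content vectors for candidate pages.
pages = {
    "museum-guide": [0.9, 0.1, 0.0],
    "bus-schedule": [0.1, 0.8, 0.1],
    "coffee-menu":  [0.0, 0.2, 0.9],
}
# Context: the user is looking at an exhibit (first feature dominates).
suggestions = recommend([1.0, 0.2, 0.0], pages)
```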
162. Investigating Player Experience in Virtual Reality Games via Remote Experimentation
- Author
-
Ivan Ip and Penny Sweetser
- Subjects
Player experience, Human–computer interaction, Headset, Immersion (virtual reality), Curiosity, Virtual reality, Psychology, Two stages
- Abstract
This research explores player experience of virtual reality (VR) games through two stages of study. In both stages, we employed the Player Experience Inventory (PXI), a validated tool designed to evaluate player experience. In Stage 1, player experience of VR games was investigated via an online survey with 100 participants. We found that Audio-Visual Appeal, Immersion, and Ease of Control contributed most to player experience in VR games. We found no relationship between player experience and age, time spent playing, VR experience, or VR headset. Stage 2 used remote experimentation to compare VR and non-VR games with 10 participants. We found that differences in player experience can be explained by the Immersion, Progress Feedback, and Curiosity constructs of the PXI.
- Published
- 2021
163. ModularHMD: A Reconfigurable Mobile Head-Mounted Display Enabling Ad-hoc Peripheral Interactions with the Real World
- Author
-
Yoshifumi Kitamura, Maakito Inoue, Kazuki Takashima, Kazuyuki Fujita, Kiyoshi Kiyokawa, and Isamu Endo
- Subjects
Interaction device, Mode (computer interface), Situation awareness, Computer science, Human–computer interaction, Headset, Optical head-mounted display, Modular design, Effective solution
- Abstract
We propose ModularHMD, a new mobile head-mounted display concept, which adopts a modular mechanism and allows a user to perform ad-hoc peripheral interaction with real-world devices or people during VR experiences. ModularHMD is comprised of a central HMD and three removable module devices installed in the periphery of the HMD cowl. Each module has four main states: occluding, extended VR view, video see-through (VST), and removed/reused. Among different combinations of module states, a user can quickly set up the necessary HMD forms, functions, and real-world visions for ad-hoc peripheral interactions without removing the headset. For instance, an HMD user can see her surroundings by switching a module into the VST mode. She can also physically remove a module to obtain direct peripheral visions of the real world. The removed module can be reused as an instant interaction device (e.g., touch keyboards) for subsequent peripheral interactions. Users can end the peripheral interaction and revert to a full VR experience by re-mounting the module. We design ModularHMD’s configuration and peripheral interactions with real-world objects and people. We also implement a proof-of-concept prototype of ModularHMD to validate its interaction capabilities through a user study. Results show that ModularHMD is an effective solution that enables both immersive VR and ad-hoc peripheral interactions.
- Published
- 2021
164. An Open-Source, Wheelchair Accessible and Immersive Driving Simulator for Training People with Spinal Cord Injury
- Author
-
A. Massone, Serena Ricci, A. Bellitto, Maura Casadio, Angelo Basteris, Filippo Gandolfi, Andrea Canessa, Torricelli, Diego, Akay, Metin, and Pons, Jose L.
- Subjects
Rehabilitation, Event (computing), Proof of concept, Computer science, Human–computer interaction, Headset, Controller (computing), Driving simulator, Accessibility, Usable
- Abstract
Independence is one of the greatest achievements for people with Spinal Cord Injury (SCI). Indeed, mobility represents a big challenge that needs to be addressed, especially considering that road accidents are frequently the cause of SCI. Immersive Virtual Reality (VR) combined with a driving simulator may provide a realistic experience, helping users to relearn driving and overcome the traumatic event. The aim of this project was to implement a wheelchair-accessible, immersive driving simulator for the training and assessment of people with SCI. Here we present a proof of concept of an open-source, VR-compatible driving simulator. The system combines a VR headset with an adaptive driving controller and a VR scenario. Starting from CARLA, an open simulator for autonomous driving, we created driving scenarios designed to fit the needs of SCI rehabilitation. Also, we defined future developments required to create a device usable for the assessment of cognitive and motor abilities.
- Published
- 2021
165. Compelling AR Earthquake Simulation with AR Screen Shaking
- Author
-
Naoya Isoyama, Kiyoshi Kiyokawa, Hideaki Uchiyama, Johannes Schirm, and Setthawut Chotchaicharin
- Subjects
Computer science, Headset, Shake, Virtual reality, Human-centered computing, Earthquake simulation, Human–computer interaction, Real-time simulation, Augmented reality, Headphones
- Abstract
Our goal is to simulate earthquakes in a familiar environment, for example in one’s own office, helping users to take the simulation more seriously. We propose an AR earthquake simulation using a video see-through VR headset, use real earthquake data, and implement a novel AR screen shake technique, realistically shaking the entire view of the user. We ran a user study (n=25) in which participants experienced an earthquake both in a VR scene and in two AR scenes with and without the AR screen shake technique. Our results suggest that both AR scenes offer a more compelling experience compared to the VR scene, and the AR screen shake improved immediacy and was preferred by most participants. This showed us how virtual safety training can greatly benefit from an AR implementation, motivating us to further explore this approach for the case of earthquakes.
- Published
- 2021
166. Hybrid Conference Experiences in the ARENA
- Author
-
Anthony Rowe, Ivan Liang, Michael W. Farb, Nuno Pereira, Eric Riebling, and Edward Lu
- Subjects
Computer graphics, Human–computer interaction, Computer science, Headset, Oculus, Augmented reality, Fork (file system), Android (operating system), Architecture, Mixed reality
- Abstract
We propose supporting hybrid conference experiences using the Augmented Reality Edge Network Architecture (ARENA). ARENA is a platform based on web technologies that simplifies the creation of collaborative mixed reality for standard Web Browsers (Chrome, Firefox) in VR, Headset AR/VR Browsers (Magic Leap, Hololens, Oculus Quest 2), and mobile AR (WebXR Viewer for iOS, Chrome with experimental flags for Android, and our own custom WebXR fork for iOS). We use a 3D scan of the conference venue as the backdrop environment for remote users and a model to stage various AR interactions for in-person users. Remote participants can use VR in a browser or a VR headset to navigate the scene. In-person participants can use AR headsets or mobile AR through WebXR browsers to see and hear remote users. ARENA can scale up to hundreds of users in the same scene and provides audio and video with spatial sound that can more closely capture real-world interactions.
- Published
- 2021
167. Patients Prefer a Virtual Reality Approach Over a Similarly Performing Screen-Based Approach for Continuous Oculomotor-Based Screening of Glaucomatous and Neuro-Ophthalmological Visual Field Defects
- Author
-
Amit Bhongade, James John, Rohit Saxena, Radhika Tandon, Remco J. Renken, Dharam Raj, Tapan K. Gandhi, Frans W. Cornelissen, Rijul Saurabh Soans, Clinical Cognitive Neuropsychiatry Research Program (CCNP), and Perceptual and Cognitive Neuroscience (PCN)
- Subjects
CORTEX, Computer science, Headset, Neurosciences. Biological psychiatry. Neuropsychiatry, Virtual reality, Task (project management), TRACKING, User experience design, perimetry, DEFICITS, SYSTEMS, STANDARD AUTOMATED PERIMETRY, user experience, Computer vision, Set (psychology), Original Research, neuro-ophthalmology, LESIONS, General Neuroscience, Eye movement, visual field defects, Visual field, TIME, eye movements, glaucoma, YOUNG, Eye tracking, virtual reality, cross-correlogram, Artificial intelligence, EYE-MOVEMENTS, RC321-571, Neuroscience
- Abstract
Standard automated perimetry (SAP) is the gold standard for evaluating the presence of visual field defects (VFDs). Nevertheless, it has requirements such as prolonged attention, stable fixation, and a need for a motor response that limit application in various patient groups. Therefore, a novel approach using eye movements (EMs) – as a complementary technique to SAP – was developed and tested in clinical settings by our group. However, the original method uses a screen-based eye tracker, which still requires participants to keep their chin and head stable. Virtual reality (VR) has shown much promise in ophthalmic diagnostics – especially in terms of freedom of head movement and precise control over experimental settings, besides being portable. In this study, we set out to see if patients can be screened for VFDs based on their EM in a VR-based framework, and if the results are comparable to those of the screen-based eye tracker. Moreover, we wanted to know if this framework can provide an effective and enjoyable user experience (UX) compared to our previous approach and the conventional SAP. Therefore, we first modified our method and implemented it on a VR head-mounted device with built-in eye tracking. Subsequently, 15 controls naïve to SAP, 15 patients with a neuro-ophthalmological disorder, and 15 glaucoma patients performed three tasks in a counterbalanced manner: (1) a visual tracking task on the VR headset while their EM was recorded, (2) the preceding tracking task but on a conventional screen-based eye tracker, and (3) SAP. We then quantified the spatio-temporal properties (STP) of the EM of each group using a cross-correlogram analysis. Finally, we evaluated the human–computer interaction (HCI) aspects of the participants in the three methods using a user-experience questionnaire.
We find that: (1) the VR framework can distinguish the participants according to their oculomotor characteristics; (2) the STP of the VR framework are similar to those from the screen-based eye tracker; and (3) participants from all the groups found the VR-screening test to be the most attractive. Thus, we conclude that the EM-based approach implemented in VR can be a user-friendly and portable companion to complement existing perimetric techniques in ophthalmic clinics.
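The cross-correlogram analysis can be sketched as follows: the tracking latency is estimated as the lag at which the correlation between the target trajectory and the recorded gaze trace peaks. The function and signal names below are illustrative assumptions, not the authors' code; the real analysis quantifies spatio-temporal properties per participant group.

```python
import numpy as np

def cross_correlogram(target, gaze, fs):
    """Cross-correlate a gaze trace with the target trajectory and
    return (lags_in_seconds, correlogram, lag_at_peak). A positive
    peak lag means the gaze trails the target."""
    t = (target - target.mean()) / target.std()
    g = (gaze - gaze.mean()) / gaze.std()
    xc = np.correlate(g, t, mode="full") / len(t)
    lags = np.arange(-len(t) + 1, len(g)) / fs
    return lags, xc, lags[np.argmax(xc)]

# toy data: the gaze trace is the target trajectory delayed by 5 samples
rng = np.random.default_rng(0)
trajectory = rng.standard_normal(600)
target, gaze = trajectory[5:], trajectory[:-5]
lags, xc, peak_lag = cross_correlogram(target, gaze, fs=60.0)
# peak_lag = 5 / 60 s: the gaze lags the target by five samples
```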
- Published
- 2021
168. Secret-Key Agreement by Asynchronous EEG over Authenticated Public Channels
- Author
- Jelica Radomirovic, Zoran Banjac, Meiran Galis, Milan Milosavljević, Aleksandar Jevremović, and Aleksej Makarov
- Subjects
information reconciliation ,Computer science ,CASCADE ,Science ,QC1-999 ,Speech recognition ,Headset ,General Physics and Astronomy ,Electroencephalography ,Astrophysics ,Article ,medicine ,Randomness tests ,EEG ,medicine.diagnostic_test ,Physics ,key distillation ,Wisconsin Card Sorting Test ,QB460-466 ,Task (computing) ,Asynchronous communication ,advantage distillation ,NIST ,Bit (key) ,Communication channel - Abstract
In this paper, we propose a new system for a sequential secret key agreement based on 6 performance metrics derived from asynchronously recorded EEG signals using an EMOTIV EPOC+ wireless EEG headset. Based on an extensive experiment in which 76 participants were engaged in one chosen mental task, the system was optimized and rigorously evaluated. The system was shown to reach a key agreement rate of 100%, a key extraction rate of 9%, with a leakage rate of 0.0003, and a mean block entropy per key bit of 0.9994. All generated keys passed the NIST randomness test. The system performance was almost independent of the EEG signals available to the eavesdropper who had full access to the public channel.
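As a generic illustration of the key-quality metrics reported (NIST randomness testing and entropy per key bit), the sketch below implements the SP 800-22 frequency (monobit) test and a block-entropy estimate. This is an assumed, simplified stand-in, not the authors' key-distillation pipeline.

```python
import math
from collections import Counter

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the
    hypothesis that ones and zeros occur equally often."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

def entropy_per_bit(bits, k=4):
    """Shannon entropy of non-overlapping k-bit blocks, normalised
    to bits of entropy per key bit (1.0 = ideally random)."""
    blocks = [tuple(bits[i:i + k]) for i in range(0, len(bits) - k + 1, k)]
    counts = Counter(blocks)
    total = len(blocks)
    h = -sum(c / total * math.log2(c / total) for c in counts.values())
    return h / k

# a periodic "key" passes the monobit test yet has no block entropy,
# which is why both kinds of metric are reported together
key = [0, 1] * 128
# monobit_pvalue(key) == 1.0, entropy_per_bit(key) == 0.0
```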
- Published
- 2021
169. 616 Innovative £50 headset and free app sent by post to manage glue ear (the most common childhood hearing loss) when services were closed during the C19 pandemic
- Author
- Isobel Fitzgerald O’Connor, Tamsin Mary Holland Brown, Colin J Morley, and Jessica Bewick
- Subjects
Remote Consultation ,business.product_category ,Hearing loss ,business.industry ,Headset ,medicine.disease ,Bone conduction ,Quality of life (healthcare) ,otorhinolaryngologic diseases ,medicine ,Active listening ,Medical emergency ,medicine.symptom ,Grommet ,business ,Headphones - Abstract
Background Hearing loss from glue ear affects ∼1 in 10 children starting school in the UK/Europe. Of all children globally with a hearing loss, fewer than 10% have access to hearing aids: affordable solutions are needed. Studies have shown that children with glue ear (also known as otitis media with effusion, OME) hear better with bone-conducting headsets. During COVID-19 we investigated whether children with OME who lacked access to audiology or grommet surgery during the pandemic could be aided remotely with £50 bone conduction kits and the HearGlueEar app. Objectives Could families pair and set up a product set (requiring Bluetooth connectivity) themselves? Could children’s quality of life be improved with remotely managed hearing support? Can glue ear be successfully managed remotely? Does this management affect the number of grommet operations required? Methods Starting July 2020, during COVID-19, children aged 3–11 years with OME and on a grommet waiting list were invited to a single-arm, prospective study. They received the kit, instructions and HearGlueEar app by post. By 3 weeks parents were asked to charge and pair the devices, attend a remote consultation and complete an OMQ-14 questionnaire. Remote follow-up lasted 3 months. Results 82% (26 children) of those on the grommet operation waiting list at the time of the first lockdown in 2020 joined the study. Children experienced more challenging listening situations during the pandemic with remote learning, social distancing and masks obscuring lip reading. Families and the children felt empowered to manage the child’s condition at home and school. 100% of families set up the product set remotely without professional help, although some needed additional support during the study, so contact with a professional to troubleshoot was important. Quality of life (OMQ-14) responses were 90% positive.
Comments included: ‘Other people have said, wow his speech is clearer.’; ‘It is making a real difference at home.’; ‘He said over and over again, “I can hear everybody, wow.”’; ‘It is no exaggeration to say this has made an astronomical improvement to his quality of life.’; ‘She is getting on really well with the headphones - pairing them with the iPad at home is simply brilliant.’ One child said, ‘I can hear my best friend again.’ 20% of those in the study avoided grommet operations, either choosing this management option as an alternative or successfully supporting their child’s hearing until the glue ear self-resolved. Conclusions Posting a bone conduction kit, the HearGlueEar app and a remote consultation is an effective management option for children with glue ear. This reduced the need for some grommet operations, affording cost savings, and relieved hospital waiting lists. Children’s hearing was supported at home and at school, as well as through the challenges the pandemic brought with online education, social distancing and communicating with face coverings. https://medrxiv.org/cgi/content/short/2021.01.21.21249496v1
- Published
- 2021
170. Predicting the Health Impacts of Commuting Using EEG Signal Based on Intelligent Approach
- Author
- Mhd Saeed Sharif, Cynthia H.Y. Fu, and Madhav Raj Theeng Tamang
- Subjects
medicine.diagnostic_test ,Artificial neural network ,Headset ,Work (physics) ,medicine ,Cognition ,Telehealth ,Electroencephalography ,Duration (project management) ,Psychology ,Mental health ,Cognitive psychology - Abstract
Commuting to work is an everyday activity for many people and can have a significant effect on our health. Commuting on a regular basis can be a cause of chronic stress, which is linked to poor mental health, high blood pressure, elevated heart rate, and exhaustion. This research investigates the neurophysiological and psychological impact of commuting in real time by analyzing brain waves and applying machine learning. The participants were healthy volunteers with a mean age of 30 years. Portable electroencephalogram (EEG) data were acquired as a measure of stress level. EEG data were acquired from each participant using the non-invasive NeuroSky MindWave headset during 5 continuous activities on their commute to work. This approach allowed effects to be measured both during and following the period of commuting. The results indicate that, whether the commute was short or long, the alpha band exceeded the beta band when participants were in a calm or relaxed state, whereas the beta band was higher than the alpha band when participants were stressed by their commute. Very promising results have been achieved, with an accuracy of 97.5% using a feed-forward neural network. This work focuses on the development of an intelligent model that helps to predict the impact of commuting on participants. In addition, the results obtained from the Positive and Negative Affect Schedule also suggest that participants experience a considerable rise in stress after their commute. In modelling the cognitive and semantic processes underlying social behavior, most recent research projects are still based on individuals, whereas our research focuses on approaches addressing groups as a complete cohort. This study recorded the experience of commuters with a special focus on the use and limitations of emerging computing technologies in telehealth sensors.
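The alpha-versus-beta comparison can be sketched as a periodogram band-power computation. The NeuroSky headset reports band powers directly, so the function below is only an assumed illustration of the underlying idea, with a synthetic trace standing in for real EEG:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean periodogram power of `signal` in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

fs = 256
t = np.arange(0, 4, 1 / fs)
# synthetic "relaxed" trace: strong 10 Hz alpha over weak 20 Hz beta
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
alpha = band_power(eeg, fs, 8, 13)   # alpha band power
beta = band_power(eeg, fs, 13, 30)   # beta band power
stressed = beta > alpha              # the stress indicator described above
```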
- Published
- 2021
171. In Situ Visualization for 3D Ultrasound-Guided Interventions with Augmented Reality Headset
- Author
- Vincenzo Ferrari, Sara Condino, Nadia Cattari, Fabrizio Cutolo, and Mauro Ferrari
- Subjects
3D ultrasound ,Augmented reality ,Head-mounted display ,High-precision manual task ,In-depth guidance ,Technology ,QH301-705.5 ,Computer science ,Headset ,Optical head-mounted display ,Bioengineering ,In situ visualization ,in-depth guidance ,Article ,medicine ,Computer vision ,Biology (General) ,medicine.diagnostic_test ,business.industry ,augmented reality ,Visualization ,head-mounted display ,high-precision manual task ,Ultrasound imaging ,Direct vision ,Artificial intelligence ,business - Abstract
Augmented Reality (AR) headsets have become the most ergonomic and efficient visualization devices to support complex manual tasks performed under direct vision. Their ability to provide hands-free interaction with the augmented scene makes them perfect for manual procedures such as surgery. This study demonstrates the reliability of an AR head-mounted display (HMD), conceived for surgical guidance, in navigating in-depth high-precision manual tasks guided by a 3D ultrasound imaging system. The integration between the AR visualization system and the ultrasound imaging system provides the surgeon with real-time intra-operative information on unexposed soft tissues that are spatially registered with the surrounding anatomic structures. The efficacy of the AR guiding system was quantitatively assessed with an in vitro study simulating a biopsy intervention aimed at determining the level of accuracy achievable. In the experiments, 10 subjects were asked to perform the biopsy on four spherical lesions of decreasing sizes (10, 7, 5, and 3 mm). The experimental results showed that 80% of the subjects were able to successfully perform the biopsy on the 5 mm lesion, with a 2.5 mm system accuracy. The results confirmed that the proposed integrated system can be used for navigation during in-depth high-precision manual tasks.
- Published
- 2021
- Full Text
- View/download PDF
172. A Pilot Study using Covert Visuospatial Attention as an EEG-based Brain Computer Interface to Enhance AR Interaction
- Author
- Cassandra Scheirer, Nataliya Kosmyna, Qiuxuan Wu, Yujie Wang, Pattie Maes, and Chi-Yun Hu
- Subjects
medicine.diagnostic_test ,Computer science ,Covert ,Human–computer interaction ,Headset ,Interface (computing) ,medicine ,Eye movement ,Focusing attention ,Augmented reality ,Electroencephalography ,Brain–computer interface - Abstract
In this work we propose a prototype which combines an existing augmented reality (AR) headset, the Microsoft HoloLens 2, with an electroencephalogram (EEG) Brain-Computer Interface (BCI) system based on covert visuospatial attention (CVSA) – a process of focusing attention on different regions of the visual field without overt eye movements. In this work we did not rely on any stimulus-driven responses. Fourteen participants were able to test the system over the course of two days. To the best of our knowledge, this system is the first AR EEG-BCI integrated prototype that explores the complementary features of the AR headset like HoloLens 2 and the CVSA paradigm.
- Published
- 2021
173. Project Ariel: An Open Source Augmented Reality Headset for Industrial Applications
- Author
- Alvaro Cassinelli, Vincent Ta, James O Campbell, and Damien Constantine Rompapas
- Subjects
Form factor (design) ,Open source ,Work (electrical) ,Human–computer interaction ,Computer science ,Headset ,Work flow ,Augmented reality ,Adaptation (computer science) ,Personal protective equipment - Abstract
Some of the biggest challenges in applying Augmented Reality (AR) technologies on the industry floor lie in the form factor and safety requirements of the head-worn display. These include alleviating issues such as peripheral-view occlusion and adapting to personal protective equipment. In this work we present the design of Project Ariel, an Open Source 3D-printable display specifically designed for use in industrial environments. It is our hope that with this technology the average tradesman can utilize the powerful visualizations AR has to offer, significantly improving their daily workflow.
- Published
- 2021
174. VR as a 3D modelling tool in engineering design applications
- Author
- Daria Vlah, Vanja Čok, and Uroš Urbas
- Subjects
Engineering drawing ,Technology ,razvoj produktov ,Computer science ,QH301-705.5 ,Headset ,QC1-999 ,engineering design ,CAD ,Context (language use) ,Virtual reality ,virtualna resničnost ,površinsko modeliranje ,General Materials Science ,Biology (General) ,product development ,Instrumentation ,QD1-999 ,Fluid Flow and Transfer Processes ,udc:658.512.2 ,business.industry ,Process Chemistry and Technology ,Physics ,General Engineering ,Usability ,Engineering (General). Civil engineering (General) ,CAD modeliranje ,Computer Science Applications ,Chemistry ,surface modelling ,New product development ,Key (cryptography) ,virtual reality ,CAD modelling ,TA1-2040 ,konstruiranje ,Engineering design process ,business - Abstract
The study aims to explore the usefulness of existing VR 3D modelling tools for use in mechanical engineering. Previous studies have investigated the use of VR 3D modelling tools in the conceptual phases of the product development process. Our objective was to find out whether VR tools are useful for creating advanced freeform CAD models that are part of the embodiment design phase in the context of mechanical design science. Two studies were conducted. In the preliminary study, a group of participants modelled a 3D part in a standard desktop CAD application, which provided information about the key characteristics that must be satisfied to obtain a solid model from a surface model. In the research study, conducted with a focus group of participants who were first trained in the use of VR, the same part was modelled using a VR headset. The results were analysed and the fulfilment of the key characteristics in the use of VR was evaluated. It was found that using VR tools provides a fast way to create complex part geometries; however, it has certain drawbacks. Finally, the ease of use and specific features of the VR technology were discussed.
- Published
- 2021
175. Waveguide combiner technologies enabling small form factor mixed reality headset architectures
- Author
- Bernard C. Kress
- Subjects
Form factor (design) ,Focus (computing) ,business.industry ,Computer science ,Headset ,Electrical engineering ,Wearable computer ,Use case ,business ,Field (computer science) ,Mixed reality ,Small form factor - Abstract
For the past decade, optics and display hardware developments for mixed reality and smart glasses were merely a shot in the dark, providing enough display immersion and visual comfort for developers to build up apps, especially for the enterprise field. Today, as universal consumer use cases emerge, such as co-presence, digital twins and remote conferencing, new optical functionalities are required to enable such experiences. It is not only a race toward smaller form factors and lighter-weight devices with a large field of view (FOV) and lower power; requirements now also include additional display and sensing features specifically tuned to implement these new universal use cases. Broad acceptance of wearable displays, especially in the consumer field, is contingent on delivering these new display and sensing capabilities in small form factors and at low power. This talk will focus on waveguide combiner technologies and how these architectures have evolved over the past years to address such new requirements.
- Published
- 2021
176. Remote Virtual Reality Teaching: Closing an Educational Gap During a Global Pandemic
- Author
- Daniel Young, Francis J. Real, Matthew Zackoff, and Rashmi D Sahay
- Subjects
Coronavirus disease 2019 (COVID-19) ,Headset ,education ,Pilot Projects ,Virtual reality ,computer.software_genre ,Pediatrics ,Session (web analytics) ,Videoconferencing ,Pandemic ,Medicine ,Humans ,Prospective Studies ,Child ,Pandemics ,Medical education ,Modalities ,business.industry ,SARS-CoV-2 ,Virtual Reality ,COVID-19 ,Infant ,General Medicine ,Facilitator ,Pediatrics, Perinatology and Child Health ,business ,computer - Abstract
OBJECTIVE Resident physicians are expected to recognize patients requiring escalation of care on day 1 of residency, as outlined by the Association of American Medical Colleges. Opportunities for medical students to assess patients at the bedside or through traditional simulation-based medical education have decreased because of coronavirus disease 2019 restrictions. Virtual reality (VR) delivered remotely via video teleconferencing may address this educational gap. METHODS A prospective pilot study targeting third-year pediatric clerkship students at a large academic children’s hospital was conducted from April to December 2020. Groups of 6 to 15 students participated in a 1.5-hour video teleconferencing session with a physician facilitator donning a VR headset and screen sharing interactive VR cases of a hospitalized infant with respiratory distress. Students completed surveys assessing the immersion and tolerability of the virtual experience and reported its perceived effectiveness relative to traditional educational modalities. Comparisons were analyzed with binomial testing. RESULTS Participants included third-year medical students on their pediatric clerkship. A total of 140 students participated in the sessions, with 63% completing the survey. A majority of students reported VR captured their attention (78%) with minimal side effects. Students reported remote VR training as more effective (P < .001) than reading and online learning and equally or more effective (P < .001) than didactic teaching. Most students (80%) rated remote VR as less effective than bedside teaching. CONCLUSIONS This pilot reveals the feasibility of remote group clinical training with VR via a video conferencing platform, addressing a key experience gap while navigating coronavirus disease 2019 limitations on training.
- Published
- 2021
177. A VR-Based Simulator Using Motion Feedback of a Real Powered Wheelchair for Evaluation of Autonomous Navigation Systems
- Author
- Motoki Shino, Kazuto Futawatari, and Hiroshi Yoshitake
- Subjects
Wheelchair ,Computer science ,Autonomous Navigation System ,Headset ,Perception ,media_common.quotation_subject ,Powered wheelchairs ,Virtual reality ,Motion (physics) ,Simulation ,media_common - Abstract
Autonomous navigation systems for powered wheelchairs are drawing attention as a way to support older adults in moving outdoors. For the development of such systems, it is important to evaluate how users behave and feel when they experience autonomous navigation. VR-based simulators are used to evaluate users' behavior and subjective assessments in various scenarios without putting anybody at risk. However, it is difficult to reproduce realistic motion feedback in simulators that use motion platforms, and this physical feedback influences users' perception of self-motion, which affects their behavior and feelings. In this paper, we propose a novel simulator that gives real motion feedback while simulating various scenarios by combining a virtual reality headset with a real powered wheelchair. Experiment results showed that subjective assessments of comfort during autonomous navigation differed between the proposed simulator and a simulator without motion feedback, indicating that perception of the wheelchair's behavior was enhanced by real motion feedback.
- Published
- 2021
178. A brain-sensing fragrance diffuser for mental state regulation using electroencephalography
- Author
- Po-Chih Kuo, Yang Chen Lin, Shang-Lin Yu, and An-Yu Zhuang
- Subjects
Olfactory perception ,medicine.diagnostic_test ,Computer science ,Brain activity and meditation ,Human–computer interaction ,Headset ,Mental state ,Classifier (linguistics) ,medicine ,Electroencephalography ,Preference ,Brain–computer interface - Abstract
Human brain studies have shown that olfactory perception can regulate emotion and attention networks and prevent depressed mental states. Fragrance diffusers have been used as a potential appliance to reconcile mental conditions and achieve stress relief in daily life. Although perceiving fragrance is a complicated and subjective experience, studies have shown that it is possible to reveal a person's preference for a fragrance from brain activity measured by electroencephalography (EEG). Moreover, using EEG to detect neural/mental states and apply them to human-machine interfaces has also been investigated for years. Therefore, this pilot study has two aims: (1) to identify users’ preference for fragrances from EEG; and (2) to develop a personalized fragrance diffuser, Aroma Box, which can detect three mental states from EEG, when a user feels depressed, stressed, or drowsy, and then release fragrances in real time to help the user recover from the abnormal state. To achieve this goal, we first extracted features and built a classifier to identify the user’s preference for fragrances from EEG. Then we calculated indicators of brain states based on EEG frequency analysis. Based on our preliminary experimental results, we deployed our algorithms in an in-house developed diffuser with a consumer 32-channel EEG headset, which was further implemented in a real-life working environment, and its efficacy was evaluated by two users.
- Published
- 2021
179. Acceptance of a Smartphone-Based Visual Field Screening Platform for Glaucoma: Pre-Post Study
- Author
- Luc Geurts, Vero Vanden Abeele, Sisay Bekele, and Esmael Kedir Nida
- Subjects
genetic structures ,Headset ,Visual impairment ,Medicine (miscellaneous) ,Glaucoma ,Health Informatics ,Unified theory of acceptance and use of technology ,mHealth acceptance ,medicine ,ophthalmic ,Medical diagnosis ,mHealth ,glaucoma screening ,Original Paper ,mobile phone ,business.industry ,mhealth for eye care ,medicine.disease ,eye ,eye diseases ,mhealth ,Computer Science Applications ,Test (assessment) ,ophthalmology ,glaucoma ,Optometry ,visual ,medicine.symptom ,Rural area ,UTAUT ,business - Abstract
Background Glaucoma, the silent thief of sight, is a major cause of blindness worldwide. It is a burden for people in low-income countries, specifically countries where glaucoma-induced blindness accounts for 15% of the total incidence of blindness. More than half the people living with glaucoma in low-income countries are unaware of the disease until it progresses to an advanced stage, resulting in permanent visual impairment. Objective This study aims to evaluate the acceptability of the Glaucoma Easy Screener (GES), a low-cost and portable visual field screening platform comprising a smartphone, a stereoscopic virtual reality headset, and a gaming joystick. Methods A mixed methods study that included 24 eye care professionals from 4 hospitals in Southwest Ethiopia was conducted to evaluate the acceptability of GES. A pre-post design was used to collect perspectives before and after using the GES by using questionnaires and semistructured interviews. A Wilcoxon signed-rank test was used to determine the significance of any change in the scores of the questionnaire items (two-tailed, 95% CI; α=.05). The questionnaire and interview questions were guided by the Unified Theory of Acceptance and Use of Technology. Results Positive results were obtained both before and after use, suggesting the acceptance of mobile health solutions for conducting glaucoma screening by using a low-cost headset with a smartphone and a game controller. There was a significant increase (two-tailed, 95% CI; α=.05) in the average scores of 86% (19/22) of postuse questionnaire items compared with those of preuse questionnaire items. Ophthalmic professionals perceived GES as easy to use and as a tool that enabled the conduct of glaucoma screening tests, especially during outreach to rural areas. However, positive evaluations are contingent on the accuracy of the tool. Moreover, ophthalmologists voiced the need to limit the tool to screening only (ie, not for making diagnoses). 
Conclusions This study supports the feasibility of using a mobile device in combination with a low-cost virtual reality headset and classic controller for glaucoma screening in rural areas. GES has the potential to reduce the burden of irreversible blindness caused by glaucoma. However, further assessment of its sensitivity and specificity is required.
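The pre-post comparison above rests on a Wilcoxon signed-rank test. A minimal self-contained sketch (normal approximation, no correction for zeros beyond dropping them or for ties beyond rank averaging, with made-up illustrative scores rather than the study's data) looks like:

```python
import math

def wilcoxon_signed_rank(pre, post):
    """Two-tailed Wilcoxon signed-rank test for paired scores, using the
    normal approximation; a teaching sketch, not the authors' statistics."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero diffs
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks to ties on |diff|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p-value
    return w_plus, p

pre = [3, 3, 2, 4, 3, 2, 3, 4, 3, 2]    # hypothetical pre-use item scores
post = [5, 4, 4, 5, 4, 4, 5, 5, 4, 4]   # hypothetical post-use item scores
w_plus, p = wilcoxon_signed_rank(pre, post)  # consistent increase: p < .05
```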
- Published
- 2021
180. A Comparative Investigation of Eye Fixation-based 4-Class Emotion Recognition in Virtual Reality Using Machine Learning
- Author
- Lim Jia Zheng, James Mountstephens, and Jason Teo
- Subjects
Computer science ,business.industry ,Headset ,Emotion classification ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Usability ,Virtual reality ,Machine learning ,computer.software_genre ,Class (biology) ,Random forest ,Support vector machine ,Feature (machine learning) ,Artificial intelligence ,business ,computer - Abstract
Research on emotion recognition that relies purely on eye-tracking data is very limited, although eye-tracking technology has great potential for emotion recognition. This paper proposes a novel approach to 4-class emotion classification in virtual reality (VR) using eye-tracking data alone with machine learning algorithms. We classify emotions into four specific classes using VR stimuli. Eye fixation data were used as the emotion-relevant feature in this investigation. A presentation of 360° videos, comprising four different sessions, was played in VR to evoke the users’ emotions. The eye-tracking data were collected and recorded using an add-on eye tracker in the VR headset. Three classifiers were used in the experiment: k-nearest neighbor (KNN), random forest (RF), and support vector machine (SVM). The findings showed that RF had the best performance among the classifiers, achieving the highest accuracy of 80.55%.
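As a toy illustration of classifying emotions from fixation features, a minimal k-nearest-neighbour vote is shown below. The feature pair (mean fixation duration, fixation count) and the feature values are hypothetical; the study itself compared KNN, RF, and SVM on its recorded fixation data.

```python
import math
from collections import Counter

def knn_predict(train, x, k=3):
    """Majority vote among the k training samples nearest to x.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# made-up features: (mean fixation duration in s, fixations per stimulus)
train = [
    ((0.20, 30), "happy"), ((0.22, 28), "happy"), ((0.19, 31), "happy"),
    ((0.45, 12), "sad"),   ((0.48, 10), "sad"),   ((0.50, 11), "sad"),
]
label = knn_predict(train, (0.21, 29))  # nearest neighbours are all "happy"
```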
- Published
- 2021
181. The Use of Counting Peaks Method for the Purpose of Smoothing Filtering Efficiency Assessment in Analysis of Electroencephalography Data
- Author
- Barbara Grochowicz, Aleksandra Kawala-Sterniuk, Jarosław Zygarlicki, Natalia Browarska, Marcin Kaminski, and Mariusz Pelc
- Subjects
business.product_category ,medicine.diagnostic_test ,business.industry ,Computer science ,Headset ,Pattern recognition ,Filter (signal processing) ,Electroencephalography ,Automation ,Moving average ,medicine ,Artificial intelligence ,business ,OpenBCI ,Smoothing ,Headphones - Abstract
In this paper, an innovative counting-peaks method for assessing the efficiency of smoothing filters applied to electroencephalography data is presented. The proposed method gave promising results and can be treated as an introductory method for intelligent machine-based reasoning. The analysed data were obtained from an open-source database and recorded with the OpenBCI EEG headset. The comparison focused on the Moving Average and Savitzky-Golay (SG) filters. The best results were obtained with the use of Moving Average filtering.
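The idea of judging smoothing quality by counting residual peaks can be sketched as follows, using a generic moving average and a strict-local-maximum peak definition on a synthetic trace; the paper's exact peak criteria are not reproduced here.

```python
import math

def moving_average(x, w):
    """Length-w moving average (valid region only)."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def count_peaks(x):
    """Count strict local maxima in a sequence."""
    return sum(1 for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1])

# synthetic trace: slow 2 Hz component plus 40 Hz ripple, fs = 200 Hz, 1 s
fs = 200
raw = [math.sin(2 * math.pi * 2 * n / fs)
       + 0.2 * math.sin(2 * math.pi * 40 * n / fs) for n in range(fs)]
# a 15-sample window spans exactly 3 ripple cycles, so the ripple cancels
smooth = moving_average(raw, 15)
# the ripple peaks disappear, leaving only the two 2 Hz maxima
```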
- Published
- 2021
182. Pilot Study on Using Innovative Counting Peaks Method for Assessment Purposes of the EEG Data Recorded from a Single-Channel Non-Invasive Brain-Computer Interface
- Author
- Aleksandra Kawala-Sterniuk, Michal Niemczynowicz, Jarosław Zygarlicki, Mariusz Pelc, Malgorzata Zygarlicka, and Natalia Browarska
- Subjects
business.product_category ,Channel (digital image) ,medicine.diagnostic_test ,business.industry ,Computer science ,Headset ,Pattern recognition ,Electroencephalography ,Visualization ,Moving average ,medicine ,Artificial intelligence ,business ,Headphones ,Smoothing ,Brain–computer interface - Abstract
This paper presents an innovative method for counting peaks in an EEG signal. In this particular case it was applied for the purpose of efficiency assessment of smoothing filtering. The analysed data were recorded with the NeuroSky MindWave EEG headset and obtained from an open-source database. Moving Average filtering gave the best results, with a score of over 96% peak coverage. The promising results of this method are consistent with conclusions based on visual assessment. The method can be treated as an introductory method for intelligent machine-based reasoning.
- Published
- 2021
183. A Liver Electrosurgery Simulator Developed by Unity Engine
- Author
- Deyu Kong, Xuejun Zhang, Hongjie Zeng, Jingxian Chen, Xianfu Xu, and Yini Wei
- Subjects
Electrosurgery ,Operating environment ,Computer science ,business.industry ,medicine.medical_treatment ,Headset ,Usability ,Virtual reality ,medicine ,Medical training ,Positive attitude ,business ,Simulation ,Haptic technology - Abstract
Virtual surgery has been widely used in medical training for safety and cost reasons. The hepatectomy simulator we propose in this paper is a virtual surgical system designed to simulate the resection of the liver with an electric knife. It can help medical students familiarize themselves with surgical procedures and improve their skills through repeatable training. The liver model used in this study was reconstructed from real CT images, and position-based dynamics was used to make the model deformable. By combining a VR headset with a force feedback device, a more realistic operating environment is presented. According to the evaluation results, users all had a positive attitude towards the usefulness and usability of the simulator. Existing problems of the simulator are discussed in this paper, as well as directions for further research.
- Published
- 2021
184. Human Teleoperation - A Haptically Enabled Mixed Reality System for Teleultrasound
- Author
- Amir Hossein Hadi Hosseinabadi, David Black, Septimiu E. Salcudean, and Yas Oloumi Yazdi
- Subjects
Computer science ,business.industry ,Headset ,Teleoperation ,Robotics ,Virtual device ,Augmented reality ,Computer vision ,Artificial intelligence ,Virtual reality ,business ,Mixed reality ,Haptic technology - Abstract
Current teleguidance methods include verbal guidance and robotic teleoperation, which present tradeoffs between precision and latency versus flexibility and cost. We present a novel concept of "human teleoperation" which bridges the gap between these two methods. A prototype teleultrasound system was implemented which shows the concept’s efficacy. An expert remotely "teleoperates" a person (the follower) wearing a mixed reality headset by controlling a virtual ultrasound probe projected into the person’s scene. The follower matches the pose and force of the virtual device with a real probe. The pose, force, video, ultrasound images, and 3-dimensional mesh of the scene are fed back to the expert. In this control framework, the input and the actuation are carried out by people, but with near robot-like latency and precision. This allows teleguidance that is more precise and faster than verbal guidance, yet more flexible and inexpensive than robotic teleoperation. The system was subjected to tests that show its effectiveness, including mean teleoperation latencies of 0.27 seconds and errors of 7 mm and 6° in pose tracking. The system was also tested with an expert ultrasonographer and four patients and was found to improve the precision and speed of two teleultrasound procedures.
- Published
- 2021
185. Controlling a Mouse Pointer with a Single-Channel EEG Sensor
- Author
- Alberto J. Molina-Cantero, Fernando Gómez-Bravo, Santiago Berrazueta-Alvarado, Juan A. Castro-García, Raúl Jiménez-Naharro, and R. Lopez-Ahumada
- Subjects
Emotion assessment ,Computer science ,Interface (computing) ,Headset ,Movement ,Pointing device ,TP1-1185 ,Biochemistry ,Cursor (databases) ,Article ,2D cursor control ,Analytical Chemistry ,Joystick ,blinks ,Humans ,Attention ,Computer vision ,Blinks ,Electrical and Electronic Engineering ,Instrumentation ,Brain–computer interface ,HCI ,business.industry ,Chemical technology ,Usability ,Electroencephalography ,Fitts’ model ,Atomic and Molecular Physics, and Optics ,attention ,Brain-Computer Interfaces ,Trajectory ,Artificial intelligence ,business ,emotion assessment ,33 Ciencias Tecnológicas - Abstract
(1) Goals: The purpose of this study was to analyze the feasibility of using the information obtained from a one-channel electroencephalography (EEG) signal to control a mouse pointer. We used a low-cost headset, with one dry sensor placed at the FP1 position, to steer a mouse pointer and make selections through a combination of the user's attention level with the detection of voluntary blinks. There are two types of cursor movements: spinning and linear displacement. A sequence of blinks allows for switching between these movement types, while the attention level modulates the cursor's speed. The influence of the attention level on performance was studied. Additionally, Fitts' model and the evolution of the emotional states of participants, among other trajectory indicators, were analyzed. (2) Methods: Twenty participants distributed into two groups (Attention and No-Attention) performed three runs, on different days, in which 40 targets had to be reached and selected. Target positions and distances from the cursor's initial position were chosen to provide eight different indices of difficulty (IDs). A self-assessment manikin (SAM) test and a final survey provided information about the system's usability and the emotions of participants during the experiment. (3) Results: The performance was similar to some brain–computer interface (BCI) solutions found in the literature, with an averaged information transfer rate (ITR) of 7 bits/min. Concerning cursor navigation, some trajectory indicators showed our proposed approach to be as good as common pointing devices, such as joysticks, trackballs, and so on. Only one of the 20 participants reported difficulty in managing the cursor and, according to the tests, most of them assessed the experience positively. Movement times and hit rates were significantly better for participants belonging to the Attention group. (4) Conclusions: The proposed approach is a feasible low-cost solution for controlling a mouse pointer.
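The indices of difficulty and the reported ITR follow from the Shannon formulation of Fitts' law. A brief sketch (the distances, widths, and times below are hypothetical, not the study's actual targets):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' law: ID = log2(D/W + 1), in bits."""
    return math.log2(distance / width + 1)

def itr_bits_per_min(distance, width, movement_time_s):
    """Information transfer rate for one reach-and-select movement."""
    return index_of_difficulty(distance, width) / movement_time_s * 60
```

For example, a 700-pixel reach to a 100-pixel target (ID = 3 bits) completed in 30 s gives 6 bits/min, on the order of the averaged ITR reported above.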
- Published
- 2021
- Full Text
- View/download PDF
186. Seeing Thru Walls: Visualizing Mobile Robots in Augmented Reality
- Author
- Tom Drummond, Elizabeth A. Croft, Morris Gu, Wesley P. Chan, and Akansel Cosgun
- Subjects
FOS: Computer and information sciences ,business.product_category ,Computer science ,Headset ,Visibility (geometry) ,Mobile robot ,Visualization ,Computer Science - Robotics ,Human–computer interaction ,Robot ,Augmented reality ,Roaming ,business ,Robotics (cs.RO) ,Headphones - Abstract
We present an approach for visualizing mobile robots through an Augmented Reality headset when there is no line-of-sight visibility between the robot and the human. Three elements are visualized in Augmented Reality: 1) the robot's 3D model to indicate its position, 2) an arrow emanating from the robot to indicate its planned movement direction, and 3) a 2D grid to represent the ground plane. We conduct a user study with 18 participants, in which each participant is asked to retrieve objects, one at a time, from stations at the two sides of a T-junction at the end of a hallway where a mobile robot is roaming. The results show that the visualizations improved the perceived safety and efficiency of the task and led to participants being more comfortable with the robot within their personal spaces. Furthermore, visualizing the motion intent in addition to the robot model was found to be more effective than visualizing the robot model alone. The proposed system can improve the safety of automated warehouses by increasing the visibility and predictability of robots. Accepted at RO-MAN 2021 "30th IEEE International Conference on Robot and Human Interactive Communication", 6 pages, 5 figures, 5 tables
- Published
- 2021
187. Augmented reality representation of virtual user avatars moving in a virtual representation of the real world at their respective real world locations
- Author
- Christoph Leuze and Matthias Leuze
- Subjects
business.product_category ,Computer science ,Human–computer interaction ,Headset ,Virtual representation ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Augmented reality ,Representation (arts) ,business ,Flight simulator ,Mobile device ,Airplane ,Avatar - Abstract
In this work we present an augmented reality (AR) application that allows a user with an AR display to watch another user, flying an airplane in Microsoft Flight Simulator 2020 (MSFS), at their respective location in the real world. To do this, we take the location data of a virtual 3D airplane model in a virtual representation of the world of a user playing MSFS and stream it via a server to a mobile device. The mobile device user can then see the same 3D airplane model at exactly the real-world location that corresponds to the location of the virtual 3D airplane model in the virtual representation of the world. The mobile device user can also see the avatar's movement updated according to the 3D airplane's movement in the virtual world. We implemented the application on both a cellphone and a see-through headset.
- Published
- 2021
188. Reverse Pass-Through VR
- Author
- Joel Hegland, Brian Wheelwright, Douglas Robert Lanman, and Nathan Matsuda
- Subjects
Human–computer interaction ,Computer science ,Headset ,Autostereoscopy ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Eye contact ,Virtual reality - Abstract
We introduce reverse pass-through VR, wherein a three-dimensional view of the wearer’s eyes is presented to multiple outside viewers in a perspective-correct manner, with a prototype headset containing a world-facing light field display. This approach, in conjunction with existing video (forward) pass-through technology, enables more seamless interactions between people with and without headsets in social or professional contexts. Reverse pass-through VR ties together research in social telepresence and copresence, autostereoscopic displays, and facial capture to enable natural eye contact and other important non-verbal cues in a wider range of interaction scenarios.
- Published
- 2021
189. Real-time Quantitative Visual Inspection using Extended Reality
- Author
- Jason Paul Connelly, Sriram Narasimhan, Zaid Abbas Al-Sabbag, and Chul Min Yeum
- Subjects
Visual inspection ,Pixel ,business.industry ,Computer science ,Headset ,Process (computing) ,Polygon mesh ,Computer vision ,Segmentation ,Artificial intelligence ,Overlay ,business ,Graphical user interface - Abstract
In this study, we propose a technique for quantitative visual inspection that can quantify structural damage using extended reality (XR). The XR headset can display and overlay graphical information on the physical space and process the data from the built-in camera and depth sensor. The device also permits accessing and analyzing image and video streams in real time and utilizing 3D meshes of the environment and camera pose information. By leveraging these features of the XR headset, we build a workflow and graphical interface to capture images, segment damage regions, and evaluate the physical size of damage. A deep learning-based interactive segmentation algorithm called f-BRS was deployed to precisely segment damage regions through the XR headset. A ray-casting algorithm is implemented to obtain the 3D locations corresponding to the pixel locations of the damage region in the image. The size of the damage region is computed from the 3D locations of its boundary. The performance of the proposed method is demonstrated through a field experiment at an in-service bridge where spalling damage is present at its abutment. The experiment shows that the proposed method provides sub-centimeter accuracy for the size estimation.
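Once ray-casting has produced 3-D boundary points for a damage region, its physical size follows from the area of the (approximately planar) boundary polygon. A minimal sketch, assuming ordered boundary vertices; the function name is illustrative, not from the paper:

```python
def polygon_area_3d(points):
    """Area of a planar polygon in 3-D from its ordered boundary vertices,
    via area = 0.5 * |sum_i (p_i x p_(i+1))|, valid for any planar polygon."""
    sx = sy = sz = 0.0
    n = len(points)
    for i in range(n):
        x1, y1, z1 = points[i]
        x2, y2, z2 = points[(i + 1) % n]
        # accumulate the cross product p_i x p_(i+1)
        sx += y1 * z2 - z1 * y2
        sy += z1 * x2 - x1 * z2
        sz += x1 * y2 - y1 * x2
    return 0.5 * (sx * sx + sy * sy + sz * sz) ** 0.5
```

For example, a 0.10 m × 0.10 m spall patch yields 0.01 m² regardless of the wall's orientation in space.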
- Published
- 2021
190. Effects of using headset-delivered virtual reality in road safety research : A systematic review of empirical studies
- Author
- Vankov, Daniel and Jankovszky, David
- Abstract
To reduce serious crashes, contemporary research leverages opportunities provided by technology. A potentially higher added value in reducing road trauma may be hidden in utilising emerging technologies, such as headset-delivered virtual reality (VR). However, no study has systematically analysed the application of such VR in road safety research. Using the PRISMA protocol, our study identified 39 papers presented at conferences or published in scholarly journals. In those sources, we found evidence of VR's applicability in studies involving different road users (drivers, pedestrians, cyclists and passengers). A number of articles were concerned with providing evidence around the potential adverse effects of VR, such as simulator sickness. Other work compared VR with conventional simulators. VR was also contributing to the emerging field of autonomous vehicles. However, few studies leveraged the opportunities that VR presents to positively influence road users' behaviour. Based on our findings, we identified pathways for future research.
- Published
- 2021
191. Mixed reality: the next step in critical emergency calls?
- Author
- De Jonghe, Emilio
- Abstract
Unfortunately, accidents and emergencies happen every day. Today, emergency services rely only on auditory information from the caller to dispatch the right resources and instruct the caller. Imagine if the dispatcher had extra visual information and sensors at their disposal, or if the caller received immersive, self-evident instructions right in front of their eyes, so they could provide aid without taking their hands or their attention away from the emergency. That is all possible with EVU. EVU, the Emergency Vision Unit, is a mixed reality helmet for professionally handling critical emergency calls while the first responders are on their way. EVU provides real-time visualisation of instructions to the caller during an emergency and gives the dispatcher real-time visual information to assist., Integrated Product Design
- Published
- 2021
192. COGITO in Space
- Author
- Daniela de Paulis
- Subjects
Radio telescope ,Computer science ,Computer graphics (images) ,media_common.quotation_subject ,Headset ,Outer space ,Antenna (radio) ,Virtual reality ,Search for extraterrestrial intelligence ,media_common ,Radio astronomy ,Radio wave - Abstract
COGITO in Space is an experiential narrative sending thoughts into outer space as radio waves. The project exists both as a mobile installation and as a performative event staged inside the cabin of the Dwingeloo radio telescope in The Netherlands. For both versions of the project, a team of three neuroscientists prepares the subject with a lab-grade electroencephalogram (EEG) device and a virtual reality (VR) headset showing an experimental video of the Earth seen from space. The brain activity stimulated by the video is recorded and simultaneously transmitted into space in real time, using the antenna of the Dwingeloo radio telescope., The 2021 Assembly of the Order of the Octopus - Participant Talks - Session I: The Humanities & SETI
- Published
- 2021
- Full Text
- View/download PDF
193. OCR-Based Assistive System for Blind People
- Author
- Anisha Jana, Navneet Kumar Prajapati, Tanya Anand, S. Krithiga, and Brahmjot Kaur
- Subjects
Visually impaired ,Computer science ,business.industry ,Headset ,Detector ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Speech synthesis ,computer.software_genre ,Raspberry pi ,Obstacle ,ComputerSystemsOrganization_SPECIAL-PURPOSEANDAPPLICATION-BASEDSYSTEMS ,Tesseract ,Computer vision ,Ultrasonic sensor ,Artificial intelligence ,business ,computer - Abstract
This paper presents a prototype that aims to help visually impaired people by providing assistance with basic activities such as shopping at the supermarket. The system consists of a Raspberry Pi interfaced with a camera, a headset and an ultrasonic sensor. The camera captures an image of the text, which is recognized and then fed to a speech synthesizer that converts the text into speech to be heard by the user. The ultrasonic sensor acts as an obstacle detector.
- Published
- 2021
194. Localization and Navigation System for Blind Persons Using Stereo Vision and a GIS
- Author
- Moncef Aharchi and M.’hamed Ait Kbir
- Subjects
Stereopsis ,Geographic information system ,business.industry ,Computer science ,Human–computer interaction ,Headset ,Assisted GPS ,Navigation system ,Context (language use) ,business ,Stereo camera ,Portable computer - Abstract
Loss of vision caused by infectious diseases has decreased significantly; however, aging will increase the risk that more people acquire vision impairment. Visual information is the basis of most navigation tasks; a person is considered visually impaired when they have no appropriate information about the surrounding environment. With the latest evolution of digital technologies, the assistance provided to visually disabled people during their mobility can be improved. In this context, we propose a system to help the visually impaired move quickly and to know their environment. Indoors, the system uses a stereoscopic camera, a portable computer, and a headset to direct and help visually impaired persons navigate comfortably and securely in familiar and unfamiliar environments. Outdoors, a GPS is used as a positioning method to keep the visually impaired person on the right path; with its dynamic routing and rerouting capabilities, it provides the user with an optimized path. The system can work in both outdoor and indoor environments. A stereoscopic camera is used to detect visual indicators that are used to trace and validate user navigation, provide accurate indoor location measurements, and recognize objects in front of the user. This article focuses mainly on this system and provides a detailed description of its design.
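Object distances from the stereoscopic camera follow the standard rectified-stereo relation Z = f·B/d. A brief sketch under that assumption (the parameter values in the example are hypothetical, not the system's actual calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from its disparity between the rectified
    left and right images: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point at infinity)")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 12 cm baseline, a 42 px disparity places the object 2.0 m ahead of the user.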
- Published
- 2021
195. Electronic Assistant for Impaired People
- Author
- Andrii Andriiovych Pakhomov and Roman Petrovych Sahan
- Subjects
Audio signal ,business.industry ,Computer science ,Headset ,Communications system ,Signal ,Microcontroller ,GSM ,Global Positioning System ,ComputerSystemsOrganization_SPECIAL-PURPOSEANDAPPLICATION-BASEDSYSTEMS ,Computer vision ,Artificial intelligence ,General Packet Radio Service ,business - Abstract
For people with serious visual impairments, a system is proposed that helps to identify obstacles and call for help in an emergency situation. The system is based on a microcontroller and optical, acoustic and electric sensors connected to it, as well as GPS and GSM modules. The modules interact with the person using voice communication. According to the World Health Organization (WHO), 39 million people worldwide are blind and 246 million are visually impaired [1]. People with partial or complete loss of vision face many problems in their daily lives, especially with movement and orientation in their surroundings. A blind person usually uses a traditional stick to improve their mobility. However, the stick cannot provide a person with information beyond its reach. There are smart sticks that use one or several video cameras mounted on the stick to capture images. Captured video images are resized, further processed and converted into acoustic or vibration signals. In such systems, the frequency of the warning signal correlates with the pixel orientation of the camera. There are also systems that use ultrasonic sensors to detect obstacles. The distance to the obstacle, measured by a sound wave, is transmitted to the microcontroller, which sends a sound signal through the speaker. The disadvantage of such systems is the inability to detect hidden obstacles that are dangerous to the visually impaired, such as stairs, pits, puddles, and so on. The proposed system solves these problems by combining the capabilities of acoustic and optical sensors, as well as a water sensor. Support for a person in a difficult situation is also provided by establishing a telephone connection with a trustee. The GPS location information is received by the GPS module, and the microcontroller sends this information via the GSM module to the specified contact number.
The system consists of a microcontroller (controlling the electronic assistant), a sensor system that receives information about the location of the person and obstacles in their path, an effector system that sends the person acoustic and vibration signals about detected obstacles, and a communication system. The sensor system comprises: 1) two ultrasonic sensors to detect obstacles located ahead at knee height and at head height; 2) an infrared sensor to detect stairs and terrain; 3) a water sensor to detect puddles. The sensors collect data in real time and send it to the microcontroller for processing. After processing the sensory information, the microcontroller sends vibration signals to the vibrators installed in the stick's head and acoustic signals to a Bluetooth headset. The system is powered by a recessed battery (not shown). This article proposes a system that helps a visually impaired person reach their destination safely. The system uses a variety of sensors to detect obstacles and warn of them with an audible signal and vibration. The intensity of the sound signal and vibration increases as the person approaches an obstacle. The GPS module tracks the user's location. In case of a dangerous situation, the GSM/GPRS module establishes a connection between the blind person and a trustee.
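The ultrasonic ranging and proximity warning described above reduce to a time-of-flight calculation plus a distance-dependent alert intensity. A minimal sketch, assuming an HC-SR04-style sensor that reports the echo round-trip time (the function names and the 4 m maximum range are illustrative assumptions):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius

def echo_distance_m(round_trip_s):
    """One-way distance to the obstacle from the echo round-trip time:
    the sound travels to the obstacle and back, so halve the path."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def alert_intensity(distance_m, max_range_m=4.0):
    """0.0 when the obstacle is at or beyond max range, 1.0 at contact;
    drives the loudness/vibration ramp as the user approaches."""
    d = min(max(distance_m, 0.0), max_range_m)
    return 1.0 - d / max_range_m
```

A 10 ms round trip corresponds to roughly 1.7 m, and the alert grows linearly as that distance shrinks.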
- Published
- 2021
196. Brain Controlled Lego NXT Mindstorms 2.0 Platform
- Author
- Rosca Sebastian Daniel, Sibisanu Remus Constantin, Leba Monica, and Panaite Arun Fabian
- Subjects
medicine.diagnostic_test ,business.industry ,Computer science ,Headset ,Entertainment industry ,Brain control ,Electroencephalography ,Signal ,Smart toys ,medicine ,OpenBCI ,business ,Portable EEG ,Computer hardware - Abstract
The increased need to perform different kinds of control adapted to various areas, such as rehabilitation, entertainment and smart toys, has created new demand for small, low-cost electroencephalogram (EEG) acquisition devices. There is also a need to develop portable EEG devices that can be used over the long term without further operations such as frequent replacement of the signal-capture electrodes or frequent rehydration in the case of semiconductor polymer-based sensors. In this paper, we propose the use of an OpenBCI neural headset, whose structure has been designed and 3D printed, equipped with 16 reusable EEG electrodes that provide 16 channels of EEG signal acquisition from the user, to control a Lego NXT MindStorms 2.0 wheeled platform programmed in C# that accepts brain control signals as input.
- Published
- 2021
197. Emotion Recognition using Electroencephalography in Response to High Dynamic Range Videos
- Author
- Majid Riaz, Junaid Mir, and Muhammad Majid
- Subjects
Support vector machine ,medicine.diagnostic_test ,Computer science ,Headset ,Speech recognition ,Feature extraction ,medicine ,Electroencephalography ,Valence (psychology) ,Affective computing ,High dynamic range ,Arousal - Abstract
Emotions are internal human feelings that result in physiological and physical changes influencing behaviour and thought. Emotion recognition is critical in affective computing for user modelling and human-computer interaction. Previously, users' emotional states have been recognized by measuring physiological activities in response to various audiovisual standard dynamic range (SDR) emotional stimuli. High dynamic range (HDR) content provides a better visual and immersive experience than SDR stimuli due to higher brightness, enhanced contrast, and saturated colors. However, the impact of HDR multimedia on emotion elicitation and recognition has not been studied yet. In this paper, four audio-visual HDR stimuli were selected that evoke discrete emotions in each quadrant of the arousal-valence plane. Electroencephalography (EEG) signals of 27 subjects were recorded while watching the HDR stimuli using a commercially available Emotiv Insight headset. Time-domain features from 5 channels of EEG signals are extracted and selected to classify emotions into two arousal and valence states using a support vector machine (SVM) classifier. Classification accuracies of 80.55% and 70.37% are achieved for arousal and valence, respectively. Our findings show that HDR videos can act as powerful stimuli for emotion recognition.
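The abstract does not list the exact time-domain features used; a typical per-channel feature set for such EEG pipelines can be sketched as follows (an illustrative assumption, not necessarily the paper's exact feature set):

```python
import math

def time_domain_features(x):
    """Per-channel time-domain features commonly used in EEG emotion
    classification: mean, standard deviation, RMS, zero-crossing count."""
    n = len(x)
    mean = sum(x) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    rms = math.sqrt(sum(v * v for v in x) / n)
    zero_crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    return [mean, std, rms, zero_crossings]
```

Feature vectors from the 5 channels would then be concatenated and passed to the SVM classifier.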
- Published
- 2021
198. Control of an Electric Wheelchair Using Multimodal Biosignals and Machine Learning
- Author
- Gursel Alici and Siobhan O'Brien
- Subjects
business.product_category ,Computer science ,Dongle ,Headset ,USB ,law.invention ,Wheelchair ,law ,Joystick ,Multiple-classification ripple-down rules ,Control system ,business ,Headphones ,Simulation - Abstract
While the current design of electric wheelchairs is well established, people with hand disabilities or motor impairments are limited in their independence. The incorporation of multiple bio-signals in commercial electric wheelchair control systems would increase the functionality of these assistive devices for people with restricted or erratic hand movement, consequently increasing the quality of life of such users. This study reports on the implementation of a multimodal bio-signal control system for an electric wheelchair. In this design, wheelchair control via the standard joystick was replaced by a wireless EEG headset with four direction modes: forward, turn (left), turn (right), and neutral/no movement. Control of the movement of the chair is obtained through the analysis of EEG, EOG and some EMG signals, measured simultaneously. Data is gathered from a minimum number of electrodes, six in total, residing on the low-cost commercial Emotiv EPOC+ EEG headset. Bio-signals are sent from the EPOC+ to a USB dongle in a computer, which filters and processes the data using Python scripts. A moving-average gradient-thresholding Multiple Classification Ripple Down Rules paradigm has been implemented to ascertain the intended movement direction of the user. The movement intention class is sent via Wi-Fi to a Raspberry Pi with a custom PCB wired into the joystick plug. The multimodal bio-signal system proves highly effective, obtaining a classification accuracy of 97.7% over three different trials, though seven instances of false positives were observed, prompting future optimisation of the classification system.
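The moving-average gradient-thresholding stage can be sketched as: smooth the bio-signal with a sliding window, then flag samples where the gradient of the smoothed signal exceeds a threshold as candidate intent events. A minimal sketch (the window size and threshold below are illustrative, not the study's tuned values):

```python
def moving_average(x, w):
    """Sliding-window mean; output is len(x) - w + 1 samples long."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def gradient_events(x, w, threshold):
    """Indices (into the smoothed signal) where the sample-to-sample
    gradient exceeds the threshold, i.e. candidate intent onsets."""
    s = moving_average(x, w)
    return [i for i in range(1, len(s)) if abs(s[i] - s[i - 1]) > threshold]
```

In the full system the detected events would then be classified into one of the four direction modes by the Ripple Down Rules stage.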
- Published
- 2021
199. Virtual reality to promote wellbeing in persons with dementia: A scoping review
- Author
- Lora Appel, Suad Ali, Tanya Narag, Krystyna Mozeson, Zain Pasat, Ani Orchanian-Cheff, and Jennifer L Campos
- Subjects
headset ,wellbeing ,HMD ,quality of life ,ADL ,iADL ,Review Articles ,Virtual reality ,dementia - Abstract
Virtual Reality (VR) technologies have increasingly been considered potentially valuable tools in dementia-related research and could serve as non-pharmacological therapy to improve quality of life (QoL) and wellbeing for persons with dementia (PwD). In this scoping review, we summarize peer-reviewed articles published up to January 21, 2021, on the use of VR to promote wellbeing in PwD. Eighteen manuscripts (reporting on 19 studies) met the inclusion criteria, with a majority published in the past 2 years. Two reviewers independently coded the articles regarding A) intended clinical outcomes and effectiveness of the interventions, B) study sample (characteristics of the participants), C) intervention administration (by whom, in what setting), D) experimental methods (design/instruments), and E) technical properties of the VR systems (hardware/devices and software/content). Emotional outcomes were by far the most common objectives of the interventions, reported in seventeen (89.5%) of the included articles. Outcomes addressing social engagement and personhood in PwD have not been thoroughly explored using VR. Based on the positive impact of VR, future opportunities lie in identifying special features and customization of the hardware/software to afford the most benefit to different sub-groups of the target population. Overall, this review found that VR represents a promising tool for promoting wellbeing in PwD, with positive or neutral impact reported on emotional, social, and functional aspects of wellbeing.
- Published
- 2021
200. A Comparative Study of Drowsiness Detection From Eeg Signals Using Pretrained CNN Models
- Author
- S S Poorna, Madhavarapu Srinivasa Sai Bhargav, Chigurupati Naveen, K Anuraj, Budhi Veera Bharath Chandra, and Mahapatra Medha Sampath Kumar
- Subjects
education.field_of_study ,medicine.diagnostic_test ,Computer science ,business.industry ,Headset ,Population ,Pattern recognition ,Electroencephalography ,Signal ,Convolutional neural network ,Time–frequency analysis ,medicine ,Artificial intelligence ,education ,business - Abstract
Drowsiness has become one of the major causes of road accidents nowadays. To alleviate this issue, a system has been developed that uses electroencephalogram (EEG) signals to detect drowsiness with sufficient reliability. The experiment was conducted on a small population, and the EEG signals were acquired using a 14-channel wireless headset while participants were in a virtual driving environment. To extract the eye closures, the EEG signal was segmented and pre-processed. Then scalograms, which describe the time-frequency characteristics of these segments, were computed. Pretrained convolutional neural network architectures, viz. ResNet-152, ResNet-101, VGG16, VGG19 and AlexNet, were used to distinguish three states of the driver, namely "Sleepy or Drowsy", "Asleep" and "Awake".
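A scalogram is the magnitude of a continuous wavelet transform of the signal. A minimal Morlet-based sketch of producing such a time-frequency image for one EEG segment (the wavelet parameters are illustrative, and the paper's exact preprocessing may differ):

```python
import numpy as np

def scalogram(x, fs, freqs, w=6.0):
    """Morlet-wavelet scalogram |CWT| of a 1-D signal: one row per
    analysis frequency, one column per time sample."""
    x = np.asarray(x, dtype=float)
    t = (np.arange(len(x)) - len(x) / 2) / fs  # centered time axis
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        s = w / (2 * np.pi * f)                # Gaussian envelope scale
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * s**2))
        wavelet /= np.sqrt(s)                  # rough amplitude normalisation
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out
```

The resulting 2-D array is what would be rendered as an image and fed to the pretrained CNNs.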
- Published
- 2021