20,293 results for "Augmented reality"
Search Results
2. The Metaverse Flywheel: Creating Value across Physical and Virtual Worlds.
- Author
- Ritala, Paavo, Ruokonen, Mika, and Kostis, Angelos
- Subjects
- SHARED virtual environments, VALUE creation, VIRTUAL reality, AUGMENTED reality, INNOVATION adoption
- Abstract
This study presents a metaverse flywheel model providing insights into how the emerging layered modular architecture of the metaverse can enable new types of value-creation opportunities for organizations. Based on interviews with early metaverse adopters and innovators, this article identifies three key metaverse affordances: prospection of future conditions, persistence of editable and evolving virtual spaces, and integration between virtual and physical worlds. The findings enhance the nascent metaverse literature by highlighting that new organizing logics are required for metaverse-specific value creation, which goes beyond the previous generation's isolated 3D models and other interfaces. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. The use of social media augmented reality for engaging parents and educating children about road safety
- Author
- Zamani, Naser
- Published
- 2024
4. Exploring the Impact of the Gamified Metaverse on Knowledge Acquisition and Library Anxiety in Academic Libraries.
- Author
- Pradorn Sureephong, Suepphong Chernbumroong, Supicha Niemsup, Pipitton Homla, Kannikar Intawong, and Kitti Puritat
- Subjects
- *SCHOOL environment, *QUALITATIVE research, *T-test (Statistics), *ACADEMIC libraries, *HEALTH occupations students, *STATISTICAL sampling, *LIBRARIANS, *QUESTIONNAIRES, *UNDERGRADUATES, *RANDOMIZED controlled trials, *QUANTITATIVE research, *INFORMATION technology, *PRE-tests & post-tests, *MOTIVATION (Psychology), *VIRTUAL reality, *LIBRARY public services, *RESEARCH methodology, *COLLEGE teacher attitudes, *COMMUNICATION, *LIBRARY orientation, *STUDENT attitudes, *INTERPERSONAL relations, *AUGMENTED reality, *GAMIFICATION, *USER interfaces, *ACCESS to information, *DIGITAL libraries, ANXIETY prevention
- Abstract
This paper investigates the potential of the Gamified Metaverse as a platform for promoting library services. The study compares the effectiveness of a traditional library program with a Metaverse-based library program in terms of knowledge acquisition and library anxiety. The research also examines students' perceptions of implementing gamification within the context of the Gamified Metaverse platform. A mixed-methods approach was adopted, including pre- and post-test analysis, statistical analysis, and qualitative data collection. The results indicate that both the traditional and Metaverse-based library programs effectively increased the participants' knowledge, with no significant difference between the two approaches. However, the Metaverse-based program was found to be less effective in facilitating interaction with librarians and reducing library anxiety. Additionally, students expressed positive perceptions of implementing gamification in the Gamified Metaverse platform, finding it engaging and motivating. These findings contribute to the understanding of the effect of the Metaverse as a tool for promoting library services and enhancing knowledge acquisition. However, it is not as effective in reducing library anxiety, particularly in terms of interaction with librarians and staff. It should be noted that the platform may have limitations such as high costs and potential side effects of virtual reality, making it more suitable as an additional tool for promoting library services, taking into account its feasibility and potential benefits for specific student populations and larger libraries. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Augmented reality for inclusive growth in education: the challenges
- Author
- Mkwizu, Kezia Herman and Bordoloi, Ritimoni
- Published
- 2024
- Full Text
- View/download PDF
6. Augmented reality: the key to unlock customer engagement potential
- Author
- Ganesan, Muruganantham and Kumar, B. Dinesh
- Published
- 2024
- Full Text
- View/download PDF
7. Applying virtual reality and augmented reality to the tourism experience: a comparative literature review
- Author
- Bretos, María A., Ibáñez-Sánchez, Sergio, and Orús, Carlos
- Published
- 2024
- Full Text
- View/download PDF
8. Feasibility and Effectiveness of Augmented Reality Assistance System for Pancreatic Surgery (ARAS-P)
- Author
- Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), University of Kaiserslautern-Landau, and Dr. med. Dr. habil. Gregor A. Stavrou, Head of the Department
- Published
- 2024
9. AR vs In Person Simulation for Medical Workplace Training
- Author
- Thomas Caruso, Clinical Associate Professor
- Published
- 2024
10. Utilization of facial fat grafting augmented reality guidance system in facial soft tissue defect reconstruction.
- Author
- Liu, Kai, Chen, Siyi, Wang, Xudong, Ma, Zhihui, and Shen, Steve G.F.
- Abstract
Background: Successfully restoring facial contours continues to pose a significant challenge for surgeons. This study aims to utilize head-mounted display-based augmented reality (AR) navigation technology for facial soft tissue defect reconstruction and to evaluate its accuracy and effectiveness, exploring its feasibility in craniofacial surgery. Methods: HoloLens 2 was utilized to construct the AR guidance system for facial fat grafting. Twenty artificial cases with facial soft tissue defects were randomly assigned to Group A and Group B, undergoing filling surgeries with the AR guidance system and conventional methods, respectively. All postoperative three-dimensional models were superimposed onto virtual plans to evaluate the accuracy of the system versus conventional filling methods. Additionally, procedure completion time was recorded to assess system efficiency relative to conventional methods. Results: The error in facial soft tissue defect reconstruction assisted by the system in Group A was 2.09 ± 0.56 mm, significantly lower than the 3.23 ± 1.15 mm observed with conventional methods in Group B (p < 0.05). Additionally, the time required for facial defect filling reconstruction using the system in Group A was 25.45 ± 2.58 min, markedly shorter than the 37.05 ± 3.34 min needed with conventional methods in Group B (p < 0.05). Conclusion: The visual navigation offered by the fat grafting AR guidance system presents obvious advantages in facial soft tissue defect reconstruction, facilitating enhanced precision and efficiency in these filling procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Tactile sensitivity alters textile touch perception.
- Author
- Mehta, Sunidhi, Holásková, Ida, and Walker, Matthew
- Subjects
- *TEXTURED woven textiles, *AUGMENTED reality, *LIKERT scale, *SHARED virtual environments, *VIRTUAL reality
- Abstract
Tactility plays a crucial role in our interactions with the physical world including our ability to differentiate textile textures and their associated comfort. There is an increasing focus on digitally interactive haptic experiences enabling consumers to feel virtual objects realistically. This could revolutionize how we experience textiles in e-commerce platforms, virtual and augmented reality, and shape the future of textiles in the metaverse. In this study, we examined the impact of tactile sensitivity on touch perception of a large nonhomogeneous sample of 22 textile swatches. The tactile sensitivity was studied using four factors: assessors' "subject-matter expertise", "frequency of performing handiwork", "frequency of working with textiles", and "familiarity of textile textures". The participants noted their tactile assessment of eight touch attributes of textile swatches on a 5-point Likert scale. Through predictive modeling, we analyzed the effect of tactile sensitivity on participants' tactile assessment scores. Our key findings revealed that participants' tactile sensitivity significantly influenced their perception of the textile textures. Notably, the "frequency of working with textiles" had the most substantial impact on participants' tactile ratings followed by their familiarity with textile textures. Interestingly, the perceptual differences of the isotropy attribute were significant in all the cases. Overall, there was no significant difference in the tactile ratings between textile experts and non-experts, except for nine occurrences, four of which were related to perceptual differences in roughness of the woven fabrics. Conversely, the two groups had no statistically significant differences at all in their perceptions of hairiness, scratchiness, and uniformity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Light field imaging technology for virtual reality content creation: A review.
- Author
- Khan, Ali, Hossain, Md. Moinul, Covaci, Alexandra, Sirlantzis, Konstantinos, and Xu, Chuanlong
- Subjects
- *DEPTH perception, *AUGMENTED reality, *OPTICAL sensors, *USER experience, *CALIBRATION
- Abstract
The light field (LF) imaging technique can capture 3D scene information in 4D by recording both 2D intensity and 2D direction of incoming light rays. Due to this capability, LF has attracted great interest in virtual reality (VR) and augmented reality (AR) for enhanced immersion, improved depth perception and reconstruction of realistic 3D environments. This paper presents a comprehensive review of LF imaging technology and other approaches used for VR content creation. The applications of LF technology beyond VR and AR are also discussed. The challenges and limitations of other approaches for VR content creation are examined. State-of-the-art research has focused on how VR experiences benefit from LF technology and identified the challenges to creating comfortable, immersive and realistic VR content such as (1) image size and resolution, (2) processing speed, (3) precise calibration and (4) depth reconstruction. Recommendations that can be considered for creating immersive VR content are provided to enhance user experience. These recommendations aim to contribute to developing more comfortable and realistic VR content, extending the potential applications of LF imaging technology in diverse fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Development and assessment of case-specific physical and augmented reality simulators for intracranial aneurysm clipping.
- Author
- Civilla, Lorenzo, Dodier, Philippe, Palumbo, Maria Chiara, Redaelli, Alberto C.L., Koenigshofer, Markus, Unger, Ewald, Meling, Torstein R., Velinov, Nikolay, Rössler, Karl, and Moscato, Francesco
- Subjects
- INTRACRANIAL aneurysms, AUGMENTED reality, TEST validity, SYNTHETIC training devices, ANEURYSMS
- Abstract
Background: Microsurgical clipping is a delicate neurosurgical procedure used to treat complex Unruptured Intracranial Aneurysms (UIAs) whose outcome is dependent on the surgeon's experience. Simulations are emerging as excellent complements to standard training, but their adoption is limited by the realism they provide. The aim of this study was to develop and validate a microsurgical clipping simulator platform. Methods: Physical and holographic simulators of UIA clipping have been developed. The physical phantom consisted of a 3D printed hard skull and five (n = 5) rapidly interchangeable, perfused and fluorescence compatible 3D printed aneurysm silicone phantoms. The holographic clipping simulation included a real-time finite-element model of the aneurysm sac, allowing interaction with a virtual clip and its occlusion. Validity, usability, usefulness and applications of the simulators were assessed through clinical scores for aneurysm occlusion and a questionnaire study involving 14 neurosurgical residents (R) and specialists (S) for both the physical (p) and holographic (h) simulators, with scores ranging from 1 (very poor) to 5 (excellent). Results: The physical simulator successfully and accurately replicated the patient-specific anatomy. UIA phantoms were manufactured with an average dimensional deviation from design of 0.096 mm and a dome thickness of 0.41 ± 0.11 mm. The holographic simulation ran at 25–50 fps, providing unique insights into the anatomy and allowing several clips to be tested without manufacturing costs. Aneurysm closure in the physical model, evaluated by fluorescence simulation and post-operative CT, revealed Raymond 1 (full) occlusion in 68.89% and 73.33% of the cases, respectively. Content validity, construct validity, usability and usefulness were observed for both simulators, with the highest scores in clip selection usefulness (Rp = 4.78, Sp = 5.00 for the printed simulator; Rh = 4.00, Sh = 5.00 for the holographic simulator). Conclusions: Both the physical and the holographic simulators were validated and proved usable and useful in selecting valid clips and discarding unsuitable ones. Thus, they represent ideal platforms for realistic patient-specific simulation-based training of neurosurgical residents and hold the potential for further applications in preoperative planning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Influence of next-generation artificial intelligence on headache research, diagnosis and treatment: the junior editorial board members' vision – part 1.
- Author
- Petrušić, Igor, Ha, Woo-Seok, Labastida-Ramirez, Alejandro, Messina, Roberta, Onan, Dilara, Tana, Claudio, and Wang, Wei
- Subjects
- *HEADACHE diagnosis, *HEADACHE treatment, *COMBINATION drug therapy, *PSYCHOLOGICAL burnout, *DIFFUSION of innovations, *ARTIFICIAL intelligence, *NATURAL language processing, *PATIENT care, *MEDICAL research, *PUBLIC health, *QUALITY assurance, *ALGORITHMS, *MEDICAL practice, *EMPLOYEES' workload
- Abstract
Artificial intelligence (AI) is revolutionizing the field of biomedical research and treatment, leveraging machine learning (ML) and advanced algorithms to analyze extensive health and medical data more efficiently. In headache disorders, particularly migraine, AI has shown promising potential in various applications, such as understanding disease mechanisms and predicting patient responses to therapies. Implementing next-generation AI in headache research and treatment could transform the field by providing precision treatments and augmenting clinical practice, thereby improving patient and public health outcomes and reducing clinician workload. AI-powered tools, such as large language models, could facilitate automated clinical notes and faster identification of effective drug combinations in headache patients, reducing cognitive burdens and physician burnout. AI diagnostic models also could enhance diagnostic accuracy for non-headache specialists, making headache management more accessible in general medical practice. Furthermore, virtual health assistants, digital applications, and wearable devices are pivotal in migraine management, enabling symptom tracking, trigger identification, and preventive measures. AI tools also could offer stress management and pain relief solutions to headache patients through digital applications. However, considerations such as technology literacy, compatibility, privacy, and regulatory standards must be adequately addressed. Overall, AI-driven advancements in headache management hold significant potential for enhancing patient care, clinical practice and research, which should encourage the headache community to adopt AI innovations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Compression of room impulse responses for compact storage and fast low-latency convolution.
- Author
- Jälmby, Martin, Elvander, Filip, and van Waterschoot, Toon
- Subjects
- IMPULSE response, AUGMENTED reality, VIRTUAL reality, COMPUTER simulation, QUALITY standards
- Abstract
Room impulse responses (RIRs) are used in several applications, such as augmented reality and virtual reality. These applications require a large number of RIRs to be convolved with audio, under strict latency constraints. In this paper, we consider the compression of RIRs, in conjunction with fast time-domain convolution. We consider three different methods of RIR approximation for the purpose of RIR compression and compare them to state-of-the-art compression. The methods are evaluated using several standard objective quality measures, both channel-based and signal-based. We also propose a novel low-rank-based algorithm for fast time-domain convolution and show how the convolution can be carried out without the need to decompress the RIR. Numerical simulations are performed using RIRs of different lengths, recorded in three different rooms. It is shown that compression using low-rank approximation is a very compelling alternative to the state-of-the-art Opus compression, as it performs as well as or better than Opus on all but one of the considered measures, with the added benefit of being amenable to fast time-domain convolution. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Augmented reality–virtual reality wartime training of reserve prehospital teams: a pilot study.
- Author
- Kaim, Arielle, Milman, Efrat, Zehavi, Eyal, Harel, Amnon, Mazor, Inbal, Jaffe, Eli, and Adini, Bruria
- Subjects
- VIRTUAL reality, AUGMENTED reality, NATIONAL parks & reserves, SYNTHETIC training devices, EMERGENCY medical technicians
- Abstract
Background: In the realm of trauma response preparation for prehospital teams, the combination of Augmented Reality (AR) and Virtual Reality (VR) with manikin technologies is growing in importance for creating training scenarios that closely mirror potential real-life situations. The pilot study focused on training of airway management and intubation for trauma incidents, based on a Trauma AR-VR simulator involving reserve paramedics of the National EMS service (Magen David Adom) who had not practiced for up to six years, activated during the Israel-Gaza conflict (October 2023). The trauma simulator merges the physical and virtual realms by utilizing a real manikin and instruments outfitted with sensors. This integration enables a precise one-to-one correspondence between the physical and virtual environments. Considering the importance of enhancing the preparedness of the reserve paramedics to support the prehospital system in Israel, the study aims to ascertain the impact of AR-VR Trauma simulator training on the modification of key perceptual attitudes such as self-efficacy, resilience, knowledge, and competency among reserve paramedics in Israel. Methods: A quantitative questionnaire was utilized to gauge the influence of AR-VR training on specific psychological and skill-based metrics, including self-efficacy, resilience, medical knowledge, professional competency, confidence in performing intubations, and the perceived quality of the training experience in this pilot study. The methodology entailed administering a pre-training questionnaire, delivering a targeted 30-minute AR-VR training session on airway management techniques, and collecting post-training data through a parallel questionnaire to measure the training's impact. Fifteen reserve paramedics were trained, with a response rate of 80% (n = 12) in both measurements. 
Results: Post-training evaluations indicated a significant uptick in all measured areas, with resilience (3.717±0.611 to 4.008±0.665) and intubation confidence (3.541±0.891 to 3.833±0.608) showing particularly robust gains. The high rating (4.438±0.419 on a scale of 5) of the training quality suggests a positive response to the AR-VR integration for the enhancement of medical training. Conclusions: The application of AR-VR in the training of reserve paramedics demonstrates potential as a key tool for their swift mobilization and efficiency in crisis response. This is particularly valuable for training when quick deployment of personnel is required, training resources are diminished, and 'all hands on deck' is necessary. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Transmedia skill derived from the process of converting films into educational games with augmented reality and artificial intelligence.
- Author
- Del Moral-Pérez, M. Esther, López-Bouzas, Nerea, and Castañeda-Fernández, Jonathan
- Subjects
- CHILDREN'S films, EARLY childhood education, EDUCATIONAL games, ARTIFICIAL intelligence, VIRTUAL reality
- Abstract
Transmedia skill, derived from the process of converting films into educational games using augmented reality and artificial intelligence, involves employing various languages and mediums to adapt an original narrative to another format. This transmedia practice presents an opportunity to cultivate diverse skills in teacher training by transforming film narratives into educational games with Augmented Reality (AR) and Artificial Intelligence (AI). Moreover, these educational games enable student engagement in missions or challenges, enhancing their engagement with educational activities. Thus, this research stems from an Innovation project implemented in the Degree in Early Childhood Education (N=77) with two groups of university students who developed 24 educational games in physical and digital formats. The objectives are: 1) to compare the transmedia process adopted by both groups when converting children's animation films into games, some in digital format and others combining physical and digital resources; and 2) to analyze their transmedia skill reflected in the games developed. The methodology adopted is non-experimental empirical, with a descriptive and comparative nature. Two instruments were designed and validated, one to analyze the transmedia process followed in each case and another to ascertain the level of transmedia skill of the university students. The results reveal that both groups chose different creative approaches to gamify the films, expanding their stories by leveraging the potential of AR and AI to create interactive characters and settings. From this, their transmedia capability could be inferred. Thus, the use of digital applications to collaboratively design games—utilizing film narratives—represented an innovation in their training, holistically enhancing various skills. In conclusion, this experience presents an opportunity to increase the transmedia skills of future educators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Development and pilot validation of laser path anchor (LPA): A novel tool for augmented reality surgical navigation.
- Author
- Qi, Ziyu, Zhang, Jiashu, Chen, Xiaolei, Nimsky, Christopher, and Bopp, Miriam H. A.
- Subjects
- AUGMENTED reality, LASERS, PARALLELISM (Linguistics), ANALYTIC geometry, SURGEONS
- Abstract
Augmented reality navigation (ARN) enhances localization and operation by overlaying virtual guides onto real-world surgical fields. The study introduces a laser path anchor (LPA), a novel tool for ARN in surgery aimed at mitigating depth perception challenges. The LPA anchors the virtual planned path onto the physical laser beam, ensuring accurate puncture from any perspective. The tool consists of a rack, a primary laser emitter, an indicating laser emitter, and an AR marker with intrinsic and extrinsic properties validated through calibration and testing. Intrinsic validation involved measuring the parallelism between the indicating and primary laser using Cartesian graph screens. Extrinsic validation assessed the alignment and shortest distance between the virtual path and primary lasers. Results showed an angle deviation of 0.08° between the indicating and primary laser and 1.44° ± 0.47° between the virtual path and primary lasers, with a shortest distance of 7.60 ± 2.50 mm. The usability test demonstrated the LPA's effectiveness in guiding needle insertions without altering the perspective or distracting the surgeon. Despite some limitations, the LPA enhances the user's perception of linear objects and eliminates the need for perspective adjustments during puncture, warranting further development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Prototype development of a multi-user augmented reality framework for collaborative medical visualization in shared environments.
- Author
- Matyash, Ivan, Graf, Raphael Lutz Y., Bohn, Stefan, Neumuth, Thomas, and Rockstroh, Max
- Subjects
- PROTOTYPES, AUGMENTED reality, QUESTIONNAIRES, COGNITIVE load, COMPUTER users
- Abstract
This study developed and evaluated a prototype for augmented reality (AR) interaction among multiple users sharing a three-dimensional hologram within a shared physical space. A systems requirements analysis was conducted to determine necessary interactions, followed by an iterative implementation plan with testing at each step. The application for the HoloLens 2 (HL2) headset was developed using Unity for 3D design, C# for programming, and Photon for multiplayer capabilities. The prototype was evaluated in a user study utilizing custom and NASA TLX questionnaires, comparing participants' self-reported task success with recorded data to assess the accuracy of self-assessments. The prototype effectively facilitated multi-user interactions within a shared environment, achieving acceptable synchronization for two users. Most participants successfully completed tasks, with TLX scores indicating low cognitive and mechanical load during AR task execution. This study demonstrates AR's viability for displaying and manipulating complex three-dimensional structures within a collaborative medical context among multiple individuals in the same physical space. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. On-demand mitral valve morphometrics during surgical repair.
- Author
- Grizelj, Andela, Sharan, Lalith, Karck, Matthias, De Simone, Raffaele, Romano, Gabriele, and Engelhardt, Sandy
- Subjects
- MITRAL valve, MORPHOMETRICS, AUGMENTED reality, CARDIAC surgery, AORTA surgery
- Abstract
Mitral valve regurgitation is one of the most common heart valve diseases that can occur when the structural composition of the mitral valve is affected. Mitral valve repair, typically performed in a minimally invasive setting, is a complex surgery that aims to reinstate the structural integrity of the valve and thereby restore normal valve function. The current practice lacks quantification of geometrical changes and interactive 3D visualization that could enhance the understanding of pathomorphological changes and impact of surgical correction. This work presents functionalities from smartMVR, a quantitative tool to measure, compare, and analyse changes in mitral valve geometry from data captured on-demand using an RGB-D camera. The feasibility of the approach is demonstrated on a silicone replica of a pathological mitral valve that is iteratively corrected using common surgical techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Exploring mouse necropsy through augmented reality: developing a web application for enhanced learning and visualization.
- Author
- Atmaca, Hasan Tarik
- Subjects
- WEB-based user interfaces, VETERINARY pathology, ANIMAL carcasses, AUGMENTED reality, LABORATORY mice
- Abstract
Necropsy, the examination of animal carcasses to determine the cause of death, is an essential skill for many professionals. Traditional training methods, however, are costly and time-consuming. The article suggests that Web-based Augmented Reality (WebAR) can offer an immersive and cost-effective training experience for laboratory animal necropsy. It describes using photogrammetry techniques to create a virtual necropsy environment consisting of 10 necropsy steps. A questionnaire was used to evaluate the usability, educational value, and drawbacks of the designed application by students who tested it. The paper outlines best practices for developing WebAR simulations, including high-fidelity 3D models and interactive elements. Additionally, it presents methods for creating new WebAR applications using specific programs or scripts. This paper highlights the potential benefits of WebAR for laboratory animal necropsy training, emphasizing its accessibility, cost-effectiveness, and scalability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Fostering the AR illusion: a study of how people interact with a shared artifact in collocated augmented reality.
- Author
- Jifan Yang, Bednarski, Steven, Bullock, Alison, Harrap, Robin, MacDonald, Zack, Moore, Andrew, and Nicholas Graham, T. C.
- Subjects
- AUGMENTED reality, FATIGUE (Physiology), AWARENESS, GAMES, ATTENTION
- Abstract
Augmented Reality (AR) is a technology that overlays virtual objects on a physical environment. The illusion afforded by AR is that these virtual artifacts can be treated like physical ones, allowing people to view them from different perspectives and point at them knowing that others see them in the same place. Despite extensive research in AR, there has been surprisingly little research into how people embrace this AR illusion, and in what ways the illusion breaks down. In this paper, we report the results of an exploratory, mixed methods study with six pairs of participants playing the novel Sightline AR game. The study showed that participants changed physical position and pose to view virtual artifacts from different perspectives and engaged in conversations around the artifacts. Being able to see the real environment allowed participants to maintain awareness of other participants’ actions and locus of attention. Players largely entered the illusion of interacting with a shared physical/virtual artifact, but some interactions broke the illusion, such as pointing into space. Some participants reported fatigue around holding their tablet devices and taking on uncomfortable poses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Compact freeform near-eye display system design enabled by optical-digital joint optimization.
- Author
- Xu, Huiming, Yang, Tong, Cheng, Dewen, Wang, Yongtian, Yuan, Qun, and Mao, Xianglong
- Subjects
- DEEP learning, DISPLAY systems, SYSTEMS design, INDUSTRIALISM, AUGMENTED reality, HEAD-mounted displays, VIRTUAL reality
- Abstract
The near-eye display (NED) systems, designed to project content into the human eye, are pivotal in the realms of augmented reality (AR) and virtual reality (VR), offering users immersive experiences. A small volume is the key for a fashionable, easy-to-wear, comfortable NED system for industrial and consumer use. Freeform surfaces can significantly reduce the system volume and weight while improving the system specifications. However, great challenges still exist in further reducing the volume of near-eye display systems as there is also a limit when using only freeform optics. This paper introduces a novel method for designing compact freeform NED systems through a powerful optical-digital joint design. The method integrates a geometrical freeform optical design with deep learning of an image compensation neural network, addressing off-axis nonsymmetric structures with complex freeform surfaces. A design example is presented to demonstrate the effectiveness of the proposed method. Specifically, the volume of a freeform NED system is reduced by approximately 63% compared to the system designed by the traditional method, while still maintaining high-quality display performance. The proposed method opens a new pathway for the design of a next-generation ultra-compact NED system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. GaitKeeper: An AI-Enabled Mobile Technology to Standardize and Measure Gait Speed.
- Author
-
Davey, Naomi, Harte, Gillian, Boran, Aidan, Mc Elwaine, Paul, and Kennelly, Seán P.
- Subjects
- *
WALKING speed , *MEDICAL technology , *ARTIFICIAL intelligence , *MOBILE health , *HEALTH status indicators - Abstract
Gait speed is increasingly recognized as an important health indicator. However, gait analysis in clinical settings often encounters inconsistencies due to methodological variability and resource constraints. To address these challenges, GaitKeeper uses artificial intelligence (AI) and augmented reality (AR) to standardize gait speed assessments. In laboratory conditions, GaitKeeper demonstrates close alignment with the Vicon system and, in clinical environments, it strongly correlates with the Gaitrite system. The integration of a cloud-based processing platform and robust data security positions GaitKeeper as an accurate, cost-effective, and user-friendly tool for gait assessment in diverse clinical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Gait and Balance Assessments with Augmented Reality Glasses in People with Parkinson's Disease: Concurrent Validity and Test–Retest Reliability.
- Author
-
van Bergem, Jara S., van Doorn, Pieter F., Hoogendoorn, Eva M., Geerse, Daphne J., and Roerdink, Melvyn
- Subjects
- *
PARKINSON'S disease , *EQUILIBRIUM testing , *TEST validity , *AUGMENTED reality , *TIME series analysis , *STATISTICAL reliability - Abstract
State-of-the-art augmented reality (AR) glasses record their 3D pose in space, enabling measurements and analyses of clinical gait and balance tests. This study's objective was to evaluate concurrent validity and test–retest reliability for common clinical gait and balance tests in people with Parkinson's disease: Five Times Sit To Stand (FTSTS) and Timed Up and Go (TUG) tests. Position and orientation data were collected in 22 participants with Parkinson's disease using HoloLens 2 and Magic Leap 2 AR glasses, from which test completion durations and durations of distinct sub-parts (e.g., sit to stand, turning) were derived and compared to reference systems and over test repetitions. Regarding concurrent validity, for both tests, an excellent between-systems agreement was found for position and orientation time series (ICC(C,1) > 0.933) and test completion durations (ICC(A,1) > 0.984). Between-systems agreement for FTSTS (sub-)durations were all excellent (ICC(A,1) > 0.921). TUG turning sub-durations were excellent (turn 1, ICC(A,1) = 0.913) and moderate (turn 2, ICC(A,1) = 0.589). Regarding test–retest reliability, the within-system test–retest variation in test completion times and sub-durations was always much greater than the between-systems variation, implying that (sub-)durations may be derived interchangeably from AR and reference system data. In conclusion, AR data are of sufficient quality to evaluate gait and balance aspects in people with Parkinson's disease, with valid quantification of test completion durations and sub-durations of distinct FTSTS and TUG sub-parts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
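The agreement statistics quoted above, ICC(A,1), come from the two-way ANOVA model with absolute agreement for a single measurement. A minimal sketch of that computation from a subjects-by-systems matrix, assuming the standard mean-square decomposition (this is the generic formula, not the authors' analysis pipeline):

```python
import numpy as np

def icc_a1(ratings):
    """ICC(A,1): two-way model, single measurement, absolute agreement.

    ratings: (n_subjects, k_systems) array of paired measurements.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-system means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)               # between-subjects mean square
    ms_c = ss_cols / (k - 1)               # between-systems mean square
    ms_e = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Identical measurements from both systems yield an ICC of 1, while a constant offset between systems lowers the absolute-agreement ICC even when the two series are perfectly correlated.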
26. RGBTSDF: An Efficient and Simple Method for Color Truncated Signed Distance Field (TSDF) Volume Fusion Based on RGB-D Images.
- Author
-
Li, Yunqiang, Huang, Shuowen, Chen, Ying, Ding, Yong, Zhao, Pengcheng, Hu, Qingwu, and Zhang, Xujie
- Subjects
- *
AUGMENTED reality , *ROBOTICS , *MEMORY , *DETECTORS , *NOISE - Abstract
RGB-D image mapping is an important tool in applications such as robotics, 3D reconstruction, autonomous navigation, and augmented reality (AR). Efficient and reliable mapping methods can improve the accuracy, real-time performance, and flexibility of sensors in various fields. However, the widely used Truncated Signed Distance Field (TSDF) still suffers from inefficient memory management, making it difficult to use directly for large-scale 3D reconstruction. To address this problem, this paper proposes a highly efficient and accurate TSDF voxel fusion method, RGBTSDF. First, based on the sparse characteristics of the volume, an improved grid octree is used to manage the whole scene, and a hard-coding method is proposed for indexing. Second, during depth map fusion, the depth map is interpolated to achieve a more accurate voxel fusion effect. Finally, a mesh extraction method with texture constraints is proposed to overcome the effects of noise and holes and improve the smoothness and refinement of the extracted surface. We comprehensively evaluate RGBTSDF and similar methods through experiments on public datasets and on datasets collected by commercial scanning devices. Experimental results show that RGBTSDF requires less memory, achieves real-time performance using only the CPU, improves fusion accuracy, and achieves finer grid details. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
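The TSDF fusion step underlying methods like RGBTSDF is a per-voxel weighted running average of truncated signed distances, in the style of Curless and Levoy. A minimal dense-grid sketch in NumPy (the paper's octree management and hard-coded indexing are not reproduced; the intrinsics `K` and camera pose `T_cam` are illustrative assumptions):

```python
import numpy as np

def integrate(tsdf, weight, depth, K, T_cam, voxel_size=0.1, trunc=0.06, w_max=64.0):
    """Fuse one depth frame into a dense TSDF grid via a weighted running average."""
    nx, ny, nz = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size  # voxel centers (world)
    pts_h = np.c_[pts, np.ones(len(pts))]
    cam = (np.linalg.inv(T_cam) @ pts_h.T).T[:, :3]                    # camera frame
    z = cam[:, 2]
    near = z > 1e-6                                  # only voxels in front of the camera
    uv = (K @ cam.T).T
    u = np.full(len(z), -1, dtype=int)
    v = np.full(len(z), -1, dtype=int)
    u[near] = np.round(uv[near, 0] / z[near]).astype(int)
    v[near] = np.round(uv[near, 1] / z[near]).astype(int)
    h, w = depth.shape
    valid = near & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    sdf = d - z                                      # signed distance along the viewing ray
    valid &= (d > 0) & (sdf > -trunc)                # drop depth holes and far-behind voxels
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    flat_t = tsdf.reshape(-1)                        # views of contiguous grids:
    flat_w = weight.reshape(-1)                      # in-place updates hit the originals
    w_old = flat_w[valid]
    flat_t[valid] = (flat_t[valid] * w_old + tsdf_new[valid]) / (w_old + 1.0)
    flat_w[valid] = np.minimum(w_old + 1.0, w_max)
```

Voxels in front of the observed surface converge toward +1, voxels at the surface toward 0, and voxels more than one truncation band behind it are left untouched.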
27. Applications of Augmented Reality in Neuro-Oncology: A Case Series.
- Author
-
Dellaretti, Marcos, Figueiredo, Hian P.G., Soares, André G., Froes, Luiz E.V., Gomes, Fernando Cotrim, and Faraj, Franklin
- Subjects
- *
COMPUTER-assisted surgery , *AUGMENTED reality , *INTRACRANIAL tumors , *PATIENT safety , *CRANIOTOMY , *MICROSURGERY ,TUMOR surgery - Abstract
Augmented reality (AR) is a technological tool that superimposes two-dimensional virtual images onto three-dimensional real-world scenarios through the integration of neuronavigation and a surgical microscope. The aim of this study was to demonstrate our initial experience with AR and to assess its application in oncological neurosurgery. This is a case series of 31 patients who underwent surgery at Santa Casa BH for the treatment of intracranial tumors between March 4, 2022, and July 14, 2023. The application of AR was evaluated in each case through three parameters: whether the virtual images aided the incision, whether they aided the craniotomy, and whether they aided intraoperative microsurgery decisions. Of the 31 patients, 5 developed new neurological deficits postoperatively. One patient died, a mortality rate of 3.0%. Complete tumor resection was achieved in 22 patients, and partial resection in 6 patients. In all patients, AR was used to guide the incision and craniotomy, leading to improved and more precise surgical approaches. As intraoperative microsurgery guidance, it proved useful in 29 cases. The application of AR appears to enhance surgical safety for both the patient and the surgeon. It allows more refined immediate operative planning, from head positioning to skin incision and craniotomy. Additionally, it supports decision-making in the intraoperative microsurgery phase, with a potentially positive impact on surgical outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Visual guidance method for artificial assembly in visual blind areas based on augmented reality.
- Author
-
Zheng, Yizhen, Li, Yuefeng, Wu, Wentao, Meng, Fanwei, and Chen, Changyu
- Abstract
Manual assembly in blind areas is impeded by unknown assembly area paths and the assembler's obstructed vision, which greatly affects assembly efficiency and accuracy. To solve this type of difficult blind area assembly problem, a visual guidance method for blind area manual assembly using augmented reality assistance combined with path planning and pose solving has been proposed. In the proposed framework, a Denavit-Hartenberg (D-H) parameter model of the human arm and the arm shape angle was first constructed to solve the forward and inverse kinematics, and the changes in joint angles during motion were obtained. Using the designed Rapidly exploring Random Tree*-Artificial Potential Field (RRT*-APF) algorithm, feasible collision-free arm movement paths were then searched in the assembly area, and the path points and scene models containing assembly position information were entered into augmented reality devices for visualization. At the same time, to obtain the real-time status of the arm end during blind area assembly, the Inertial Measurement Unit (IMU) fusion-extended Kalman filtering algorithm was used to analyze and solve the spatial geometric pose of the simplified rigid link model of the arm, thus achieving assisted positioning of the end of the arm in blind area scenarios. Finally, a comprehensive experiment was conducted by constructing a blind area assembly test scenario. Experimental results showed that this method can significantly improve the efficiency of blind area assembly operations and effectively reduce assembly error rates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
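The Denavit-Hartenberg (D-H) arm model mentioned above composes one homogeneous transform per joint to obtain the forward kinematics. A minimal sketch using the standard D-H convention (the paper's specific arm parameters and arm-shape-angle formulation are not reproduced):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint under the standard D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; returns the base-to-end-effector pose."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```

For a planar two-link arm with unit link lengths and zero twist, both joint angles at zero place the end effector at (2, 0, 0); rotating the first joint by 90 degrees moves it to (0, 2, 0). Inverse kinematics, as used in the paper, solves the reverse problem from such a chain.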
29. Development and Usability Evaluation of Augmented Reality Content for Light Maintenance Training of Air Spring for Electric Multiple Unit.
- Author
-
Kim, Kyung-Sik and Kim, Chul-Su
- Subjects
ELECTRIC multiple units ,VISUAL learning ,AIR pressure ,AUGMENTED reality ,FLUID flow ,AIR suspension for automobiles - Abstract
The air spring for railway vehicles uses the air pressure inside the bellows to absorb vibration and shock, improving ride comfort, and adjusts the height of the underframe with a leveling valve to keep the train running stably. This study developed augmented reality content proposing a novel visual technology to effectively support training for air spring maintenance tasks. A special-effect algorithm that displays the dispersion and diffusion of fluid, and an algorithm that allows objects to be rotated at various angles, were proposed to increase the visual learning effect of fluid flow for maintenance. The FDG algorithm can increase the training effect by visualizing the leakage of air at a specific location when the air spring is damaged. In addition, the OAR algorithm allows an axisymmetric model, which is difficult to rotate by gestures, to be rotated at various angles using a touch cube. Using these algorithms, maintenance personnel can effectively learn complex maintenance tasks. UMUX and CSUQ surveys were conducted with 40 railway maintenance workers to evaluate the effectiveness of the developed educational content. The UMUX, across 4 items, averaged a score of 81.56. Likewise, the CSUQ score, from 19 questions in 4 categories, was very high at 80.83. These results show that this AR content is usable for air spring maintenance and field training support. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
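For context, a UMUX score like the 81.56 reported above is conventionally computed from four 7-point items, with odd items positively worded and even items negatively worded, rescaled to 0-100. A sketch of the standard scoring (Finstad, 2010), not the authors' instrument details:

```python
def umux_score(ratings):
    """Standard UMUX scoring: four 7-point Likert items.

    Items 1 and 3 are positively worded (item score = rating - 1);
    items 2 and 4 are negatively worded (item score = 7 - rating).
    The summed item scores are rescaled to a 0-100 range.
    """
    assert len(ratings) == 4
    contrib = [(r - 1) if i % 2 == 0 else (7 - r) for i, r in enumerate(ratings)]
    return sum(contrib) * 100.0 / 24.0
```

All-best responses (7, 1, 7, 1) score 100 and all-worst responses (1, 7, 1, 7) score 0; a study-level figure such as 81.56 is the mean of the per-respondent scores.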
30. Modifications to ArduSub That Improve BlueROV SITL Accuracy and Design of Hybrid Autopilot.
- Author
-
Ng, Patrick and Krieg, Michael
- Subjects
DEGREES of freedom ,AUGMENTED reality ,AUTOMATIC control systems ,NONLINEAR systems ,VEHICLE models - Abstract
Improvements to ArduSub for the BlueROV2 (BROV2) Heavy, necessary for accurate simulation and autonomous controller design, were implemented and validated in this work. The simulation model was made more accurate with new data obtained from real-world testing and values from the literature. The manual control algorithm in the BROV2 firmware was replaced with one compatible with automatic control. In the Robot Operating System (ROS), a proportional-derivative (PD) controller was implemented to assist augmented reality (AR) pilots in controlling the angular degrees of freedom (DOF) of the vehicle. Open-loop testing determined the yaw hydrodynamic model of the vehicle. A general mathematical method to determine PD gains as a function of the desired closed-loop performance was outlined. Testing was carried out in the updated simulation environment. Step-response testing found that a modified derivative gain was necessary. Comparable real-world results were obtained using settings determined in the simulation environment. Frequency-response testing of the modified yaw control law showed that the bandwidth of the nonlinear system had a one-to-one correspondence with the desired closed-loop natural frequency of a simplified linear approximation. The control law was generalized to the angular DOF, while the linear DOF were operated under open-loop control. A full six-DOF simulated dive demonstrated excellent tracking. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
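Determining PD gains from a desired closed-loop performance, as described above, typically means matching the closed-loop characteristic polynomial of a simplified second-order plant to s^2 + 2*zeta*wn*s + wn^2. A hedged sketch for an assumed 1-DOF yaw model I*th'' + c*th' = tau (the paper's identified hydrodynamic coefficients and modified derivative gain are not reproduced):

```python
def pd_gains(inertia, damping, wn, zeta=1.0):
    """PD gains by pole placement for the linear model I*th'' + c*th' = tau,
    with control law tau = kp*(th_ref - th) - kd*th'.

    The closed loop gives I*s^2 + (c + kd)*s + kp = 0; matching this to
    I*(s^2 + 2*zeta*wn*s + wn^2) yields the gains below.
    """
    kp = inertia * wn ** 2
    kd = 2.0 * zeta * wn * inertia - damping
    return kp, kd
```

With inertia 2, damping 0.5, a desired natural frequency of 3 rad/s, and critical damping, this yields kp = 18 and kd = 11.5; dividing the closed-loop coefficients by the inertia recovers 2*zeta*wn = 6 and wn^2 = 9 exactly.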
31. Augmented Reality Training on Combat Sport: Improving the Quality of Physical Fitness and Technical Performance of Young Athletes.
- Author
-
Usra, Meirizal, Lesmana, Irfan Benizar, Octara, Kevin, Bayu, Wahyu Indra, Badau, Adela, Ishak, Asmadi, and Setiawan, Edi
- Subjects
COMBAT sports ,PHYSICAL fitness ,SPORTS & technology ,AUGMENTED reality ,PHYSICAL training & conditioning ,ATHLETES ,MEDICINE balls ,PERFORMANCES - Abstract
Copyright of Retos: Nuevas Perspectivas de Educación Física, Deporte y Recreación is the property of Federacion Espanola de Asociaciones de Docentes de Educacion Fisica and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
32. The use of CNNs in VR/AR/MR/XR: a systematic literature review.
- Author
-
Cortes, David, Bermejo, Belen, and Juiz, Carlos
- Abstract
This study offers a systematic literature review on the application of Convolutional Neural Networks in Virtual Reality, Augmented Reality, Mixed Reality, and Extended Reality technologies. We group these applications into three primary categories: interaction, where the networks amplify user engagement with virtual and augmented settings; creation, showcasing the networks' ability to assist in producing high-quality visual representations; and execution, emphasising the optimisation and adaptability of apps across diverse devices and situations. This research serves as a comprehensive guide for academics, researchers, and professionals in immersive technologies, offering profound insights into the cross-disciplinary realm of network applications in these realities. Additionally, we underscore the notable contributions concerning these realities and their intersection with neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Temporally enhanced graph convolutional network for hand tracking from an egocentric camera.
- Author
-
Cho, Woojin, Ha, Taewook, Jeon, Ikbeom, Jeon, Jinwoo, Kim, Tae-Kyun, and Woo, Woontack
- Abstract
We propose a robust 3D hand tracking system for diverse hand action environments, including hand-object interaction, which utilizes a single color image and a previous pose prediction as input. We observe that existing methods deterministically exploit temporal information in motion space, failing to address realistically diverse hand motions. Prior methods have also paid little attention to the balance between efficiency and robust performance, i.e., between time and accuracy. The Temporally Enhanced Graph Convolutional Network (TE-GCN) utilizes a two-stage framework to encode temporal information adaptively. The system establishes this balance by adopting an adaptive GCN, which effectively learns the spatial dependency between hand mesh vertices. Furthermore, the system leverages the previous prediction by estimating the relevance across image features through an attention mechanism. The proposed method achieves state-of-the-art balanced performance on challenging benchmarks and demonstrates robust results on various hand motions in real scenes. Moreover, the hand tracking system is integrated into a recent HMD with an off-loading framework, achieving a real-time framerate while maintaining high performance. Our study improves the usability of a high-performance hand-tracking method, which can be generalized to other algorithms and contributes to the use of HMDs in everyday life. Our code with the HMD project will be available at . [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
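The adaptive GCN described above builds on the standard graph-convolution update H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) over the hand-mesh vertex graph. A minimal NumPy sketch of one such layer (TE-GCN's learned adaptive adjacency and attention mechanisms are not reproduced):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    H: (n_nodes, in_dim) node features; A: (n_nodes, n_nodes) adjacency;
    W: (in_dim, out_dim) learned weights.
    """
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)           # ReLU activation
```

Each output row mixes a vertex's features with its neighbors', so stacking such layers propagates spatial dependency across the mesh; an "adaptive" variant additionally learns entries of A rather than fixing them to the mesh topology.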
34. Augmented reality navigation in external ventricular drain insertion—a systematic review and meta-analysis.
- Author
-
Buwaider, Ali, El-Hajj, Victor Gabriel, Iop, Alessandro, Romero, Mario, C Jean, Walter, Edström, Erik, and Elmi-Terander, Adrian
- Abstract
External ventricular drain (EVD) insertion using the freehand technique is often associated with misplacements resulting in unfavorable outcomes. Augmented Reality (AR) has been increasingly used to complement conventional neuronavigation. The accuracy of AR-guided EVD insertion has been investigated in several studies, on anthropomorphic phantoms, cadavers, and patients. This review aimed to assess the current knowledge and discuss potential benefits and challenges associated with AR guidance in EVD insertion. MEDLINE, EMBASE, and Web of Science were searched from inception to August 2023 for studies evaluating the accuracy of AR guidance for EVD insertion. Studies were screened for eligibility, and accuracy data were extracted. The risk of bias was assessed using the Cochrane Risk of Bias Tool, and the quality of evidence was assessed using the Newcastle-Ottawa Scale. Accuracy was reported either as the average deviation from target or according to the Kakarla grading system. Of the 497 studies retrieved, 14 were included for analysis. All included studies were prospectively designed. Insertions were performed on anthropomorphic phantoms, cadavers, or patients, using several different AR devices and interfaces. Deviation from target ranged between 0.7 and 11.9 mm. Accuracy according to the Kakarla grading scale ranged between 82 and 96%. Accuracy was higher for AR compared to the freehand technique in all studies that had control groups. Current evidence demonstrates that AR is more accurate than the freehand technique for EVD insertion. However, studies are few, the technology is still developing, and there is a need for further studies on patients in relevant clinical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Immersive analytics with augmented reality in meteorology: an exploratory study on ontology and linked data.
- Author
-
Ouedraogo, Inoussa, Nguyen, Huyen, and Bourdot, Patrick
- Abstract
Although Augmented Reality (AR) has been extensively studied in supporting Immersive Analytics (IA), there are still many challenges in visualising and interacting with big and complex datasets. To deal with these datasets, most AR applications utilise NoSQL databases for storing and querying data, especially for managing large volumes of unstructured or semi-structured data. However, NoSQL databases have limitations in their reasoning and inference capabilities, which can result in insufficient support for certain types of queries. To fill this gap, we aim to explore and evaluate whether an intelligent approach based on ontology and linked data can facilitate visual analytics tasks with big datasets on an AR interface. We designed and implemented a prototype of this method for meteorological data analytics. An experiment was conducted to evaluate the use of a semantic database with linked data compared to a conventional approach in an AR-based immersive analytics system. The results highlight the performance of the semantic approach in helping users analyse meteorological datasets, as well as users' subjective appreciation of working with the AR interface enhanced with ontology and linked data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. The impact of virtual and augmented reality on presence, user experience and performance of Information Visualisation.
- Author
-
Gronowski, Ashlee, Arness, David Caelum, Ng, Jing, Qu, Zhonglin, Lau, Chng Wei, Catchpoole, Daniel, and Nguyen, Quang Vinh
- Abstract
The fast growth of virtual reality (VR) and augmented reality (AR) head-mounted displays provides a new medium for interactive visualisations and visual analytics. Presence is the experience of consciousness within extended reality, and it has the potential to increase task performance. This project studies the impact that a sense of presence has on data visualisation performance and user experience under AR and VR conditions. A within-subjects design recruited 38 participants to complete interactive visualisation tasks within the novel immersive data analytics system for genomic data in AR and VR, and measured speed, accuracy, preference, presence, and user satisfaction. Open-ended user experience responses were also collected. The results implied that VR was more conducive to efficiency, effectiveness, and user experience as well as offering insight into possible cognitive load benefits for VR users. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Usability of visualizing position and orientation deviations for manual precise manipulation of objects in augmented reality.
- Author
-
Zhang, Xiaotian, He, Weiping, Billinghurst, Mark, Qin, Yunfei, Yang, Lingxiao, Liu, Daisong, and Wang, Zenglei
- Abstract
Manual precise manipulation of objects is an essential skill in everyday life, and Augmented Reality (AR) is increasingly being used to support such operations. In this study, we investigate whether detailed visualizations of position and orientation deviations are helpful for AR-assisted manual precise manipulation of objects. We developed three AR instructions with different visualizations of deviations: the logical deviation baseline instruction, the precise numerical deviations-based instruction, and the intuitive color-mapped deviations-based instruction. All three instructions visualized the required directions for manipulation and the logical values of whether the object met the accuracy requirements. Additionally, the latter two instructions provided detailed visualizations of deviations through numerical text and color-mapping respectively. A user study was conducted with 18 participants to compare the three AR instructions. The results showed that there were no significant differences found in speed, accuracy, perceived ease-of-use, and perceived workload between the three AR instructions. We found that the visualizations of the required directions for manipulation and the logical values of whether the object met the accuracy requirements were sufficient to guide manual precise manipulation. The detailed visualizations of the real-time deviations could not improve the speed and accuracy of manual precise manipulation, and although they could improve the perceived ease-of-use and user experience, the effects were not significant. Based on the results, several recommendations were provided for designing AR instructions to support precise manual manipulation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Viewpoint-sharing method with reduced motion sickness in object-based VR/AR collaborative virtual environment.
- Author
-
Tserenchimed, Tuvshintulga and Kim, Hyungki
- Abstract
We propose a viewpoint-sharing method with reduced motion sickness in an object-based remote collaborative virtual environment (CVE). The method is designed for an asymmetric, object-based CVE in which collaborators use heterogeneous devices, such as an immersive virtual reality head-mounted display (VR HMD) and tablet-based augmented reality (AR), and simultaneously interact with 3D virtual objects. Collaborators therefore interact through different interfaces: VR users rely on controllers for virtual locomotion and object manipulation, while AR users perform physical locomotion and use multi-touch input for object manipulation. The proposed viewpoint-sharing method allows both users to observe and manipulate the objects of interest from a shared point of view, enabling participants to interact with the objects without the need for virtual or physical locomotion. During viewpoint-sharing, instead of changing the user's point of view, the proposed method seamlessly transforms the object to provide the shared view, reducing motion sickness and associated discomfort. In our user experiment, the viewpoint-sharing condition resulted in a 35.47% faster task completion time than the baseline condition without viewpoint-sharing. The advantage of viewpoint-sharing in system usability was significant, while task workloads were similar in the baseline and viewpoint-sharing conditions. We expect that the proposed viewpoint-sharing method allows users to communicate quickly, efficiently, and collaboratively in an object-based CVE, and represents a step forward in the development of effective remote, asymmetric CVEs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Advances in display technology: augmented reality, virtual reality, quantum dot-based light-emitting diodes, and organic light-emitting diodes.
- Author
-
Kang, Jihoon, Baek, Geun Woo, Lee, Jun Yeob, Kwak, Jeonghun, and Park, Jae-Hyeung
- Subjects
LIGHT emitting diodes ,AUGMENTED reality ,VIRTUAL reality ,LED displays ,QUANTUM dots - Abstract
Virtual reality, augmented reality, quantum dot light-emitting diodes, and organic light-emitting diodes have progressed over the last two years. Key achievements in these displays are discussed in terms of device performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Optimizing assembly processes with augmented reality: a case study on TurtleBots.
- Author
-
Mingyu Wu, Ye Sheng Koh, Che Fai Yeong, Kai Woon Goh, Dares, Marvin, Eileen Su Lee Ming, Holderbaum, William, and Shahrizal Sunar, Mohd
- Subjects
INDUSTRIAL robots ,AUGMENTED reality ,INDUSTRIAL applications ,COMPARATIVE studies - Abstract
Augmented reality (AR) technology is revolutionizing traditional assembly processes, offering intuitive and interactive guidance that significantly enhances operational efficiency and accuracy. This study investigates the impact of AR on the assembly of Turtlebots, a complex task representative of industrial applications. Through a comparative analysis involving traditional paper manuals, modified paper manuals, and AR-based manuals, the benefits of AR integration are quantitatively assessed. Participants utilizing AR-based manuals completed the Turtlebot assembly 21.72% faster than those using traditional paper manuals, with a notable reduction in assembly time from an average of 03:00:40 to 02:21:26. Furthermore, the incidence of assembly errors significantly decreased, with AR manual users making an average of 2.25 errors compared to 5 by paper manual users. These findings underscore the potential of AR to expedite complex assembly tasks and enhance the accuracy of these processes. The study highlights the novel application of AR in improving both the speed and quality of assembly in an industrial context, demonstrating AR’s role as a pivotal technology for the future of manufacturing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
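The reported 21.72% speed-up can be checked directly from the quoted mean assembly times (a verification sketch, not part of the study):

```python
def percent_reduction(before_s, after_s):
    """Relative time saving, in percent."""
    return 100.0 * (before_s - after_s) / before_s

before = 3 * 3600 + 0 * 60 + 40    # 03:00:40 as seconds
after = 2 * 3600 + 21 * 60 + 26    # 02:21:26 as seconds
```

Rounding `percent_reduction(before, after)` to two decimals reproduces the 21.72% figure quoted in the abstract.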
41. ARGV: 3D genome structure exploration using augmented reality.
- Author
-
Drogaris, Chrisostomos, Zhang, Yanlin, Zhang, Eric, Nazarova, Elena, Sarrazin-Gendron, Roman, Wilhelm-Landry, Sélik, Cyr, Yan, Majewski, Jacek, Blanchette, Mathieu, and Waldispühl, Jérôme
- Subjects
- *
CHROMOSOME structure , *AUGMENTED reality , *MOBILE apps , *CELL phones , *VIRTUAL reality - Abstract
Over the past two decades, scientists have increasingly realized the importance of the three-dimensional (3D) genome organization in regulating cellular activity. Hi-C and related experiments yield 2D contact matrices that can be used to infer 3D models of chromosome structure. Visualizing and analyzing genomes in 3D space remains challenging. Here, we present ARGV, an augmented reality 3D Genome Viewer. ARGV contains more than 350 pre-computed and annotated genome structures inferred from Hi-C and imaging data. It offers interactive and collaborative visualization of genomes in 3D space, using standard mobile phones or tablets. A user study comparing ARGV to existing tools demonstrates its benefits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Event-related pupillary response-based authentication system using eye-tracker add-on augmented reality glasses for individual identification.
- Author
-
Sangin Park, Jihyeon Ha, and Laehyun Kim
- Subjects
PUPILLARY reflex ,EVOKED potentials (Electrophysiology) ,AUGMENTED reality ,SUPPORT vector machines ,BIOMETRIC identification - Abstract
This study aimed at developing a noncontact authentication system using event-related pupillary response (ErPR) epochs in an augmented reality (AR) environment. Thirty participants were shown a rapid serial visual presentation consisting of familiar and unknown human photographs. ErPR was compared with the event-related potential (ERP). ERP and ErPR amplitudes for familiar faces were significantly larger than those for stranger faces. The ERP-based authentication system exhibited perfect accuracy using a linear support vector machine classifier. A quadratic discriminant analysis classifier trained on ErPR features achieved high accuracy (97%) and low false acceptance (0.03) and false rejection (0.03) rates. The correlation coefficients between ERP and ErPR amplitudes were 0.452-0.829, and the corresponding Bland-Altman plots showed fairly good agreement between them. The ErPR-based authentication system allows noncontact authentication without the burden of sensor attachment, via low-cost, noninvasive, and easily implemented technology in an AR environment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
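The false acceptance and false rejection rates quoted above are standard biometric error measures: an impostor classified as genuine counts as a false acceptance, and a genuine user classified as an impostor counts as a false rejection. A minimal sketch of their computation from classifier outputs (the authors' QDA classifier and ErPR features are not reproduced):

```python
import numpy as np

def far_frr(y_true, y_pred, genuine=1):
    """False acceptance rate (impostors accepted) and
    false rejection rate (genuine users rejected)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    impostors = y_true != genuine
    genuines = y_true == genuine
    far = float(np.mean(y_pred[impostors] == genuine)) if impostors.any() else 0.0
    frr = float(np.mean(y_pred[genuines] != genuine)) if genuines.any() else 0.0
    return far, frr
```

Reported values of 0.03 for both rates correspond to roughly 3% of impostor attempts being accepted and 3% of genuine attempts being rejected.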
43. Public participation in urban design with augmented reality technology based on indicator evaluation.
- Author
-
Yuchen Wang and Yin-Shan Lin
- Subjects
URBAN planning ,URBAN policy ,IMMERSIVE design ,AUGMENTED reality ,RESEARCH personnel - Abstract
Decision-making processes in traditional urban design approaches are mainly top-down. Such processes have defects including not only taking a long time to examine design results but also leading to irreversible impacts after design implementation. Policymakers and researchers stress the importance of collaborating with different stakeholders in the process of urban design policy and guideline making in order to minimize these negative impacts. However, introducing public participation into urban design from the bottom up is challenging, especially when the process involves abstract urban design concepts such as indicators. This paper explores a new workflow aimed at enhancing public participation to cooperate in urban design work with the help of a newly designed platform tool powered by mobile augmented-reality technologies. The platform is intuitive to use and displays scenes of potential urban design results by superimposing the virtual models onto real-world environments on mobile devices. The public stakeholders are provided with this platform on-site to evaluate the initial values of urban design indicators by interacting with the prototype design along with an immersive experience. They can also grow familiar with the concepts of the given indicators during this process, which helps them better understand the implications of guidelines in future published urban design drafts and estimate the potential results. Their feedback is collected, which can help urban designers further optimize the indicators in urban design guideline making in order to improve their rationality. This process of urban design involving public participation is repeatable, which makes it possible to continuously adjust the design results. A user study was conducted to examine the platform's usability and its ability to enhance public familiarity with the concepts of given indicators and their willingness to participate in urban design evaluation. The study also attests to the possibility of a workflow that integrates public feedback with the urban design process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Augmented reality books: in-depth insights into children's reading engagement.
- Author
-
Alhamad, Kawla, Manches, Andrew, and McGeown, Sarah
- Subjects
CHILDREN'S books ,AUGMENTED reality ,ELECTRONIC books ,THEMATIC analysis ,CHILDREN'S literature ,IMAGINATION - Abstract
Children's reading engagement is associated with the quality of their reading experiences and outcomes; however, research to date has only examined children's reading engagement within the context of traditional print books or digital texts. Augmented Reality (AR) represents a hybrid reading experience, where traditional paper books are augmented with digital features (e.g., animations, sounds, comprehension questions). This is the first study to examine children's perspectives and experiences of AR books within the context of reading engagement. In total, 38 demographically diverse children (aged 8-10, 21 male, 17 English as an Additional Language, 14 ethnicities, nine with teacher-reported reading difficulties) from the UK participated. After reading an AR book, children participated in interviews about their reading engagement. Deductive (themes) and inductive (subthemes) approaches to thematic analysis were used, examining children's AR reading experiences within the context of their behavioral, cognitive, affective, and social engagement. The majority of children found AR books easy to use and provided examples of how AR books supported their behavioral engagement (e.g., desire to read more/extend reading practices), altered their cognitive engagement (e.g., reading strategies, visual representation/use of imagination, comprehension monitoring), influenced their affective engagement (e.g., diverse positive feelings), and shaped their social engagement (e.g., prompted interaction and discussion), suggesting similarities and differences with traditional print books. This paper provides novel in-depth insights into children's perspectives and experiences of AR books, and provides a foundation for researchers, educators, and AR book designers interested in better supporting children's reading experiences and outcomes with AR books. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Transforming simulation in healthcare to enhance interprofessional collaboration leveraging big data analytics and artificial intelligence.
- Author
-
Guraya, Salman Yousuf
- Subjects
DATA analytics ,ARTIFICIAL intelligence ,INTERPROFESSIONAL collaboration ,MEDICAL education ,MEDICAL personnel ,INTERPROFESSIONAL education - Abstract
Simulation in healthcare, empowered by big data analytics and artificial intelligence (AI), has the potential to drive transformative innovations towards enhanced interprofessional collaboration (IPC). This convergence of technologies revolutionizes medical education, offering healthcare professionals (HCPs) an immersive, iterative, and dynamic simulation platform for hands-on learning and deliberate practice. Big data analytics, integrated into modern simulators, creates realistic clinical scenarios that mimic real-world complexities. This optimization of skill acquisition and decision-making with personalized feedback leads to lifelong learning. Beyond clinical training, simulation-based AI, virtual reality (VR), and augmented reality (AR) automated tools offer avenues for quality improvement, research and innovation, and teamwork. Additionally, the integration of VR and AR enhances the simulation experience by providing realistic environments for practicing high-risk procedures and personalized learning. IPC, crucial for patient safety and quality care, finds a natural home in simulation-based education, fostering teamwork, communication, and shared decision-making among diverse HCP teams. A thoughtful integration of simulation-based medical education into curricula requires overcoming barriers such as professional silos and stereotyping. Technology should be implemented cautiously in clinical training, without neglecting real patient-based medical education. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. The impact of AR online shopping experience on customer purchase intention: An empirical study based on the TAM model.
- Author
-
Guo, Chunrong and Zhang, Xiaodong
- Subjects
- *
CONSUMER behavior , *ONLINE shopping , *TECHNOLOGY Acceptance Model , *INDUSTRIAL capacity , *AUGMENTED reality - Abstract
Augmented Reality (AR) offers a rich business format, convenient applications, great industrial potential, and strong commercial benefits. The integration of AR technology with online shopping has brought tremendous changes to e-commerce. The Technology Acceptance Model (TAM) is a mature model for assessing consumer acceptance of new technologies, and applying it to evaluate the impact of AR online shopping experiences on consumer purchase intention is an urgently needed area of research. Firstly, the typical applications of AR in online shopping were reviewed, and the connotations and experiences of AR online shopping were summarized. Secondly, using the five types of AR online shopping experiences as antecedent variables, and perceived ease of use and perceived usefulness as intermediate variables, a theoretical model was constructed to explore the impact of AR online shopping experiences on customer purchase intentions, followed by an empirical study. Finally, suggestions were proposed for optimizing the online shopping experience to enhance purchase intentions. The article expands the application scenarios of the Technology Acceptance Model and enriches the theory of consumer behavior in Metaverse e-commerce. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Energy-efficient dynamic 3D metasurfaces via spatiotemporal jamming interleaved assemblies for tactile interfaces.
- Author
-
An, Siqi, Li, Xiaowen, Guo, Zengrong, Huang, Yi, Zhang, Yanlin, and Jiang, Hanqing
- Subjects
PEOPLE with visual disabilities ,AUGMENTED reality ,ARRAY processing ,VISUAL education ,ENERGY consumption - Abstract
Inspired by the natural shape-morphing abilities of biological organisms, we introduce a strategy for creating energy-efficient dynamic 3D metasurfaces through spatiotemporal jamming of interleaved assemblies. Our approach, diverging from traditional shape-morphing techniques reliant on continuous energy inputs, utilizes strategically jammed, paper-based interleaved assemblies. By rapidly altering their stiffness at various spatial points and temporal phases during the relaxation of the soft substrate through jamming, we enable the formation of refreshable, intricate 3D shapes with a desirable load-bearing capability. This process, which does not require ongoing energy consumption, ensures energy-efficient and lasting shape displays. Our theoretical model, linking buckling deformation to residual pre-strain, underpins the inverse design process for an array of interleaved assemblies, facilitating the creation of diverse 3D configurations. This metasurface holds notable potential for tactile displays, particularly for the visually impaired, heralding possibilities in education for the visually impaired, haptic feedback, and virtual/augmented reality applications. This paper introduces a load-bearing 3D dynamic metasurface that alters the stiffness of interleaved assemblies at various spatial points and temporal phases through jamming. This approach does not require continuous energy input and was demonstrated as a tactile display for the visually impaired. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. Extended reality training for mass casualty incidents: a systematic review on effectiveness and experience of medical first responders.
- Author
-
del Carmen Cardós-Alonso, María, Otero-Varela, Lucía, Redondo, María, Uzuriaga, Miriam, González, Myriam, Vazquez, Tatiana, Blanco, Alberto, Espinosa, Salvador, and Cintora-Sanz, Ana María
- Subjects
- *
MASS casualties , *MEDICAL information storage & retrieval systems , *RESEARCH funding , *EDUCATIONAL outcomes , *CINAHL database , *DESCRIPTIVE statistics , *VIRTUAL reality , *SYSTEMATIC reviews , *MEDLINE , *DISASTERS , *ATTITUDES of medical personnel , *EMERGENCY medical personnel , *AUGMENTED reality - Abstract
Introduction: Mass casualty incidents (MCIs) are unforeseeable and complex events that occur worldwide; therefore, enhancing the training that medical first responders (MFRs) receive is fundamental to strengthening disaster preparedness and response. In recent years, extended reality (XR) technology has been introduced as a new approach and promising teaching technique for disaster medicine education. Objective: To assess the effectiveness of XR simulation as a tool to train MFRs in MCIs, and to explore the perception and experience of participants to these new forms of training. Design: Systematic review. Methods: This systematic review was conducted in accordance with the "Preferred reporting items for systematic reviews and meta-analyses" (PRISMA) statement. Four databases were searched (MEDLINE, EMBASE, CINAHL and LILACS) using a comprehensive search strategy to identify relevant articles, and MetaQAT was used as a study quality assessment tool. Data from included studies were not pooled for meta-analysis due to heterogeneity. Extracted data were synthesised in a narrative, semi-quantitative manner. Results: A total of 18 studies were included from 8 different countries. Studies encompassed a variety of participants (e.g., nurses, paramedics, physicians), interventions (virtual, mixed and augmented reality), comparators (comparison between two groups and single groups with pre-post evaluation), and outcomes (effectiveness and MFR perception). The synthesis of data indicated that XR was an effective tool for prehospital MCI training by means of improved triage accuracy, triage time, treatment accuracy, performance correctness and/or knowledge acquired. These XR systems were well perceived by MFRs, who expressed their interest in and satisfaction with this learning experience and emphasized its usefulness and relevance.
Conclusion: This research supports the usefulness and significance of XR technology that allows users to enhance their skills and confidence when facing forthcoming disasters. The findings summarize recommendations and suggestions for the implementation, upgrade and/or assessment of this novel and valuable teaching method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Archaeometa: leveraging blockchain for secure and scalable virtual museums in the metaverse.
- Author
-
Aziz, Omer, Farooq, Muhammad Shoaib, Khelifi, Adel, and Shoaib, Mahdia
- Subjects
- *
VIRTUAL museums , *DIGITAL transformation , *BLOCKCHAINS , *AUGMENTED reality , *CULTURAL property - Abstract
The rapid evolution of the digital landscape has catalyzed the integration of blockchain technology within the domain of cultural heritage, particularly in virtual museums within the Metaverse. This study introduces ArchaeoMeta, a novel framework designed to leverage blockchain technology to enhance security, authenticity, and visitor interaction in a virtual museum environment. Utilizing smart contracts deployed on the Ethereum Sepolia testnet, the framework manages visitor interactions and secures digital artifacts, addressing challenges associated with scalability and user experience under varying loads. The performance evaluation involved simulating user interactions, scaling up to ten thousand concurrent users, to assess the impact on transaction latency, gas usage, and blockchain size. Findings reveal significant scalability challenges, as transaction latency and blockchain size increased with the number of users, highlighting areas for optimization in managing high user traffic within the blockchain infrastructure. This study contributes to the understanding of blockchain applications in cultural heritage, suggesting that while ArchaeoMeta offers a robust platform for virtual museums, enhancements in scalability through layer-2 solutions or alternative blockchain platforms are essential for its practical implementation. The framework sets a precedent for future research in the convergence of blockchain technology and cultural heritage preservation, promising a transformative impact on how digital cultural experiences are curated and consumed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Evaluating the real-world usability of BCI control systems with augmented reality: a user study protocol.
- Author
-
Dillen, Arnau, Omidi, Mohsen, Díaz, María Alejandra, Ghaffari, Fakhreddine, Roelands, Bart, Vanderborght, Bram, Romain, Olivier, and De Pauw, Kevin
- Subjects
AUGMENTED reality ,BRAIN-computer interfaces ,HUMAN-robot interaction ,EYE tracking ,USER experience - Abstract
Brain-computer interfaces (BCIs) enable users to control devices through their brain activity. Motor imagery (MI), the neural activity resulting from an individual imagining performing a movement, is a common control paradigm. This study introduces a user-centric evaluation protocol for assessing the performance and user experience of an MI-based BCI control system utilizing augmented reality. Augmented reality is employed to enhance user interaction by displaying environment-aware actions and guiding users on the imagined movements required for specific device commands. One of the major gaps in existing research is the lack of comprehensive evaluation methodologies, particularly in real-world conditions. To address this gap, our protocol combines quantitative and qualitative assessments across three phases. In the initial phase, the BCI prototype's technical robustness is validated. Subsequently, the second phase involves a performance assessment of the control system. The third phase introduces a comparative analysis between the prototype and an alternative approach, incorporating detailed user experience evaluations through questionnaires and comparisons with non-BCI control methods. Participants engage in various tasks, such as object sorting, picking and placing, and playing a board game using the BCI control system. The evaluation procedure is designed for versatility, with intended applicability beyond the specific use case presented. Its adaptability enables easy customization to meet the specific user requirements of the investigated BCI control application. This user-centric evaluation protocol offers a comprehensive framework for iterative improvements to the BCI prototype, ensuring technical validation, performance assessment, and user experience evaluation in a systematic and user-focused manner. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF