622 results for "hand tracking"
Search Results
2. A Study on Differences in Educational Method to Periodic Inspection Work of Nuclear Power Plants
- Author
-
Yuichi Yashiro, Gang Wang, Fumio Hatori, and Nobuyoshi Yabuki
- Subjects
virtual reality, eye tracking, hand tracking, expert knowledge, plant construction, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
Construction work and regular inspection work at nuclear power plants involve many special tasks, unlike general on-site work. In addition, the opportunity to transfer knowledge from skilled workers to unskilled workers is limited due to the inability to easily enter the plant and various security and radiation exposure issues. Therefore, in this study, we considered the application of virtual reality (VR) as a method to increase opportunities to learn anytime and anywhere and to transfer knowledge more effectively. In addition, as an interactive learning method to improve comprehension, we devised a system that uses hand tracking and eye tracking to allow participants to experience movements and postures that are closer to the real work in a virtual space. For hand-based work, three actions, “pinch”, “grab”, and “hold”, were reproduced depending on the sizes of the parts and tools, and visual confirmation work was reproduced by the movement of the gaze point of the eyes, faithfully reproducing the special actions of the inspection work. We confirmed that a hybrid learning process that appropriately combines the developed active learning method, using experiential VR, with conventional passive learning methods, using paper and video, can improve the comprehension and retention of special work at nuclear power plants.
- Published
- 2024
- Full Text
- View/download PDF
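The size-dependent grasp mapping described in the entry above can be pictured as a small dispatch function. This is a hypothetical sketch, not the authors' implementation; the thresholds and units are illustrative assumptions.

```python
# Hypothetical sketch of the size-based grasp mapping from entry 2: the paper
# reproduces "pinch", "grab", and "hold" depending on part/tool size, but the
# thresholds below are illustrative assumptions, not the authors' values.

def select_grasp(object_diameter_cm: float) -> str:
    """Map an object's size to one of the three reproduced grasp actions."""
    if object_diameter_cm < 2.0:      # small parts such as screws
        return "pinch"
    elif object_diameter_cm < 8.0:    # hand-sized tools
        return "grab"
    return "hold"                     # large components held with the palm

for size in (0.5, 5.0, 12.0):
    print(size, "->", select_grasp(size))
```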
3. From microscope to head-mounted display: integrating hand tracking into microsurgical augmented reality.
- Author
-
El Chemaly, Trishia, Athayde Neves, Caio, Fu, Fanrui, Hargreaves, Brian, and Blevins, Nikolas H.
- Abstract
Purpose: The operating microscope plays a central role in middle and inner ear procedures that involve working within tightly confined spaces under limited exposure. Augmented reality (AR) may improve surgical guidance by combining preoperative computed tomography (CT) imaging that can provide precise anatomical information, with intraoperative microscope video feed. With current technology, the operator must manually interact with the AR interface using a computer. The latter poses a disruption in the surgical flow and is suboptimal for maintaining the sterility of the operating environment. The purpose of this study was to implement and evaluate free-hand interaction concepts leveraging hand tracking and gesture recognition as an attempt to reduce the disruption during surgery and improve human-computer interaction. Methods: An electromagnetically tracked surgical microscope was calibrated using a custom 3D printed calibration board. This allowed the augmentation of the microscope feed with segmented preoperative CT-derived virtual models. Ultraleap's Leap Motion Controller 2 was coupled to the microscope and used to implement hand-tracking capabilities. End-user feedback was gathered from a surgeon during development. Finally, users were asked to complete tasks that involved interacting with the virtual models, aligning them to physical targets, and adjusting the AR visualization. Results: Following observations and user feedback, we upgraded the functionalities of the hand interaction system. User feedback showed the users' preference for the new interaction concepts that provided minimal disruption of the surgical workflow and more intuitive interaction with the virtual content. Conclusion: We integrated hand interaction concepts, typically used with head-mounted displays (HMDs), into a surgical stereo microscope system intended for AR in otologic microsurgery. The concepts presented in this study demonstrated a more favorable approach to human-computer interaction in a surgical context. They hold potential for a more efficient execution of surgical tasks under microscopic AR guidance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
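Free-hand interaction of the kind described in entry 3 typically starts from a pinch test on fingertip positions. A minimal sketch follows, assuming millimetre-scale 3D fingertip coordinates such as those a Leap device reports; the threshold is an illustrative guess, not the authors' value.

```python
import math

# Illustrative pinch detector for free-hand AR interaction (entry 3).
# The threshold and coordinate convention are assumptions; Leap and
# MediaPipe both expose per-joint 3D positions that could feed this test.

PINCH_THRESHOLD_MM = 25.0

def is_pinching(thumb_tip, index_tip) -> bool:
    """True when thumb and index fingertips are close enough to count as a pinch."""
    return math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

print(is_pinching((0, 0, 0), (10, 15, 5)))   # True: fingertips ~18.7 mm apart
print(is_pinching((0, 0, 0), (40, 10, 0)))   # False: ~41 mm apart
```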
4. Enhanced 2D Hand Pose Estimation for Gloved Medical Applications: A Preliminary Model.
- Author
-
Kiefer, Adam W., Willoughby, Dominic, MacPherson, Ryan P., Hubal, Robert, and Eckel, Stephen F.
- Subjects
POSE estimation (Computer vision), MACHINE learning, DOSAGE forms of drugs, STANDARD deviations, COMPUTER vision, SURGICAL gloves
- Abstract
(1) Background: As digital health technology evolves, the role of accurate medical-gloved hand tracking is becoming more important for the assessment and training of practitioners to reduce procedural errors in clinical settings. (2) Method: This study utilized computer vision for hand pose estimation to model skeletal hand movements during in situ aseptic drug compounding procedures. High-definition video cameras recorded hand movements while practitioners wore medical gloves of different colors. Hand poses were manually annotated, and machine learning models were developed and trained using the DeepLabCut interface via an 80/20 training/testing split. (3) Results: The developed model achieved an average root mean square error (RMSE) of 5.89 pixels across the training data set and 10.06 pixels across the test set. When excluding keypoints with a confidence value below 60%, the test set RMSE improved to 7.48 pixels, reflecting high accuracy in hand pose tracking. (4) Conclusions: The developed hand pose estimation model effectively tracks hand movements across both controlled and in situ drug compounding contexts, offering a first-of-its-kind medical glove hand tracking method. This model holds potential for enhancing clinical training and ensuring procedural safety, particularly in tasks requiring high precision such as drug compounding. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
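The confidence-filtered RMSE reported in entry 4 can be reproduced in a few lines of NumPy. The array shapes, sample values, and helper name below are assumptions; only the 60% cutoff comes from the abstract.

```python
import numpy as np

# Sketch of the evaluation step in entry 4: RMSE over predicted keypoints,
# excluding low-confidence detections. Everything except the 0.6 cutoff is
# an illustrative assumption.

def keypoint_rmse(pred, truth, confidence, min_conf=0.6):
    """RMSE in pixels over keypoints whose confidence passes the cutoff."""
    mask = confidence >= min_conf                  # (n_keypoints,) boolean
    err = pred[mask] - truth[mask]                 # (k, 2) x/y errors
    return np.sqrt(np.mean(np.sum(err ** 2, axis=1)))  # RMS Euclidean error

pred = np.array([[100.0, 50.0], [200.0, 80.0], [310.0, 120.0]])
truth = np.array([[103.0, 54.0], [198.0, 85.0], [300.0, 140.0]])
conf = np.array([0.9, 0.7, 0.4])                   # third keypoint is dropped
print(keypoint_rmse(pred, truth, conf))
```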
6. Development of "Peter's First-Aid Adventure" Virtual Reality-Based Serious Game in First-Aid Education: Usability Analysis of Virtual Reality Interaction Tools.
- Author
-
Rainer, Alexander, Setiono, Angeline, Leonardrich, Kevin, and Ramdhan, Dimas
- Subjects
EDUCATIONAL games, PHYSICAL training & conditioning, TEACHING aids, GAMES, ADVENTURE & adventurers
- Abstract
Incidents that threaten lives can occur unexpectedly, highlighting the importance of immediate first aid. Studies show that Virtual Reality (VR)-based Serious Games (SG) are excellent alternatives for learning, especially for physical training, as they provide immersive and realistic scenarios where users can learn and practice their skills in a safe environment. Therefore, this research proposes the design of a VR-based SG for learning first aid procedures. A total of 31 participants tested the game's usability using both VR controllers and hand tracking, playing the game with both tools in random order, and then completing the XR-GUQ and Interaction Categories questionnaire. The results indicate that controllers offer higher usability than hand tracking. However, the game can still effectively utilize both interaction tools, given their satisfactory usability ratings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Verification of Criterion-Related Validity for Developing a Markerless Hand Tracking Device.
- Author
-
Suwabe, Ryota, Saito, Takeshi, and Hamaguchi, Toyohiro
- Subjects
PHYSICAL therapists, OCCUPATIONAL therapists, INFRARED cameras, INFRARED imaging, HEMIPLEGICS
- Abstract
Physicians, physical therapists, and occupational therapists have traditionally assessed hand motor function in hemiplegic patients but often struggle to evaluate complex hand movements. To address this issue, in 2019, we developed Fahrenheit, a device and algorithm that uses infrared camera image processing to estimate hand paralysis. However, due to Fahrenheit's dependency on specialized equipment, we conceived a simpler solution: developing a smartphone app that integrates MediaPipe. The objective of this study was to measure hand movements in stroke patients using both MediaPipe and Fahrenheit and to assess their criterion-related validity. The analysis revealed moderate-to-high correlations between the two methods. Consistent results were also observed in the peak angle and velocity comparisons across the severity stages. Because Fahrenheit determines finger recovery status based on these measures, it has the potential to transfer this function to MediaPipe. This study highlighted the potential use of MediaPipe in paralysis estimation applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
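Criterion-related validity of the kind tested in entry 7 reduces to computing joint angles from tracked landmarks and correlating the two systems' series. A sketch under assumed data follows; `joint_angle` and the sample series are illustrative, not the study's pipeline.

```python
import numpy as np

# Sketch of the comparison in entry 7: derive a finger joint angle from three
# 3D landmarks (as MediaPipe provides) and correlate two measurement series.
# Landmark choice and sample values are illustrative assumptions.

def joint_angle(a, b, c):
    """Angle at landmark b (degrees) formed by segments b->a and b->c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Criterion-related validity: correlate angle series from the two systems.
mediapipe_angles = np.array([10.0, 35.0, 62.0, 88.0, 91.0])
fahrenheit_angles = np.array([12.0, 33.0, 65.0, 85.0, 95.0])
r = np.corrcoef(mediapipe_angles, fahrenheit_angles)[0, 1]
print(f"Pearson r = {r:.2f}")
```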
8. Hand Tracking: Survey.
- Author
-
Heo, Jinuk, Choi, Hyelim, Lee, Yongseok, Kim, Hyunsu, Ji, Harim, Park, Hyunreal, Lee, Youngseon, Jung, Cheongkee, Nguyen, Hai-Nguyen, and Lee, Dongjun
- Abstract
Hand tracking is relevant to a variety of applications, including human-robot interaction (HRI), human-computer interaction (HCI), virtual reality (VR), and augmented reality (AR). Accurate and robust hand tracking, however, is challenging due to the intricacies of dynamic motion within a small space and the complex interactions with nearby objects, coupled with the hurdles of real-time hand mesh reconstruction. In this paper, we conduct a comprehensive examination and analysis of existing hand tracking technologies. Through the review of major works in the literature, we have discovered numerous studies employing a diverse array of sensors, leading us to propose their categorization into seven types: vision, soft wearable, encoder, magnetic, inertial measurement unit (IMU), electromyography (EMG), and the fusion of sensor modalities. Our findings indicate that no singular solution surpasses all others, owing to the inherent limitations of any single sensor modality. As a result, we assert that integrating multiple sensor modalities presents a viable path toward devising a superior hand tracking solution. Ultimately, this survey paper aims to bolster interdisciplinary research efforts across the spectrum of hand tracking technologies, thereby contributing to the advancement of the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. A Modular Architecture for IMU-Based Data Gloves
- Author
-
Carfì, Alessandro, Alameh, Mohamad, Belcamino, Valerio, Mastrogiovanni, Fulvio, Siciliano, Bruno, Series Editor, Khatib, Oussama, Series Editor, Antonelli, Gianluca, Advisory Editor, Fox, Dieter, Advisory Editor, Harada, Kensuke, Advisory Editor, Hsieh, M. Ani, Advisory Editor, Kröger, Torsten, Advisory Editor, Kulic, Dana, Advisory Editor, Park, Jaeheung, Advisory Editor, Secchi, Cristian, editor, and Marconi, Lorenzo, editor
- Published
- 2024
- Full Text
- View/download PDF
10. Alphabet Recognition Using Virtual Air Canvas
- Author
-
Ashita, K., Virthi, M., Deepak, K. Mohan, Sangeetha, S. K. B., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Bhateja, Vikrant, editor, Lin, Hong, editor, Simic, Milan, editor, Attique Khan, Muhammad, editor, and Garg, Harish, editor
- Published
- 2024
- Full Text
- View/download PDF
11. WebXR-Driven Hand Gesture Data Capture: Designing a Web-Based Virtual Reality Application for Hand Tracking Data Recording
- Author
-
Tomeš, Jakub, Kohout, Jan, Mareš, Jan, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Silhavy, Radek, editor, and Silhavy, Petr, editor
- Published
- 2024
- Full Text
- View/download PDF
12. Event-Based Hand Detection on Neuromorphic Hardware Using a Sigma Delta Neural Network
- Author
-
Azzalini, Loïc, Glüge, Stefan, Struckmeier, Jens, Sandamirskaya, Yulia, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wand, Michael, editor, Malinovská, Kristína, editor, Schmidhuber, Jürgen, editor, and Tetko, Igor V., editor
- Published
- 2024
- Full Text
- View/download PDF
13. Hand Gesture Recognition and Volume Control
- Author
-
Tamilkodi, R., Madhuri, N., Dhanushkumar, G., Dileepkumar, G., Rajkumar, G., Sandeep, Y., Fournier-Viger, Philippe, Series Editor, Madhavi, K. Reddy, editor, Subba Rao, P., editor, Avanija, J., editor, Manikyamba, I. Lakshmi, editor, and Unhelkar, Bhuvan, editor
- Published
- 2024
- Full Text
- View/download PDF
14. Comparative Evaluation of Non-immersive and Immersive Approaches for Upper Limb Rehabilitation: A Performance and Usability Study
- Author
-
Herrera, V., Reyes-Guzmán, A., Vallejo, D., Castro-Schez, J. J., Monekosso, D., González-Morcillo, C., Albusac, J., van der Aalst, Wil, Series Editor, Ram, Sudha, Series Editor, Rosemann, Michael, Series Editor, Szyperski, Clemens, Series Editor, Guizzardi, Giancarlo, Series Editor, Filipe, Joaquim, editor, Śmiałek, Michał, editor, Brodsky, Alexander, editor, and Hammoudi, Slimane, editor
- Published
- 2024
- Full Text
- View/download PDF
15. Options Matter: Exploring VR Input Fatigue Reduction
- Author
-
Wilson, Michael, Scully, Levi, Le, Vinh, Harris, Frederick, Jr., Chu, Pengbo, Dascalu, Sergiu, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, and Latifi, Shahram, editor
- Published
- 2024
- Full Text
- View/download PDF
16. Bridging the Physical and Virtual Worlds: A Hand Tracking Gesture Recognition System for XR Applications
- Author
-
Fumić, Matija, Livada, Časlav, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Keser, Tomislav, editor, Ademović, Naida, editor, Desnica, Eleonora, editor, and Grgić, Ivan, editor
- Published
- 2024
- Full Text
- View/download PDF
17. Development and Evaluation of a Low-Jitter Hand Tracking System for Improving Typing Efficiency in a Virtual Reality Workspace
- Author
-
Tianshu Xu, Wen Gu, Koichi Ota, and Shinobu Hasegawa
- Subjects
virtual reality, typing efficiency, low jitter, hand tracking, Technology, Science
- Abstract
Virtual reality technology promises to transform immersive experiences across various applications, particularly within office environments. Despite its potential, the challenge of achieving efficient text entry in virtual reality persists. This study addresses this obstacle by introducing a novel machine learning-based solution, namely, the two-stream long short-term memory typing method, to enhance text entry performance in virtual reality. The two-stream long short-term memory method utilizes the back-of-the-hand image, employing a long short-term memory network and a Kalman filter to enhance hand position tracking accuracy and minimize jitter. Through statistical analysis of the data collected in the experiment and questionnaire results, we confirmed the effectiveness of the proposed method. In addition, we conducted an extra experiment to explore the differences in users’ typing behavior between regular typing and virtual reality-based typing. This additional experiment provides valuable insights into how users adapt their typing behavior in different environments. These findings represent a significant step in advancing text entry within virtual reality, setting the stage for immersive work experiences in office environments and beyond.
- Published
- 2025
- Full Text
- View/download PDF
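Jitter reduction with a Kalman filter, one half of the method in entry 17, can be sketched with OpenCV's built-in filter. The constant-velocity model and noise covariances below are assumptions; the paper's LSTM stage is omitted.

```python
import numpy as np
import cv2

# Minimal constant-velocity Kalman filter for smoothing a jittery 2D hand
# position, in the spirit of entry 17. Noise values are assumptions.

kf = cv2.KalmanFilter(4, 2)                # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

for raw in [(100, 100), (103, 99), (98, 104), (105, 101)]:  # jittery samples
    kf.predict()
    smoothed = kf.correct(np.array([[raw[0]], [raw[1]]], np.float32))
    print(smoothed[:2].ravel())            # smoothed x, y estimate
```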
18. Portable Head-Mounted System for Mobile Forearm Tracking.
- Author
-
Polsinelli, Matteo, Di Matteo, Alessandro, Lozzi, Daniele, Mattei, Enrico, Mignosi, Filippo, Nazzicone, Lorenzo, Stornelli, Vincenzo, and Placidi, Giuseppe
- Subjects
- *
COMPUTER vision , *MOTION control devices , *SINGLE-board computers , *MOBILE apps , *AUTOMATION , *VIRTUAL reality - Abstract
Computer vision (CV)-based systems using cameras and recognition algorithms offer touchless, cost-effective, precise, and versatile hand tracking. These systems allow unrestricted, fluid, and natural movements without the constraints of wearable devices, gaining popularity in human–system interaction, virtual reality, and medical procedures. However, traditional CV-based systems, relying on stationary cameras, are not compatible with mobile applications and demand substantial computing power. To address these limitations, we propose a portable hand-tracking system utilizing the Leap Motion Controller 2 (LMC) mounted on the head and controlled by a single-board computer (SBC) powered by a compact power bank. The proposed system enhances portability, enabling users to interact freely with their surroundings. We present the system's design and conduct experimental tests to evaluate its robustness under variable lighting conditions, power consumption, CPU usage, temperature, and frame rate. This portable hand-tracking solution, which has minimal weight and runs independently of external power, proves suitable for mobile applications in daily life. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Design of Virtual Hands for Natural Interaction in the Metaverse.
- Author
-
Cerdá-Boluda, Joaquín, Mora, Marta C., Lloret, Nuria, Scarani, Stefano, and Sastre, Jorge
- Subjects
- *
SHARED virtual environments , *VIRTUAL design , *NON-fungible tokens , *VIRTUAL reality , *SOCIAL interaction , *BLOCKCHAINS , *FINGERNAILS - Abstract
The emergence of the Metaverse is raising important questions in the field of human–machine interaction that must be addressed for a successful implementation of the new paradigm. Therefore, the exploration and integration of both technology and human interaction within this new framework are needed. This paper describes an innovative and technically viable proposal for virtual shopping in the fashion field. Virtual hands directly scanned from the real world have been integrated, after a retopology process, in a virtual environment created for the Metaverse, and have been integrated with digital nails. Human interaction with the Metaverse has been carried out through the acquisition of the real posture of the user's hands using an infrared-based sensor and mapping it in its virtualized version, achieving natural identification. The technique has been successfully tested in an immersive shopping experience with the Meta Quest 2 headset as a pilot project, where a transactions mechanism based on the blockchain technology (non-fungible tokens, NFTs) has allowed for the development of a feasible solution for massive audiences. The consumers' reactions were extremely positive, with a total of 250 in-person participants and 120 remote accesses to the Metaverse. Very interesting technical guidelines are raised in this project, the resolution of which may be useful for future implementations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Interactive Design With Gesture and Voice Recognition in Virtual Teaching Environments
- Author
-
Ke Fang and Jing Wang
- Subjects
Game engines, hand tracking, human–computer interaction, recurrent neural networks, speech processing, virtual environments, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In virtual teaching scenarios, head-mounted display (HMD) interactions often employ traditional controller and UI interactions, which are not very conducive to teaching scenarios that require hand training. Existing improvements in this area have primarily focused on replacing controllers with gesture recognition. However, the exclusive use of gesture recognition may have limitations in certain scenarios, such as complex operations or multitasking environments. This study designed and tested an interaction method that combines simple gestures with voice assistance, aiming to offer a more intuitive user experience and enrich related research. A speech classification model was developed that can be activated via a fist-clenching gesture and is capable of recognising specific Chinese voice commands to initiate various UI interfaces, further controlled by pointing gestures. Virtual scenarios were constructed using Unity, with hand tracking achieved through the HTC OpenXR SDK. Within Unity, hand rendering and gesture recognition were facilitated, and interaction with the UI was made possible using the Unity XR Interaction Toolkit. The interaction method was detailed and exemplified using a teacher training simulation system, including sample code provision. Following this, an empirical test involving 20 participants was conducted, comparing the gesture-plus-voice operation to the traditional controller operation, both quantitatively and qualitatively. The data suggests that while there is no significant difference in task completion time between the two methods, the combined gesture and voice method received positive feedback in terms of user experience, indicating a promising direction for such interactive methods. Future work could involve adding more gestures and expanding the model training dataset to realize additional interactive functions, meeting diverse virtual teaching needs.
- Published
- 2024
- Full Text
- View/download PDF
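The fist-to-arm-then-speak flow in entry 20 can be pictured as a gesture gate in front of a command table. The landmark test and command strings below are hypothetical; the actual system runs in Unity with the HTC OpenXR SDK and a trained Chinese speech classifier.

```python
# Sketch of the gesture-gated voice flow in entry 20: a clenched fist arms
# the speech recognizer, after which a recognized command opens a UI panel.
# The fist heuristic and command table are illustrative assumptions.

def is_fist(fingertip_ys, knuckle_ys) -> bool:
    """Treat the hand as a fist when every fingertip sits below its knuckle
    (image y grows downward, so curled tips have larger y values)."""
    return all(tip > knuckle for tip, knuckle in zip(fingertip_ys, knuckle_ys))

COMMANDS = {"open menu": "ShowMenuPanel", "next slide": "AdvanceSlide"}

def on_frame(fingertip_ys, knuckle_ys, transcript: str):
    # Voice commands are only dispatched while the activating gesture holds.
    if is_fist(fingertip_ys, knuckle_ys) and transcript in COMMANDS:
        print("dispatch:", COMMANDS[transcript])

on_frame([220, 230, 228, 225], [200, 205, 203, 204], "open menu")
```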
21. Control the robot arm through vision-based human hand tracking
- Author
-
Phuong Le Hoai and Cong Vo Duy
- Subjects
hand tracking, scara robot, computer vision, arduino, mediapipe, Engineering (General). Civil engineering (General), TA1-2040, Mechanics of engineering. Applied mechanics, TA349-359
- Abstract
In this paper, hand tracking based on computer vision is developed to control the movement of a SCARA robot arm. The robot arm moves according to the movement of the human hand. Instead of using buttons on the teach pendant or a computer control program to move the robot arm, the robot can now be easily controlled and positioned quickly by the movement of the operator's hand. A SCARA robot arm with two rotation joints and one translation motion is constructed for the validation system. Two states of the hand are recognized for controlling the vacuum cup to grasp the products. Stepper motors drive the robot arm, with an Arduino Uno as the main controller for the stepper motors. The hand tracking is performed using the MediaPipe Hands framework developed by Google. The coordinates of 21 hand landmarks are extracted for further processing. A program on a personal computer processes the image to obtain the position and state of the hand. This position is transformed into the rotation angles of the robot's joints. Then, the angles and state are sent to the Arduino board, which generates pulse signals to rotate the stepper motors. The experimental results show that the robot's trajectory is close to the hand trajectory at low speed.
- Published
- 2024
- Full Text
- View/download PDF
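The pipeline in entry 21 (MediaPipe landmarks, then joint angles, then serial to an Arduino) can be sketched as follows. The serial port, baud rate, angle mapping, and message format are assumptions; only the MediaPipe Hands and Arduino roles come from the abstract.

```python
import cv2
import mediapipe as mp
import serial  # pyserial

# Sketch of the entry 21 pipeline: MediaPipe Hands extracts 21 landmarks,
# the wrist position is mapped to joint angles, and the angles stream to an
# Arduino over serial. Port, mapping, and protocol are assumptions.

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=0.1)  # hypothetical port
hands = mp.solutions.hands.Hands(max_num_hands=1)          # legacy solutions API
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        wrist = result.multi_hand_landmarks[0].landmark[0]  # normalized x, y
        theta1 = (wrist.x - 0.5) * 180.0                    # crude angle mapping
        theta2 = (wrist.y - 0.5) * 180.0
        ser.write(f"{theta1:.1f},{theta2:.1f}\n".encode())  # Arduino parses CSV
cap.release()
```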
22. Design of Human–Computer Interaction Gesture Tracking Model Based on Improved PSO and KCF Algorithms
- Author
-
Dinghua He, Yan Yang, and Rangzhong Wu
- Subjects
Human–computer interaction, particle swarm optimization, skin tone model, kernel correlation filtering, dynamic gesture detection, hand tracking, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
The detection and tracking of gesture targets is an important aspect of dynamic gesture recognition. To meet the accuracy and speed requirements of human-computer interaction for dynamic gesture recognition, this study explores long-term gesture recognition with monocular RGB cameras. The study uses an improved particle swarm optimization algorithm as the feature extraction method and introduces a Gaussian mixture model and kernel correlation filtering to perform gesture detection and tracking, constructing a dynamic gesture tracking model based on kernel correlation filtering. The experimental results show that the skin-color-based gesture detection algorithm has a minimum average relative error of 0.321 across different datasets, with accuracy and recall rates higher than 0.8. The maximum correlation coefficient R-squared value is 0.823, and the detection speed reaches 36.32 frames per second. The detection method also shows high repeatability on different datasets, with better detection accuracy for different gesture targets. The gesture tracking model has a high F1 value and the largest area under the receiver operating characteristic curve, and both of its error values are small, resulting in better gesture tracking performance. In human-computer interaction systems, the detection accuracy and target rejection rate of this method are significantly improved, and subjects rated the interaction system highly, resulting in good application effects. This study enriches the theoretical foundation of dynamic gesture detection and tracking technology and improves the quality of gesture tracking in the field of human-computer interaction, helping to expand the application scope of human-computer interaction.
- Published
- 2024
- Full Text
- View/download PDF
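A skin-color hand detector of the kind entry 22 builds on can be sketched with a YCrCb threshold and a largest-contour search. The channel bounds are common textbook values, not the paper's tuned skin-tone model, and the returned box would seed the kernel correlation filter tracker.

```python
import cv2
import numpy as np

# Illustrative skin-color gesture detection in the spirit of entry 22:
# threshold the Cr/Cb channels and take the largest contour as the hand.
# Bounds are generic textbook values, not the paper's tuned model.

def detect_hand(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # skin range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x, y, w, h)   # bounding box to initialize the KCF tracker with

frame = cv2.imread("hand.jpg")   # placeholder input image
if frame is not None:
    print(detect_hand(frame))
```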
23. Accuracy of Video-Based Hand Tracking for People With Upper-Body Disabilities
- Author
-
Alexandra A. Portnova-Fahreeva, Momona Yamagami, Adria Robert-Gonzalez, Jennifer Mankoff, Heather Feldner, and Katherine M. Steele
- Subjects
Dimensionality reduction, hand tracking, principal component analysis, synergies, upper-body disabilities, hand therapy, Medical technology, R855-855.5, Therapeutics. Pharmacology, RM1-950
- Abstract
Utilization of hand-tracking cameras, such as Leap, for hand rehabilitation and functional assessments is an innovative approach to providing affordable alternatives for people with disabilities. However, prior to deploying these commercially-available tools, a thorough evaluation of their performance for disabled populations is necessary. In this study, we provide an in-depth analysis of the accuracy of Leap’s hand-tracking feature for both individuals with and without upper-body disabilities for common dynamic tasks used in rehabilitation. Leap is compared against motion capture with conventional techniques such as signal correlations, mean absolute errors, and digit segment length estimation. We also propose the use of dimensionality reduction techniques, such as Principal Component Analysis (PCA), to capture the complex, high-dimensional signal spaces of the hand. We found that Leap’s hand-tracking performance did not differ between individuals with and without disabilities, yielding average signal correlations between 0.7 and 0.9. Both low and high mean absolute errors (between 10 and 80 mm) were observed across participants. Overall, Leap did well with general hand posture tracking, with the largest errors associated with the tracking of the index finger. Leap’s hand model was found to be most inaccurate in the proximal digit segment, underestimating digit lengths with errors as high as 18 mm. Using PCA to quantify differences between the high-dimensional spaces of Leap and motion capture showed that high correlations between latent space projections were associated with high accuracy in the original signal space. These results point to the potential of low-dimensional representations of complex hand movements to support hand rehabilitation and assessment.
- Published
- 2024
- Full Text
- View/download PDF
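The PCA-based comparison in entry 23 amounts to projecting both systems' signals into low-dimensional spaces and correlating the leading components. The synthetic data below stands in for the real Leap and motion-capture recordings, which are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of the dimensionality-reduction comparison in entry 23: project
# Leap and motion-capture joint signals into low-dimensional spaces and
# correlate the leading components. Random data stands in for recordings.

rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 3))                 # latent hand movement
leap = shared @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(500, 20))
mocap = shared @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(500, 20))

z_leap = PCA(n_components=3).fit_transform(leap)
z_mocap = PCA(n_components=3).fit_transform(mocap)
for i in range(3):
    # Components from separate fits may differ in sign, so compare |r|.
    r = np.corrcoef(z_leap[:, i], z_mocap[:, i])[0, 1]
    print(f"PC{i + 1}: |r| = {abs(r):.2f}")
```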
26. Double-handed dynamic gesture recognition using contour-based hand tracking and maximum mean probability ensembling (MMPE) for Indian Sign Language.
- Author
-
Sruthi, C. J. and Lijiya, A.
- Subjects
SIGN language, GESTURE, DEAF children, SUPPORT vector machines, TRACKING algorithms, VERBAL ability
- Abstract
The ability to communicate in verbal language is one of the greatest gifts of humankind. The people who do not have this ability feel isolated and struggle to convey their part in society. Sign language or gesture communication is the only method they can rely upon, but most of our community cannot understand this language without the help of a translator. The paper presents a dynamic Indian Sign Language recognition system without complicated sensors or costly devices to sense the movements of the hands. The paper proposes a problem-specific contour-based hand tracking algorithm that can track both hands simultaneously, solving the ambiguity caused by merging the hands while gesturing. The paper also proposes a maximum mean probability ensembling that combines the classification probabilities of three different classification models for better accuracy. The proposed model recognizes the double-handed dynamic gestures with an accuracy of 89.83%. The paper discusses the performance of scale-invariant feature transform, tracked image feature and their combination feature for dynamic gesture classification, and tests the discriminating power of different classifiers on these features. The support vector machine classifier showed the best performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
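Maximum mean probability ensembling, as named in entry 26, reads most naturally as averaging the three models' class-probability vectors and taking the argmax; the sketch below implements that reading with placeholder probabilities, not the paper's trained models.

```python
import numpy as np

# Sketch of maximum mean probability ensembling (MMPE) as described in
# entry 26: average the class-probability outputs of several classifiers
# and pick the class with the highest mean probability. The three matrices
# below stand in for the outputs of the paper's three trained models.

def mmpe(prob_matrices):
    """prob_matrices: list of (n_samples, n_classes) arrays, one per model."""
    mean_prob = np.mean(prob_matrices, axis=0)   # element-wise average
    return np.argmax(mean_prob, axis=1)          # highest mean probability wins

p_svm = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
p_knn = np.array([[0.5, 0.3, 0.2], [0.2, 0.6, 0.2]])
p_cnn = np.array([[0.6, 0.3, 0.1], [0.1, 0.7, 0.2]])
print(mmpe([p_svm, p_knn, p_cnn]))   # -> [0 1]
```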
27. Robust vision-based glove pose estimation for both hands in virtual reality.
- Author
-
Hsu, Fu-Song, Wang, Te-Mei, and Chen, Liang-Hsun
- Abstract
In virtual reality (VR) applications, haptic gloves provide feedback and more direct control than bare hands do. Most VR gloves contain flex and inertial measurement sensors for tracking the finger joints of a single hand; however, they lack a mechanism for tracking two-hand interactions. In this paper, a vision-based method is proposed for improved two-handed glove tracking. The proposed method requires only one camera attached to a VR headset. A photorealistic glove data generation framework was established to synthesize large quantities of training data for identifying the left, right, or both gloves in images with complex backgrounds. We also incorporated the "glove pose hypothesis" in the training stage, in which spatial cues regarding relative joint positions are exploited to accurately predict glove positions under severe self-occlusion or motion blur. In our experiments, a system based on the proposed method achieved an accuracy of 94.06% on a validation set and achieved high-speed tracking at 65 fps on a consumer graphics processing unit. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Gesture Recognition and Hand Tracking for Anti-Counterfeit Palmvein Recognition.
- Author
-
Xu, Jiawei, Leng, Lu, and Kim, Byung-Gyu
- Subjects
CONVOLUTIONAL neural networks, FEATURE extraction, GESTURE, HAND
- Abstract
At present, COVID-19 is posing a serious threat to global human health. The features of hand veins in infrared environments have many advantages, including non-contact acquisition, security, and privacy, which can remarkably reduce the risks of COVID-19. Therefore, this paper builds an interactive system that can recognize hand gestures and track hands for palmvein recognition in infrared environments. The gesture contours are extracted and input into an improved convolutional neural network for gesture recognition. The hand is tracked based on key point detection. Because the hand gesture commands are randomly generated and the hand vein features are extracted in the infrared environment, the anti-counterfeiting performance is markedly improved. In addition, hand tracking is conducted after gesture recognition, which prevents the hand from leaving the camera's field of view and thus ensures that the hand used for palmvein recognition is identical to the hand used during gesture recognition. The experimental results show that the proposed gesture recognition method performs satisfactorily on our dataset, and the hand tracking method has good robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Exploring Hand Tracking and Controller-Based Interactions in a VR Object Manipulation Task
- Author
-
Johnson, Cheryl I., Fraulini, Nicholas W., Peterson, Eric K., Entinger, Jacob, Whitmer, Daphne E., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Chen, Jessie Y. C., editor, Fragomeni, Gino, editor, and Fang, Xiaowen, editor
- Published
- 2023
- Full Text
- View/download PDF
30. Keyrtual: A Lightweight Virtual Musical Keyboard Based on RGB-D and Sensors Fusion
- Author
-
Avola, Danilo, Cinque, Luigi, Marini, Marco Raoul, Princic, Andrea, Venanzi, Valerio, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Tsapatsoulis, Nicolas, editor, Lanitis, Andreas, editor, Pattichis, Marios, editor, Pattichis, Constantinos, editor, Kyrkou, Christos, editor, Kyriacou, Efthyvoulos, editor, Theodosiou, Zenonas, editor, and Panayides, Andreas, editor
- Published
- 2023
- Full Text
- View/download PDF
31. Comparative Study of Hand-Tracking and Traditional Control Interfaces for Remote Palpation
- Author
-
Costi, Leone, Almanzor, Elijah, Scimeca, Luca, Iida, Fumiya, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Iida, Fumiya, editor, Maiolino, Perla, editor, Abdulali, Arsen, editor, and Wang, Mingfeng, editor
- Published
- 2023
- Full Text
- View/download PDF
32. Hand Tracking for XR-Based Apraxia Assessment: A Preliminary Study
- Author
-
Pellegrino, Giulia, d’Errico, Giovanni, De Luca, Valerio, Barba, Maria Cristina, De Paolis, Lucio Tommaso, Magjarevic, Ratko, Series Editor, Ładyżyński, Piotr, Associate Editor, Ibrahim, Fatimah, Associate Editor, Lackovic, Igor, Associate Editor, Rock, Emilio Sacristan, Associate Editor, Dekhtyar, Yuri, editor, and Saknite, Inga, editor
- Published
- 2023
- Full Text
- View/download PDF
33. WebAR-NFC to Gauge User Immersion in Education and Training
- Author
-
Korlapati, Soundarya, Seals, Cheryl D., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Zaphiris, Panayiotis, editor, and Ioannou, Andri, editor
- Published
- 2023
- Full Text
- View/download PDF
34. A Survey on 3D Hand Detection and Tracking Algorithms for Human Computer Interfacing
- Author
-
Bajaj, Anu, Rajpal, Jimmy, Abraham, Ajith, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Abraham, Ajith, editor, Pllana, Sabri, editor, Casalino, Gabriella, editor, Ma, Kun, editor, and Bajaj, Anu, editor
- Published
- 2023
- Full Text
- View/download PDF
35. FROM HAPTIC INTERACTION TO DESIGN INSIGHT: AN EMPIRICAL COMPARISON OF COMMERCIAL HAND-TRACKING TECHNOLOGY.
- Author
-
Cox, Christopher Michael Jason, Hicks, Ben, Gopsill, James, and Snider, Chris
- Subjects
HAPTIC devices, COMMERCIAL product tracking, NEW product development, PROTOTYPES, COMPUTER input-output equipment
- Abstract
Advancements in prototyping technologies – haptics and extended reality – are creating exciting new environments to enhance stakeholder and user interaction with design concepts. These interactions can now occur earlier in the design process, transforming feedback mechanisms resulting in greater and faster iterations. This is essential for bringing right-first-time products to market as quickly as possible. While existing feedback tools, such as speak-aloud, surveys and/or questionnaires, are a useful means for capturing user feedback and reflections on interactions, there is a desire to explicitly map user feedback to their physical prototype interaction. Over the past decade, several hand-tracking tools have been developed that can, in principle, capture product user interaction. In this paper, we explore the capability of the LeapMotion Controller, MediaPipe and Manus Prime X Haptic gloves to capture user interaction with prototypes. A broad perspective of capability is adopted, including accuracy as well as the practical aspects of knowledge, skills, and ease of use. In this study, challenges in accuracy, occlusion and data processing were elicited in the capture and translation of user interaction into design insights. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. A dataset for assessing real-time attention levels of the students during online classes
- Author
-
Muhammad Kamal Hossen and Mohammad Shorif Uddin
- Subjects
Face detection, Hand tracking, Head pose, MediaPipe, Mobile phone, Machine learning, Computer applications to medicine. Medical informatics, R858-859.7, Science (General), Q1-390
- Abstract
This dataset offers a comprehensive compilation of attention-related features captured during online classes. The dataset is generated through the integration of key components including face detection, hand tracking, head pose estimation, and mobile phone detection modules. The data collection process involves leveraging a web interface created using the Django web framework. Video frames of participating students are collected following institutional guidelines and informed consent through their webcams, subsequently decomposed into frames at a rate of 20 FPS, and transformed from BGR to RGB color model. The aforesaid modules subsequently process these video frames to extract raw data. The dataset consists of 16 features and one label column, encompassing numerical, categorical, and floating-point values. Inherent to its potential, the dataset enables researchers and practitioners to explore and examine attention-related patterns and characteristics exhibited by students during online classes. The composition and design of the dataset offer a unique opportunity to delve into the correlations and interactions among face presence, hand movements, head orientations, and phone interactions. Researchers can leverage this dataset to investigate and develop machine learning models aimed at automatic attention detection, thereby contributing to enhancing remote learning experiences and educational outcomes. The dataset in question also constitutes a highly valuable resource for the scientific community, enabling a thorough exploration of the multifaceted aspects pertaining to student attention levels during online classes. Its rich and diverse feature set, coupled with the underlying data collection methodology, provides ample opportunities for reuse and exploration across multiple domains including education, psychology, and computer vision research.
- Published
- 2023
- Full Text
- View/download PDF
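The frame-preparation step described in entry 36 (decompose video at 20 FPS, convert BGR to RGB) looks roughly like the following; the file name is a placeholder and the downstream modules are indicated only as comments.

```python
import cv2

# Sketch of the frame preparation described in entry 36: sample a recording
# at roughly 20 FPS and convert BGR to RGB before the detection modules run.
# The file name and sampling strategy are placeholders.

cap = cv2.VideoCapture("student_session.mp4")   # hypothetical recording
native_fps = cap.get(cv2.CAP_PROP_FPS) or 20.0
step = max(1, round(native_fps / 20.0))         # keep ~20 frames per second

idx = 0
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    if idx % step == 0:
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        # face detection, hand tracking, head pose estimation, and mobile
        # phone detection modules would consume frame_rgb here
    idx += 1
cap.release()
```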
37. Brass Haptics: Comparing Virtual and Physical Trumpets in Extended Realities.
- Author
-
Blewett, Devon John and Gerhard, David
- Subjects
MIDI (Standard), TRUMPET, BRASS, MUSIC classrooms
- Abstract
Despite the benefits of learning an instrument, many students drop out early because it can be frustrating for the student, expensive for the caregiver, and loud for the household. Virtual Reality (VR) and Extended Reality (XR) offer the potential to address these challenges by simulating multiple instruments in an engaging and motivating environment through headphones. To assess the potential for commercial VR to augment musical experiences, we used standard VR implementation processes to design four virtual trumpet interfaces: camera-tracking with tracked register selection (two ways), camera-tracking with voice activation, and a controller plus a force-feedback haptic glove. To evaluate these implementations, we created a virtual music classroom that produces audio, notes, and finger pattern guides loaded from a selected Musical Instrument Digital Interface (MIDI) file. We analytically compared these implementations against physical trumpets (both acoustic and MIDI), considering features of ease of use, familiarity, playability, noise, and versatility. The physical trumpets produced the most reliable and familiar experience, and some XR benefits were considered. The camera-based methods were easy to use but lacked tactile feedback. The haptic glove provided improved tracking accuracy and haptic feedback over camera-based methods. Each method was also considered as a proof-of-concept for other instruments, real or imaginary. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. A Deep Learning Framework for Hand Gesture Recognition and Multimodal Interface Control.
- Author
-
Elmagrouni, Issam, Ettaoufik, Abdelaziz, Aouad, Siham, and Maizate, Abderrahim
- Subjects
MULTIMODAL user interfaces, DEEP learning, GESTURE, GRAPHICAL user interfaces, HUMAN-computer interaction, PERSONAL computers, STIMULUS & response (Psychology)
- Abstract
Hand gesture recognition (HGR) is an essential technology with applications spanning human-computer interaction, robotics, augmented reality, and virtual reality. This technology enables more natural and effortless interaction with computers, resulting in an enhanced user experience. As HGR adoption increases, it plays a crucial role in bridging the gap between humans and technology, facilitating seamless communication and interaction. In this study, a novel deep learning approach is proposed for the development of a Hand Gesture Interface (HGI) that enables the control of graphical user interfaces without physical touch on personal computers. The methodology encompasses the analysis, design, implementation, and deployment of the HGI. Experimental results on a hand gesture recognition system indicate that the proposed approach improves accuracy and reduces response time compared to existing methods. The system is capable of controlling various multimedia applications, including VLC media player, Microsoft Word, and PowerPoint. In conclusion, this approach offers a promising solution for the development of HGIs that facilitate efficient and intuitive interactions with computers, making communication more natural and accessible for users. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Multimodal Multi-User Mixed Reality Human–Robot Interface for Remote Operations in Hazardous Environments
- Author
-
Krzysztof Adam Szczurek, Raul Marin Prades, Eloise Matheson, Jose Rodriguez-Nogueira, and Mario Di Castro
- Subjects
Augmented Reality, facility maintenance, hand tracking, hazardous environment, human–robot interaction, mixed reality, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In hazardous environments, where conditions present risks for humans, the maintenance and interventions are often done with teleoperated remote systems or mobile robotic manipulators to avoid human exposure to dangers. The increasing need for safe and efficient teleoperation requires advanced environmental awareness and collision avoidance. The up-to-date screen-based 2D or 3D interfaces do not fully allow the operator to immerse in the controlled scenario. This problem can be addressed with the emerging Mixed Reality (MR) technologies with Head-Mounted Devices (HMDs) that offer stereoscopic immersion and interaction with virtual objects. Such human-robot interfaces have not yet been demonstrated in telerobotic interventions in particle physics accelerators. Moreover, the operations often require a few experts to collaborate, which increases the system complexity and requires sharing an Augmented Reality (AR) workspace. The multi-user mobile telerobotics in hazardous environments with shared control in the AR has not yet been approached in the state-of-the-art. In this work, the developed MR human-robot interface using the AR HMD is presented. The interface adapts to the constrained wireless networks in particle accelerator facilities and provides reliable high-precision interaction and specialized visualization. The multimodal operation uses hands, eyes and user motion tracking, and voice recognition for control, as well as offers video, 3D point cloud and audio feedback from the robot. Multiple experts can collaborate in the AR workspace locally or remotely, and share or monitor the robot’s control. Ten operators tested the interface in intervention scenarios in the European Organization for Nuclear Research (CERN) with complete network characterization and measurements to conclude if operational requirements were met and if the network architecture could support single and multi-user communication load. The interface system has proved to be operationally ready at the Technical Readiness Level (TRL) 8 and was validated through successful demonstration in single and multi-user missions. Some system limitations and further work areas were identified, such as optimizing the network architecture for multi-user scenarios or high-level interface actions applying automatic interaction strategies depending on network conditions.
- Published
- 2023
- Full Text
- View/download PDF
40. Hand gesture recognition using deep learning neural networks
- Author
-
Alnaim, Norah, Abbod, M., and Swash, M.
- Subjects
006.3, Convolutional Neural Network, Holoscopic 3D video, Wavelet Transform, Empirical Mode Decomposition, Hand Tracking
- Abstract
Human Computer Interaction (HCI) is a broad field involving different types of interactions, including gestures. Gesture recognition concerns non-verbal motions used as a means of communication in HCI. A system may be utilised to identify human gestures to convey information for device control. This represents a significant field within HCI involving device interfaces and users. The aim of gesture recognition is to record gestures that are formed in a certain way and then detected by a device such as a camera. Hand gestures can be used as a form of communication for many different applications. They may be used by people with different disabilities, including those with hearing impairments, speech impairments and stroke patients, to communicate and fulfil their basic needs. Various studies have previously been conducted on hand gestures, proposing different techniques for implementing hand gesture experiments. Image processing offers multiple tools to extract image features, while Artificial Intelligence provides varied classifiers for different types of data. 2D and 3D hand gestures require an effective algorithm to extract image features and classify various small gestures and movements. This research addresses this issue using different algorithms. To detect 2D or 3D hand gestures, this research applies image processing tools such as Wavelet Transforms and Empirical Mode Decomposition to extract image features, with an Artificial Neural Network (ANN) classifier and Convolutional Neural Networks (CNN) used to train on and classify the data. These methods were examined in terms of multiple parameters such as execution time, accuracy, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood, negative likelihood, receiver operating characteristic, area under ROC curve and root mean square. This research presents four original contributions in the field of hand gestures. The first contribution is an implementation of two experiments using 2D hand gesture video in which ten different gestures are detected at short and long distances using an iPhone 6 Plus with 4K resolution. The experiments use WT and EMD for feature extraction and ANN and CNN for classification. The second contribution comprises 3D hand gesture video experiments in which twelve gestures are recorded using a holoscopic imaging system camera. The third contribution pertains to experimental work carried out to detect seven common hand gestures. Finally, disparity experiments were performed using the left and right 3D hand gesture videos to measure disparities. The comparison results show CNN achieving 100% accuracy, outperforming the other techniques; CNN is clearly the most appropriate method for a hand gesture system.
- Published
- 2020
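Wavelet-based feature extraction of the kind the thesis in entry 40 pairs with ANN/CNN classifiers can be sketched with PyWavelets. The single-level Haar decomposition and energy features below are illustrative assumptions, not the thesis's exact pipeline.

```python
import numpy as np
import pywt

# Illustrative wavelet feature extraction in the spirit of entry 40: a
# single-level 2D Haar decomposition of a grayscale gesture frame,
# summarized by sub-band energies. Frame and feature choice are assumptions.

frame = np.random.rand(64, 64).astype(np.float32)   # stand-in gesture frame

cA, (cH, cV, cD) = pywt.dwt2(frame, "haar")         # approximation + details
features = [float(np.sum(band ** 2)) for band in (cA, cH, cV, cD)]
print("sub-band energies:", features)               # 4-D feature vector per frame
```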
41. A Novel Sensor Fusion Approach for Precise Hand Tracking in Virtual Reality-Based Human—Computer Interaction.
- Author
-
Lei, Yu, Deng, Yi, Dong, Lin, Li, Xiaohui, Li, Xiangnan, and Su, Zhi
- Subjects
HUMAN-computer interaction, ARTIFICIAL neural networks, CONVOLUTIONAL neural networks, ARTIFICIAL intelligence, COMPUTATIONAL fluid dynamics
- Abstract
The rapidly evolving field of Virtual Reality (VR)-based Human–Computer Interaction (HCI) presents a significant demand for robust and accurate hand tracking solutions. Current technologies, predominantly based on single-sensing modalities, fall short in providing comprehensive information capture due to susceptibility to occlusions and environmental factors. In this paper, we introduce a novel sensor fusion approach combined with a Long Short-Term Memory (LSTM)-based algorithm for enhanced hand tracking in VR-based HCI. Our system employs six Leap Motion controllers, two RealSense depth cameras, and two Myo armbands to yield a multi-modal data capture. This rich data set is then processed using LSTM, ensuring the accurate real-time tracking of complex hand movements. The proposed system provides a powerful tool for intuitive and immersive interactions in VR environments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
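The sensor-fusion LSTM in entry 41 can be pictured as concatenated per-frame features feeding a recurrent backbone. The sketch below uses PyTorch with assumed feature dimensions; the real system fuses six Leap controllers, two RealSense cameras, and two Myo armbands.

```python
import torch
import torch.nn as nn

# Minimal LSTM fusion model in the spirit of entry 41: per-frame features
# from Leap, depth, and EMG streams are concatenated and fed to an LSTM
# that regresses hand pose. All dimensions are illustrative assumptions.

class FusionLSTM(nn.Module):
    def __init__(self, leap_dim=63, depth_dim=32, emg_dim=16, pose_dim=21 * 3):
        super().__init__()
        self.lstm = nn.LSTM(leap_dim + depth_dim + emg_dim, 128, batch_first=True)
        self.head = nn.Linear(128, pose_dim)

    def forward(self, leap, depth, emg):
        x = torch.cat([leap, depth, emg], dim=-1)   # (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out)                       # pose estimate per frame

model = FusionLSTM()
pose = model(torch.randn(2, 30, 63), torch.randn(2, 30, 32), torch.randn(2, 30, 16))
print(pose.shape)   # torch.Size([2, 30, 63])
```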
42. Vision-Based Hand Detection and Tracking Using Fusion of Kernelized Correlation Filter and Single-Shot Detection.
- Author
-
Haji Mohd, Mohd Norzali, Mohd Asaari, Mohd Shahrimie, Lay Ping, Ong, and Rosdi, Bakhtiar Affendi
- Subjects
OBJECT tracking (Computer vision), COMPUTER vision, HUMAN-computer interaction, AUGMENTED reality, APPLICATION software, DEEP learning
- Abstract
Hand detection and tracking are key components in many computer vision applications, including hand pose estimation and gesture recognition for human–computer interaction systems, virtual reality, and augmented reality. Despite their importance, reliable hand detection in cluttered scenes remains a challenge. This study explores the use of deep learning techniques for fast and robust hand detection and tracking. A novel algorithm is proposed by combining the Kernelized Correlation Filter (KCF) tracker with the Single-Shot Detection (SSD) method. This integration enables the detection and tracking of hands in challenging environments, such as cluttered backgrounds and occlusions. The SSD algorithm helps reinitialize the KCF tracker when it fails or encounters drift issues due to sudden changes in hand gestures or fast movements. Testing in challenging scenes showed that the proposed tracker achieved a tracking rate of over 90% and a speed of 17 frames per second (FPS). Comparison with the KCF tracker on 17 video sequences revealed an average improvement of 13.31% in tracking detection rate (TRDR) and 27.04% in object detection error (OTE). Additional comparison with MediaPipe hand tracker on 10 hand gesture videos taken from the Intelligent Biometric Group Hand Tracking (IBGHT) dataset showed that the proposed method outperformed the MediaPipe hand tracker in terms of overall TRDR and tracking speed. The results demonstrate the promising potential of the proposed method for long-sequence tracking stability, reducing drift issues, and improving tracking performance during occlusions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
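The tracker in entry 42 runs a fast KCF update every frame and falls back to SSD detection when tracking fails or drifts. A minimal sketch of that control loop, assuming OpenCV's contributed KCF implementation, appears below; `detect_hand()` is a hypothetical placeholder for the paper's SSD detector, and the failure test is reduced to KCF's own success flag.

```python
import cv2  # TrackerKCF requires the opencv-contrib-python build

def detect_hand(frame):
    """Hypothetical stand-in for the paper's SSD hand detector.
    Should return a bounding box (x, y, w, h) or None; here it is a stub."""
    return None  # replace with real detector inference

cap = cv2.VideoCapture(0)  # webcam; any video source works
tracker, box = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    tracked = False
    if tracker is not None:
        tracked, box = tracker.update(frame)  # fast per-frame KCF update
    if not tracked:
        # Tracker absent, lost, or drifting: fall back to (slower)
        # detection and re-initialise KCF on the freshly detected box.
        box = detect_hand(frame)
        if box is not None:
            tracker = cv2.TrackerKCF_create()
            tracker.init(frame, box)
    if box is not None:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```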
43. INNOVATIVE TECHNOLOGY FOR UPPER LIMB REHABILITATION USING VIRTUAL REALITY AND LEAP MOTION CONTROLLER.
- Author
- ŢOPA, Ionuț-Cristian and RUSU, Dan-Mihai
- Subjects
- *WRIST, *VIRTUAL reality, *TECHNOLOGICAL innovations, *MOTION control devices, *TELEREHABILITATION, *MEDICAL rehabilitation - Abstract
Over time, virtual reality has begun to be used not only for entertainment but has been introduced into the field of medicine with a beneficial role in medical rehabilitation. This paper presents an application on different levels for the rehabilitation of subjects who have suffered from neurological diseases and who have lost certain senses in their upper limbs. The application using virtual reality and a module for hand tracking aims at the rehabilitation of three different movements of the upper limb, namely flexion and extension of the whole arm, flexion and extension of the finger, and flexion of the wrist, with tests performed on healthy subjects. Future research directions will include collaborations with clinics specializing in the treatment of patients with movement disabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2023
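The rehabilitation exercises in entry 43 track flexion and extension, which in practice reduces to joint angles computed from tracked 3D positions. The sketch below shows one common way to compute such an angle from three points; the function and point names are illustrative assumptions, not the Leap Motion API.

```python
import numpy as np

def flexion_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` between the two bone segments
    joint->proximal and joint->distal, given 3D positions."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: elbow at the origin, shoulder along +y, wrist bent forward.
print(flexion_angle([0, 1, 0], [0, 0, 0], [0.5, 0.5, 0]))  # ~45 degrees
```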
44. Immersive virtual reality for upper limb rehabilitation: comparing hand and controller interaction.
- Author
- Juan, M.-Carmen, Elexpuru, Julen, Dias, Paulo, Santos, Beatriz Sousa, and Amorim, Paula
- Subjects
SHARED virtual environments, VIRTUAL reality, EXERCISE therapy, REHABILITATION, HEALTH facilities, REALITY television programs, ARM - Abstract
Virtual reality shows great potential as an alternative to traditional therapies for motor rehabilitation given its ability to immerse the user in engaging scenarios that abstract them from medical facilities and tedious rehabilitation exercises. This paper presents a virtual reality application that includes three serious games and that was developed for motor rehabilitation. It uses a standalone headset and the user's hands without the need for any controller for interaction. Interacting with an immersive virtual reality environment using only natural hand gestures involves an interaction that is similar to that of real life, which would be especially desirable for patients with motor problems. A study involving 28 participants (4 with motor problems) was carried out to compare two types of interaction (hands vs. controllers). All of the participants completed the exercises. No significant differences were found in the number of attempts necessary to complete the games using the two types of interaction. The group that used controllers required less time to complete the exercise. The performance outcomes were independent of the gender and age of the participants. The subjective assessment of the participants with motor problems was not significantly different from the rest of the participants. With regard to the interaction type, the participants mostly preferred the interaction using their hands (78.5%). All four participants with motor problems preferred the hand interaction. These results suggest that the interaction with the user's hands together with standalone headsets could improve motivation, be well accepted by motor rehabilitation patients, and help to complete exercise therapy at home. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
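Entry 44 reports significance tests comparing completion time and attempts between the hands and controllers groups, but the abstract does not name the test used. Purely as an illustration of such a two-group comparison, here is a nonparametric Mann–Whitney U test in SciPy on invented timing data.

```python
from scipy.stats import mannwhitneyu

# Invented completion times (seconds) for the two interaction groups.
hands = [41.2, 38.5, 45.0, 52.3, 47.1, 39.8, 44.4]
controllers = [35.1, 33.8, 40.2, 36.5, 38.9, 34.7, 37.3]

stat, p = mannwhitneyu(hands, controllers, alternative="two-sided")
print(f"U={stat:.1f}, p={p:.3f}")  # p < 0.05 would suggest a group difference
```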
45. Hand Gesture Interface for Robot Path Definition in Collaborative Applications: Implementation and Comparative Study.
- Author
- Vysocký, Aleš, Poštulka, Tomáš, Chlebek, Jakub, Kot, Tomáš, Maslowski, Jan, and Grushko, Stefan
- Subjects
- *INDUSTRIAL robots, *GESTURE, *ROBOTS, *MANUFACTURING processes, *ROBOT hands, *SHARED workspaces, *COMPARATIVE studies - Abstract
The article explores the possibilities of using hand gestures as a control interface for robotic systems in a collaborative workspace. The development of hand gesture control interfaces has become increasingly important in everyday life as well as professional contexts such as manufacturing processes. We present a system designed to facilitate collaboration between humans and robots in manufacturing processes that require frequent revisions of the robot path and that allows direct definition of the waypoints, which differentiates our system from the existing ones. We introduce a novel and intuitive approach to human–robot cooperation through the use of simple gestures. As part of a robotic workspace, a proposed interface was developed and implemented utilising three RGB-D sensors for monitoring the operator's hand movements within the workspace. The system employs distributed data processing through multiple Jetson Nano units, with each unit processing data from a single camera. MediaPipe solution is utilised to localise the hand landmarks in the RGB image, enabling gesture recognition. We compare the conventional methods of defining robot trajectories with their developed gesture-based system through an experiment with 20 volunteers. The experiment involved verification of the system under realistic conditions in a real workspace closely resembling the intended industrial application. Data collected during the experiment included both objective and subjective parameters. The results indicate that the gesture-based interface enables users to define a given path objectively faster than conventional methods. We critically analyse the features and limitations of the developed system and suggest directions for future research. Overall, the experimental results indicate the usefulness of the developed system as it can speed up the definition of the robot's path. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
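Entry 45 localises hand landmarks with MediaPipe and turns operator gestures into robot waypoints. A heavily reduced single-camera sketch of that idea follows: the MediaPipe Hands solution detects a thumb–index pinch and the pinch point is stored as a 2D "waypoint". The pinch threshold and waypoint semantics are assumptions; the actual system adds three RGB-D cameras, distributed Jetson Nano processing, and a richer gesture vocabulary.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
waypoints = []  # recorded pinch positions, in normalised image coordinates

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            thumb, index = lm[4], lm[8]  # thumb tip and index fingertip
            # Crude pinch test: normalised fingertip distance under a threshold.
            dist = ((thumb.x - index.x) ** 2 + (thumb.y - index.y) ** 2) ** 0.5
            if dist < 0.05:  # assumed threshold; tune per camera setup
                waypoints.append(((thumb.x + index.x) / 2,
                                  (thumb.y + index.y) / 2))
        cv2.imshow("pinch waypoints", frame)
        if cv2.waitKey(1) == 27:  # Esc stops capture
            break

cap.release()
cv2.destroyAllWindows()
print(waypoints)
```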
46. Air Writing Recognition Using Mediapipe and Opencv
- Author
- Nitin Kumar, R., Vaishnavi, Makkena, Gayatri, K. R., Prashanthi, Venigalla, Supriya, M., Howlett, Robert J., Series Editor, Jain, Lakhmi C., Series Editor, Karuppusamy, P., editor, García Márquez, Fausto Pedro, editor, and Nguyen, Tu N., editor
- Published
- 2022
- Full Text
- View/download PDF
47. Development of Fingering Learning Support System Using Fingertip Tracking from Monocular Camera
- Author
- Kishimoto, Takuya, Imura, Masataka, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Stephanidis, Constantine, editor, Antona, Margherita, editor, and Ntoa, Stavroula, editor
- Published
- 2022
- Full Text
- View/download PDF
48. VR Human Body Treatment Game ‘BodyCureBot’ Using Hand Tracking
- Author
- Lee, Da Yeon, Kim, Du Ri, Lee, Edward, Na, Jung Jo, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Stephanidis, Constantine, editor, Antona, Margherita, editor, and Ntoa, Stavroula, editor
- Published
- 2022
- Full Text
- View/download PDF
49. Spell Painter: Motion Controlled Spellcasting for a Wizard Video Game
- Author
- MacCormick, Daniel, Zaman, Loutfouz, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, and Fang, Xiaowen, editor
- Published
- 2022
- Full Text
- View/download PDF
50. Easy Hand Gesture Control of a ROS-Car Using Google MediaPipe for Surveillance Use
- Author
- Allena, Christian Diego, De Leon, Ryan Collin, Wong, Yung-Hao, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Fui-Hoon Nah, Fiona, editor, and Siau, Keng, editor
- Published
- 2022
- Full Text
- View/download PDF