Search Results
11,722 results for "teleoperation"
2. Emerging strategies in close proximity operations for space debris removal: A review
- Author
- Arshad, Muneeb, Bazzocchi, Michael C.F., and Hussain, Faraz
- Published
- 2025
3. Digital Twin to Enhance Offshore Power-to-X Platforms with Operational Alarm Management
- Author
- Bodenstein, Frederike, Dieckmann, Stefan, Dittler, Daniel, Geschke, Alexander, Jazdi, Nasser, and Weyrich, Michael
- Published
- 2024
4. Analysis and Impact of the End-to-End Communication Chain on a DLR Real Time On-Orbit Servicing Mission Project
- Author
- Falcone, Rossella, Lucas, Dominic, Stangl, Christian, Krenn, Rainer, Brunner, Bernhard, Huber, Felix, De Rosa, Sergio, Series Editor, Zheng, Yao, Series Editor, Popova, Elena, Series Editor, Lee, Young H., editor, Schmidt, Alexander, editor, and Trollope, Ed, editor
- Published
- 2025
5. Identifying Challenges in Remote Driving
- Author
- Klöppel-Gersdorf, Michael, Bellanger, Adrien, Otto, Thomas, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Klein, Cornel, editor, Jarke, Matthias, editor, Ploeg, Jeroen, editor, Berns, Karsten, editor, Vinel, Alexey, editor, and Gusikhin, Oleg, editor
- Published
- 2025
6. Surgical robots: history, applications and future prospects
- Author
- Bogue, Rob
- Published
- 2024
7. Paradox of Time Pressure: Cognitive and Task Performance during Time-Sensitive and Challenging Teleoperation.
- Author
- Lee, Jin Sol and Ham, Youngjib
- Subjects
- TIME pressure, SITUATIONAL awareness, TASK performance, REMOTE control, CONSTRUCTION equipment
- Abstract
Heavy machinery operators' ability to adhere to schedules is crucial for the success of construction projects. However, unforeseen delays often occur during projects, forcing operators to expedite their work. This pressure often presents challenges for teleoperators. Completing tasks remotely typically takes longer than performing the same tasks on-site due to reduced situational awareness and reliance on technology for perception and understanding in remote workplaces. These inherent aspects of teleoperation add complexity to tasks, especially under time-constrained conditions. This study explores operators' cognitive and task performance during teleoperation of challenging tasks under various time pressures. Thirty-one participants operated a virtual excavator under four different levels of time pressure during the experiments. Results show that appropriate time pressure enhances task performance in aspects of safety, productivity, and quality, whereas excessive pressure results in cognitive overload, disengagement, impaired situational awareness, increased errors, and reduced productivity. This research contributes to enhancing the understanding of teleoperation from the operator's perspective, addressing cognitive challenges to improve safety and efficiency during the remote operation of heavy machinery in construction. [ABSTRACT FROM AUTHOR]
- Published
- 2025
8. Augmenting visual feedback with visualized interaction forces in haptic-assisted virtual-reality teleoperation.
- Author
- van den Berg, Alex, Hofland, Jelle, Heemskerk, Cock J. M., Abbink, David A., and Peternel, Luka
- Subjects
- INDUSTRIAL robots, HEAD-mounted displays, REMOTE control, VIRTUAL reality, TASK performance, HAPTIC devices
- Abstract
In recent years, providing additional visual feedback about the interaction forces has been found to offer benefits to haptic-assisted teleoperation. However, there is limited insight into the effects of the design of force feedback-related visual cues and the type of visual display on the performance of teleoperation of robotic arms executing industrial tasks. In this study, we provide new insights into this interaction by extending these findings to the haptic assistance teleoperation of a simulated robotic arm in a virtual environment, in which the haptic assistance is comprised of a set of virtual fixtures. We design a novel method for providing visual cues about the interaction forces to complement the haptic assistance and augment visual feedback in virtual reality with a head-mounted display. We evaluate the visual cues method and head-mounted display method through human factors experiments in a teleoperated dross removal use case. The results show that both methods are beneficial for task performance, each of them having stronger points in different aspects of the operation. The visual cues method was found to significantly improve safety in terms of peak collision force, whereas the head-mounted display additionally improves the performance significantly. Furthermore, positive scores of the subjective analysis indicate an increased user acceptance of both methods. This work provides a new study on the importance of visual feedback related to (interaction) forces and spatial information for haptic assistance and provides two methods to take advantage of its potential benefits in the teleoperation of robotic arms. [ABSTRACT FROM AUTHOR]
- Published
- 2025
9. Novel adaptive robust L1‐based controllers for teleoperation systems with uncertainties and time delays.
- Author
- Yazdankhoo, Behnam, Ha'iri Yazdi, Mohammad Reza, Najafi, Farshid, and Beigzadeh, Borhan
- Subjects
- ROBUST control, TIME delay systems, LINEAR matrix inequalities, NONLINEAR systems, ADAPTIVE control systems
- Abstract
Despite various proposed control schemes for uncertain bilateral teleoperation systems under time delays, optimally restricting the system's overshoot has remained an overlooked issue in this realm. For this aim, we propose two novel control architectures based on robust L1 theory, entitled position‐based adaptive L1 controller and transparent adaptive L1 controller, with the former focusing on position synchronization and the latter concerning system transparency. Since developing L1‐based controllers for nonlinear telerobotic systems encompassing uncertainty and round‐trip delays puts significant theoretical challenges forward, the main contribution of this paper lies in advancing L1 theory within the field of delayed teleoperation control. To formulate the theories, the asymptotic stability of the closed‐loop system for each controller is first proved utilizing the Lyapunov method, followed by transformation, along with the L1 performance criterion, into linear matrix inequalities. Ultimately, the control gains are attained by solving a convex optimization problem. The superiority of the designed controllers over a benchmark transparent controller for teleoperators is demonstrated via simulation. Furthermore, experimental tests on a two‐degrees‐of‐freedom nonlinear telerobotic system validate the efficient performance of the proposed controllers. [ABSTRACT FROM AUTHOR]
- Published
- 2025
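The final step noted in this abstract, casting a Lyapunov stability condition as linear matrix inequalities and obtaining gains from a convex program, follows a standard pattern. The sketch below is a minimal, generic illustration of that pattern in cvxpy for an assumed toy system; it is not the paper's L1 formulation for delayed teleoperation.

```python
# Generic Lyapunov LMI feasibility check: find P > 0 with A^T P + P A < 0.
# Toy 2x2 system assumed for illustration; real delayed-teleoperation LMIs
# (as in the paper above) carry additional terms and decision variables.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # example stable dynamics x' = A x

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(2),                     # P positive definite
    A.T @ P + P @ A << -eps * np.eye(2),      # Lyapunov decrease condition
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)   # 'optimal' means a certificate P exists, so A is stable
print(P.value)
```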
10. Advancing teleoperation for legged manipulation with wearable motion capture.
- Author
- Zhou, Chengxu, Wan, Yuhui, Peers, Christopher, Delfaki, Andromachi Maria, and Kanoulas, Dimitrios
- Subjects
- EXPLOSIVE ordnance disposal, MOTION capture (Cinematography), MOTION capture (Human mechanics), HUMAN-robot interaction, ROBOT control systems
- Abstract
The sanctity of human life mandates the replacement of individuals with robotic systems in the execution of hazardous tasks. Explosive Ordnance Disposal (EOD), a field fraught with mortal danger, stands at the forefront of this transition. In this study, we explore the potential of robotic telepresence as a safeguard for human operatives, drawing on the robust capabilities demonstrated by legged manipulators in diverse operational contexts. The challenge of autonomy in such precarious domains underscores the advantages of teleoperation—a harmonious blend of human intuition and robotic execution. Herein, we introduce a cost-effective telepresence and teleoperation system employing a legged manipulator, which combines a quadruped robot, an integrated manipulative arm, and RGB-D sensory capabilities. Our innovative approach tackles the intricate challenge of whole-body control for a quadrupedal manipulator. The core of our system is an IMU-based motion capture suit, enabling intuitive teleoperation, augmented by immersive visual telepresence via a VR headset. We have empirically validated our integrated system through rigorous real-world applications, focusing on loco-manipulation tasks that necessitate comprehensive robot control and enhanced visual telepresence for EOD operations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
11. The ballad of the bots: sonification using cognitive metaphor to support immersed teleoperation of robot teams.
- Author
- Simmons, Joe, Bremner, Paul, Mitchell, Thomas J., Bown, Alison, and McIntosh, Verity
- Subjects
- VIRTUAL reality, REMOTE control, NUCLEAR facilities, DATA analysis, METAPHOR
- Abstract
As an embodied and spatial medium, virtual reality is proving an attractive proposition for robot teleoperation in hazardous environments. This paper examines a nuclear decommissioning scenario in which a simulated team of semi-autonomous robots are used to characterise a chamber within a virtual nuclear facility. This study examines the potential utility and impact of sonification as a means of communicating salient operator data in such an environment. However, the question of what sound should be used and how it can be applied in different applications is far from resolved. This paper explores and compares two sonification design approaches. The first is inspired by the theory of cognitive metaphor to create sonifications that align with socially acquired contextual and ecological understanding of the application domain. The second adopts a computationalist approach using auditory mappings that are commonplace in the literature. The results suggest that the computationalist approach outperforms the cognitive metaphor approach in terms of predictability and mental workload. However, qualitative data analysis demonstrates that the cognitive metaphor approach resulted in sounds that were more intuitive, and were better implemented for spatialisation of data sources and data legibility when there was more than one sound source. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Stability of tracking wheel mobile robot with teleoperation fuzzy neural network control system.
- Author
- Sumathi, C. S., Ravi Kumar, R., and Anandhi, V.
- Subjects
- CLIENT/SERVER computing equipment, LYAPUNOV stability, LYAPUNOV functions, REMOTE control, ROBOTS, MOBILE robots, FUZZY neural networks
- Abstract
The stability of the Tracking Wheel Mobile Robot with Teleoperation System and Path Following Method is discussed in this study. The path is to be tracked by the host computer, which is the master robot. The response from the robot is captured on camera. As the slave robot approaches the target position, the camera captures the responding robot's position as well as its moving trajectory. The host computer receives all of the images, enabling mobile robot deviation recoveries. The slave robot can use teleoperation to follow the sensor based on the decisions made by the master robot. The Lyapunov function in the Fuzzy Neural Network (FNN) control structure assures the system's stability and satisfactory performance. It supports a mobile robot's ability to adhere to a reference trajectory without deviating from it. Finally, the outcome of the simulation demonstrates that our controller is capable of tracking different environmental conditions and maintaining stability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
13. Enabling Social Robots to Perceive and Join Socially Interacting Groups Using F-formation: A Comprehensive Overview.
- Author
- Barua, Hrishav Bakul, Mg, Theint Haythi, Pramanick, Pradip, and Sarkar, Chayan
- Subjects
- SOCIAL group work, HUMAN-robot interaction, ARTIFICIAL intelligence, GROUP formation, COMPUTER vision
- Abstract
Social robots in our daily surroundings, like personal guides, waiter robots, home helpers, assistive robots, and telepresence/teleoperation robots, are increasing day by day. Their usability and acceptability largely depend on their explicit and implicit interaction capability with fellow human beings. As a result, social behavior is one of the most sought-after qualities that a robot can possess. However, there is no specific aspect and/or feature that defines socially acceptable behavior, and it largely depends on the situation, application, and society. In this article, we investigate one such social behavior for collocated robots. Imagine a group of people is interacting with each other, and we want to join the group. We as human beings do it in a socially acceptable manner, i.e., within the group, we do position ourselves in such a way that we can participate in the group activity without disturbing/obstructing anybody. To possess such a quality, first, a robot needs to determine the formation of the group and then determine a position for itself, which we humans do implicitly. There are many theories which study group formations and proxemics; one such theory is f-formation which could be utilized for this purpose. As the types of formations can be very diverse, detecting the social groups is not a trivial task. In this article, we provide a comprehensive survey of the existing work on social interaction and group detection using f-formation for robotics and other applications. We also put forward a novel holistic survey framework combining some of the possibly more important concerns and modules relevant to this problem. We define taxonomies based on methods, camera views, datasets, detection capabilities and scale, evaluation approaches, and application areas. We discuss certain open challenges and limitations in the current literature along with possible future research directions based on this framework. In particular, we discuss the existing methods/techniques and their relative merits and demerits, applications, and provide a set of unsolved but relevant problems in this domain. The official website for this work is available at: https://github.com/HrishavBakulBarua/Social-Robots-F-formation [ABSTRACT FROM AUTHOR]
- Published
- 2024
14. Scientists and drivers in-the-loop, on-the-loop or out-of-the-loop? Holistic human systems integration of teleoperated vehicles.
- Author
- Flemisch, Frank, Usai, Marcel, Schrank, Andreas, Oehl, Michael, Plum, Lena, Shi, Elisabeth, Baumann, Martin, and Bengler, Klaus
- Subjects
- SYSTEM integration, RESEARCH institutes, HUMAN beings
- Published
- 2024
15. Intent-based Task-Oriented Shared Control for Intuitive Telemanipulation.
- Author
- Bowman, Michael, Zhang, Jiucai, and Zhang, Xiaoli
- Abstract
The unique challenges in shared control for telemanipulation (beyond teleoperation) include physical discrepancy between human and robot hands and the fine manipulation constraints needed for task success. We present an intuitive shared-control strategy that generates robotic grasp poses better suited for human perception of success and feeling of control while ensuring a stable grasp for task success. The robot's motion follows an arbitration between following the user's motion constraints and accomplishing the inferred task. The arbitration adapts based on the physical discrepancy between the human and robot hands. We have conducted a user study with a telemanipulation scenario to analyze the effects of task predictability, following, and user preference. The results demonstrated that intent-based approaches provide advantages over direct motion mapping strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. Quantifying the Remote Driver's Interaction with 5G-Enabled Level 4 Automated Vehicles: A Real-World Study.
- Author
- Li, Shuo, Zhang, Yanghanzi, Edwards, Simon, and Blythe, Phil
- Subjects
- AUTONOMOUS vehicles, EYE tracking, REMOTE control, MOTOR vehicle driving, DISTRACTION
- Abstract
This real-world investigation aimed to quantify the human–machine interaction between remote drivers of teleoperation systems and the Level 4 automated vehicle in a real-world setting. The primary goal was to investigate the effects of disengagement and distraction on remote driver performance and behaviour. Key findings revealed that mental disengagement, achieved through distraction via a reading task, significantly slowed the remote driver's reaction time by an average of 5.309 s when the Level 4 automated system required intervention. Similarly, disengagement resulted in a 4.232 s delay in decision-making time for remote drivers when they needed to step in and make critical strategic decisions. Moreover, mental disengagement affected the remote drivers' attention focus on the road and increased their cognitive workload compared to constant monitoring. Furthermore, when actively controlling the vehicle remotely, drivers experienced a higher cognitive workload than in both "monitoring" and "disengagement" conditions. The findings emphasize the importance of designing teleoperation systems that keep remote drivers actively engaged with their environment, minimise distractions, and reduce disengagement. Such designs are essential for enhancing safety and effectiveness in remote driving scenarios, ultimately supporting the successful deployment of Level 4 automated vehicles in real-world applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
17. Teleoperation-Driven and Keyframe-Based Generalizable Imitation Learning for Construction Robots.
- Author
- Li, Yan, Liu, Songyang, Wang, Mengjun, Li, Shuai, and Tan, Jindong
- Subjects
- ROBOT control systems, COMPACT spaces (Topology), DEATH rate, BUILDING sites, REMOTE control
- Abstract
The construction industry has long been plagued by low productivity and high injury and fatality rates. Robots have been envisioned to automate the construction process, thereby substantially improving construction productivity and safety. Despite the enormous potential, teaching robots to perform complex construction tasks is challenging. We present a generalizable framework to harness human teleoperation data to train construction robots to perform repetitive construction tasks. First, we develop a teleoperation method and interface to control robots on construction sites, serving as an intermediate solution toward full automation. Teleoperation data from human operators, along with context information from the job site, can be collected for robot learning. Second, we propose a new method for extracting keyframes from human operation data to reduce noise and redundancy in the training data, thereby improving robot learning efficacy. We propose a hierarchical imitation learning method that incorporates the keyframes to train the robot to generate appropriate trajectories for construction tasks. Third, we model the robot's visual observations of the working space in a compact latent space to improve learning performance and reduce computational load. To validate the proposed framework, we conduct experiments teaching a robot to generate appropriate trajectories for excavation tasks from human operators' teleoperations. The results suggest that the proposed method outperforms state-of-the-art approaches, demonstrating its significant potential for application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. Human teleoperation - a haptically enabled mixed reality system for teleultrasound.
- Author
- Black, David, Oloumi Yazdi, Yas, Hadi Hosseinabadi, Amir Hossein, and Salcudean, Septimiu
- Subjects
- MIXED reality, REMOTE control, ULTRASONIC imaging, MEASUREMENT errors, LOCAL mass media
- Abstract
Current teleultrasound methods include audiovisual guidance and robotic teleoperation, which constitute tradeoffs between precision and latency versus flexibility and cost. We present a novel concept of "human teleoperation" which bridges the gap between these two methods. In the concept, an expert remotely teleoperates a person (the follower) wearing a mixed-reality headset by controlling a virtual ultrasound probe projected into the person's scene. The follower matches the pose and force of the virtual device with a real probe. The pose, force, video, ultrasound images, and 3-dimensional mesh of the scene are fed back to the expert. This control framework, where the actuation is carried out by people, allows more precision and speed than verbal guidance, yet is more flexible and inexpensive than robotic teleoperation. The purpose of this paper is to introduce this concept as well as a prototype teleultrasound system with limited haptics and local communication. The system was tested to show its potential, including mean teleoperation latencies of 0.32 ± 0.05 seconds and steady-state errors of 4.4 ± 2.8 mm and 5.4 ± 2.8° in position and orientation tracking respectively. A preliminary test with an ultrasonographer and four patients was completed, showing lower measurement error and a completion time of 1:36 ± 0:23 minutes using human teleoperation compared to 4:13 ± 3:58 using audiovisual teleguidance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
19. Subtask-Based Usability Evaluation of Control Interfaces for Teleoperated Excavation Tasks.
- Author
- Nagate, Takumi, Nagano, Hikaru, Tazaki, Yuichi, and Yokokohji, Yasuyoshi
- Subjects
- VIRTUAL machine systems, MACHINE performance, HYDRAULIC machinery, ANATOMICAL planes, VIRTUAL reality
- Abstract
This study aims to experimentally determine the most suitable control interface for different subtasks in the teleoperation of construction robots in a simulation environment. We compare a conventional lever-based rate control interface ("Rate-lever") with two alternative methods: rate control ("Rate-3D") and position control ("Position-3D"), both using a 3D positional input device. In the experiments, participants operated a construction machine in a virtual environment and evaluated the control interfaces across three tasks: sagittal plane excavation, turning, and continuous operation. The results revealed that "Position-3D" outperformed others for sagittal excavation, while both "Rate-lever" and "Rate-3D" were more effective for turning. Notably, "Position-3D" and "Rate-3D" can be implemented on the same input device and are easily integrated. This feature offers the possibility of a hybrid-type interface suitable for operators to obtain optimized performance in sagittal and horizontal tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
20. Construction of a Wi-Fi System with a Tethered Balloon in a Mountainous Region for the Teleoperation of Vehicular Forestry Machines.
- Author
- Kim, Gyun-Hyung, Lee, Hyeon-Seung, Mun, Ho-Seong, Oh, Jae-Heun, and Shin, Beom-Soo
- Subjects
- GLOBAL Positioning System, LOGGING, WIND speed, DATA loggers, ANTENNAS (Electronics)
- Abstract
In this study, a Wi-Fi system with a tethered balloon is proposed for the teleoperation of vehicular forestry machines. This system was developed to establish a Wi-Fi communication for stable teleoperation in a timber harvesting site. This system consisted of a helium balloon, Wi-Fi nodes, a measurement system, a global navigation satellite system (GNSS) antenna, and a wind speed sensor. The measurement system included a GNSS module, an inertial measurement unit (IMU), a data logger, and an altitude sensor. While the helium balloon with the Wi-Fi system was 60 m in the air, the received signal strength indicator (RSSI) was measured by moving a Wi-Fi receiver on the ground. Another GNSS set was also utilized to collect the latitude and longitude data from the Wi-Fi receiver as it traveled. The developed Wi-Fi system with a tethered balloon can create a Wi-Fi zone of up to 1.9 ha within an average wind speed range of 2.2 m/s. It is also capable of performing the teleoperation of vehicular forestry machines with a maximum latency of 185.7 ms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
21. Teleoperation system for multiple robots with intuitive hand recognition interface
- Author
- Lucas Alexandre Zick, Dieisson Martinelli, André Schneider de Oliveira, and Vivian Cremer Kalempa
- Subjects
- Autonomous navigation, Human–robot, Multi-robots, ROS, Teleoperation, Medicine, Science
- Abstract
Robotic teleoperation is essential for hazardous environments where human safety is at risk. However, efficient and intuitive human–machine interaction for multi-robot systems remains challenging. This article aims to demonstrate a robotic teleoperation system, denominated AutoNav, centered around autonomous navigation and gesture commands interpreted through computer vision. The central focus is on recognizing the palm of the hand as a control interface to facilitate human–machine interaction in the context of multi-robots. The MediaPipe framework was integrated to implement gesture recognition from a USB camera. The system was developed using the Robot Operating System, employing a simulated environment that includes the Gazebo and RViz applications with multiple TurtleBot 3 robots. The main results show a reduction of approximately 50% in the execution time, coupled with an increase in free time during teleoperation, reaching up to 94% of the total execution time. Furthermore, there is a decrease in collisions. These results demonstrate the effectiveness and practicality of the robotic control algorithm, showcasing its promise in managing teleoperations across multi-robots. This study fills a knowledge gap by developing a hand gesture-based control interface for more efficient and safer multi-robot teleoperation. These findings enhance human–machine interaction in complex robotic operations. A video showing the system working is available at https://youtu.be/94S4nJ3IwUw.
- Published
- 2024
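For readers curious how a palm-recognition front end like the one described above is typically wired up, the sketch below is a minimal, hypothetical example using MediaPipe Hands and OpenCV on a USB camera. It is not the AutoNav code; the chosen landmark and the print placeholder where a ROS command would be published are assumptions.

```python
# Minimal palm-tracking front end with MediaPipe Hands (illustrative sketch,
# not the AutoNav implementation). A real system would map the detected palm
# position/gesture to ROS velocity or goal commands for the selected robot.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)  # USB camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        wrist = result.multi_hand_landmarks[0].landmark[0]  # palm base, normalized coords
        # Placeholder for the gesture-to-command mapping (assumed, not the paper's):
        print(f"palm at ({wrist.x:.2f}, {wrist.y:.2f})")

cap.release()
```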
22. Secure motion-copying via homomorphic encryption
- Author
- Haruki Takanashi, Kaoru Teranishi, and Kiminao Kogiso
- Subjects
- motion-copying, encrypted control, experimental validation, four-channel bilateral control, teleoperation, Control engineering systems. Automatic machinery (General), TJ212-225
- Abstract
This study aims to develop an encrypted motion-copying system using homomorphic encryption for secure motion preservation and reproduction. A novel concept of encrypted motion-copying systems is introduced, realizing the preservation, editing, and reproduction of motion over encrypted data. The developed motion-copying system uses the conventional encrypted four-channel bilateral control system with robotic arms to save the leader's motion, generated by a human operator, as ciphertext in memory. The follower's control system reproduces the motion using the encrypted data loaded from the secure memory. Additionally, the developed system enables us to directly edit the motion data preserved in the memory, without decryption, using homomorphic operations. Finally, this study demonstrates the effectiveness of the developed encrypted motion-copying system in free motion, object contact, and spatial scaling scenarios.
- Published
- 2024
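The core idea above, storing motion data as ciphertexts and editing it (for example, spatial scaling) without decryption, can be illustrated with any additively homomorphic scheme. The sketch below uses the python-paillier (phe) library as a stand-in; the variable names and the 1.5x scaling factor are assumptions, and the paper's encrypted four-channel bilateral controller is not reproduced.

```python
# Illustrative sketch of homomorphic motion preservation and editing using the
# additively homomorphic Paillier scheme from the 'phe' package. This is a
# stand-in example, not the encrypted bilateral controller from the paper.
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=2048)

leader_positions = [0.00, 0.12, 0.25, 0.31]            # recorded leader motion [m]
memory = [pub.encrypt(q) for q in leader_positions]    # preserved as ciphertexts

# Edit in the encrypted domain: spatial scaling by 1.5 without decryption.
scaled = [c * 1.5 for c in memory]

# Only the follower-side controller holding the private key recovers plaintext.
print([round(priv.decrypt(c), 3) for c in scaled])     # [0.0, 0.18, 0.375, 0.465]
```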
23. Developments in robotic teleoperation
- Author
- Bogue, Rob
- Published
- 2024
24. Vision-sharing system for android avatars to enable remote eye contact
- Author
- Kaoruko Shinkawa, Mizuki Nakajima, and Yoshihiro Nakata
- Subjects
- Android avatar, Eyeball, Interface, Teleoperation, Wide-angle lens camera, Technology, Mechanical engineering and machinery, TJ1-1570, Control engineering systems. Automatic machinery (General), TJ212-225, Machine design and drawing, TJ227-240, Technology (General), T1-995, Industrial engineering. Management engineering, T55.4-60.8, Automation, T59.5, Information technology, T58.5-58.64
- Abstract
Maintaining eye contact is a fundamental aspect of non-verbal communication, yet current remote communication tools, such as video calls, struggle to replicate the natural eye contact experienced in face-to-face interactions. This study presents a cutting-edge vision-sharing system for android avatars that enables remote eye contact by synchronizing the eye movements of the operator and the avatar. Our innovative system features an eyeball integrated with a wide-angle lens camera, meticulously designed to mimic human eyes in both appearance and functionality. This technology is seamlessly integrated into android avatars, enabling operators to perceive the avatar's surroundings as if physically present. It provides a stable and immersive visual experience through a head-mounted display synchronized with the avatar's gaze direction. Through rigorous experimental evaluations, we demonstrated the system's ability to faithfully replicate the operator's perspective through the avatar, making eye contact more intuitive and effortless compared with conventional methods. Subjective assessments further validated the system's capacity to reduce operational workload and enhance user experience. These compelling findings underscore the potential of our vision-sharing system to elevate the realism and efficacy of remote communication using android avatars.
- Published
- 2024
25. Visual augmentation of live-streaming images in virtual reality to enhance teleoperation of unmanned ground vehicles.
- Author
- Yiming Luo, Jialin Wang, Yushan Pan, Shan Luo, Pourang Irani, and Hai-Ning Liang
- Subjects
- SIMULATOR sickness, REMOTE control, AUTONOMOUS vehicles, ROBOT control systems, TASK performance
- Abstract
First-person view (FPV) technology in virtual reality (VR) can offer in-situ environments in which teleoperators can manipulate unmanned ground vehicles (UGVs). However, non-experts and expert robot teleoperators still have trouble controlling robots remotely in various situations. For example, obstacles are not easy to avoid when teleoperating UGVs in dim, dangerous, and difficult-to-access areas with environmental obstacles, while unstable lighting can cause teleoperators to feel stressed. To support teleoperators' ability to operate UGVs efficiently, we adopted construction yellow and black lines from our everyday life as a standard design space and customised the Sobel algorithm to develop VR-mediated teleoperations to enhance teleoperators' performance. Our results show that our approach can improve user performance on avoidance tasks involving static and dynamic obstacles and reduce workload demands and simulator sickness. Our results also demonstrate that with other adjustment combinations (e.g., removing the original image from edge-enhanced images with a blue filter and yellow edges), we can reduce the effect of high-exposure performance in a dark environment on operation accuracy. Our present work can serve as a solid case for using VR to mediate and enhance teleoperation operations with a wider range of applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
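The abstract mentions a customised Sobel filter that overlays construction-yellow edges on the live UGV stream. The sketch below is a plain, assumed version of that kind of edge augmentation with OpenCV; the threshold, colour, and compositing choices are illustrative and not the authors' tuned pipeline.

```python
# Simple Sobel-based edge augmentation for a camera frame (illustrative only,
# not the paper's customised algorithm).
import cv2
import numpy as np

def enhance_frame(frame_bgr, edge_color=(0, 255, 255), thresh=80.0):
    """Overlay bright (construction-yellow) edges on a BGR frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)                 # gradient magnitude per pixel
    out = frame_bgr.copy()
    out[mag > thresh] = edge_color              # paint strong edges yellow (BGR)
    return out
```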
26. Vision-sharing system for android avatars to enable remote eye contact.
- Author
- Shinkawa, Kaoruko, Nakajima, Mizuki, and Nakata, Yoshihiro
- Subjects
- NONVERBAL communication, EYE contact, HEAD-mounted displays, AVATARS (Virtual reality), EYE movements, GAZE
- Abstract
Maintaining eye contact is a fundamental aspect of non-verbal communication, yet current remote communication tools, such as video calls, struggle to replicate the natural eye contact experienced in face-to-face interactions. This study presents a cutting-edge vision-sharing system for android avatars that enables remote eye contact by synchronizing the eye movements of the operator and the avatar. Our innovative system features an eyeball integrated with a wide-angle lens camera, meticulously designed to mimic human eyes in both appearance and functionality. This technology is seamlessly integrated into android avatars, enabling operators to perceive the avatar's surroundings as if physically present. It provides a stable and immersive visual experience through a head-mounted display synchronized with the avatar's gaze direction. Through rigorous experimental evaluations, we demonstrated the system's ability to faithfully replicate the operator's perspective through the avatar, making eye contact more intuitive and effortless compared with conventional methods. Subjective assessments further validated the system's capacity to reduce operational workload and enhance user experience. These compelling findings underscore the potential of our vision-sharing system to elevate the realism and efficacy of remote communication using android avatars. [ABSTRACT FROM AUTHOR]
- Published
- 2024
27. Full-Body Pose Estimation of Humanoid Robots Using Head-Worn Cameras for Digital Human-Augmented Robotic Telepresence.
- Author
- Cho, Youngdae, Son, Wooram, Bak, Jaewan, Lee, Yisoo, Lim, Hwasup, and Cha, Youngwoon
- Subjects
- DIGITAL cameras, COMPUTER vision, TELECOMMUTING, AUGMENTED reality, TELEPRESENCE, POSE estimation (Computer vision)
- Abstract
We envision a telepresence system that enhances remote work by facilitating both physical and immersive visual interactions between individuals. However, during robot teleoperation, communication often lacks realism, as users see the robot's body rather than the remote individual. To address this, we propose a method for overlaying a digital human model onto a humanoid robot using XR visualization, enabling an immersive 3D telepresence experience. Our approach employs a learning-based method to estimate the 2D poses of the humanoid robot from head-worn stereo views, leveraging a newly collected dataset of full-body poses for humanoid robots. The stereo 2D poses and sparse inertial measurements from the remote operator are optimized to compute 3D poses over time. The digital human is localized from the perspective of a continuously moving observer, utilizing the estimated 3D pose of the humanoid robot. Our moving camera-based pose estimation method does not rely on any markers or external knowledge of the robot's status, effectively overcoming challenges such as marker occlusion, calibration issues, and dependencies on headset tracking errors. We demonstrate the system in a remote physical training scenario, achieving real-time performance at 40 fps, which enables simultaneous immersive and physical interactions. Experimental results show that our learning-based 3D pose estimation method, which operates without prior knowledge of the robot, significantly outperforms alternative approaches requiring the robot's global pose, particularly during rapid headset movements, achieving markerless digital human augmentation from head-worn views. [ABSTRACT FROM AUTHOR]
- Published
- 2024
28. A Teleoperated Robotic System with Haptic Feedback for Soft Tissue Incision.
- Author
- García-Cárdenas, Facundo, Fabian, Joao, Ramos, Oscar E., and Canahuire, Ruth
- Abstract
The teleoperation of robotic manipulators allows extending the capabilities of human operators to work in different scales and distances to improve the efficiency of tasks that require high precision and repeatability. However, the lack of kinesthetics makes teleoperation difficult under diminished visibility or during palpation tasks despite visual and auditory feedback. This work presents the design and implementation of a haptic teleoperation system based on a master–slave hybrid control scheme. The robotic system uses a haptic device and a joystick to map the desired pose of the robot, while a force sensor located in the end-effector provides stiffness perception. The implemented control algorithm employs a weighted quadratic program to compute the inverse kinematics at different scales, allowing the system to operate over delicate and uneven surfaces, such as those found in surgical incisions. Finally, experimental results are shown, where the performance of the haptic system in cutting porcine tissue and manipulation tasks inside the free workspace are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
29. A Social Robot-facilitated Performance Assessment of Self-care Skills for People with Alzheimer's: A Preliminary Study.
- Author
- Yuan, Fengpei, Bray, Robert, Oliver, Michael, Duzan, Joshua, Crane, Monica, and Zhao, Xiaopeng
- Subjects
- ALZHEIMER'S disease, ROBOT control systems, HUMANOID robots, ARTIFICIAL intelligence, ACTIVITIES of daily living
- Abstract
Worldwide, there are approximately 10 million new cases of dementia reported each year. Due to impairments in cognitive functioning such as loss of memory, linguistic abilities, and problem-solving skills, persons living with dementia (PLWDs) may face challenges in accomplishing activities of daily living (ADLs). Socially assistive robotics (SAR) holds promise as an effective tool to assist PLWDs with their ADLs; however, developing a powerful, reliable SAR requires a robust dataset of PLWDs performing ADLs to ensure the efficacy and reliability of the SAR. In this paper, we present the development of a teleoperated robotic platform using a humanoid social robot. This platform provided instructions to perform ten Performance Assessment of Self-care Skills (PASS) tasks, while multimodal sensors were employed to collect the individual's physiological signals, behaviors, and interactions with the robot. We conducted a preliminary study to investigate the acceptability, user experience, and feasibility of the platform in a simulated home environment with five young cognitively normal individuals. Participants' experiences and satisfaction with the platform were evaluated through a questionnaire. The results demonstrate the capability of the robot administering PASS tasks. However, one limitation of the current platform is the low efficiency of controlling the robot to move in the Smart Home. Our next step is to implement navigational abilities in the robot and then conduct the experiments with a large cohort of PLWDs to generate our multimodal dataset. This work aims to contribute significantly towards the development of a powerful, reliable, and trustworthy SAR to help PLWDs with their ADLs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
30. Gesture Recognition Framework for Teleoperation of Infrared (IR) Consumer Devices Using a Novel pFMG Soft Armband.
- Author
- Young, Sam, Zhou, Hao, and Alici, Gursel
- Subjects
- ARTIFICIAL intelligence, WEARABLE technology, MACHINE learning, REMOTE control, HOUSEHOLD electronics
- Abstract
Wearable technologies represent a significant advancement in facilitating communication between humans and machines. Powered by artificial intelligence (AI), human gestures detected by wearable sensors can provide people with seamless interaction with physical, digital, and mixed environments. In this paper, the foundations of a gesture-recognition framework for the teleoperation of infrared consumer electronics are established. This framework is based on force myography data of the upper forearm, acquired from a prototype novel soft pressure-based force myography (pFMG) armband. Here, the sub-processes of the framework are detailed, including the acquisition of infrared and force myography data; pre-processing; feature construction/selection; classifier selection; post-processing; and interfacing/actuation. The gesture recognition system is evaluated using 12 subjects' force myography data obtained whilst performing five classes of gestures. Our results demonstrate an inter-session and inter-trial gesture average recognition accuracy of approximately 92.2% and 88.9%, respectively. The gesture recognition framework was successfully able to teleoperate several infrared consumer electronics as a wearable, safe and affordable human–machine interface system. The contribution of this study centres around proposing and demonstrating a user-centred design methodology to allow direct human–machine interaction and interface for applications where humans and devices are in the same loop or coexist, as typified between users and infrared-communicating devices in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
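The framework summarized above follows a familiar wearable-sensing pipeline: windowed force-myography signals, hand-crafted features, and a supervised classifier evaluated across sessions and trials. The sketch below is a generic, assumed stand-in for such a pipeline built with scikit-learn on synthetic data; the window length, feature set, and SVM choice are illustrative, not the paper's.

```python
# Generic FMG gesture-classification pipeline (assumed design, not the paper's):
# window the pressure signals, extract simple per-channel features, train an SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_features(signals, win=100, step=50):
    """signals: (n_samples, n_channels) array -> (n_windows, n_channels*2) features."""
    feats = []
    for start in range(0, len(signals) - win + 1, step):
        w = signals[start:start + win]
        feats.append(np.hstack([w.mean(axis=0), w.std(axis=0)]))  # mean + std per channel
    return np.array(feats)

# Synthetic stand-in data: 8 pFMG channels, 5 gesture classes.
rng = np.random.default_rng(0)
X = np.vstack([window_features(rng.normal(c, 1.0, size=(2000, 8))) for c in range(5)])
y = np.repeat(np.arange(5), len(X) // 5)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```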
31. Optimizing link-length fitting between an operator and a robot with compensation for habitual movements.
- Author
- Otani, Takuya, Nakamura, Makoto, Kimura, Koichi, and Takanishi, Atsuo
- Subjects
- ROBOT hands, REMOTE control, QUANTUM computing, SPEED limits, HUMAN body
- Abstract
Teleoperation has become increasingly important for the real-world application of robots. However, in current teleoperation applications, operators need to visually recognize any physical deviations between the robot and themselves and correct its operation accordingly. Even when humans move their bodies, achieving high positional accuracy can be difficult for them, which can limit their speed of movement and place a heavy burden on them. In this study, we proposed a parameter optimization method for feedforward compensation of the link-length deviation and operator's habitual body movements in leader–follower control in 3D space. To optimize the parameters, we used Digital Annealer developed by Fujitsu Ltd, which could rapidly solve the combinatorial optimization problem. The objective function minimized the difference between the hand positions of the robot and targets. Simulations verified that the proposed method could reduce hand positional differences. [ABSTRACT FROM AUTHOR]
- Published
- 2024
32. PAM: Research on posture alignment method for camera robot system.
- Author
- He, Jingjie, Huang, Jian, Zhang, Wenjing, Xu, Feng, Liu, Qiang, and Li, He
- Subjects
- ROBOT motion, MOTION capture (Human mechanics), MATRIX inversion, CAMERA calibration, REMOTE control
- Abstract
By acquiring the posture information of a marked rigid body, the camera motion on the robot end-effector can be controlled remotely. The essential step of posture alignment in teleoperation is to calibrate the transformation matrix between the world coordinate system of the optical motion capture system and the robot base coordinate system. The position and attitude of the camera robot in the studio had to be changed frequently; at the same time, the studio was complex, with high personnel mobility, so the world coordinate system of the optical tracking system had to be re-established frequently. These two factors required the pose calibration scheme of the camera robot in the optical tracking system to be convenient and universal. In this paper, a novel posture alignment method (PAM) is proposed: an automatic, accurate and quick method to complete the posture alignment for the related coordinate systems without non-systematic errors. PAM was applied to camera robot teleoperation via the inverse movement matrix. Comparative experiments show that the proposed method outperforms the manual method, with higher precision and stability, a more flexible physical setting for the reference coordinate system, and shorter operation time. Meanwhile, PAM is more suitable for camera robot teleoperation than the existing automatic method because it does not require attention to the calibration postures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
33. Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation.
- Author
- Jia, Ruixing, Yang, Lei, Cao, Ying, Kalun Or, Calvin, Wang, Wenping, and Pan, Jia
- Subjects
- ARTIFICIAL neural networks, ROBOT motion, REMOTE control, SOCIAL interaction, PREDICTION models
- Abstract
Teleoperation systems find many applications from earlier search-and-rescue to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators or may exhaust a single operator as s/he needs to control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. This model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrated the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
34. OmniCharger: CNN-Based Hand Gesture Interface to Operate an Electric Car Charging Robot through Teleconference.
- Author
- Cabrera, Miguel Altamirano, Rakhmatulin, Viktor, Fedoseev, Aleksey, Sautenkov, Oleg, Alyounes, Oussama, Puchkov, Andrei, and Tsetserukou, Dzmitry
- Subjects
- INDUSTRIAL robots, COMPUTER vision, ELECTRIC charge, ELECTRIC automobiles, WEATHER
- Abstract
The automation of the car charging process is motivated by the rapid development of technologies for self-driving cars and the increasing importance of ecological transportation units. Automation of this process requires the implementation of Computer Vision (CV) techniques. However, it remains challenging to precisely position the charger plug autonomously due to the sensitivity of CV algorithms to lighting and weather conditions. We introduce a novel robotic operation system based on hand gesture recognition through teleconferencing software. The users, connected by teleconference, use their hand gestures to teleoperate the electric plug located on the collaborative robot end-effector. We conducted a user study to evaluate the system performance and suitability using OmniCharger and two baseline interfaces (a UR10 Teach Pendant and a Logitech F710 Wireless Gamepad). Except for two trials, all the users were able to locate the plug inside of a 5 cm target using the interfaces. The distance to the target and the orientation error did not present statistically significant differences (p = 0.1099 > 0.05 and p = 0.0903 > 0.05, respectively) in the use of the three interfaces. The NASA-TLX questionnaire results showed low values in all the sub-classes, the SUS results rated the usability of the proposed interface above average (68%), and the UEQ showed excellent performance of the OmniCharger interface in the attractiveness, stimulation, and novelty attributes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
35. Assistance in Teleoperation of Redundant Robots through Predictive Joint Maneuvering.
- Author
- Brooks, Connor, Rees, Wyatt, and Szafir, Daniel
- Subjects
- MOTION control devices, REMOTE control, PREDICTION models, ROBOTICS, ROBOTS
- Abstract
In teleoperation of redundant robotic manipulators, translating an operator's end effector motion command to joint space can be a tool for maintaining feasible and precise robot motion. Through optimizing redundancy resolution, the control system can ensure the end effector maintains maneuverability by avoiding joint limits and kinematic singularities. In autonomous motion planning, this optimization can be done over an entire trajectory to improve performance over local optimization. However, teleoperation involves a human-in-the-loop who determines the trajectory to be executed through a dynamic sequence of motion commands. We present two systems, Predictive Kinematic Control Tree and Predictive Kinematic Control Search, for utilizing a predictive model of operator commands to accomplish this redundancy resolution in a manner that considers future expected motion during teleoperation. Using a probabilistic model of operator commands allows optimization over an expected trajectory of future motion rather than consideration of local motion alone. Evaluation through a user study demonstrates improved control outcomes from this predictive redundancy resolution over minimum joint velocity solutions and inverse kinematics-based motion controllers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
36. Experimental Assessment of Human–Robot Teaming for Multi-step Remote Manipulation with Expert Operators.
- Author
- Pérez-D'Arpino, Claudia, Khurshid, Rebecca P., and Shah, Julie A.
- Subjects
- SPACE robotics, REMOTE control, SURGICAL robots, SITUATIONAL awareness, EMERGENCY management, ROBOT hands
- Abstract
Remote robot manipulation with human control enables applications in which safety and environmental constraints are adverse to humans (e.g., underwater, space robotics and disaster response) or the complexity of the task demands human-level cognition and dexterity (e.g., robotic surgery and manufacturing). These systems typically use direct teleoperation at the motion level and are usually limited to low-DOF arms and two-dimensional (2D) perception. Improving dexterity and situational awareness demands new interaction and planning workflows. We explore the use of human–robot teaming through teleautonomy with assisted planning for remote control of a dual-arm dexterous robot for multi-step manipulation, and conduct a within-subjects experimental assessment (n = 12 expert users) to compare it with direct teleoperation with an imitation controller with 2D and three-dimensional (3D) perception, as well as teleoperation through a teleautonomy interface. The proposed assisted planning approach achieves task times comparable with direct teleoperation while improving other objective and subjective metrics, including re-grasps, collisions, and TLX workload. Assisted planning in the teleautonomy interface achieves faster task execution and removes a significant interaction with the operator's expertise level, resulting in a performance equalizer across users. Our study protocol, metrics, and models for statistical analysis might also serve as a general benchmarking framework in teleoperation domains. Accompanying video and reference R code: https://people.csail.mit.edu/cdarpino/THRIteleop/ [ABSTRACT FROM AUTHOR]
- Published
- 2024
37. Virtual Teleoperation System for Mobile Manipulator Robots Focused on Object Transport and Manipulation.
- Author
- Pantusin, Fernando J., Carvajal, Christian P., Ortiz, Jessica S., and Andaluz, Víctor H.
- Subjects
- ROBOT motion, MOBILE robots, DEGREES of freedom, VIRTUAL reality, OBJECT manipulation
- Abstract
This work describes the development of a tool for the teleoperation of robots. The tool is developed in a virtual environment using the Unity graphics engine. For the development of the application, a kinematic model and a dynamic model of a mobile manipulator are used. The mobile manipulator robot consists of an omnidirectional platform and an anthropomorphic robotic arm with 4 degrees of freedom (4DOF). The model is essential to emulate the movements of the robot and to facilitate the immersion in the virtual environment. In addition, the control algorithms are established and developed in MATLAB 2020 software, which improves the acquisition of knowledge to teleoperate robots and execute tasks of manipulation and transport of objects. This methodology offers a cheaper and safer alternative to real physical systems, as it reduces both the costs and risks associated with using a real robot for training. [ABSTRACT FROM AUTHOR]
- Published
- 2024
38. A novel affordable user interface for robotic surgery training: design, development and usability study.
- Author
- Neri, Alberto, Coduri, Mara, Penza, Veronica, Santangelo, Andrea, Oliveri, Alessandra, Turco, Enrico, Pizzirani, Mattia, Trinceri, Elisa, Soriero, Domenico, Boero, Federico, Ricci, Serena, and Mattos, Leonardo S.
- Subjects
- SURGICAL robots, COMPUTER simulation, COMPUTER-aided design, TASK performance, RESEARCH funding, QUESTIONNAIRES, DESCRIPTIVE statistics, VIRTUAL reality, SURVEYS, ROBOTICS, USER-centered system design, USER interfaces
- Abstract
Introduction: The use of robotic systems in the surgical domain has become groundbreaking for patients and surgeons in the last decades. While the annual number of robotic surgical procedures continues to increase rapidly, it is essential to provide the surgeon with innovative training courses along with the standard specialization path. To this end, simulators play a fundamental role. Currently, the high cost of the leading VR simulators limits their accessibility to educational institutions. The challenge lies in balancing high-fidelity simulation with cost-effectiveness; however, few cost-effective options exist for robotic surgery training. Methods: This paper proposes the design, development and user-centered usability study of an affordable user interface to control a surgical robot simulator. It consists of a cart equipped with two haptic interfaces, a VR visor and two pedals. The simulations were created using Unity, which offers versatility for expanding the simulator to more complex scenes. An intuitive teleoperation control of the simulated robotic instruments is achieved through a high-level control strategy. Results and Discussion: Its affordability and resemblance to real surgeon consoles make it ideal for implementing robotic surgery training programs in medical schools, enhancing accessibility to a broader audience. This is demonstrated by the results of a usability study involving expert surgeons who use surgical robots regularly, expert surgeons without robotic surgery experience, and a control group. The results of the study, which was based on a traditional Peg-board exercise and Camera Control task, demonstrate the simulator's high usability and intuitive control across diverse user groups, including those with limited experience. This offers evidence that this affordable system is a promising solution for expanding robotic surgery training. [ABSTRACT FROM AUTHOR]
- Published
- 2024
39. Feasibility and performance enhancement of collaborative control of unmanned ground vehicles via virtual reality.
- Author
- Li, Ziming, Luo, Yiming, Wang, Jialin, Pan, Yushan, Yu, Lingyun, and Liang, Hai-Ning
- Subjects
- COGNITIVE load, REMOTE control, VIRTUAL reality, WORKFLOW, RESEARCH personnel
- Abstract
To support people working in dangerous industries, virtual reality (VR) can ensure operators manipulate standardized tasks and work collaboratively to deal with potential risks. Surprisingly, limited research has paid attention to the cognitive load of operators in their collaborative tasks, especially via VR interfaces. Once task demands become complex, many researchers focus on optimizing the design of the interaction interfaces to reduce the cognitive load on the operator. In this paper, we propose a new collaborative VR system with edge enhancement to support two teleoperators working in the VR environment to remotely control an uncrewed ground vehicle. We use a comparative experiment to evaluate the collaborative VR systems, focusing on the time spent on tasks and the total number of operations. Our results show that the total number of processes and the cognitive load during operations were significantly lower in the two-person group than in the single-person group. Our study sheds light on designing VR systems to support collaborative work with respect to the flow of work of teleoperators instead of simply optimizing the design outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
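Entry 39 mentions edge enhancement of the camera view presented to the two teleoperators. A minimal sketch of one common way to achieve this, blending OpenCV Canny edges back onto the frame, is given below; the thresholds and blending weight are assumptions, and the authors' actual enhancement may differ.

    import cv2

    def edge_enhanced(frame_bgr, edge_weight=0.6):
        """Overlay detected edges on a camera frame before display in VR.
        Illustrative only; thresholds and blending weight are assumptions."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)                     # binary edge map
        edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)  # back to 3 channels
        return cv2.addWeighted(frame_bgr, 1.0, edges_bgr, edge_weight, 0)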
40. Hand Teleoperation with Combined Kinaesthetic and Tactile Feedback: A Full Upper Limb Exoskeleton Interface Enhanced by Tactile Linear Actuators.
- Author
-
Leonardis, Daniele, Gabardi, Massimiliano, Marcheschi, Simone, Barsotti, Michele, Porcini, Francesco, Chiaradia, Domenico, and Frisoli, Antonio
- Subjects
HAPTIC devices ,ROBOTIC exoskeletons ,REMOTE control ,TELEROBOTICS ,ACTUATORS - Abstract
Manipulation involves both fine tactile feedback, with dynamic transients perceived by fingerpad mechanoreceptors, and kinaesthetic force feedback, involving the whole hand's musculoskeletal structure. In teleoperation experiments, these fundamental aspects are usually divided between different setups on the operator side: those using lightweight gloves and optical tracking systems, oriented toward tactile-only feedback, and those implementing exoskeletons or grounded manipulators as haptic devices delivering kinaesthetic force feedback. At the level of hand interfaces, exoskeletons providing kinaesthetic force feedback face a trade-off between maximum rendered forces and the bandwidth of the embedded actuators, making these systems unable to properly render tactile feedback. To overcome these limitations, we investigate a full upper limb exoskeleton, covering all the upper limb segments from shoulder to finger phalanges, coupled with linear voice-coil actuators at the fingertips. These are developed to render wide-bandwidth tactile feedback together with the kinaesthetic force feedback provided by the hand exoskeleton. We investigate the system in a pick-and-place teleoperation task under two feedback conditions (visual-only and visuo-haptic). Performance, based on measured interaction forces and the number of correct trials, is evaluated and compared. The study demonstrates the overall feasibility and effectiveness of a complex full upper limb exoskeleton (seven limb-actuated DoFs plus five hand DoFs) capable of combined kinaesthetic and tactile haptic feedback. Quantitative results show significant performance improvements when haptic feedback is provided, in particular for the mean and peak exerted forces and for the rate of correct trials in the pick-and-place task. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
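Entry 40 combines low-frequency kinaesthetic force from the exoskeleton with wide-bandwidth tactile transients from fingertip voice-coil actuators. A minimal sketch of the general idea of splitting a sensed contact force into those two channels with a first-order filter follows; the cutoff frequency, gain, and class interface are assumptions, not the authors' implementation.

    import math

    class KinaestheticTactileSplit:
        """Split a sensed fingertip force into a low-frequency component for the
        exoskeleton and a high-frequency residual for a fingertip voice coil.
        First-order filter sketch; cutoff and gain are illustrative assumptions."""

        def __init__(self, dt, cutoff_hz=20.0, tactile_gain=1.0):
            rc = 1.0 / (2.0 * math.pi * cutoff_hz)
            self.alpha = dt / (rc + dt)        # coefficient of the low-pass filter
            self.low = 0.0                     # low-frequency (kinaesthetic) estimate
            self.tactile_gain = tactile_gain

        def update(self, force):
            """Feed one force sample [N]; returns (kinaesthetic, tactile) commands."""
            self.low += self.alpha * (force - self.low)   # slow component -> exoskeleton
            high = force - self.low                       # fast transient -> voice coil
            return self.low, self.tactile_gain * high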
41. WearMoCap: multimodal pose tracking for ubiquitous robot control using a smartwatch
- Author
-
Fabian C. Weigend, Neelesh Kumar, Oya Aran, and Heni Ben Amor
- Subjects
motion capture ,human-robot interaction ,teleoperation ,smartwatch ,wearables ,drone control ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
We present WearMoCap, an open-source library for tracking human pose from smartwatch sensor data and leveraging pose predictions for ubiquitous robot control. WearMoCap operates in three modes: 1) a Watch Only mode, which uses a smartwatch alone; 2) a novel Upper Arm mode, which uses a smartphone strapped to the upper arm; and 3) a Pocket mode, which determines body orientation from a smartphone carried in any pocket. We evaluate all modes on large-scale datasets consisting of recordings from up to 8 human subjects using a range of consumer-grade devices. Further, we discuss real-robot applications of the underlying works and evaluate WearMoCap in handover and teleoperation tasks, achieving performance within 2 cm of the accuracy of a gold-standard motion capture system. The Upper Arm mode provides the most accurate wrist position estimates, with a root-mean-square prediction error of 6.79 cm. To support evaluation of WearMoCap in further scenarios and investigation of strategies to mitigate sensor drift, we publish the WearMoCap system as open source with thorough documentation. The system is designed to foster future research in smartwatch-based motion capture for robotics applications where ubiquity matters. www.github.com/wearable-motion-capture.
- Published
- 2025
- Full Text
- View/download PDF
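The wrist-position accuracy reported in entry 41 (a root-mean-square error of 6.79 cm against a gold-standard motion capture system) can be illustrated with a generic error computation. The sketch below assumes per-sample Euclidean error between predicted and reference 3-D positions; the library's exact evaluation protocol may differ.

    import numpy as np

    def rmse_cm(predicted_m, ground_truth_m):
        """Root-mean-square Euclidean error between predicted and reference wrist
        positions (N x 3 arrays in metres), reported in centimetres."""
        diff = np.asarray(predicted_m, dtype=float) - np.asarray(ground_truth_m, dtype=float)
        per_sample_err = np.linalg.norm(diff, axis=1)
        return 100.0 * float(np.sqrt(np.mean(per_sample_err ** 2)))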
42. Augmenting visual feedback with visualized interaction forces in haptic-assisted virtual-reality teleoperation
- Author
-
Alex van den Berg, Jelle Hofland, Cock J. M. Heemskerk, David A. Abbink, and Luka Peternel
- Subjects
teleoperation ,visual cues ,virtual reality ,head-mounted display ,force feedback ,virtual fixtures ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
In recent years, providing additional visual feedback about the interaction forces has been found to offer benefits to haptic-assisted teleoperation. However, there is limited insight into the effects of the design of force feedback-related visual cues and the type of visual display on the performance of teleoperation of robotic arms executing industrial tasks. In this study, we provide new insights into this interaction by extending these findings to the haptic assistance teleoperation of a simulated robotic arm in a virtual environment, in which the haptic assistance is comprised of a set of virtual fixtures. We design a novel method for providing visual cues about the interaction forces to complement the haptic assistance and augment visual feedback in virtual reality with a head-mounted display. We evaluate the visual cues method and head-mounted display method through human factors experiments in a teleoperated dross removal use case. The results show that both methods are beneficial for task performance, each of them having stronger points in different aspects of the operation. The visual cues method was found to significantly improve safety in terms of peak collision force, whereas the head-mounted display additionally improves the performance significantly. Furthermore, positive scores of the subjective analysis indicate an increased user acceptance of both methods. This work provides a new study on the importance of visual feedback related to (interaction) forces and spatial information for haptic assistance and provides two methods to take advantage of its potential benefits in the teleoperation of robotic arms.
- Published
- 2024
- Full Text
- View/download PDF
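Entry 42 visualizes interaction forces as additional visual cues in VR. As a hypothetical illustration of one simple mapping from a sensed force vector to such a cue, the sketch below scales arrow length with force magnitude and shifts colour toward red near a force limit; the scaling constants and the arrow/colour encoding are assumptions, not the authors' method.

    import numpy as np

    def force_arrow(force_n, newtons_per_metre=50.0, force_limit_n=30.0):
        """Map a sensed end-effector force vector [N] to a visual cue: an arrow
        whose length scales with magnitude and whose colour shifts green -> red
        as the force approaches a limit. Constants are illustrative assumptions."""
        f = np.asarray(force_n, dtype=float)
        mag = float(np.linalg.norm(f))
        direction = f / mag if mag > 1e-6 else np.zeros(3)
        length_m = mag / newtons_per_metre          # arrow length in the VR scene
        severity = min(mag / force_limit_n, 1.0)    # 0 = no force, 1 = at the limit
        colour_rgb = (severity, 1.0 - severity, 0.0)
        return direction, length_m, colour_rgb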
43. Advancing teleoperation for legged manipulation with wearable motion capture
- Author
-
Chengxu Zhou, Yuhui Wan, Christopher Peers, Andromachi Maria Delfaki, and Dimitrios Kanoulas
- Subjects
teleoperation ,legged robots ,mobile manipulation ,whole-body control ,human-robot interaction ,telexistence ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The value of human life mandates replacing people with robotic systems in the execution of hazardous tasks. Explosive Ordnance Disposal (EOD), a field fraught with mortal danger, stands at the forefront of this transition. In this study, we explore the potential of robotic telepresence as a safeguard for human operatives, drawing on the robust capabilities demonstrated by legged manipulators in diverse operational contexts. The difficulty of achieving full autonomy in such precarious domains underscores the advantages of teleoperation, which blends human intuition with robotic execution. We introduce a cost-effective telepresence and teleoperation system employing a legged manipulator that combines a quadruped robot, an integrated manipulator arm, and RGB-D sensing. Our approach tackles the challenge of whole-body control for a quadrupedal manipulator. The core of the system is an IMU-based motion capture suit enabling intuitive teleoperation, augmented by immersive visual telepresence via a VR headset. We have empirically validated the integrated system in real-world applications, focusing on loco-manipulation tasks that require comprehensive robot control and enhanced visual telepresence for EOD operations.
- Published
- 2024
- Full Text
- View/download PDF
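Entry 43 teleoperates a quadrupedal manipulator through an IMU-based motion capture suit. One simple way such suit data can be turned into base motion commands is sketched below, mapping torso lean to forward and lateral velocity with a deadzone; this is an assumed illustration only, not the authors' whole-body controller.

    def lean_to_base_velocity(pitch_deg, roll_deg, deadzone_deg=5.0, gain=0.02):
        """Map operator torso lean from an IMU suit to a quadruped base velocity
        command (vx forward, vy lateral, in m/s). Deadzone and gain are assumed."""
        def shaped(angle_deg):
            if abs(angle_deg) < deadzone_deg:
                return 0.0
            sign = 1.0 if angle_deg > 0 else -1.0
            return gain * (angle_deg - sign * deadzone_deg)
        vx = shaped(pitch_deg)    # lean forward/backward -> forward/backward motion
        vy = shaped(-roll_deg)    # lean sideways -> lateral motion
        return vx, vy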
44. Teleoperation in robot-assisted MIS with adaptive RCM via admittance control.
- Author
-
Nasiri, Ehsan, Sowrirajan, Srikarran, and Wang, Long
- Published
- 2024
- Full Text
- View/download PDF
45. JVC-02 Teleoperated Robot: Design, Implementation, and Validation for Assistance in Real Explosive Ordnance Disposal Missions.
- Author
-
Canaza Ccari, Luis F., Adrian Ali, Ronald, Valdeiglesias Flores, Erick, Medina Chilo, Nicolás O., Sulla Espinoza, Erasmo, Silva Vidal, Yuri, and Pari, Lizardo
- Subjects
EXPLOSIVE ordnance disposal ,QUALITY function deployment ,ROBOT design & construction ,ROBOT control systems ,POLICE - Abstract
Explosive ordnance disposal (EOD) operations are hazardous due to the volatile and sensitive nature of the devices involved. EOD robots have improved such operations, but their high cost limits accessibility for security institutions with limited funds. This article presents the design, implementation, and validation of a low-cost EOD robot named JVC-02, designed for use in hazardous explosive environments to protect police officers of the Explosives Disposal Unit (UDEX) of Arequipa, Peru. To achieve this goal, the essential requirements for this type of robot were compiled, referencing the capabilities of Rescue Robots from RoboCup. Additionally, the Quality Function Deployment (QFD) methodology was used to identify the needs and requirements of UDEX police officers. Based on this information, a modular robot design was developed, using commercial off-the-shelf components to facilitate maintenance and repair. The JVC-02 integrates a 5-DoF manipulator and a two-finger mechanical gripper for dexterity tasks, a tracked locomotion mechanism for effective movement, and a three-camera vision system to facilitate exploration tasks. Finally, field tests were conducted in real scenarios to evaluate and experimentally validate the capabilities of the JVC-02 robot, assessing its mobility, dexterity, and exploration skills. Additionally, real EOD missions were carried out in which UDEX agents intervened and controlled the robot. The results demonstrate that the JVC-02 robot possesses strong capabilities for real EOD applications, excelling in intuitive operation, low cost, and ease of maintenance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
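Entry 45 drives a tracked locomotion mechanism from an operator station. A minimal sketch of the usual skid-steer mixing from joystick throttle and turn commands to left and right track setpoints follows; the PWM scale and normalisation are assumptions about a generic tracked base, not the JVC-02 firmware.

    def skid_steer_mix(throttle, turn, max_pwm=255):
        """Mix joystick throttle and turn commands (each in [-1, 1]) into left and
        right track PWM setpoints for a tracked chassis. Illustrative sketch only."""
        left = throttle + turn
        right = throttle - turn
        # Normalise so neither track exceeds full scale.
        peak = max(1.0, abs(left), abs(right))
        return int(max_pwm * left / peak), int(max_pwm * right / peak)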
46. Investigating intervention road scenarios for teleoperation of autonomous vehicles.
- Author
-
Tener, Felix and Lanir, Joel
- Subjects
REMOTE control ,AUTONOMOUS vehicles ,USER interfaces ,THEMATIC analysis ,SEMI-structured interviews ,HUMAN-computer interaction - Abstract
Autonomous vehicles (AVs) are quickly advancing and show promise as a transformative mode of transportation. However, the prevailing consensus suggests that AVs will not be capable of addressing every traffic situation, necessitating remote human intervention in certain edge-case scenarios. To facilitate the development of future teleoperation solutions, it is imperative to establish a clear and comprehensive set of intervention scenarios. To achieve this, we undertook a thorough investigation based on in-depth semi-structured interviews with 14 experts specializing in AV teleoperation. Using thematic analysis to organize and classify the collected data, our study offers a comprehensive compilation of use cases that may require remote human assistance. These findings provide solid groundwork for the design and implementation of future teleoperation user interfaces, which may play a crucial role in enabling effective and efficient collaboration between humans and AVs when remote intervention becomes necessary. Ultimately, our research contributes to the advancement of AV technology by identifying critical areas where human involvement can augment the capabilities of autonomous systems, supporting safer and more reliable transportation solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Network Latency in Teleoperation of Connected and Autonomous Vehicles: A Review of Trends, Challenges, and Mitigation Strategies.
- Author
-
Kamtam, Sidharth Bhanu, Lu, Qian, Bouali, Faouzi, Haas, Olivier C. L., and Birrell, Stewart
- Subjects
- *
REMOTE control , *RESEARCH personnel , *DATA transmission systems , *AUTONOMOUS vehicles , *PERCEIVED control (Psychology) - Abstract
With remarkable advancements in the development of connected and autonomous vehicles (CAVs), the integration of teleoperation has become crucial for improving safety and operational efficiency. However, teleoperation faces substantial challenges, with network latency being a critical factor influencing its performance. This survey paper explores the impact of network latency along with state-of-the-art mitigation/compensation approaches. It examines cascading effects on teleoperation communication links (i.e., uplink and downlink) and how delays in data transmission affect the real-time perception and decision-making of operators. By elucidating the challenges and available mitigation strategies, the paper offers valuable insights for researchers, engineers, and practitioners working towards the seamless integration of teleoperation in the evolving landscape of CAVs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
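Among the latency mitigation strategies surveyed in entry 47, a common building block is forward-predicting the delayed vehicle state so the operator's display approximates the present moment. The sketch below applies a constant-speed, constant-yaw-rate dead-reckoning model over the measured downlink delay; the model choice and interface are assumptions for illustration only.

    import math

    def predict_pose(x, y, heading_rad, speed_mps, yaw_rate_rps, latency_s):
        """Dead-reckon a remote vehicle's pose forward by the measured downlink
        latency, so the operator sees an estimate of 'now' rather than a delayed
        state. Constant speed and yaw rate assumed; a simple illustrative model."""
        if abs(yaw_rate_rps) < 1e-6:
            # Straight-line motion.
            x_pred = x + speed_mps * latency_s * math.cos(heading_rad)
            y_pred = y + speed_mps * latency_s * math.sin(heading_rad)
            heading_pred = heading_rad
        else:
            # Constant-turn-rate arc.
            heading_pred = heading_rad + yaw_rate_rps * latency_s
            radius = speed_mps / yaw_rate_rps
            x_pred = x + radius * (math.sin(heading_pred) - math.sin(heading_rad))
            y_pred = y + radius * (math.cos(heading_rad) - math.cos(heading_pred))
        return x_pred, y_pred, heading_pred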
48. Adaptive Impedance Control for Teleoperation with Event-Triggered Controller.
- Author
-
Zhao, Jiangbo, Duan, Bowen, and Wang, Junzheng
- Abstract
Uncertainty in the master–slave model is one of the primary factors affecting the transparency of teleoperation systems, and congestion in the master–slave communication network also strongly influences performance. This paper proposes a combined adaptive and impedance control framework to address the model uncertainty and achieve smooth operation at the slave end. Building upon this linear model, an event-triggered mechanism is designed using Lyapunov functions, with the triggering threshold parameters adjusted dynamically online. Control objectives are then established to validate the performance of the proposed teleoperation control system. Finally, simulation verification is conducted in the MATLAB/Simulink environment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
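Entry 48 reduces master–slave network traffic with an event-triggered mechanism whose threshold is adjusted online. The sketch below conveys only the general idea, transmitting the master state when it deviates sufficiently from the last transmitted value and tightening the threshold as tracking error grows; it is not the paper's Lyapunov-derived triggering condition, and all parameters are assumptions.

    import numpy as np

    class EventTrigger:
        """Transmit the master state only when it deviates enough from the last
        transmitted one. The threshold shrinks as tracking error grows, a
        simplified stand-in for a dynamically adjusted triggering condition."""

        def __init__(self, base_threshold=0.05, min_threshold=0.005, decay=2.0):
            self.base = base_threshold
            self.min = min_threshold
            self.decay = decay
            self.last_sent = None

        def should_send(self, state, tracking_error):
            state = np.asarray(state, dtype=float)
            if self.last_sent is None:
                self.last_sent = state
                return True
            # Tighter threshold when the master-slave tracking error is large.
            threshold = max(self.min, self.base / (1.0 + self.decay * abs(tracking_error)))
            if np.linalg.norm(state - self.last_sent) > threshold:
                self.last_sent = state
                return True
            return False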
49. A virtual reality-based dual-mode robot teleoperation architecture.
- Author
-
Gallipoli, Marco, Buonocore, Sara, Selvaggio, Mario, Fontanelli, Giuseppe Andrea, Grazioso, Stanislao, and Di Gironimo, Giuseppe
- Subjects
- *
HUMAN-machine systems , *FINITE state machines , *DIGITAL twins , *ROBOT motion , *VIRTUAL reality - Abstract
This paper proposes a virtual reality-based dual-mode teleoperation architecture to assist human operators in remotely operating robotic manipulation systems in a safe and flexible way. The architecture, implemented via a finite state machine, enables the operator to switch between two operational modes: the Approach mode, where the operator indirectly controls the robotic system by specifying its target configuration via an immersive virtual reality (VR) interface, and the Telemanip mode, where the operator directly controls the robot end-effector motion via input devices. The two independent control modes were tested on the task of reaching a glass on a table by a sample of 18 participants. Two groups were considered to distinguish users with previous experience of VR technologies from novices. The results of the user study show the potential of the proposed architecture in terms of usability, physical and mental workload, and user satisfaction. Finally, a statistical analysis showed no significant differences in these three metrics between the two groups, demonstrating ease of use of the proposed architecture by people both with and without previous experience in VR. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
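The dual-mode architecture in entry 49 is implemented via a finite state machine that switches between the Approach and Telemanip modes. A toy Python state machine capturing that structure is sketched below; the transition signals (target_reached, operator_requests_direct_control, abort) are assumptions, not the paper's actual conditions.

    from enum import Enum, auto

    class Mode(Enum):
        APPROACH = auto()    # operator specifies a target configuration in VR
        TELEMANIP = auto()   # operator directly drives the end-effector

    class DualModeTeleop:
        """Toy finite state machine switching between the two operational modes.
        Transition conditions are illustrative assumptions."""

        def __init__(self):
            self.mode = Mode.APPROACH

        def step(self, target_reached, operator_requests_direct_control, abort):
            if abort:
                self.mode = Mode.APPROACH
            elif (self.mode is Mode.APPROACH and target_reached
                  and operator_requests_direct_control):
                self.mode = Mode.TELEMANIP
            elif self.mode is Mode.TELEMANIP and not operator_requests_direct_control:
                self.mode = Mode.APPROACH
            return self.mode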
50. AiroTouch: enhancing telerobotic assembly through naturalistic haptic feedback of tool vibrations.
- Author
-
Gong, Yijie, Husin, Haliza Mat, Erol, Ecda, Ortenzi, Valerio, Kuchenbecker, Katherine J., and Bimbo, Joao
- Subjects
HAPTIC devices ,PSYCHOLOGICAL feedback ,AUDIO equipment ,REMOTE control ,USER experience ,ACTUATORS - Abstract
Teleoperation allows workers to safely control powerful construction machines; however, its primary reliance on visual feedback limits the operator's efficiency in situations with stiff contact or poor visibility, hindering its use for assembly of pre-fabricated building components. Reliable, economical, and easy-to-implement haptic feedback could fill this perception gap and facilitate the broader use of robots in construction and other application areas. Thus, we adapted widely available commercial audio equipment to create AiroTouch, a naturalistic haptic feedback system that measures the vibration experienced by each robot tool and enables the operator to feel a scaled version of this vibration in real time. Accurate haptic transmission was achieved by optimizing the positions of the system's off-the-shelf accelerometers and voice-coil actuators. A study was conducted to evaluate how adding this naturalistic type of vibrotactile feedback affects the operator during telerobotic assembly. Thirty participants used a bimanual dexterous teleoperation system (Intuitive da Vinci Si) to build a small rigid structure under three randomly ordered haptic feedback conditions: no vibrations, one-axis vibrations, and summed three-axis vibrations. The results show that users took advantage of both tested versions of the naturalistic haptic feedback after gaining some experience with the task, causing significantly lower vibrations and forces in the second trial. Subjective responses indicate that haptic feedback increased the realism of the interaction and reduced the perceived task duration, task difficulty, and fatigue. As hypothesized, higher haptic feedback gains were chosen by users with larger hands and for the smaller sensed vibrations in the one-axis condition. These results elucidate important details for effective implementation of naturalistic vibrotactile feedback and demonstrate that our accessible audio-based approach could enhance user performance and experience during telerobotic assembly in construction and other application domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
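AiroTouch (entry 50) measures tool vibrations with accelerometers and replays a scaled version through voice-coil actuators, comparing one-axis and summed three-axis conditions. The sketch below shows a generic scale-and-clip conversion from an accelerometer sample to a normalised actuator drive under both conditions; the gain, clipping limit, and axis choice are assumptions, not the system's tuned values.

    import numpy as np

    def vibration_drive(accel_xyz, gain=0.5, limit=1.0, mode="sum3"):
        """Convert a tool-mounted accelerometer sample (3-axis, in g) into a
        normalised voice-coil drive signal. 'one_axis' keeps a single axis and
        'sum3' sums all three, echoing the two feedback conditions above.
        Gain and clipping limit are illustrative assumptions."""
        a = np.asarray(accel_xyz, dtype=float)
        if mode == "one_axis":
            signal = a[2]                # e.g. the axis normal to the tool surface
        else:
            signal = float(np.sum(a))    # summed three-axis vibration
        return float(np.clip(gain * signal, -limit, limit))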