1,576 results for "Human Robot Interaction"
Search Results
2. A Child-Robot Interaction Experiment to Analyze Gender Stereotypes in the Perception of Mathematical Abilities
- Author
-
Croitoru, Madalina, Laviron, Pablo, Bando, Sio, Gilles, Eric, Miled, Amine, Anders, Royce, Blanc, Nathalie, Ganesh, Gowrishankar, Brigaud, Emmanuelle, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bramer, Max, editor, and Stahl, Frederic, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Experimental implementation of skeleton tracking for collision avoidance in collaborative robotics.
- Author
-
Forlini, Matteo, Neri, Federico, Ciccarelli, Marianna, Palmieri, Giacomo, and Callegari, Massimo
- Subjects
- *
HUMAN skeleton , *ROBOT vision , *COMPUTER vision , *HUMAN body , *SHARED workspaces , *ROBOT motion - Abstract
Collaborative robotic manipulators can stop in case of a collision, in accordance with the ISO/TS 15066 and ISO 10218-1 standards. However, in a human-robot collaboration scenario where the robot and the human share the workspace, a better solution for both productivity and operator safety would be to perceive the presence of an obstacle in advance and avoid it, thereby completing the task without halting the robot. In this paper, an obstacle avoidance algorithm is tested using a sensor system for real-time human detection; the operator represents a potential dynamic obstacle that can interfere with the robot's motion. The sensor system consists of three RGB-D cameras. A custom software framework has been developed in Python, exploiting machine learning tools for human skeleton detection. The coordinates of the human body joints relative to the manipulator base are used as input to the obstacle avoidance algorithm. The use of multiple sensors limits the occlusion problem; in addition, the choice of non-wearable sensors improves operator comfort. A series of experimental tests were performed to verify the accuracy of skeleton detection and the ability of the system to avoid obstacles in real time. The human motion capture system, in particular, was validated through a comparison with a commercial system based on wearable IMU sensors widely used and validated in motion capture. The RGB-D camera-based skeleton detection system showed an NRMSE of 3.23%, against an NRMSE of 12.23% for the IMU wearable sensor system. Test results confirm that the system is able to avoid collisions with the human body under various conditions, static or dynamic, ensuring a minimum safety distance to any part of the manipulator.
In 44 tests in which the operator moved around the robot with possible collisions (at a speed typical of manufacturing operations), the minimum operator-robot distance averaged 208 mm, with 200 mm being the safety distance limit set by the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
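As a rough illustration of the error metric and safety check reported in the abstract above, here is a minimal sketch. It assumes NRMSE means the root-mean-square error normalized by the reference signal's range (a common convention, not stated in the abstract); the 200 mm limit is the paper's reported threshold, and the trajectories here are hypothetical stand-ins:

```python
import math

def nrmse(reference, measured):
    """Root-mean-square error between two equal-length trajectories,
    normalized by the range of the reference signal (assumed convention)."""
    assert len(reference) == len(measured) and len(reference) > 1
    mse = sum((r - m) ** 2 for r, m in zip(reference, measured)) / len(reference)
    span = max(reference) - min(reference)
    return math.sqrt(mse) / span

def min_distance_ok(distances_mm, limit_mm=200.0):
    """Check that the operator-robot distance never drops below the
    safety limit enforced by the avoidance algorithm."""
    return min(distances_mm) >= limit_mm
```

With identical trajectories the NRMSE is zero; a constant 0.4-unit offset against a reference spanning 4 units yields 0.1, i.e. 10%.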
4. Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation.
- Author
-
Jia, Ruixing, Yang, Lei, Cao, Ying, Kalun Or, Calvin, Wang, Wenping, and Pan, Jia
- Subjects
ARTIFICIAL neural networks ,ROBOT motion ,REMOTE control ,SOCIAL interaction ,PREDICTION models - Abstract
Teleoperation systems find many applications, from earlier search-and-rescue missions to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators or may exhaust a single operator, who must control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. This model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrate the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
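The abstract above does not specify the contrastive objective, so as a hedged sketch only: an InfoNCE-style loss over viewpoint embeddings, treating one viewpoint from the camera trajectory as the positive and others as negatives (that pairing scheme is my assumption, not the paper's stated method), might look like:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: pull the anchor embedding toward
    the positive sample and push it away from the negatives."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -float(np.log(probs[0]))             # -log p(positive)
```

The loss is small when the anchor is closest to its positive and grows when a negative dominates.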
5. Biomimetic learning of hand gestures in a humanoid robot.
- Author
-
Olikkal, Parthan, Dingyi Pei, Karri, Bharat Kashyap, Satyanarayana, Ashwin, Kakoty, Nayan M., and Vinjamuri, Ramana
- Abstract
Hand gestures are a natural and intuitive form of communication, and integrating this communication method into robotic systems presents significant potential to improve human-robot collaboration. Recent advances in motor neuroscience have focused on replicating human hand movements from synergies, also known as movement primitives. Synergies, the fundamental building blocks of movement, serve as a potential strategy adopted by the central nervous system to generate and control movements. Identifying how synergies contribute to movement can help in the dexterous control of robots, exoskeletons, and prosthetics, and extends to applications in rehabilitation. In this paper, 33 static hand gestures were recorded through a single RGB camera and identified in real time through the MediaPipe framework as participants made various postures with their dominant hand. Assuming an open palm as the initial posture, uniform joint angular velocities were obtained from all these gestures. By applying a dimensionality reduction method, kinematic synergies were obtained from these joint angular velocities. Kinematic synergies that explain 98% of the variance of movements were utilized to reconstruct new hand gestures using convex optimization. Reconstructed hand gestures and selected kinematic synergies were translated onto a humanoid robot, Mitra, in real time, as the participants demonstrated various hand gestures. The results showed that by using only a few kinematic synergies it is possible to generate various hand gestures, with 95.7% accuracy. Furthermore, utilizing low-dimensional synergies in the control of high-dimensional end effectors holds promise to enable near-natural human-robot collaboration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
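The synergy pipeline in the abstract above (dimensionality reduction on joint angular velocities, keep components explaining 98% of variance, then reconstruct) can be sketched with ordinary PCA. The data here are random stand-ins, not the study's recordings, and the real work reconstructed gestures via convex optimization rather than the plain least-squares projection used below:

```python
import numpy as np

# Hypothetical data: rows are gestures, columns are joint angular
# velocities (the study used 33 gestures and many hand joints).
rng = np.random.default_rng(0)
velocities = rng.normal(size=(33, 10))

# PCA via eigendecomposition of the covariance matrix.
centered = velocities - velocities.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]             # descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the fewest components ("synergies") explaining >= 98% of variance.
explained = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(explained, 0.98)) + 1
synergies = eigvecs[:, :k]

# Reconstruct gestures as linear combinations of the synergies.
weights = centered @ synergies
reconstructed = weights @ synergies.T + velocities.mean(axis=0)
```

By construction, the residual energy of the reconstruction is at most 2% of the centered data's energy.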
6. Adaptive Interview Strategy Based on Interviewees’ Speaking Willingness Recognition for Interview Robots.
- Author
-
Nagasawa, Fuminori, Okada, Shogo, Ishihara, Takuya, and Nitta, Katsumi
- Abstract
Social signal recognition techniques based on nonverbal behavioral sensing allow conversational robots to understand the user’s social signals, thereby enabling them to adopt interaction strategies based on internal states inferred from those signals. This research investigates how online social signal recognition and an adaptive dialog strategy influence the dynamic change in a user’s inner state. For this purpose, we develop a semiautonomous interview robot system with an online speaker-willingness recognition module and an adaptive question selection module based on the willingness level. The online recognition model of speaker willingness is trained on multimodal nonverbal features extracted from a novel interview corpus, which allows appropriate interview questions to be chosen based on the estimated willingness level of the user. We conduct an experiment using the system to evaluate the effectiveness of adaptive question selection based on the willingness recognition model. First, the multimodal willingness recognition model is evaluated using the interview corpus. The best recognition accuracy of willingness level (high or low) was 72.8%, obtained with a random forest classifier. Second, 27 interviewees were interviewed with two interview robot systems: (I) with the adaptive question selection module based on willingness recognition and (II) with a random question selection strategy. The proposed adaptive question strategy significantly increased the number of utterances with high willingness compared with the baseline system (II); thus, adaptive question selection with online willingness recognition elicited the speaker’s willingness even though the model does not estimate willingness with near-perfect accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Redundant Hybrid Robots for Resilience in Future Smart Factories
- Author
-
Manzardo, Matteo, Yan, Yicheng, A. Rojas, Rafael, Shahidi, Amir, Vidoni, Renato, Hüsing, Mathias, Corves, Burkhard, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Concli, Franco, editor, Maccioni, Lorenzo, editor, Vidoni, Renato, editor, and Matt, Dominik T., editor
- Published
- 2024
- Full Text
- View/download PDF
8. Social Representation of Robots and Its Impact on Trust and Willingness to Cooperate
- Author
-
Pochwatko, Grzegorz, Możaryn, Jakub, Różańska-Walczuk, Monika, Giger, Jean-Christophe, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Biele, Cezary, editor, Kopeć, Wiesław, editor, Możaryn, Jakub, editor, Owsiński, Jan W., editor, Romanowski, Andrzej, editor, and Sikorski, Marcin, editor
- Published
- 2024
- Full Text
- View/download PDF
9. Deploying Humanoid Robots in a Social Environment
- Author
-
Kuvaja Adolfsson, Kristoffer, Tigerstedt, Christa, Biström, Dennis, Espinosa-Leal, Leonardo, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Auer, Michael E., editor, Langmann, Reinhard, editor, May, Dominik, editor, and Roos, Kim, editor
- Published
- 2024
- Full Text
- View/download PDF
10. Development of an Attentive Listening Robot Using Motion Prediction Based on Surrogate Data
- Author
-
Noguchi, Shohei, Nakamura, Yutaka, Okadome, Yuya, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Stephanidis, Constantine, editor, Antona, Margherita, editor, Ntoa, Stavroula, editor, and Salvendy, Gavriel, editor
- Published
- 2024
- Full Text
- View/download PDF
11. Embodying Intelligence: Humanoid Robot Advancements and Future Prospects
- Author
-
Valenzuela, Kirsten Lynx, Roxas, Samantha Isabel, Wong, Yung-Hao, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, van Leeuwen, Jan, Series Editor, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Kobsa, Alfred, Series Editor, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Nierstrasz, Oscar, Series Editor, Pandu Rangan, C., Editorial Board Member, Sudan, Madhu, Series Editor, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Weikum, Gerhard, Series Editor, Vardi, Moshe Y, Series Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Degen, Helmut, editor, and Ntoa, Stavroula, editor
- Published
- 2024
- Full Text
- View/download PDF
12. Explicit vs. Implicit - Communicating the Navigational Intent of Industrial Autonomous Mobile Robots
- Author
-
Niessen, Nicolas, Micheli, Gioele, Bengler, Klaus, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Stephanidis, Constantine, editor, Antona, Margherita, editor, Ntoa, Stavroula, editor, and Salvendy, Gavriel, editor
- Published
- 2024
- Full Text
- View/download PDF
13. Robotic Music Therapy Assistant: A Cognitive Game Playing Robot
- Author
-
Hussain, Jwaad, Mangiacotti, Anthony, Franco, Fabia, Chinellato, Eris, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ali, Abdulaziz Al, editor, Cabibihan, John-John, editor, Meskin, Nader, editor, Rossi, Silvia, editor, Jiang, Wanyue, editor, He, Hongsheng, editor, and Ge, Shuzhi Sam, editor
- Published
- 2024
- Full Text
- View/download PDF
14. AI Planning from Natural-Language Instructions for Trustworthy Human-Robot Communication
- Author
-
Tran, Dang, Li, Hui, He, Hongsheng, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ali, Abdulaziz Al, editor, Cabibihan, John-John, editor, Meskin, Nader, editor, Rossi, Silvia, editor, Jiang, Wanyue, editor, He, Hongsheng, editor, and Ge, Shuzhi Sam, editor
- Published
- 2024
- Full Text
- View/download PDF
15. Feasibility Study on Parameter Adjustment for a Humanoid Using LLM Tailoring Physical Care
- Author
-
Miyake, Tamon, Wang, Yushi, Yang, Pin-chu, Sugano, Shigeki, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ali, Abdulaziz Al, editor, Cabibihan, John-John, editor, Meskin, Nader, editor, Rossi, Silvia, editor, Jiang, Wanyue, editor, He, Hongsheng, editor, and Ge, Shuzhi Sam, editor
- Published
- 2024
- Full Text
- View/download PDF
16. Can a Robot Collaborate with Alpana Artists? A Concept Design of an Alpana Painting Robot
- Author
-
Ahmed, Farhad, Tasnim, Zarin, Tasnim, Zerin, Shidujaman, Mohammad, Ahmed, Salah Uddin, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ali, Abdulaziz Al, editor, Cabibihan, John-John, editor, Meskin, Nader, editor, Rossi, Silvia, editor, Jiang, Wanyue, editor, He, Hongsheng, editor, and Ge, Shuzhi Sam, editor
- Published
- 2024
- Full Text
- View/download PDF
17. VR Driven Unsupervised Classification for Context Aware Human Robot Collaboration
- Author
-
Kamali Mohammadzadeh, Ali, Allen, Carlton Leroy, Masoud, Sara, Chaari, Fakher, Series Editor, Gherardini, Francesco, Series Editor, Ivanov, Vitalii, Series Editor, Haddar, Mohamed, Series Editor, Cavas-Martínez, Francisco, Editorial Board Member, di Mare, Francesca, Editorial Board Member, Kwon, Young W., Editorial Board Member, Trojanowska, Justyna, Editorial Board Member, Xu, Jinyang, Editorial Board Member, Silva, Francisco J. G., editor, Pereira, António B., editor, and Campilho, Raul D. S. G., editor
- Published
- 2024
- Full Text
- View/download PDF
18. Design and testing of (A)MICO: a multimodal feedback system to facilitate the interaction between cobot and human operator
- Author
-
Dei, Carla, Meregalli Falerni, Matteo, Cilsal, Turgut, Redaelli, Davide Felice, Lavit Nicora, Matteo, Chiappini, Mattia, Storm, Fabio Alexander, and Malosio, Matteo
- Published
- 2024
- Full Text
- View/download PDF
19. Do Emotionally Programmed Service Robots Add Value to Trade Fair Conversations? Results of an Experimental Study Using the Social Robot Furhat
- Author
-
Piazza, Alexander, Riedmüller, Florian, and Wild, Judith
- Published
- 2024
- Full Text
- View/download PDF
20. Immersive participatory design of assistive robots to support older adults.
- Author
-
Olatunji, Samuel A., Nguyen, Vy, Cakmak, Maya, Edsinger, Aaron, Kemp, Charles C., Rogers, Wendy A., and Mahajan, Harshal P.
- Subjects
TASK performance ,PATIENT safety ,RESEARCH funding ,ETHNOLOGY research ,MEDICAL care ,INTERVIEWING ,QUESTIONNAIRES ,INTERNET ,HOME environment ,DESCRIPTIVE statistics ,ASSISTIVE technology ,ROBOTICS ,BODY movement ,PATIENT satisfaction ,ARTIFICIAL feeding ,MEDICAL care costs ,ACTIVITIES of daily living ,USER interfaces ,OLD age - Abstract
Assistive robots have the potential to support independence, enhance safety, and lower healthcare costs for older adults, as well as alleviate the demands of their care partners. However, ensuring that these robots will effectively and reliably address end-user needs in the long term requires user-specific design factors to be considered during the robot development process. To identify these design factors, we embedded Stretch, a mobile manipulator created by Hello Robot Inc., in the home of an older adult with motor impairments and his care partner for four weeks to support them with everyday activities. An occupational therapist and a robotics engineer lived with them during this period, employing an immersive participatory design approach to co-design and customise the robot with them. We highlight the benefits of this immersive participatory design experience and provide insights into robot design that can be applied broadly to other assistive technologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. When a notification at the right time is not enough: the reminding process for socially assistive robots in institutional care.
- Author
-
Rehm, Matthias, Krummheuer, Antonia L., Diaz-Boladeras, Marta, and Sutherland, Craig
- Subjects
INSTITUTIONAL care ,ROBOTS ,BRAIN injuries ,MEMORY disorders - Abstract
Reminding is often identified as a central function of socially assistive robots in the healthcare sector. Robotic reminders are supposed to help people with memory impairments remember to take their medicine, to drink and eat, or to attend appointments. Such standalone reminding technologies can, however, be too demanding for people with memory injuries. In a co-creation process, we developed an individual reminder robot together with a person with traumatic brain injury and her care personnel. During this process, we learned that while current research describes reminding as a prototypical task for socially assistive robots, there is no clear definition of what constitutes a reminder; rather, reminding is based on complex sequences of interactions that evolve over time and space, across different actions, actors, and technologies. Based on our data from the co-creation process and the first deployment, we argue for a shift towards a sequential and socially distributed understanding of reminding. If socially assistive robots are to serve as rehabilitative tools for people with memory impairment, they need to be reconsidered as interconnected elements in institutional care practices instead of isolated events for the remindee. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Multi-robot cooperative behavior for reducing unnaturalness of starting a conversation.
- Author
-
Iio, Takamasa, Yoshikawa, Yuichiro, and Ishiguro, Hiroshi
- Subjects
- *
CONVERSATION , *ROBOTS - Abstract
In a human–robot conversation, it is difficult for the robot to start the conversation just when the addressee is ready to listen, due to recognition technology issues. This paper proposes and evaluates a method to reduce the sense that the timing of starting the conversation is bad. In this method, two robots perform a cooperative behavior during the waiting time between the call that attracts the addressee's attention and the main utterance the robot wants to deliver. To evaluate the effectiveness of this method, we conducted an experiment that compared three conversation initiation approaches: early timing and late timing by one robot, and the proposed approach involving two robots. The results revealed that the proposed method mitigates the bad timing of main utterances as perceived by the participants. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Biomimetic learning of hand gestures in a humanoid robot
- Author
-
Parthan Olikkal, Dingyi Pei, Bharat Kashyap Karri, Ashwin Satyanarayana, Nayan M. Kakoty, and Ramana Vinjamuri
- Subjects
MediaPipe ,hand kinematics ,kinematic synergies ,biomimetic robots ,human robot interaction ,bioinspired robots ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Hand gestures are a natural and intuitive form of communication, and integrating this communication method into robotic systems presents significant potential to improve human-robot collaboration. Recent advances in motor neuroscience have focused on replicating human hand movements from synergies, also known as movement primitives. Synergies, the fundamental building blocks of movement, serve as a potential strategy adopted by the central nervous system to generate and control movements. Identifying how synergies contribute to movement can help in the dexterous control of robots, exoskeletons, and prosthetics, and extends to applications in rehabilitation. In this paper, 33 static hand gestures were recorded through a single RGB camera and identified in real time through the MediaPipe framework as participants made various postures with their dominant hand. Assuming an open palm as the initial posture, uniform joint angular velocities were obtained from all these gestures. By applying a dimensionality reduction method, kinematic synergies were obtained from these joint angular velocities. Kinematic synergies that explain 98% of the variance of movements were utilized to reconstruct new hand gestures using convex optimization. Reconstructed hand gestures and selected kinematic synergies were translated onto a humanoid robot, Mitra, in real time, as the participants demonstrated various hand gestures. The results showed that by using only a few kinematic synergies it is possible to generate various hand gestures, with 95.7% accuracy. Furthermore, utilizing low-dimensional synergies in the control of high-dimensional end effectors holds promise to enable near-natural human-robot collaboration.
- Published
- 2024
- Full Text
- View/download PDF
24. "An Emotional Support Animal, Without the Animal": Design Guidelines for a Social Robot to Address Symptoms of Depression.
- Author
-
Collins, Sawyer, Baugus Henkel, Kenna, Henkel, Zachary, Bennett, Casey C., Stanojevic, Cedomir, Piatt, Jennifer A., Bethel, Cindy L., and Sabanović, Selma
- Subjects
SERVICE animals ,MENTAL depression ,ROBOT design & construction ,SOCIAL robots ,MOBILE robots - Abstract
Socially assistive robots can be used as therapeutic technologies to address depression symptoms. Through three sets of workshops with individuals living with depression and clinicians, we developed design guidelines for a personalized therapeutic robot for adults living with depression. Building on the design of Therabot, workshop participants discussed various aspects of the robot's design, sensors, behaviors, and a robot-connected mobile phone app. Similarities among participants and workshops included a preference for a soft textured exterior and natural colors and sounds. There were also differences: clinicians wanted the robot to be able to call for aid, while participants with depression differed in their degree of comfort in sharing data collected by the robot with clinicians. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. CARMEN: A Cognitively Assistive Robot for Personalized Neurorehabilitation at Home.
- Author
-
Bouzida, Anya, Kubota, Alyssa, Cruz-Sandoval, Dagoberto, Twamley, Elizabeth W., and Riek, Laurel D.
- Subjects
ROBOT design & construction ,MILD cognitive impairment ,COGNITIVE training ,COGNITIVE rehabilitation ,DIGITAL health - Abstract
Cognitively assistive robots (CARs) have great potential to extend the reach of clinical interventions to the home. Due to the wide variety of cognitive abilities and rehabilitation goals, these systems must be flexible to support rapid and accurate implementation of intervention content that is grounded in existing clinical practice. To this end, we detail the system architecture of CARMEN (Cognitively Assistive Robot for Motivation and Neurorehabilitation), a flexible robot system we developed in collaboration with our key stakeholders: clinicians and people with mild cognitive impairment (PwMCI). We implemented a well-validated compensatory cognitive training (CCT) intervention on CARMEN, which it autonomously delivers to PwMCI. We deployed CARMEN in the homes of these stakeholders to evaluate and gain initial feedback on the system. We found that CARMEN gave participants confidence to use cognitive strategies in their everyday life, and participants saw opportunities for CARMEN to exhibit greater levels of autonomy or be used for other applications. Furthermore, elements of CARMEN are open source to support flexible home-deployed robots. Thus, CARMEN will enable the HRI community to deploy quality interventions to robots, ultimately increasing their accessibility and extensibility. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Navigating Real-World Complexity: A Multi-Medium System for Heterogeneous Robot Teams and Multi-Stakeholder Human-Robot Interaction.
- Author
-
Schroepfer, Pete, Gründling, Jan P., Schauffel, Nathalie, Oehrl, Simon, Pape, Sebastian, Kuhlen, Torsten W., Weyers, Benjamin, Ellwart, Thomas, and Pradalier, Cédric
- Subjects
AUGMENTED reality ,HUMAN-robot interaction ,VIRTUAL design ,VIRTUAL reality ,SOCIAL interaction - Abstract
Real-world robot system deployment is often performed in complex and unstructured environments. These complex environments, coupled with multi-faceted global tasks, often lead to complicated stakeholder structures, making designing for these environments extremely challenging. Magnifying this difficulty, tasks performed in these environments often cannot be accomplished by a single robot, or even a single robot type, because of the broad range of needs and physical constraints of the robots. In these cases, heterogeneous robot teams may need to be coupled to human team members to perform the global tasks. From a Human-Robot Interaction (HRI) perspective, this increases the complexity of designing and deploying the system significantly, as complicated stakeholder structures are now mixed with complex robot teams. This paper presents a novel real-world system and interface design leveraging multiple mediums to balance stakeholder needs. To this end, the UI presented here incorporates features that support shared mental models (SMMs) and trust establishment and development, and utilizes a centralized data distribution architecture to improve team performance. In addition to the interface, this paper presents a detailed look at the design process and the lessons learned from the perspective of a multi-year, real-world deployed system, as part of a large European project consisting of 21 partners from varying countries and backgrounds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Joint-Aware Transformer: An Inter-Joint Correlation Encoding Transformer for Short-Term 3D Human Motion Prediction
- Author
-
Chang Liu, Satoshi Yagi, Satoshi Yamamori, and Jun Morimoto
- Subjects
Attention mechanism ,human body pose/motion ,human robot interaction ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
3D skeleton-based human motion prediction, a classic task in computer vision, aims to forecast subsequent motions based on historical motion observations. In particular, precise short-term motion prediction is crucial for the effectiveness of machines designed for real-time human-computer interaction. This study aims to achieve accurate predictions of human motion within timeframes of less than 400 milliseconds, with the goal of improving machine responsiveness and efficiency. Previous research has predominantly relied on sequence models such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM), or Transformers, with Transformer-based methods relying on the Transformer's temporal forecasting ability. In this study, we instead introduce an alternative perspective for Transformer-based models that comprehends the structure of skeletons, named the Joint-Aware Transformer (JAT). Within our model, the attention mechanism is employed to encode inter-joint correlations instead of temporal dependencies. Our approach outperformed the state-of-the-art (SOTA) model in short-term prediction on three types of human motion datasets.
- Published
- 2024
- Full Text
- View/download PDF
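The core idea in the abstract above, attending across joints rather than across time, can be sketched with a single toy attention head. The weight matrices, dimensions, and 17-joint pose below are illustrative assumptions, not the JAT architecture itself:

```python
import numpy as np

def joint_attention(pose, d_k=8, seed=0):
    """Single-head self-attention across joints (not time steps): each
    row of `pose` is one joint's feature vector, so the attention matrix
    encodes inter-joint correlation."""
    rng = np.random.default_rng(seed)
    d = pose.shape[1]
    Wq, Wk, Wv = (rng.normal(size=(d, d_k)) for _ in range(3))
    q, k, v = pose @ Wq, pose @ Wk, pose @ Wv
    scores = q @ k.T / np.sqrt(d_k)              # joints x joints
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # rows sum to 1
    return attn, attn @ v

pose = np.arange(51, dtype=float).reshape(17, 3)  # 17 joints, xyz
attn, out = joint_attention(pose)
```

A temporal Transformer would instead build the score matrix over time steps; swapping the axis is the whole conceptual shift.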
28. On the Effects of Personalizing Vibrotactile Feedback to Facilitate User Interaction With a Robotic System
- Author
-
Sudip Hazra and Panos S. Shiakolas
- Subjects
Assistive robotics ,action verification ,human robot interaction ,interaction framework ,personalized haptics ,vibrotactile feedback ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Distinguishable vibrotactile feedback can convey information non-verbally and complete the sensory loop when using assistive devices. Feedback can increase acceptance of assistive devices but could require personalization, as these devices need to adapt to user capabilities and preferences that may affect the location for inducing feedback. Developing personalized feedback for each user may be ideal, but impractical if demand for these devices increases. In this research, we evaluate the hypothesis that the ability to define, generate, and use personalized feedback is preferred and should be provided. The hypothesis is evaluated using a system capable of capturing and recognizing non-verbal inputs, providing vibrotactile feedback to verify the identified action, and executing the verified action. Evaluation focuses on determining the ability to initiate actions using the system architecture and to define, generate, and use personalized feedback. In addition to recording performance metrics (success or failure), the participants also completed the NASA TLX and post-evaluation questionnaires. Based on the results, actions can be initiated using the system architecture; defining, generating, and using personalized feedback was found to be preferable, with the success rate reaching 100% after only three successive repetitions, and could decrease mental demand and effort while interacting with the system.
- Published
- 2024
- Full Text
- View/download PDF
29. Is the Robot Spying on me? A Study on Perceived Privacy in Telepresence Scenarios in a Care Setting with Mobile and Humanoid Robots
- Author
-
Nieto Agraz, Celia, Hinrichs, Pascal, Eichelberg, Marco, and Hein, Andreas
- Published
- 2024
- Full Text
- View/download PDF
30. The Perception of Agency.
- Author
-
Trafton, J. Gregory, McCurry, J. Malcolm, Zish, Kevin, and Frazier, Chelsea R.
- Subjects
AGENT (Philosophy) ,SOCIAL interaction ,RESEARCH personnel ,ETHICS - Abstract
The perception of agency in human-robot interaction has become increasingly important as robots become more capable and more social. There are, however, no accepted or consistent methods of measuring perceived agency; researchers currently use a wide range of techniques and surveys. We provide a definition of perceived agency (PA), and from that definition we create and psychometrically validate a scale to measure it. We then perform a scale evaluation by comparing the PA scale constructed in experiment 1 to two other existing scales. We find that our PA and PA-R (Perceived Agency–Rasch) scales provide a better fit to empirical data than existing measures. We also perform scale validation by showing that our scale exhibits the hypothesized relationship between perceived agency and morality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Evaluating speech-in-speech perception via a humanoid robot.
- Author
-
Meyer, Luke, Araiza-Illan, Gloria, Rachman, Laura, Gaudrain, Etienne, and Başkent, Deniz
- Subjects
HUMANOID robots ,SPEECH perception ,INTELLIGIBILITY of speech ,HUMAN-robot interaction ,PERCEPTION testing ,OBJECT manipulation ,PSYCHOPHYSICS - Abstract
Introduction: Underlying mechanisms of speech perception masked by background speakers, a common daily listening condition, are often investigated using various and lengthy psychophysical tests. The presence of a social agent, such as an interactive humanoid NAO robot, may help maintain engagement and attention. However, such robots potentially have limited sound quality or processing speed. Methods: As a first step toward the use of NAO in psychophysical testing of speech-in-speech perception, we compared normal-hearing young adults' performance when using the standard computer interface to that when using a NAO robot to introduce the test and present all corresponding stimuli. Target sentences were presented with colour and number keywords in the presence of competing masker speech at varying target-to-masker ratios. Sentences were produced by the same speaker, but voice differences between the target and masker were introduced using speech synthesis methods. To assess test performance, speech intelligibility and data collection duration were compared between the computer and NAO setups. Human-robot interaction was assessed using the Negative Attitude Toward Robot Scale (NARS) and quantification of behavioural cues (backchannels). Results: Speech intelligibility results showed functional similarity between the computer and NAO setups. Data collection durations were longer when using NAO. NARS results showed participants had a relatively positive attitude toward "situations of interactions" with robots prior to the experiment, but otherwise showed neutral attitudes toward the "social influence" of and "emotions in interaction" with robots. The presence of more positive backchannels when using NAO suggests higher engagement with the robot in comparison to the computer. Discussion: Overall, the study presents the potential of the NAO for presenting speech materials and collecting psychophysical measurements for speech-in-speech perception. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Design and Development of a Low-Cost Vision-Based 6 DoF Assistive Feeding Robot for the Aged and Specially-Abled People.
- Author
-
Parikh, Priyam, Trivedi, Reena, Dave, Jatin, Joshi, Keyur, and Adhyaru, Dipak
- Subjects
- *
ROBOT kinematics , *ROBOT control systems , *COMPUTER vision , *HUMAN-robot interaction , *ROBOTICS , *ROBOTS , *FOOD waste - Abstract
Human-Robot interaction plays a vital role in the service robotics field, especially in supporting dependent elderly people for socio-economic reasons. In this paper, an indigenous 3D printed 6 DoF robotic arm is proposed to support specially-abled people in their independent-feeding process. The objective of the present paper is to find the combination of optimal positional controllers, such as CPID, FC, FPID and FOPID, which can handle the cubic reference input signal and produce an output signal with minimal overshoot and a smaller positional error. The reduced positional error helps the robotic arm accurately reach the destination with minimal oscillations. This reduces the wastage of food in the middle of the trajectory as well as at the destination. The technical challenge of the paper is to synchronize machine vision, robot kinematics and trajectory planning with robot control for multiple intermediate points, subjected to the cubic input signal. The feeding robotic arm is equipped with an Intel® visual depth camera, which is in synchronization with the microcontroller, servo controller, and servo actuators. Here, six intermediate points were identified in the C-space using FK, among which the fuzzy controller was selected for the first two IPs, the GA optimized FOPID was selected for IP3 to IP5, and FPID was deployed for the last IP. The GA: FOPID produced a negligible overshoot of 0.67%, whereas FC and FPID produced overshoots of 1.4% and 0.8% respectively. The positional error gaps of FC and FPID relative to GA: FOPID were 0.9 and 0.6 Deg/sec, respectively. The selected combination of optimal controllers helps the manipulator successfully deliver the food without wasting it. The repeatability of the feeding process is ensured by successfully conducting user testing on 20 users for 25 cycles. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. MINDBOT: DESIGN AND IMPLEMENTATION OF A MIND-CONTROLLED EDUCATIONAL ROBOT TOY FOR DISABLED CHILDREN.
- Author
-
Khalid, Sibar J. and Ali, Ismael A.
- Subjects
ROBOTS ,CHILDREN with disabilities ,ELECTROENCEPHALOGRAPHY ,TECHNOLOGY ,EDUCATIONAL evaluation - Abstract
The mindBot robot is a new educational robot toy that can be controlled by brain signals and voice commands. It was evaluated with children with disabilities as well as healthy children as potential users. The most significant challenge was the size of the Emotiv Insight electroencephalogram headset when adjusting it on the children's heads. Despite all the challenges, the mindBot robot is a promising technology that could be fun and educational for disabled children. The 11 participants took 36 minutes on average to finish all tasks. This includes the time they spent setting up the robot for the first time, putting on the headset, learning how to use the robot, and using the main educational features. The System Usability Scale score for the robot is 71.13, which is considered a good score. The future stages of improving the mindBot include adding more mobility capabilities and an educational-assessment feature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. When a notification at the right time is not enough: the reminding process for socially assistive robots in institutional care
- Author
-
Matthias Rehm and Antonia L. Krummheuer
- Subjects
social robots ,reminder ,cognitive impairment ,human robot interaction ,practice ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Reminding is often identified as a central function of socially assistive robots in the healthcare sector. The robotic reminders are supposed to help people with memory impairments to remember to take their medicine, to drink and eat, or to attend appointments. Such standalone reminding technologies can, however, be too demanding for people with memory impairments. In a co-creation process, we developed an individual reminder robot together with a person with traumatic brain injury and her care personnel. During this process, we learned that while current research describes reminding as a prototypical task for socially assistive robots, there is no clear definition of what constitutes a reminder, nor recognition that reminding is based on complex sequences of interactions that evolve over time and space, across different actions, actors and technologies. Based on our data from the co-creation process and the first deployment, we argue for a shift towards a sequential and socially distributed understanding of reminding. If socially assistive robots are understood as rehabilitative tools for people with memory impairment, they need to be reconsidered as interconnected elements in institutional care practices instead of isolated events for the remindee.
- Published
- 2024
- Full Text
- View/download PDF
35. Safety Analysis of Human Robot Collaborations with GRL Goal Models
- Author
-
Daun, Marian, Manjunath, Meenakshi, Jesus Raja, Jeshwitha, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Almeida, João Paulo A., editor, Borbinha, José, editor, Guizzardi, Giancarlo, editor, Link, Sebastian, editor, and Zdravkovic, Jelena, editor
- Published
- 2023
- Full Text
- View/download PDF
36. Pre-ceiving the Imminent : Emotions-Had, Emotions-Perceived and Gibsonian Affordances for Emotion Perception in Social Robots
- Author
-
Poljanšek, Tom, Grunwald, Armin, Series Editor, Heil, Reinhard, Series Editor, Coenen, Christopher, Series Editor, Misselhorn, Catrin, editor, Poljanšek, Tom, editor, Störzinger, Tobias, editor, and Klein, Maike, editor
- Published
- 2023
- Full Text
- View/download PDF
37. Collision Avoidance in Collaborative Robotics Based on Real-Time Skeleton Tracking
- Author
-
Forlini, Matteo, Neri, Federico, Scoccia, Cecilia, Carbonari, Luca, Palmieri, Giacomo, Ceccarelli, Marco, Series Editor, Agrawal, Sunil K., Advisory Editor, Corves, Burkhard, Advisory Editor, Glazunov, Victor, Advisory Editor, Hernández, Alfonso, Advisory Editor, Huang, Tian, Advisory Editor, Jauregui Correa, Juan Carlos, Advisory Editor, Takeda, Yukio, Advisory Editor, Petrič, Tadej, editor, Ude, Aleš, editor, and Žlajpah, Leon, editor
- Published
- 2023
- Full Text
- View/download PDF
38. A Review of Different Aspects of Human Robot Interaction
- Author
-
Manas, A. U., Sikka, Sunil, Pandey, Manoj Kumar, Mishra, Anil Kumar, Lim, Meng-Hiot, Series Editor, Sharma, Harish, editor, Saha, Apu Kumar, editor, and Prasad, Mukesh, editor
- Published
- 2023
- Full Text
- View/download PDF
39. Embedding Contextual Information in Seq2seq Models for Grounded Semantic Role Labeling
- Author
-
Hromei, Claudiu Daniel, Cristofori, Lorenzo, Croce, Danilo, Basili, Roberto, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Dovier, Agostino, editor, Montanari, Angelo, editor, and Orlandini, Andrea, editor
- Published
- 2023
- Full Text
- View/download PDF
40. An Emotional Support Robot Framework Using Emotion Recognition as Nonverbal Communication for Human-Robot Co-adaptation
- Author
-
Al-Omair, Osamah M., Huang, Shihong, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, and Arai, Kohei, editor
- Published
- 2023
- Full Text
- View/download PDF
41. Mixed Reality Platform Supporting Human-Robot Interaction
- Author
-
Calzone, Nicolas, Sileo, Monica, Mozzillo, Rocco, Pierri, Francesco, Caccavale, Fabrizio, Chaari, Fakher, Series Editor, Gherardini, Francesco, Series Editor, Ivanov, Vitalii, Series Editor, Cavas-Martínez, Francisco, Editorial Board Member, di Mare, Francesca, Editorial Board Member, Haddar, Mohamed, Editorial Board Member, Kwon, Young W., Editorial Board Member, Trojanowska, Justyna, Editorial Board Member, Gerbino, Salvatore, editor, Lanzotti, Antonio, editor, Martorelli, Massimo, editor, Mirálbes Buil, Ramón, editor, Rizzi, Caterina, editor, and Roucoules, Lionel, editor
- Published
- 2023
- Full Text
- View/download PDF
42. Hand Gesture and Human-Drone Interaction
- Author
-
Latif, Bilawal, Buckley, Neil, Secco, Emanuele Lindo, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, and Arai, Kohei, editor
- Published
- 2023
- Full Text
- View/download PDF
43. Evaluating speech-in-speech perception via a humanoid robot
- Author
-
Luke Meyer, Gloria Araiza-Illan, Laura Rachman, Etienne Gaudrain, and Deniz Başkent
- Subjects
speech perception ,psychophysics testing ,speech masking ,NAO robot ,human robot interaction ,Neurosciences. Biological psychiatry. Neuropsychiatry ,RC321-571 - Abstract
Introduction: Underlying mechanisms of speech perception masked by background speakers, a common daily listening condition, are often investigated using various and lengthy psychophysical tests. The presence of a social agent, such as an interactive humanoid NAO robot, may help maintain engagement and attention. However, such robots potentially have limited sound quality or processing speed. Methods: As a first step toward the use of NAO in psychophysical testing of speech-in-speech perception, we compared normal-hearing young adults’ performance when using the standard computer interface to that when using a NAO robot to introduce the test and present all corresponding stimuli. Target sentences were presented with colour and number keywords in the presence of competing masker speech at varying target-to-masker ratios. Sentences were produced by the same speaker, but voice differences between the target and masker were introduced using speech synthesis methods. To assess test performance, speech intelligibility and data collection duration were compared between the computer and NAO setups. Human-robot interaction was assessed using the Negative Attitude Toward Robot Scale (NARS) and quantification of behavioural cues (backchannels). Results: Speech intelligibility results showed functional similarity between the computer and NAO setups. Data collection durations were longer when using NAO. NARS results showed participants had a relatively positive attitude toward “situations of interactions” with robots prior to the experiment, but otherwise showed neutral attitudes toward the “social influence” of and “emotions in interaction” with robots. The presence of more positive backchannels when using NAO suggests higher engagement with the robot in comparison to the computer. Discussion: Overall, the study presents the potential of the NAO for presenting speech materials and collecting psychophysical measurements for speech-in-speech perception.
- Published
- 2024
- Full Text
- View/download PDF
44. Exploring human-machine collaboration in industry: a systematic literature review of digital twin and robotics interfaced with extended reality technologies.
- Author
-
Feddoul, Yassine, Ragot, Nicolas, Duval, Fabrice, Havard, Vincent, Baudry, David, and Assila, Ahlem
- Subjects
- *
DIGITAL twins , *INDUSTRIAL robots , *HUMAN-robot interaction , *ROBOT industry , *INDUSTRY 4.0 - Abstract
This systematic literature review presents the latest advancements and insights about digital twin technology and robotics interfaced with extended reality in the context of Industry 4.0. As extended reality technologies emerge, there is an increasing overlap between digital twins and human-robot interactions in industrial settings, promoting collaboration between operators and cobots in manufacturing environments. The objective of this study is to serve as a valuable resource for researchers and practitioners working in the field of Industry 4.0. It aims to highlight the latest developments and innovations in the application of digital twins and robotics interfaced with extended reality technologies in manufacturing. By extracting data from relevant articles, it provides a comprehensive understanding of the current state of the art in this field by: analyzing the favored extended reality interfaces for digital twin and robotics interactions; analyzing the digital twin and physical twin interaction; evaluating the digital twin application levels and pillars through extended reality interfacing; and introducing a new concept called augmented perception for creating new physical-digital interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
45. Intuitive and Interactive Robotic Avatar System for Tele-Existence: TEAM SNU in the ANA Avatar XPRIZE Finals
- Author
-
Park, Beomyeong, Kim, Donghyeon, Lim, Daegyu, Park, Suhan, Ahn, Junewhee, Kim, Seungyeon, Shin, Jaeyong, Sung, Eunho, Sim, Jaehoon, Kim, Junhyung, Kim, Myeong-Ju, Cha, Junhyeok, Park, Gyeongjae, Lee, Hokyun, You, Seungbin, Jang, Keunwoo, Kim, Seung-Hun, Schwartz, Mathew, and Park, Jaeheung
- Published
- 2024
- Full Text
- View/download PDF
46. Designing user interfaces for fully autonomous vehicles : to speak, tap or press?
- Author
-
Amanatidis, Theocharis and Clarkson, P. John
- Subjects
autonomous vehicles ,human robot interaction ,user interface design - Abstract
Fully autonomous vehicles will herald a new era of personal mobility. However, the transition from manual to self-driving cars could be difficult for many. Improving the user experience will help manage the transition, increase adoption, and realise the benefits of fully autonomous technology. Hence, this thesis investigates how a positive user experience can be delivered and which interaction technologies can contribute to the design of a fully autonomous vehicle interface. The theoretical foundation of the literature review draws from the fields of human-machine interaction, autonomous vehicles, and human-robot interaction. The Design Research Methodology was used as a framework by which to plan and execute this research. First, an exploratory sequential mixed-methods study investigated the users' needs and expectations of autonomous vehicle interfaces. It consisted of both a qualitative interview study and a quantitative online survey, which found that users wanted different information to be shared with them, and a different interior, depending on whether the vehicle was privately owned or shared. This resulted in a dichotomy of interfaces for each ownership model. Second, two simulator experiments investigated three interaction technologies for user interfaces. Findings indicated that subjective metrics, especially satisfaction and usefulness, were better predictors of improvement, when compared to purely relying on performance metrics. While a touchscreen-only interface was more performant, an interface that included a voice assistant was found to be both more satisfying and more useful. The results of this research have led to the development of a list of design recommendations about how the user experience of fully autonomous vehicles can be improved and which interaction technologies may be employed in providing improvements.
Ultimately, this work concludes that while "conventional" automotive interface technologies are mostly sufficient to control a fully autonomous vehicle, a voice assistant and other novel technologies can provide additional improvements in user experience.
- Published
- 2021
- Full Text
- View/download PDF
47. Trust and fluency in industrial human-robot interaction : virtual reality study of human behaviour
- Author
-
Fratczak, Piotr
- Subjects
Virtual Reality ,Human Robot Interaction ,Human Factors ,Industrial robots - Abstract
As industrial automation is evolving, the notion of combining the strengths of humans and robots in Human-Robot Collaboration becomes more common. Although there are countless risks in removing physical barriers separating humans and industrial robots, robots are evolving and becoming inherently mechanically safe, and may soon become harmless to humans. However, just because humans cannot be physically harmed by robots does not mean their wellbeing and performance cannot be influenced by them. Because of that, it is imperative to understand how human trust and collaborative fluency are affected by a robot's changing behaviour. The main focus of this doctoral thesis is the influence of industrial robots on human behaviour. To acquire strong and natural human responses in a safe and controllable laboratory environment, this thesis uses an immersive Virtual Reality (VR) Head Mounted Display (HMD) as a means of simulating the dangers of industrial robots without putting anyone in danger. As the first knowledge contribution, this doctoral thesis used VR HMD to study the influence of an industrial robot's predictability and change of behaviour on human responses and trust in a Human-Robot Interaction (HRI) scenario. The results show that a robot's unexpected sudden movements significantly influence the human's posture, focus and trust. Furthermore, it is shown that people can naturally recover after an accident; however, the robot's post-accident actions can either jeopardise that attempt (by doing more trust-violating actions, such as suddenly changing movement trajectories), or significantly speed it up (by doing trust-recovering actions, such as apologising). The second contribution of this thesis is focused on collaborative fluency and human adaptability to an industrial robot's behaviour. It used VR HMD to simulate a collaborative scenario, where a robot and a human work at the same time on the same part. The results show a large difference between people's adaptive capabilities.
Some people can easily adapt to the robot's increasing speed and work around the robot without losing any collaborative fluency. Other people fail to keep up with the robot, which significantly lowers their collaborative fluency. When the robot sped up, some participants completely gave up on collaborative fluency and waited for the robot to finish its task and stop moving before showing any intention to work. Furthermore, it is shown that the fluent and non-fluent participants display statistically different behaviours and physiology, and that they can often be identified even before their fluency decreases. This suggests that it may be possible to predict eventual drops in collaborative performance and prepare for them accordingly in an individualised way. The final contribution studies the relationship between people's physiology and their motion. The results show that training classification models on physiological features from two different groups of participants engaged in two different experiments generates similar outputs. Even though some motion features are less likely to be reflected by physiological features, the results suggest that it should be possible to create a task-independent method of clustering participants to predict certain behaviours. Although all the data collection was conducted in virtual reality, and it is uncertain whether the same results would be obtained from experiments conducted in a real industrial environment, participants' self-reports suggest that up to 80% of participants felt immersed and were committed to the goals of the experiments. This doctoral thesis shows not only that human mental and emotional wellbeing needs to be considered when designing HRI scenarios, but also that virtual reality is a valuable tool, which can significantly improve the way industrial environments are studied and improved.
- Published
- 2020
- Full Text
- View/download PDF
48. Lessons learned: Symbiotic autonomous robot ecosystem for nuclear environments
- Author
-
Daniel Mitchell, Paul Dominick Emor Baniqued, Abdul Zahid, Andrew West, Bahman Nouri Rahmat Abadi, Barry Lennox, Bin Liu, Burak Kizilkaya, David Flynn, David John Francis, Erwin Jose Lopez Pulgarin, Guodong Zhao, Hasan Kivrak, Jamie Rowland Douglas Blanche, Jennifer David, Jingyan Wang, Joseph Bolarinwa, Kanzhong Yao, Keir Groves, Liyuan Qi, Mahmoud A. Shawky, Manuel Giuliani, Melissa Sandison, Olaoluwa Popoola, Ognjen Marjanovic, Paul Bremner, Samuel Thomas Harper, Shivoh Nandakumar, Simon Watson, Subham Agrawal, Theodore Lim, Thomas Johnson, Wasim Ahmad, Xiangmin Xu, Zhen Meng, and Zhengyi Jiang
- Subjects
cyber‐systems ,field robotics ,hazardous inspection ,human robot interaction ,industrial robotics ,mobile robots ,Cybernetics ,Q300-390 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Abstract Nuclear facilities have a regulatory requirement to measure radiation levels within Post Operational Clean Out (POCO) around nuclear facilities each year, resulting in a trend towards robotic deployments to gain an improved understanding during nuclear decommissioning phases. The UK Nuclear Decommissioning Authority supports the view that human‐in‐the‐loop (HITL) robotic deployments are a solution to improve procedures and reduce risks within radiation characterisation of nuclear sites. The authors present a novel implementation of a Cyber‐Physical System (CPS) deployed in an analogue nuclear environment, comprised of a multi‐robot (MR) team coordinated by a HITL operator through a digital twin interface. The development of the CPS created efficient partnerships across systems including robots, digital systems and humans. This was presented as a multi‐staged mission within an inspection scenario for the heterogeneous Symbiotic Multi‐Robot Fleet (SMuRF). Symbiotic interactions were achieved across the SMuRF, where robots utilised automated collaborative governance to work together in situations where a single robot would face challenges in fully characterising the radiation. Key contributions include the demonstration of symbiotic autonomy and query‐based learning of an autonomous mission supporting scalable autonomy and autonomy as a service. The coordination of the CPS was a success and revealed further challenges and potential improvements for future MR fleets.
- Published
- 2023
- Full Text
- View/download PDF
49. A comprehensive review of task understanding of command-triggered execution of tasks for service robots.
- Author
-
Xi, Xiangming and Zhu, Shiqiang
- Subjects
ARTIFICIAL intelligence ,COMPUTER science ,SHOPPING malls ,AUTONOMOUS robots ,ROBOTS - Abstract
Robotics is a cross-disciplinary branch of science and technology, founded on mechanics, control, computer science, artificial intelligence, and so on. With developments in both software and hardware, especially in artificial intelligence technologies, robots have been widely applied in many areas of society and have become more and more interactive in our daily life, such as service robots in museums, shopping malls, restaurants, etc. Though the ultimate goal of a service robot behaving like a human is not easy to achieve, significant progress has been made during the past decades. Considering that it is universal for service robots to be triggered to execute tasks specified by human users via commands (Comm-TET), and that it is essential to process and understand human users' commands correctly, we comprehensively review the developments of the task understanding (TU) sub-process of Comm-TET for service robots. In order to organize the related literature in a reasonable manner, we abstracted the pipeline of Comm-TET and the generic framework of TU based on the existing research. Following the abstracted framework, we present in-depth discussions on each of its building blocks over the past decades, and give some insights on future research directions. Compared to other reviews on TU, this review places more emphasis on technical developments and organizes the existing research as an integrated whole. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. Attention Sharing Handling Through Projection Capability Within Human–Robot Collaboration
- Author
-
Camblor, Benjamin, Daney, David, Joseph, Lucas, and Salotti, Jean-Marc
- Published
- 2024
- Full Text
- View/download PDF