10,514 results for "human-robot interaction"
Search Results
2. Hybrid Recurrent Neural Network Architecture-Based Intention Recognition for Human–Robot Collaboration
- Author
-
Liang Yan, Xiaoshan Gao, Gang Wang, and Chris Gerada
- Subjects
Network architecture ,business.industry ,Computer science ,Deep learning ,Human–robot interaction ,Computer Science Applications ,Human-Computer Interaction ,Recurrent neural network ,Control and Systems Engineering ,Sliding window protocol ,Robot ,Network performance ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Feature learning ,Software ,Information Systems - Abstract
Human-robot collaboration requires the robot to proactively and intelligently recognize the intention of the human operator. Although deep learning approaches have achieved certain results in feature learning and in modeling long-term temporal dependencies, motion prediction is still not accurate enough, which unavoidably compromises the accomplishment of tasks. Therefore, a hybrid recurrent neural network architecture is proposed for intention recognition in cooperative assembly tasks. Specifically, improved LSTM (ILSTM) and improved Bi-LSTM (IBi-LSTM) networks are first explored, with modified state and gate activation functions, to improve network performance. The IBi-LSTM units in the first layers of the hybrid architecture help to learn features effectively and fully from complex sequential data, and the LSTM-based cell in the last layer contributes to capturing the forward dependency. This hybrid network architecture improves the prediction performance of intention recognition effectively. An experimental platform with a UR5 collaborative robot and a human motion capture device is set up to test the performance of the proposed method. A filter, namely a quartile-based amplitude-limiting algorithm in a sliding window, is designed to handle abnormal samples in the spatiotemporal data and thus to improve the accuracy of network training and testing. The experimental results show that the hybrid network predicts the motion of the human operator in the collaborative workspace more precisely than some representative deep learning methods.
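The filtering step above is concrete enough to sketch. Below is a minimal reading of a quartile-based amplitude-limiting filter in a sliding window, assuming the standard interquartile-range fence; the paper's exact window length and fence factor are not given, so `window` and `k` are illustrative:

```python
import numpy as np

def quartile_limit(signal, window=50, k=1.5):
    """Clamp outliers using the interquartile-range fence of a trailing window."""
    signal = np.asarray(signal, dtype=float)
    out = signal.copy()
    for i in range(len(signal)):
        win = signal[max(0, i - window + 1):i + 1]
        q1, q3 = np.percentile(win, [25, 75])
        fence = k * (q3 - q1)
        out[i] = np.clip(signal[i], q1 - fence, q3 + fence)  # amplitude limiting
    return out

# e.g. clean one motion-capture coordinate stream before network training/testing
clean = quartile_limit(np.random.default_rng(0).normal(size=500))
```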
- Published
- 2023
- Full Text
- View/download PDF
3. Integrating Conversational Agents and Knowledge Graphs Within the Scholarly Domain
- Author
-
Antonello Meloni, Simone Angioni, Angelo Salatino, Francesco Osborne, Diego Reforgiato Recupero, and Enrico Motta
- Subjects
scholarly data ,human-robot interaction ,knowledge graph ,General Computer Science ,user experience ,General Engineering ,virtual assistant ,General Materials Science ,Electrical and Electronic Engineering ,Chatbot - Abstract
In the last few years, chatbots have become mainstream solutions adopted in a variety of domains for automating communication at scale. In the same period, knowledge graphs have attracted significant attention from business and academia as robust and scalable representations of information. In the scientific and academic research domain, they are increasingly used to illustrate the relevant actors (e.g., researchers, institutions), documents (e.g., articles, patents), entities (e.g., concepts, innovations), and other related information. Following the same direction, this paper describes how to integrate conversational agents with knowledge graphs focused on the scholarly domain, a.k.a. Scientific Knowledge Graphs. On top of the proposed architecture, we developed AIDA-Bot, a simple chatbot that leverages a large-scale knowledge graph of scholarly data. AIDA-Bot can answer natural language questions about scientific articles, research concepts, researchers, institutions, and research venues. We have developed four prototypes of AIDA-Bot on Alexa products, web browsers, Telegram clients, and humanoid robots. We performed a user study with 15 domain experts, which showed a high level of interest and engagement with the proposed agent.
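As a sketch of the retrieval pattern the abstract describes (natural-language question mapped to a query over a scholarly knowledge graph), the following maps a recognized intent to a SPARQL template. The endpoint URL and predicates are placeholders, not AIDA-Bot's actual schema:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and schema; the real graph and predicates differ.
ENDPOINT = "https://example.org/scholarly-kg/sparql"

TEMPLATES = {
    # intent -> SPARQL template (papers by author, as one example)
    "papers_by_author": """
        SELECT ?title WHERE {{
            ?paper <http://example.org/schema#author> "{author}" ;
                   <http://example.org/schema#title>  ?title .
        }} LIMIT 10
    """,
}

def answer(intent: str, **slots) -> list[str]:
    """Fill the template for the recognized intent and run it on the graph."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(TEMPLATES[intent].format(**slots))
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return [r["title"]["value"] for r in rows]

# e.g. answer("papers_by_author", author="Enrico Motta")
```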
- Published
- 2023
- Full Text
- View/download PDF
4. Exoskeleton Training Modulates Complexity in Movement Patterns and Cortical Activity in Able-Bodied Volunteers
- Author
-
Roberto Di Marco, Maria Rubega, Olive Lennon, Asja Vianello, Stefano Masiero, Emanuela Formaggio, and Alessandra Del Felice
- Subjects
EMG ,Motor Learning ,General Neuroscience ,Rehabilitation ,Human-Robot interaction ,Biomedical Engineering ,Internal Medicine ,Robot-assisted Gait Training ,Neuroplasticity ,EEG ,Robotic Rehabilitation - Published
- 2023
- Full Text
- View/download PDF
5. Neural networks design and training for safe human-robot cooperation
- Author
-
Ahmed A. Mostfa and Abdel-Nasser Sharkawy
- Subjects
Environmental Engineering ,Artificial neural network ,Computer science ,020209 energy ,General Chemical Engineering ,Mechanical Engineering ,0211 other engineering and technologies ,General Engineering ,Feed forward ,02 engineering and technology ,High effectiveness ,Catalysis ,Human–robot interaction ,021105 building & construction ,0202 electrical engineering, electronic engineering, information engineering ,Robot ,Electrical and Electronic Engineering ,Manipulator ,Simulation ,Position sensor ,Civil and Structural Engineering - Abstract
In the present paper, neural networks (NNs) are proposed for detecting human-manipulator collisions. For that purpose, three types of NNs are designed and trained: a multilayer feedforward NN, a cascaded forward NN, and a recurrent NN. Each NN is designed based on the joint dynamics of the manipulator. In addition, the NNs are trained, using the Levenberg-Marquardt algorithm, on a dataset recorded both with and without collisions, so as to detect collisions involving the manipulator. The design uses only the robot's intrinsic joint position sensor, which enables the proposed method to be applied to any robot. The three designed neural networks are compared quantitatively and qualitatively with each other. Furthermore, the proposed method was evaluated on the KUKA LWR manipulator using single-joint motion. The designed systems achieve high effectiveness in detecting collisions between human and robot.
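A minimal sketch of the classification setup described above, with assumptions flagged: scikit-learn does not provide Levenberg-Marquardt training, so the quasi-Newton LBFGS solver stands in, and the joint-signal features are illustrative rather than the paper's:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# X: per-sample joint signals derived from the intrinsic position sensor,
# e.g. [position error, estimated velocity] per joint (feature choice assumed).
# y: 1 = collision, 0 = free motion.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                        # stand-in for logged data
y = (np.abs(X[:, 0]) + np.abs(X[:, 1]) > 2.2).astype(int)

# Small feedforward net; LBFGS replaces Levenberg-Marquardt here.
clf = MLPClassifier(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=500)
clf.fit(X, y)
print("collision predicted:", clf.predict(X[:5]))
```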
- Published
- 2022
- Full Text
- View/download PDF
6. Adaptive-Constrained Impedance Control for Human–Robot Co-Transportation
- Author
-
Carlos Silvestre, Xinbo Yu, Wei He, Long Cheng, Yanghe Feng, and Bin Li
- Subjects
Computer science ,Control engineering ,Robotics ,Nonlinear control ,Human–robot interaction ,Computer Science Applications ,Task (project management) ,Computer Science::Robotics ,Human-Computer Interaction ,Range (mathematics) ,Impedance control ,Control and Systems Engineering ,Control theory ,Electric Impedance ,Humans ,Robot ,Electrical and Electronic Engineering ,Actuator ,Algorithms ,Software ,Information Systems - Abstract
Human-robot co-transportation allows a human and a robot to perform an object transportation task cooperatively in a shared environment. This range of applications raises a great number of theoretical and practical challenges, arising mainly from the unknown human-robot interaction model as well as from the difficulty of accurately modeling the robot dynamics. In this article, an adaptive impedance controller for human-robot co-transportation is put forward in task space. Vision and force sensing are employed to obtain the human hand position and to measure the interaction force between the human and the robot. Using the latest developments in nonlinear control theory, we propose a robot end-effector controller that tracks the motion of the human partner under actuator input constraints, unknown initial conditions, and unknown robot dynamics. The proposed adaptive impedance control algorithm offers safe interaction between the human and the robot and achieves smooth control behavior along the different phases of the co-transportation task. Simulations and experiments are conducted to illustrate the performance of the proposed techniques in a co-transportation task.
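The abstract does not reproduce the adaptation law, but the underlying task-space impedance relation M(ẍ − ẍd) + D(ẋ − ẋd) + K(x − xd) = F_ext can be sketched as a fixed-gain, one-axis admittance loop. Gains and time step below are illustrative; the paper's controller adapts them online:

```python
class Admittance1D:
    """Integrate M*a + D*v + K*(x - xd) = f_ext one control tick at a time (1-D)."""
    def __init__(self, M=2.0, D=20.0, K=50.0, dt=0.002):
        self.M, self.D, self.K, self.dt = M, D, K, dt
        self.x = self.v = 0.0
        self.xd = 0.0               # desired position

    def step(self, f_ext: float) -> float:
        a = (f_ext - self.D * self.v - self.K * (self.x - self.xd)) / self.M
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x               # commanded position along one axis

adm = Admittance1D()
for _ in range(5):
    adm.step(f_ext=1.0)             # human pushes with 1 N; robot yields smoothly
```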
- Published
- 2022
- Full Text
- View/download PDF
7. Creating Bonds with Pet Robots: An Integrative Approach
- Author
-
Marta Díaz-Boladeras and Universitat Politècnica de Catalunya. Departament d'Organització d'Empreses
- Subjects
Robòtica personal ,Bonding ,Personal robotics ,Robotic pets ,Interacció persona-robot ,Companion robots ,Child-Robot interaction ,Attachment ,Informàtica::Robòtica [Àrees temàtiques de la UPC] ,Informàtica::Sistemes d'informació::Interacció home-màquina [Àrees temàtiques de la UPC] ,Human-robot interaction ,Robot-based intervention ,General Psychology - Abstract
The challenge of long-term interaction between humans and robots is still a bottleneck in service robot research. To gain an understanding of sustained relatedness with robots, this study proposes a conceptual framework for bond formation. More specifically, it addresses the dynamics of children bonding with robotic pets as the basis for certain services in healthcare and education. The framework presented herein offers an integrative approach and draws from theoretical models and empirical research in Human Robot Interaction and also from related disciplines that investigate lasting relationships, such as human-animal affiliation and attachment to everyday objects. The research question is how children’s relatedness to personified technologies occurs and evolves and what underpinning processes are involved. The subfield of research is child-robot interaction, within the boundaries of social psychology, where the robot is viewed as a social agent, and human-system interaction, where the robot is regarded as an artificial entity. The proposed framework envisions bonding with pet-robots as a socio-affective process towards lasting connectedness and emotional involvement that evolves through three stages: first encounter, short-term interaction and lasting relationship. The stages are characterized by children’s behaviors, cognitions and feelings that can be identified, measured and, maybe more importantly, managed. This model aims to integrate fragmentary and heterogeneous knowledge into a new perspective on the impact of robots in close and enduring proximity to children.
- Published
- 2022
- Full Text
- View/download PDF
8. Diver‐Robot Communication Using Wearable Sensing: Remote Pool Experiments
- Author
-
Fausto Ferreira, Igor Kvasić, Đula Nađ, Luka Mandić, Nikola Mišković, Christopher Walker, Derek Orbaugh Antillon, and Iain Anderson
- Subjects
Ocean Engineering ,diver‐robot interaction ,human‐robot interaction ,underwater robotics ,wearables ,Oceanography - Abstract
Diver-robot interaction is an exciting and recent field of study. There are different ways a diver and a robot can interact, such as using tablets or detecting divers with cameras or sonars. The novel approach presented in this paper uses direct diver-robot communication. To facilitate communication for humans, we use typical diver gestures, which are transmitted to a robot using a wearable glove and acoustic communications. Following previous work by the University of Zagreb and the University of Auckland, a collaboration to control an autonomous underwater vehicle based on a wearable diver glove has been made possible through the EU Marine Robots project. Under this project, Trans-National Access trials allow the Laboratory for Underwater Systems and Technologies, University of Zagreb, to offer its robots and infrastructure to external partners. Initial trials with the University of Auckland, which were planned to take place on site, were transformed into remote-access trials. This paper reports on these trials and on the collaboration, which were challenging given the distance and time-zone difference. The key point is to demonstrate the possibility of a diver remotely controlling a robot using typical gestures recognized by a wearable glove and transmitted via acoustic modems (and the Internet for the remote connection).
- Published
- 2022
- Full Text
- View/download PDF
9. An Offline-Merge-Online Robot Teaching Method Based on Natural Human-Robot Interaction and Visual-Aid Algorithm
- Author
-
Gengcheng Yao, Peter X. Liu, Chunquan Li, and Guanglong Du
- Subjects
Operator (computer programming) ,Control and Systems Engineering ,Computer science ,Interface (computing) ,Teaching method ,Robot ,Electrical and Electronic Engineering ,Algorithm ,Human–robot interaction ,Motion (physics) ,Computer Science Applications ,Gesture ,Haptic technology - Abstract
This article proposes an offline-merge-online robot teaching method (OMORTM). Specifically, a virtual-real fusion interactive interface (VRFII) is first developed by projecting a virtual robot into the real scene with an augmented-reality (AR) device, aiming to implement offline teaching. Second, a visual-aid algorithm (VAA) is proposed to improve offline teaching accuracy. Third, a gesture and speech teaching fusion algorithm (GSTA) with fingertip tactile force feedback is developed to obtain a natural teaching pattern and improve the interactive accuracy of teaching the real or virtual robot. More specifically, through the VRFII, the operator can use the GSTA and the VAA to teach the virtual robot naturally and safely, and the real robot then reproduces the motion of the virtual robot. OMORTM therefore enables teaching results to be quickly verified while ensuring the operator's safety and avoiding damage to the robot or workpiece. A series of experiments was conducted to validate the practicality and effectiveness of OMORTM. The results show that, by effectively combining offline and online teaching, OMORTM provides accurate robotic teaching processes suitable for nonprofessionals.
- Published
- 2022
- Full Text
- View/download PDF
10. Perturbation-Based Stiffness Inference in Variable Impedance Control
- Author
-
Adrià Colomé, Carme Torras, Edoardo Caldarelli, Dylan Dah-Chuan Lu, European Commission, European Research Council, Universitat Politècnica de Catalunya. Doctorat en Automàtica, Robòtica i Visió, Universitat Politècnica de Catalunya. Departament de Matemàtiques, Institut de Robòtica i Informàtica Industrial, and Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
- Subjects
Control and Optimization ,Mechanical Engineering ,Biomedical Engineering ,Learning from demonstration ,Impedance control ,Computer Science Applications ,Human-Computer Interaction ,Compliant robots ,Interacció persona-robot ,Probabilistic inference ,Artificial Intelligence ,Control and Systems Engineering ,Computer Vision and Pattern Recognition ,Informàtica::Robòtica [Àrees temàtiques de la UPC] ,Human-robot interaction ,Robots - Abstract
One of the major challenges in learning from demonstration is to teach the robot a wider set of task features than the plain trajectories to be followed. In this sense, one key parameter is stiffness, i.e., the rigidity that the manipulator should exhibit when performing a task. The required robot stiffness is often not known a priori and varies along the execution of the task, thus its profile needs to be inferred from the demonstrations. In this work, we propose a novel, force-based algorithm for inferring time-varying stiffness profiles, leveraging the relationship between stiffness and tracking error, and involving human-robot interaction. We begin by gathering a set of demonstrations with kinesthetic teaching. Then, the robot executes a perturbed reference, obtained from these demonstrations by means of Gaussian process regression, and the human intervenes if the perturbation makes the manipulator deviate from its expected behaviour. The human intervention is measured and used to infer the desired control stiffness. In the experiments section, we show that our algorithm can be combined with different types of force sensors, and we provide suitable processing algorithms. Our approach correctly infers stiffness profiles from force and electromyography sensors, and their combination also permits compliance with the physical constraints imposed by the environment. This is demonstrated in three experiments of increasing complexity: a motion in free Cartesian space, a rigid assembly task, and bed-making., This paper was recommended for publication by Editor Jens Kober upon evaluation of the Associate Editor and Reviewers' comments. This work was supported by the project CLOTHILDE ("CLOTH manIpulation Learning from DEmonstrations"), funded from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Advanced Grant agreement No 741930).
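A toy sketch of the pipeline the abstract outlines: fit a Gaussian process to a demonstrated trajectory, sample a perturbed reference from it, and read stiffness off the ratio of human intervention force to deviation. The force signal and constants below are stand-ins, not the paper's sensor processing:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Demonstrated trajectory from kinesthetic teaching (1-D toy data).
t = np.linspace(0, 1, 50)[:, None]
x_demo = np.sin(2 * np.pi * t).ravel()

# Reference from GP regression over the demonstrations, plus a sampled
# perturbation of that reference for the robot to execute.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(t, x_demo)
x_ref = gp.predict(t)
x_pert = gp.sample_y(t, random_state=1).ravel()

# f_human: measured human intervention force during the perturbed execution
# (random stand-in here; on the robot it comes from F/T or EMG sensing).
f_human = np.random.default_rng(2).normal(0.0, 1.0, len(t)) ** 2
deviation = np.abs(x_pert - x_ref) + 1e-3
stiffness = f_human / deviation   # strong correction per unit deviation -> stiff
```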
- Published
- 2022
- Full Text
- View/download PDF
11. Domain‐specific and domain‐general neural network engagement during human–robot interactions
- Author
-
Hogenhuis, A., Hortensius, R., Social-cognitive and interpersonal determinants of behaviour, and Leerstoel Aarts
- Subjects
Brain Mapping ,Neuroscience(all) ,General Neuroscience ,Brain ,social interaction ,social cognition ,Robotics ,Magnetic Resonance Imaging ,human–robot interaction ,two-person neuroscience ,Humans ,Neural Networks, Computer ,Comprehension - Abstract
To what extent do domain-general and domain-specific neural networks generalise across interactions with human and artificial agents? In this exploratory study, we analysed a publicly available fMRI dataset (n = 22; Rauchbauer, et al., 2019) to probe the similarities and dissimilarities in neural architecture while participants conversed with another person or a robot. Incorporating trial-by-trial dynamics of the interactions, listening and speaking, we used whole-brain, region-of-interest, and functional connectivity analyses to test response profiles within and across social or non-social, domain-specific and domain-general networks, i.e., the person perception, theory-of-mind, object-specific, language, and multiple-demand networks. Listening to a robot compared to a human resulted in higher activation in the language network, especially in areas associated with listening comprehension, and in the person perception network. No differences in activity of the theory-of-mind network were found. Results from the functional connectivity analysis showed no difference between interactions with a human or a robot in within- and between-network connectivity. Together, these results suggest that while similar regions are activated during communication regardless of the type of conversational agent, activity profiles during listening point to a dissociation at a lower, perceptual level, but not at a higher-order cognitive level.
- Published
- 2022
- Full Text
- View/download PDF
12. Soft Wrist-Worn Multi-Functional Sensor Array for Real-Time Hand Gesture Recognition
- Author
-
Raffaele Gravina, Giancarlo Fortino, Lin Yang, and Wentao Dong
- Subjects
Computer science ,business.industry ,Sign language ,Wrist ,Linear discriminant analysis ,Human–robot interaction ,body regions ,medicine.anatomical_structure ,Sensor array ,Gesture recognition ,medicine ,Computer vision ,Ulnar deviation ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Instrumentation ,Wearable technology ,Gesture - Abstract
Soft electronics have been widely applied to wearable devices, hand gesture detection, and human-robot interaction (HRI). A soft wrist-worn sensor system (SWSS) with electromyography (EMG) and strain/pressure sensing abilities has been created for continuous hand gesture recognition; it includes multi-source data sensing, data collection and processing, and wireless communication. Owing to its softness, the SWSS makes conformal contact with the skin surface, which improves wearability on the wrist during long-term hand gesture monitoring. Linear discriminant analysis (LDA) and support vector machine (SVM) algorithms are applied to hand gesture recognition, with average accuracies of 83.67% and 86.8% for Group #1 (wrist flexion (WF), wrist extension (WE), finger flexion (FF), finger extension (FE), radial deviation (RD), and ulnar deviation (UD)) and 84.71% and 88.53% for Group #2 (sign language 0-9 digits), respectively. The recognition accuracy decreased only slightly during long-term hand gesture detection across different sessions (at the initial time and one, three, and seven days after). This paper demonstrates the feasibility of gesture recognition via the SWSS on the wrist, which could be integrated into a wrist-worn electronic system for human-robot interaction.
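A minimal sketch of the recognition stage, assuming windowed feature vectors from the sensor array; the feature set and window length are placeholders, and real sensor data is needed to approach the reported accuracies (random stand-ins land near chance):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: one row per short window of the wrist-worn array, concatenating simple
# EMG and strain/pressure features (e.g. RMS, mean absolute value); the
# paper's exact feature set is not specified here. y: gesture label.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))       # stand-in feature windows
y = rng.integers(0, 6, size=600)     # 6 gestures of Group #1 (WF, WE, FF, ...)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%}")      # real data reaches the reported ~84-89%
```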
- Published
- 2022
- Full Text
- View/download PDF
13. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings
- Author
-
Thellman, Sam, de Graaf, Maartje, Ziemke, Tom, Sub Human-Centered Computing, and Human-Centered Computing
- Subjects
Human-Computer Interaction ,Artificial Intelligence ,anthropomorphism ,mentalizing ,intentional stance ,folk psychology ,Human-robot interaction ,mind perception ,theory of mind - Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
- Published
- 2022
- Full Text
- View/download PDF
14. Audio-Visual Speaker Diarization in the Framework of Multi-User Human-Robot Interaction
- Author
-
Dhaussy, Timothée, Jabaian, Bassam, Lefèvre, Fabrice, Horaud, Radu, Laboratoire Informatique d'Avignon (LIA), Avignon Université (AU)-Centre d'Enseignement et de Recherche en Informatique - CERI, Vers des robots à l’intelligence sociale au travers de l’apprentissage, de la perception et de la commande (ROBOTLEARN), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Université Grenoble Alpes (UGA), IEEE Signal Processing Society, and ANR-20-CE33-0008,muDialBot,MUlti-party perceptually-active situated DIALog for human-roBOT interaction(2020)
- Subjects
[INFO.INFO-SD]Computer Science [cs]/Sound [cs.SD] ,multimodal ,[INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO] ,speaker diarization ,multimodal human-robot interaction ,[INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL] - Abstract
The speaker diarization task answers the question "who is speaking at a given time?". It represents valuable information for scene analysis in domains such as robotics. In this paper, we introduce a temporal audiovisual fusion model for multi-user speaker diarization with low computing requirements, good robustness, and no training phase. The proposed method identifies the dominant speakers and tracks them over time by measuring the spatial coincidence between sound locations and visual presence. The model is generative, its parameters are estimated online, and it does not require training. Its effectiveness was assessed using two datasets: a public one and one collected in-house with the Pepper humanoid robot.
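The spatial-coincidence idea can be sketched compactly: score each visible face by how closely its bearing matches the acoustic direction of arrival. The Gaussian kernel below stands in for the paper's generative observation model, and `sigma` is illustrative:

```python
import numpy as np

def coincidence_scores(sound_doa_deg, face_bearings_deg, sigma=10.0):
    """Score each visible face by agreement between its bearing and the
    acoustic direction of arrival (higher = more likely the speaker)."""
    d = np.asarray(face_bearings_deg) - sound_doa_deg
    return np.exp(-0.5 * (d / sigma) ** 2)

# Two faces at -20 and +15 degrees; the mic array localises speech at +12 degrees.
scores = coincidence_scores(12.0, [-20.0, 15.0])
speaker = int(np.argmax(scores))   # -> face 1 is tagged as the active speaker
```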
- Published
- 2023
- Full Text
- View/download PDF
15. Toward Proactive Human–Robot Collaborative Assembly: A Multimodal Transfer-Learning-Enabled Action Prediction Approach
- Author
-
Junming Fan, Shufei Li, Lihui Wang, and Pai Zheng
- Subjects
Computer science ,business.industry ,020208 electrical & electronic engineering ,Mobile robot ,02 engineering and technology ,Motion control ,Automation ,Human–robot interaction ,Bottleneck ,Task (project management) ,Personalization ,Control and Systems Engineering ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Transfer of learning ,business - Abstract
Human-robot collaborative assembly (HRCA) is vital for achieving highly flexible automation for mass personalization in today's smart factories. However, existing works in both industry and academia mainly focus on adaptive robot planning and seldom consider human operators' intentions in advance, which hinders the transition of HRCA towards a proactive manner. To overcome this bottleneck, this research proposes a multimodal transfer-learning-enabled action prediction approach, serving as the prerequisite for proactive HRCA. Firstly, a multimodal intelligence-based action recognition approach is proposed to predict ongoing human actions by leveraging the visual stream and the skeleton stream with short-time input frames. Secondly, a transfer-learning-enabled model is adapted to rapidly transfer knowledge learned from daily activities to industrial assembly operations for online operator intention analysis. Thirdly, a dynamic decision-making mechanism, including robotic decision and motion control, is described to allow mobile robots to assist operators in a proactive manner. Lastly, an aircraft bracket assembly task is demonstrated in the lab environment, and the comparative study shows that the proposed approach outperforms other state-of-the-art ones in efficient action prediction.
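A hedged sketch of the transfer-learning step, using a torchvision video backbone pretrained on daily-activity data (Kinetics) as a stand-in for the paper's model; the backbone, source dataset, and number of assembly classes are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# Start from an action model pretrained on daily-activity video.
model = r3d_18(weights="KINETICS400_V1")

# Freeze the pretrained feature extractor, replace the classification head
# with the small set of assembly actions, and fine-tune only the head.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 8)   # e.g. 8 assembly operations

optim = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# training loop over short clip tensors of shape (B, 3, T, H, W) goes here
```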
- Published
- 2022
- Full Text
- View/download PDF
16. Impedance Variation and Learning Strategies in Human–Robot Interaction
- Author
-
Mojtaba Sharifi, Javad K. Mehr, Mahdi Tavakoli, Amir Zakerimanesh, Ali Torabi, and Vivian K. Mushahwar
- Subjects
Lyapunov function ,Admittance ,Computer science ,Movement ,SIGNAL (programming language) ,Stability (learning theory) ,Control engineering ,Robotics ,Human–robot interaction ,Computer Science Applications ,Human-Computer Interaction ,Nonlinear system ,symbols.namesake ,Control and Systems Engineering ,Electric Impedance ,symbols ,Humans ,Learning ,Electrical and Electronic Engineering ,Electrical impedance ,Algorithms ,Software ,Information Systems - Abstract
In this survey, various concepts and methodologies developed over the past two decades for varying and learning the impedance or admittance of robotic systems that physically interact with humans are explored. For this purpose, the assumptions and mathematical formulations for the online adjustment of impedance models and controllers for physical human-robot interaction (HRI) are categorized and compared. In this systematic review, studies on: 1) variation and 2) learning of appropriate impedance elements are taken into account. These strategies are classified and described in terms of their objectives, points of view (approaches), and signal requirements (including position, HRI force, and electromyography activity). Different methods involving linear/nonlinear analyses (e.g., optimal control design and nonlinear Lyapunov-based stability guarantee) and the Gaussian approximation algorithms (e.g., Gaussian mixture model-based and dynamic movement primitives-based strategies) are reviewed. Current challenges and research trends in physical HRI are finally discussed.
- Published
- 2022
- Full Text
- View/download PDF
17. Novel Compliant Control of a Pneumatic Artificial Muscle Driven by Hydrogen Pressure Under a Varying Environment
- Author
-
Thananchai Leephakpreeda and Thanana Nuchkrua
- Subjects
Adaptive control ,Control and Systems Engineering ,Computer science ,Artificial muscle ,Control engineering ,Electrical and Electronic Engineering ,Robust control ,Bayesian inference ,Adaptation (computer science) ,Actuator ,Human–robot interaction ,Parametric statistics - Abstract
A pneumatic artificial muscle (PAM) driven by a metal hydride (MH) is considered as a compact compliant actuator, suitable for broad applications in human-robot interaction (HRI). To address the varying environments characteristic of HRI, a compliant control is introduced. The bottlenecks in improving the performance of compliant control of the PAM actuator are: a) the inherent non-linear dynamics of the PAM; b) parametric and non-linear uncertainties induced by the varying environment; and c) the additional high dimensionality introduced by the MH employed as the driving force for the PAM. We propose a learning-based adaptive robust control (LARC) framework to tackle these challenges, in which a Bayesian learning technique handles the parameter adaptation for the adaptive control. The effectiveness of the LARC has been examined in extensive tracking-control experiments.
- Published
- 2022
- Full Text
- View/download PDF
18. Automated Off-Line Generation of Stable Variable Impedance Controllers According to Performance Specifications
- Author
-
Alberto San-Miguel, Guillem Alenya, Vicenc Puig, European Commission, Agencia Estatal de Investigación (España), Ministerio de Ciencia e Innovación (España), Consejo Superior de Investigaciones Científicas (España), Universitat Politècnica de Catalunya. Doctorat en Automàtica, Robòtica i Visió, Institut de Robòtica i Informàtica Industrial, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial, Universitat Politècnica de Catalunya. SAC - Sistemes Avançats de Control, and Universitat Politècnica de Catalunya. ROBiri - Grup de Percepció i Manipulació Robotitzada de l'IRI
- Subjects
Control and Optimization ,Physical human-robot interaction ,Mechanical Engineering ,Biomedical Engineering ,Optimization and optimal control ,Computer Science Applications ,Human-Computer Interaction ,Interacció persona-robot ,Compliance and impedance control ,Artificial Intelligence ,Control and Systems Engineering ,Computer Vision and Pattern Recognition ,Informàtica::Robòtica [Àrees temàtiques de la UPC] ,Human-robot interaction - Abstract
In this letter, we propose a novel methodology for the off-line generation of stable Variable Impedance Controllers, considering any parameter modulation law expressed as a function of signals exogenous to the robot, e.g., the force exerted by the human in a collaborative task. The aim is to find the optimal controller according to a desired trade-off between accuracy and control effort. Each controller is formulated as a polytopic Linear Parameter Varying system consisting of a set of vertex systems at the limit operation points. The stability and operating properties can then be assessed through Linear Matrix Inequalities, from which an optimality index is obtained. This index is used by a genetic optimisation algorithm to iteratively generate new controller solutions converging towards the best one. To exemplify our method, we choose a case study of modulation laws for tasks that require physical interaction between human and robot. Generated solutions for different trade-offs are evaluated in an object handover scenario using a 7-DoF WAM robotic manipulator., This work was supported in part by MCIN/ AEI /10.13039/501100011033 and the European Union NextGenerationEU/PRTR under the Project ROB-IN PLEC2021-007859, and in part by the European Union NextGenerationEU/PRTR through CSIC's Thematic Platforms (PTI+ Neuro-Aging).
- Published
- 2022
- Full Text
- View/download PDF
19. Unified Intention Inference and Learning for Human–Robot Cooperative Assembly
- Author
-
Max Q.-H. Meng, Erli Lyu, Tingting Liu, and Jiaole Wang
- Subjects
Structure (mathematical logic) ,Control and Systems Engineering ,Human–computer interaction ,Computer science ,Incremental learning ,Inference ,Robot ,Electrical and Electronic Engineering ,Set (psychology) ,Hidden Markov model ,Human–robot interaction - Abstract
Collaborative robots are widely utilized in intelligent manufacturing to cooperate with humans to accomplish different assembly tasks. To improve the efficiency of human-robot cooperation, robots should be able to recognize human intentions and proactively provide necessary assistance. The major challenge for current human intention recognition methods is that they only deal with known human intentions of predefined tasks and lack the ability to learn unknown intentions corresponding to new tasks. This article introduces an evolving hidden Markov model (EHMM)-based approach that learns new human intentions incrementally, carrying out structure and parameter updating based on the observed sequence in parallel with recognition. The incremental learning ability makes it applicable in dynamic environments with changing tasks. A set of assistive execution policies has been developed for the robot to provide appropriate assistance to the human partner in real time, based on the intention recognition results. Experiments have been carried out to verify the effectiveness of our approach in human-robot cooperative assembly tasks. The results show very high recognition accuracy (≥95.45%), and the human subjects expressed high satisfaction with the intention learning ability of the proposed approach.
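The recognition half of this approach reduces to scoring observation sequences against per-intention HMMs with the forward algorithm, sketched below for one toy intention; the EHMM's structure-growing update for unknown intentions is not reproduced here, and all probabilities are made up:

```python
import numpy as np

# Toy HMM for one known intention: states = assembly steps, observations =
# discretised human motions.
A = np.array([[0.8, 0.2, 0.0],     # step-to-step transition probabilities
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
B = np.array([[0.9, 0.1],          # P(observation | step)
              [0.2, 0.8],
              [0.7, 0.3]])
pi = np.array([1.0, 0.0, 0.0])     # start at the first step

def log_likelihood(obs):
    """Forward algorithm; the intention whose HMM scores highest wins."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

print(log_likelihood([0, 1, 0]))   # score this observed motion stream
```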
- Published
- 2022
- Full Text
- View/download PDF
20. Low-Impedance Displacement Sensors for Intuitive Physical Human–Robot Interaction: Motion Guidance, Design, and Prototyping
- Author
-
Clément Gosselin and Thierry Laliberte
- Subjects
0209 industrial biotechnology ,Interaction forces ,Computer science ,02 engineering and technology ,Degrees of freedom (mechanics) ,Low impedance ,Human–robot interaction ,Action (physics) ,Motion (physics) ,Displacement (vector) ,Computer Science Applications ,Computer Science::Robotics ,020901 industrial engineering & automation ,Control and Systems Engineering ,Robot ,Electrical and Electronic Engineering ,Simulation - Abstract
This article provides a general framework for the use of low-impedance displacement sensors mounted on the links of a serial robot to provide an intuitive physical human–robot interaction. A general formulation is developed to handle the motion guidance problem, i.e., the mapping of the measured motion of the sensors into the required robot joint motions to provide intuitive responsiveness. The formulation is general and can be applied to any architecture of serial robot with any number of displacement sensors each having an arbitrary number of degrees of freedom. Then, the design of a novel three-degree-of-freedom low-impedance displacement sensor is presented as a particularly effective instantiation of the general concept. Partial force balancing is used to reduce the required elastic return action, thereby ensuring the low impedance of the interaction. A prototype of a three-degree-of-freedom displacement sensor is then introduced. Two such sensors are mounted on the links of a custom-built five-degree-of-freedom robot in order to demonstrate the proposed approach. Experimental results are provided and comparisons with other collaborative robots are given. It is shown that the proposed sensors and motion guidance approach yield very intuitive low-impedance interaction involving very low interaction forces.
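One standard resolved-rate reading of the motion-guidance mapping is q̇ = J⁺ẋ, where ẋ is a velocity command derived from the measured sensor displacement. The sketch below assumes a single end-effector-frame sensor and illustrative gains; the article's general formulation also covers multiple sensors on intermediate links:

```python
import numpy as np

def guidance_step(J, sensor_disp, gain=2.0, dt=0.002):
    """Map a measured sensor displacement to a joint increment for one tick.

    Displacement is scaled into a velocity command, then mapped through the
    Jacobian pseudoinverse (damped variants are also common in practice).
    """
    x_dot = gain * np.asarray(sensor_disp)   # displacement -> velocity command
    q_dot = np.linalg.pinv(J) @ x_dot        # resolved-rate joint velocities
    return q_dot * dt                        # joint increment for this tick

J = np.random.default_rng(0).normal(size=(3, 5))     # stand-in 5-DoF Jacobian
dq = guidance_step(J, sensor_disp=[0.001, 0.0, -0.002])
```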
- Published
- 2022
- Full Text
- View/download PDF
21. A motivational model based on artificial biological functions for the intelligent decision-making of social robots
- Author
-
Marcos Maroto-Gómez, María Malfaz, Álvaro Castro-González, Miguel Ángel Salichs, Ministerio de Ciencia e Innovación (España), and Agencia Estatal de Investigación (España)
- Subjects
Informática ,Motivation ,Control and Optimization ,General Computer Science ,Robótica e Informática Industrial ,Ethology ,Neuroendocrinology ,Social robots ,Human-robot interaction ,Decision-making - Abstract
Modelling the biology behind animal behaviour has attracted great interest in recent years. Nevertheless, neuroscience and artificial intelligence face the challenge of representing and emulating animal behaviour in robots. Consequently, this paper presents a biologically inspired motivational model to control the biological functions of autonomous robots that interact with and emulate human behaviour. The model is intended to produce fully autonomous, natural behaviour that can adapt to both familiar and unexpected situations in human-robot interactions. The primary contribution of this paper is to present novel methods for modelling the robot's internal state to generate deliberative and reactive behaviour, how it perceives and evaluates the stimuli from the environment, and the role of emotional responses. Our architecture emulates essential biological functions such as neuroendocrine responses, circadian and ultradian rhythms, motivation, and affection, to generate biologically inspired behaviour in social robots. Neuroendocrine substances control biological functions such as sleep, wakefulness, and emotion. Deficits in these processes regulate the robot's motivational and affective states, significantly influencing the robot's decision-making and, therefore, its behaviour. We evaluated the model by observing the long-term behaviour of the social robot Mini while interacting with people. The experiment assessed how the robot's behaviour varied and evolved depending on its internal variables and external situations, adapting to different conditions. The outcomes show that an autonomous robot with appropriate decision-making can cope with its internal deficits and unexpected situations, controlling its sleep-wake cycle, social behaviour, affective states, and stress when acting in human-robot interactions. The research leading to these results has received funding from the projects: Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Ministerio de Ciencia, Innovación y Universidades; Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by Agencia Estatal de Investigación (AEI), Spanish Ministerio de Ciencia e Innovación. This publication is part of the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
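A minimal homeostatic sketch of the kind of mechanism the abstract describes: internal variables drift from their ideal values, deficits plus external stimuli form motivations, and the dominant motivation drives behaviour selection. The variables, rates, and winner-take-all rule here are illustrative, not Mini's actual model:

```python
# Illustrative homeostatic loop; names and constants are assumptions.
ideal = {"energy": 1.0, "social": 1.0}
state = {"energy": 0.9, "social": 0.6}
decay = {"energy": 0.01, "social": 0.03}     # rhythm-like drift per tick
stimulus = {"energy": 0.0, "social": 0.4}    # e.g. a user is present

def tick():
    """One decision cycle: drift internal variables, pick dominant motivation."""
    for k in state:
        state[k] = max(0.0, state[k] - decay[k])
    motivation = {k: (ideal[k] - state[k]) + stimulus[k] for k in state}
    return max(motivation, key=motivation.get)

print(tick())   # -> "social": the robot proposes an interactive activity
```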
- Published
- 2023
- Full Text
- View/download PDF
22. Continual Learning for Affective Robotics
- Author
-
Churamani, Nikhil
- Subjects
Continual Learning ,Social Robotics ,Affective Robotics ,Deep Learning ,Facial Expression Recognition ,Neural Networks ,Human Behaviour Understanding ,Affective Computing ,Robotics ,Human-Robot Interaction - Abstract
Recent advancements in Artificial Intelligence (AI) and Human-Robot Interaction (HRI) have enabled robots to be integrated into daily human life. Operating in human-centred environments, these robots need to actively participate in the human ‘affective loop’, sensing and interpreting human socio-emotional behaviours while also learning to respond in a manner that fosters their social and emotional wellbeing. Embedding affective robots with learning mechanisms that enable such a robust understanding of human behaviour as well as their own role in an interaction forms the central focus of affective robotics research. Current Machine Learning (ML)-based solutions for realising affective capabilities in robots, be it robust affect perception or behaviour generation, are primed towards generalisation of application. Pre-trained on volumes of data, these solutions, although enabling a wide variety of applications, are static and unable to adapt sufficiently to the dynamics of real-world interactions. Affective robots, on the other hand, need personalised interaction capabilities that, sensitive to an individual’s socio-emotional behaviour, can adapt affective interactions towards them, expanding their learning on the go to include novel information, while ensuring past knowledge is preserved. Addressing these challenges, this dissertation proposes the novel application of the Continual Learning (CL) paradigm for affective robotics, enabling continual and lifelong adaptation capabilities in robots. It provides the foundational formulations that translate key principles of CL-based adaptation for affective learning in robots. Furthermore, investigating learning at every stage of the ‘affective loop’, it reflects upon the key desiderata for affective robots in terms of continual and personalised affect perception and context-appropriate behaviour generation. Starting with affect perception, this dissertation presents the first extensive benchmark on Continual Facial Expression Recognition (ConFER), evaluating CL-based approaches for continually learning facial expression classes under different learning settings. Despite enabling incremental learning, ConFER does not focus on personalised affect perception, another key requirement for affective robots. To address this, a novel framework is proposed using Continual Learning with Imagination for Facial Expression Recognition (CLIFER). Inspired from the cognitive processes in the human brain focused on memory-based learning and mental imagery, CLIFER incrementally learns facial expression classes while personalising towards individual affective expression, using imagination to augment person-specific learning. CLIFER is shown to achieve state-of-the-art (SOTA) results on different benchmark evaluations. Learning under such dynamic conditions, affective robots need to remain fair and equitable, ensuring no individual (or group) is disadvantaged and the robots’ perception is bias-free. Exploring different domain groups based on gender and race attributes, this dissertation proposes and evaluates CL as an effective strategy to ensure fairness in Facial Expression Recognition (FER) systems, guarding against biases arising from imbalances in data distributions. Benchmark comparisons against SOTA ML-based approaches highlight the superior bias-mitigation capabilities of CL-based methods. 
Finally, this dissertation explores sensing and adapting to human affective behaviour during wellbeing coaching sessions as an application scenario for continually learning affective robots. Enabling personalised human-robot interactions, the Pepper robot is embedded with CLIFER-based facial affect perception, allowing it to personalise its learning towards individual affective behaviour, dynamically adapting the interaction flow to generate naturalistic responses, sensitive to the participants’ affective state. To evaluate such continual personalisation ability in Pepper, a user study is conducted with 20 participants demonstrating that using CL-based personalisation significantly improves the subjective experience of the participants interacting with Pepper. The theoretical formulations, benchmarks, and frameworks presented in this dissertation initiate a novel field of enquiry exploring the benefits of CL-based learning for affective robots. This dissertation aims to create a stepping stone for affective robotics research to consider taking a continual and personalised learning approach towards building fully autonomous and adaptive robots that are purposeful and engaging in their interactions with human users.
- Published
- 2023
- Full Text
- View/download PDF
23. Collision Design
- Author
-
Joe Marshall, Paul Tennent, Christine Li, Claudia Núñez Pacheco, Rachael Garrett, Vasiliki Tsaknaki, Kristina Höök, Praminda Caleb-Solly, and Steven David Benford
- Subjects
human-robot interaction ,Human Computer Interaction ,Människa-datorinteraktion (interaktionsdesign) ,collision ,risk - Abstract
Collision, "the violent encounter of a moving body with another", is poorly understood in HCI. When we discuss people colliding with the physical artifacts we create, or colliding with each other while using our systems, this is primarily treated as a hazard, something which we should design to avoid. However many other human activities involve situations where deliberate exposure to risk of collision may in fact have positive aspects. In this paper we discuss how the ’risk matrix’, a widely used risk-management tool, which categorizes risks in terms of likelihood and severity, may limit interaction in unintended ways. We discuss reframings of this matrix in relation to design concepts of ’adventure’, ’disempowerment/agency’ and ’consent’. and show that a range of design spaces for collisions exist which may be fruitful to explore. QC 20230628
- Published
- 2023
- Full Text
- View/download PDF
24. A robotic system for solo surgery in flexible ureteroscopy: development and evaluation with clinical users
- Author
-
Schlenk, Christopher, Hagmann, Katharina, Steidle, Florian, Oliva Maza, Laura, Kolb, Alexander, Hellings, Anja, Schoeb, Dominik, Stefan, Klodmann, Julian, Miernik, Arkadiusz, and Albu-Schäffer, Alin Olimpiu
- Subjects
User study ,Physical human-robot interaction ,Kidney stones ,Biomedical Engineering ,Health Informatics ,General Medicine ,Computer Graphics and Computer-Aided Design ,Computer Science Applications ,Surgical robot ,Flexible ureteroscopy (fURS) ,DLR MIRO ,Radiology, Nuclear Medicine and imaging ,Surgery ,Computer Vision and Pattern Recognition - Abstract
Purpose: The robotic system CoFlex for kidney stone removal via flexible ureteroscopy (fURS) by a single surgeon (solo surgery, abbreviated SSU) is introduced. It combines a versatile robotic arm and a commercially available ureteroscope to enable gravity compensation and safety functions like virtual walls. The haptic feedback from the operation site is comparable to manual fURS, as the surgeon actuates all ureteroscope DoF manually. Methods: The system hardware and software as well as the design of an exploratory user study on the simulator model with non-medical participants and urology surgeons are described. For each user study task both objective measurements (e.g., completion time) and subjective user ratings of workload (using the NASA-TLX) and usability (using the System Usability Scale SUS) were obtained. Results: CoFlex enabled SSU in fURS. The implemented setup procedure resulted in an average added setup time of 341.7 ± 71.6 s, a NASA-TLX value of 25.2 ± 13.3 and a SUS value of 82.9 ± 14.4. The ratio of inspected kidney calyces remained similar for robotic (93.68 %) and manual endoscope guidance (94.74 %), but the NASA-TLX values were higher (58.1 ± 16.0 vs. 48.9 ± 20.1) and the SUS values lower (51.5 ± 19.9 vs. 63.6 ± 15.3) in the robotic scenario. SSU in the fURS procedure increased the overall operation time from 1173.5 ± 355.7 s to 2131.0 ± 338.0 s, but reduced the number of required surgeons from two to one. Conclusions: The evaluation of CoFlex in a user study covering a complete fURS intervention confirmed the technical feasibility of the concept and its potential to reduce surgeon working time. Future development steps will enhance the system ergonomics, minimize the users' physical load while interacting with the robot and exploit the logged data from the user study to optimize the current fURS workflow.
- Published
- 2023
- Full Text
- View/download PDF
25. Can We Empower Attentive E-reading with a Social Robot? An Introductory Study with a Novel Multimodal Dataset and Deep Learning Approaches
- Author
-
Yoon Lee and Marcus Specht
- Subjects
Deep Learning ,Novel dataset ,Attention Self-regulation ,E-reading ,Human-Robot Interaction - Abstract
Reading on digital devices has become commonplace, yet it often poses challenges to learners' attention. In this study, we hypothesized that allowing learners to reflect on their reading phases with an empathic social robot companion might enhance attention in e-reading. To verify this assumption, we collected a novel dataset (SKEP) in an e-reading setting with social robot support. It contains 25 multimodal features from various sensors and logged data that are direct and indirect cues of attention. Based on the SKEP dataset, we comprehensively compared HRI-based (treatment) and GUI-based (control) feedback and obtained insights for intervention design. Based on human annotation of nearly 40 hours of video streams from 60 subjects, we developed a machine learning model to capture attention-regulation behaviors in e-reading. We exploited a two-stage framework to recognize learners' observable self-regulatory behaviors and conducted attention analysis. The proposed system showed promising performance, with high prediction results for e-reading with HRI: 72.97% accuracy in recognizing attention-regulation behaviors, 74.29% in predicting knowledge gain, 75.00% for perceived interaction experience, and 75.00% for perceived social presence. We believe our work can inspire the future design of HRI-based e-reading and its analysis.
- Published
- 2023
- Full Text
- View/download PDF
26. Robotic Coaches Delivering Group Mindfulness Practice at a Public Cafe
- Author
-
Minja Axelsson, Micol Spitale, and Hatice Gunes
- Subjects
human-robot interaction ,mindfulness ,meditation ,group interaction ,mental well-being ,public space ,robotic coach - Abstract
Group meditation is known to keep people motivated and committed over longer periods of time, as compared to individual practice. Robotic coaching is a promising avenue for engaging people in group meditation and mindfulness exercises. However, the deployment of robotic coaches to deliver group mindfulness sessions in real-world settings is very scarce. We present the first steps in deploying a robotic mindfulness coach at a public cafe, where participants could join robot-led meditation sessions in a group setting. We conducted two studies with two robotic coaches: the toy-like Misty II robot for 4 weeks (n = 4), and the child-like QTrobot for 3 weeks (n = 3). This paper presents an exploratory qualitative analysis of the data collected via group discussions after the sessions, and researcher observations during the sessions. Additionally, we discuss the lessons learned and future work related to deploying a robotic coach in a real-world group setting.
- Published
- 2023
- Full Text
- View/download PDF
27. Towards a Computational Approach for Proactive Robot Behaviour in Assistive Tasks
- Author
-
Silvia Rossi, Antonio Andriella, and Ilenia Cucciniello
- Subjects
Socially Assistive Robots ,Adaptive Behaviour ,Proactive Behaviour ,Human-Robot Interaction - Abstract
While most current work has focused on developing adaptive techniques to respond to human-initiated inputs (what behaviour to perform), very few studies have explored how to proactively initiate an interaction (when to perform a given behaviour). The selection of the proper action, its timing, and its confidence are essential features for the success of proactive behaviour, especially in collaborative and assistive contexts. In this work, we present the initial phase towards the deployment of a robotic system that will be capable of learning what, when, and with what confidence to provide assistance to users playing a sequential memory game.
- Published
- 2023
- Full Text
- View/download PDF
28. The integration of dogs into collaborative human-robot teams - An applied ethological approach
- Author
-
Linda Gerencsér
- Subjects
Engineering ,business.industry ,Artificial intelligence ,business ,Human–robot interaction - Published
- 2023
- Full Text
- View/download PDF
29. Telexistence-Based Remote Maintenance for Marine Engineers
- Author
-
Mazeas, Damien, Erkoyuncu, John Ahmet, and Noël, Frédéric
- Subjects
human-robot interaction ,telexistence ,remote control ,virtual reality ,maintenance - Abstract
Remote work practice has seen significant developments in the field of maintenance, which is beneficial in the maritime sector. The defense industry is investigating the utilization of unmanned vehicle systems (UVS) to fulfil demands for efficiency and safety in field tasks that heavily rely on equipment usage. Telexistence technology affords marine engineers the capability to conduct inspections and repairs as if physically present. This study presents a scalable framework for evaluating the feasibility of conducting traditional machinery-space maintenance through telexistence capabilities. Our case study employs a physical robot for real-motion behaviors and a simulated virtual environment. Maintenance scenarios were developed from two experts' responses to a contextual questionnaire. In future steps, the study will compare telexistence with direct teleoperation in terms of presence, workload, and usability for vessel machinery-space maintenance. Funding: Ministry of Defence (UK); Ministry of Defence (France); DSTL.
- Published
- 2023
- Full Text
- View/download PDF
30. Simultaneous estimation of joint angle and interaction force towards sEMG-driven human-robot interaction during constrained tasks
- Author
-
Qin Zhang, Caihua Xiong, Qining Zhang, and Li Fang
- Subjects
Coupling ,medicine.diagnostic_test ,Computer science ,business.industry ,Cognitive Neuroscience ,Term memory ,Feature extraction ,Electromyography ,Motion (physics) ,Human–robot interaction ,Computer Science Applications ,Artificial Intelligence ,Joint angle ,medicine ,Computer vision ,Artificial intelligence ,business ,Decoding methods - Abstract
Humans have excellent motor capability and performance in completing various manipulation tasks. In some tasks, such as tightening or loosening a screw with a screwdriver, the motion is accompanied by force exertion on the environment (that is, constrained motion). To obtain natural human-robot interaction (HRI) as the human interacts and collaborates with the environment, interpreting the human's intention in terms of both motion and interaction force is meaningful for carrying out constrained tasks. This paper proposes a long short-term memory (LSTM) network-based decoding method for the simultaneous estimation of human motion and interaction force from surface electromyography (sEMG) signals. The sEMG recorded from the forearm muscles is used to decode the human's motion intention. In order to extract smooth features from the non-stationary sEMG signals, a Bayesian filter is applied instead of traditional time-domain feature extraction. In real-time experiments on eight subjects, the LSTM-based decoding method achieved high accuracy for motion estimates (91.7%) and force estimates (96.1%), despite the muscle coupling and non-stationary mapping inherent in such constrained tasks. This indicates that the estimated motion and interaction force can be further applied to HRI in accomplishing such constrained tasks.
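A sketch of the decoder architecture the abstract implies: an LSTM over smoothed sEMG feature sequences with two regression heads, one per output. Layer sizes, channel count, and window length below are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class EMGDecoder(nn.Module):
    """LSTM over smoothed sEMG features with two regression heads: one for
    joint angle and one for interaction force (the paper feeds
    Bayesian-filtered envelopes of forearm sEMG; sizes here are assumed)."""
    def __init__(self, n_channels=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.angle_head = nn.Linear(hidden, 1)
        self.force_head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, channels)
        h, _ = self.lstm(x)
        last = h[:, -1]                    # take the final time step
        return self.angle_head(last), self.force_head(last)

model = EMGDecoder()
angle, force = model(torch.randn(4, 200, 8))   # 4 windows of 200 samples
```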
- Published
- 2022
- Full Text
- View/download PDF
31. A Systematic Review of Experimental Work on Persuasive Social Robots
- Author
-
Baisong Liu, Daniel Tetteroo, and Panos Markopoulos
- Subjects
Human-Computer Interaction ,Philosophy ,General Computer Science ,Social Psychology ,Control and Systems Engineering ,Persuasion ,Social robotics ,Behaviour change ,Electrical and Electronic Engineering ,Human–robot interaction - Abstract
There is a growing body of experimental work on social robotics (SR) used for persuasive purposes. We report a comprehensive review of persuasive social robotics research, with the aim of better informing the design of such robots by summarizing the literature on factors impacting their persuasiveness. From 54 papers, we extracted the SR design features evaluated in the studies and the evidence for their efficacy. We identified five main categories among the factors evaluated: modality, interaction, social character, context, and persuasive strategies. Our literature review finds generally consistent effects for factors in modality, interaction, and context, whereas more mixed results were found for social character and persuasive strategies. The review further summarizes findings on the interaction effects of multiple factors on the persuasiveness of social robots. Finally, based on the analysis of the reviewed papers, we discuss suggestions for designing and evaluating factor expression, and the potential of qualitative methods and longer-term studies.
- Published
- 2022
- Full Text
- View/download PDF
32. An adaptive decision-making system supported on user preference predictions for human–robot interactive communication
- Author
-
Marcos Maroto-Gómez, Álvaro Castro-González, José Carlos Castillo, María Malfaz, Miguel Ángel Salichs, Ministerio de Ciencia, Innovación y Universidades (España), and Universidad Carlos III de Madrid
- Subjects
Informática ,Human-Computer Interaction ,Robótica e Informática Industrial ,Preference learning ,Personalised robotics ,Social robots ,Adaptation ,Human-robot interaction ,Autonomous decision-making ,Computer Science Applications ,Education - Abstract
Part of a collection: Personalization and Adaptation in Human-Robot Interactive Communication. Adapting to dynamic environments is essential for artificial agents, especially those aiming to communicate with people interactively. In this context, a social robot that adapts its behaviour to different users and proactively suggests their favourite activities may produce more successful interactions. In this work, we describe how the autonomous decision-making system embedded in our social robot Mini can produce a personalised interactive communication experience by considering the preferences of the user the robot interacts with. We compared the performance of Top Label as Class and Ranking by Pairwise Comparison, two promising algorithms in the area, to find the one that best predicts user preferences. Although both algorithms provide robust preference predictions, we integrated Ranking by Pairwise Comparison since it provides better estimates. The proposed method allows the robot's autonomous decision-making system to operate in different modes, balancing activity exploration with the selection of favourite entertainment activities. The operation of the preference learning system is shown in three real case studies in which the decision-making system behaves differently depending on the user the robot is facing. We then conducted a human-robot interaction experiment to investigate whether users perceive the personalised selection of activities as more appropriate than random selection. The results show that participants found the personalised activity selection more appropriate, which improved the robot's likeability and perceived intelligence. The research leading to these results received funding from the project Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), funded by the Ministerio de Ciencia, Innovación y Universidades, and from Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2022).
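For readers unfamiliar with Ranking by Pairwise Comparison, the following is a toy Python sketch of the general idea only (one binary classifier per activity pair, with the final ranking obtained by vote counting); the features, model class, and data layout are illustrative assumptions, not the system integrated in Mini.

    from itertools import combinations
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def rank_by_pairwise(X, prefs, x_new, labels):
        # X: user-feature matrix (n_users, n_features); prefs: preference
        # scores per user and activity (n_users, n_activities). One pairwise
        # classifier votes on which activity the new user x_new prefers.
        votes = np.zeros(len(labels))
        for a, b in combinations(range(len(labels)), 2):
            y = (prefs[:, a] > prefs[:, b]).astype(int)   # 1 if a beats b
            if y.min() == y.max():                        # unanimous pair
                votes[a if y[0] == 1 else b] += 1
                continue
            clf = LogisticRegression().fit(X, y)
            if clf.predict(x_new.reshape(1, -1))[0] == 1:
                votes[a] += 1
            else:
                votes[b] += 1
        return [labels[i] for i in np.argsort(-votes)]    # best activity first

    # Toy usage with random data: 10 known users, 3 profile features, 4 activities.
    X = np.random.rand(10, 3)
    prefs = np.random.rand(10, 4)
    print(rank_by_pairwise(X, prefs, np.random.rand(3), ["dance", "quiz", "news", "music"]))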
- Published
- 2022
- Full Text
- View/download PDF
33. RRT* and Goal-Driven Variable Admittance Control for Obstacle Avoidance in Manual Guidance Applications
- Author
-
Davide Bazzi, Giorgio Priora, Andrea Maria Zanchettin, and Paolo Rocco
- Subjects
Human-Computer Interaction ,Human-robot collaboration ,Control and Optimization ,Physical human-robot interaction ,Compliance and impedance control ,Artificial Intelligence ,Control and Systems Engineering ,Mechanical Engineering ,Biomedical Engineering ,Computer Vision and Pattern Recognition ,Computer Science Applications
- Published
- 2022
- Full Text
- View/download PDF
34. Bidirectional Human–Robot Bimanual Handover of Big Planar Object With Vertical Posture
- Author
-
Wei He, Zichen Yan, Jiashu Li, and Fei Chen
- Subjects
0209 industrial biotechnology ,Computer science ,GRASP ,02 engineering and technology ,Object (computer science) ,Human–robot interaction ,Task (computing) ,020901 industrial engineering & automation ,Handover ,Control and Systems Engineering ,Control theory ,Robot ,Electrical and Electronic Engineering ,Focus (optics) ,Simulation - Abstract
Object handover is one of the basic abilities a robot needs to interact with humans. Most previous works focus only on limited handover scenarios in which the robot uses one hand to give small objects to the human. In this article, we design a bidirectional bimanual handover system that enables the robot both to give and to receive a big planar object held with a vertical grasp posture. In addition to the basic handover function, the system integrates a position-adjustment mechanism to improve the human's experience. According to the task state, the system operates in four modes; in each mode, the robot performs a subtask and switches to the next mode at the appropriate time. We propose a two-finger grip-force controller and a dual-arm admittance neural-network controller to generate the robot's actual motions. Using specific locating, trajectory-planning, and signal-identification methods, we implement the designed handover system on a Baxter robot. The system is tested on two wooden plates of different widths, thicknesses, and weights. The results show that the robot can perform the handover task safely and effectively with the designed handover system.
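The abstract does not reproduce the controller equations, but a generic one-degree-of-freedom admittance law of the kind such handover systems build on can be sketched as follows; the gains and time step are illustrative, not the paper's values.

    import numpy as np

    def admittance_step(f_ext, v, M=2.0, D=20.0, dt=0.01):
        # One discrete step of the 1-DoF admittance law M*dv/dt + D*v = f_ext:
        # the arm yields to the human's measured force during the handover.
        a = (f_ext - D * v) / M
        return v + a * dt            # commanded Cartesian velocity

    v = 0.0
    for f in [5.0, 5.0, 2.0, 0.0]:   # measured interaction forces [N]
        v = admittance_step(f, v)    # velocity rises under load, decays to rest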
- Published
- 2022
- Full Text
- View/download PDF
35. Visual Goal Human-Robot Communication Framework With Few-Shot Learning: A Case Study in Robot Waiter System
- Author
-
Nat Dilokthanakul, Guntitat Sawadwuthikul, Poramate Manoonpong, Tanyatep Tothong, Puchong Soisudarat, Sarana Nutanong, and Thanawat Lodkaew
- Subjects
Service (systems architecture) ,Computer science ,Event (computing) ,Active learning (machine learning) ,Mobile robot ,Human–robot interaction ,Computer Science Applications ,Control and Systems Engineering ,Robustness (computer science) ,Human–computer interaction ,Active learning ,Robot ,Electrical and Electronic Engineering ,Information Systems - Abstract
A conventionally adopted method for operating a waiter robot is static position control, where predefined goal positions are marked on a map. However, this solution is not optimal in dynamic settings, such as a coffee shop or an outdoor catering event, because customers often change their positions. This article explores an alternative human-robot interface design in which a human operator instead communicates the identity of the customer to the robot. Inspired by how humans communicate, we propose a framework for communicating a visual goal to the robot through interactive two-way communication. The framework exploits concepts from two machine learning domains: human-in-the-loop machine learning, where active learning is used to acquire informative data, and deep metric learning, where a suitable embedding can improve the learning ability of a classifier. We also propose novel class-imbalance handling techniques, which actively alleviate the class imbalance problem found to be important in this mode of communication. The framework is evaluated using publicly available pedestrian datasets. We demonstrate that the proposed framework reduces the number of required two-way interactions and increases the robustness of the predictive model. We successfully implement the framework on a mobile robot for a delivery service in a cafe-like environment. Through online visual goal human-robot communication, the robot can detect, recognize, and autonomously navigate to the target customer.
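As a minimal illustration of the human-in-the-loop active-learning step mentioned above (not the authors' implementation), an uncertainty-sampling query can be as simple as asking the operator to label the candidate with the highest predictive entropy:

    import numpy as np

    def query_most_uncertain(probs):
        # Pick the pedestrian image whose class probabilities are closest to
        # uniform (highest entropy) and route it to the human for labelling.
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
        return int(np.argmax(entropy))

    probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.7, 0.3]])
    idx = query_most_uncertain(probs)   # -> 1, the least confident candidate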
- Published
- 2022
- Full Text
- View/download PDF
36. Learn How to Assist Humans Through Human Teaching and Robot Learning in Human–Robot Collaborative Assembly
- Author
-
Yunyi Jia, Weitian Wang, Yi Sun, and Yi Chen
- Subjects
Human-Computer Interaction ,Control and Systems Engineering ,Computer science ,Human–computer interaction ,Electrical and Electronic Engineering ,Robot learning ,Software ,Human–robot interaction ,Computer Science Applications
- Published
- 2022
- Full Text
- View/download PDF
37. Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety
- Author
-
Samira, Rasouli, Garima, Gupta, Elizabeth, Nilsen, and Kerstin, Dautenhahn
- Subjects
Human-Computer Interaction ,Philosophy ,General Computer Science ,Social Psychology ,Control and Systems Engineering ,Psychological interventions ,Social robots ,Electrical and Electronic Engineering ,Social anxiety disorder ,Article ,Human–robot interaction - Abstract
Social anxiety disorder, or social phobia, is a condition characterized by debilitating fear and avoidance of different social situations. We provide an overview of social anxiety and of evidence-based behavioural and cognitive treatment approaches for this condition. However, treatment avoidance and attrition are high in this clinical population, which calls for innovative approaches, including computer-based interventions, that could minimize barriers to treatment and enhance treatment effectiveness. After reviewing existing assistive technologies for mental health interventions, we provide an overview of how social robots have been used in many clinical interventions. We then propose to integrate social robots into conventional behavioural and cognitive therapies for both children and adults who struggle with social anxiety. We categorize the different therapeutic roles that social robots can potentially play in activities rooted in conventional therapies for social anxiety and oriented towards symptom reduction, social skills development, and improvement in overall quality of life. We discuss possible applications of robots in this context through four scenarios. These scenarios are meant as ‘food for thought’ for the research community, which we hope will inspire future research. We discuss the risks and concerns of using social robots in clinical practice. This article concludes by highlighting the potential advantages and limitations of integrating social robots into conventional interventions to improve accessibility and the standard of care, and by outlining future steps for this research direction. Clearly recognizing the need for future empirical work in this area, we propose that social robots may be an effective component of robot-assisted interventions for social anxiety, not replacing but complementing the work of clinicians. We hope that this article will spark new research and research collaborations in the highly interdisciplinary field of robot-assisted interventions for social anxiety.
- Published
- 2022
- Full Text
- View/download PDF
38. Human Preferences for Robot Eye Gaze in Human-to-Robot Handovers
- Author
-
Tair Faibish, Alap Kshirsagar, Guy Hoffman, and Yael Edan
- Subjects
Human-Computer Interaction ,Philosophy ,Robot eye gaze ,General Computer Science ,Social Psychology ,Control and Systems Engineering ,Human-human-handovers ,Non-verbal communication ,Human-robot handovers ,Electrical and Electronic Engineering ,Human-robot interaction ,Article - Abstract
This paper investigates humans' preferences for a robot's eye-gaze behavior during human-to-robot handovers. We studied gaze patterns for all three phases of the handover process: reach, transfer, and retreat, whereas previous work focused only on the reaching phase. Additionally, we investigated whether the object's size or fragility or the human's posture affects preferences for the robot's gaze. A public dataset of human-human handovers was analyzed to obtain the most frequent gaze behaviors that human receivers perform; these were then used to program the robot's receiver gaze behaviors. In two sets of user studies (video and in-person), a collaborative robot exhibited these gaze behaviors while receiving an object from a human. In the video studies, 72 participants watched and compared videos of handovers between a human actor and a robot demonstrating each of the three gaze behaviors. In the in-person studies, a different set of 72 participants physically performed object handovers with the robot and evaluated their perception of the handovers for the robot's different gaze behaviors. Results showed that, for both observers of and participants in a handover, when the robot exhibited Face-Hand-Face gaze (gazing at the giver's face, then at the giver's hand during the reach phase, and back at the giver's face during the retreat phase), participants considered the handover to be more likable, anthropomorphic, and communicative of timing (p < 0.0001).
- Published
- 2022
- Full Text
- View/download PDF
39. Is a Wizard-of-Oz Required for Robot-Led Conversation Practice in a Second Language?
- Author
-
Olov Engwall, Ronald Cumbal, and Jose David Aguas Lopes
- Subjects
Human-Computer Interaction ,Conversational practice ,Philosophy ,General Computer Science ,Social Psychology ,Dialogue management for spoken human-robot interaction ,Control and Systems Engineering ,Robot-assisted language learning ,Non-native speech recognition ,Electrical and Electronic Engineering ,Språkteknologi (språkvetenskaplig databehandling) ,Language Technology (Computational Linguistics) - Abstract
The large majority of previous work on human-robot conversations in a second language has been performed with a human wizard-of-Oz. The reasons are that automatic speech recognition of non-native conversational speech is considered to be unreliable, and that the dialogue management task of selecting robot utterances that are adequate at a given turn is complex in social conversations. This study therefore investigates whether robot-led conversation practice in a second language with pairs of adult learners could be managed by an autonomous robot. We first investigate how correct and understandable transcriptions of second-language learner utterances are when made by a state-of-the-art speech recogniser. We find both a relatively high word error rate (41%) and that a substantial share (42%) of the utterances are judged incomprehensible or only partially understandable by a human reader. We then evaluate how adequate the robot utterance selection is when performed manually based on the speech recognition transcriptions, or autonomously using (a) predefined sequences of robot utterances, (b) a general state-of-the-art language model that selects utterances based on learner input or the preceding robot utterance, or (c) a custom-made statistical method trained on observations of the wizard's choices in previous conversations. It is shown that adequate or at least acceptable robot utterances are selected by the human wizard in most cases (96%), even though the ASR transcriptions have a high word error rate. Further, the custom-made statistical method performs as well as manual selection of robot utterances based on ASR transcriptions. We also found that the interaction strategy the robot employed, which differed in how much the robot maintained the initiative in the conversation and in whether the focus of the conversation was on the robot or the learners, had marginal effects on the word error rate and understandability of the transcriptions but larger effects on the adequateness of the utterance selection. Autonomous robot-led conversations may hence work better with some robot interaction strategies.
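The custom-made statistical selector is described only at a high level here; a toy stand-in that mimics the idea (learning to map the learners' transcribed turn to the wizard's next utterance choice) might look like the following, with the features, example data, and model all being placeholder assumptions:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training pairs: ASR transcription of the learners' turn
    # and the utterance category the wizard chose next.
    transcripts = ["i like to cook food", "we watch film yesterday", "sport is fun"]
    wizard_choice = ["ask_food_details", "ask_movie_title", "ask_sport_kind"]

    selector = make_pipeline(TfidfVectorizer(), LogisticRegression())
    selector.fit(transcripts, wizard_choice)
    print(selector.predict(["i cook pasta often"]))   # -> likely "ask_food_details"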
- Published
- 2022
- Full Text
- View/download PDF
40. Robust Multi-User In-Hand Object Recognition in Human-Robot Collaboration Using a Wearable Force-Myography Device
- Author
-
Eran Bamani, Inbar Ben-David, Avishai Sintov, and Nadav D. Kahanowich
- Subjects
Class (computer programming) ,Control and Optimization ,business.industry ,Computer science ,Mechanical Engineering ,Biomedical Engineering ,Cognitive neuroscience of visual object recognition ,Wearable computer ,Image segmentation ,Object (computer science) ,Autoencoder ,Human–robot interaction ,Computer Science Applications ,Data modeling ,Human-Computer Interaction ,Artificial Intelligence ,Control and Systems Engineering ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business - Abstract
Applicable human-robot collaboration requires intuitive recognition of human intention during shared work. A grasped object, such as a tool held by the human, provides vital information about the upcoming task. In this letter, we explore the use of a wearable device to non-visually recognize objects within the human hand across various possible grasps. The device is based on force myography (FMG), in which simple and affordable force sensors measure perturbations of the forearm muscles. We propose a novel deep neural network architecture termed Flip-U-Net, inspired by the familiar U-Net architecture used for image segmentation. The Flip-U-Net is trained on data collected from several human participants and with multiple objects of each class. Data are collected while manipulating the objects between different grasps and arm postures, pre-processed with data augmentation, and also used to train a variational autoencoder for dimensionality-reduction mapping. While prior work did not provide a transferable FMG-based model, we show that the proposed network can classify objects grasped by multiple new users without additional training effort. Experiments with 12 test participants show a classification accuracy of approximately 95% over multiple grasps and objects. Correlations between accuracy and various anthropometric measures are also presented. Furthermore, we show that the model can be fine-tuned to a particular user based on an anthropometric measure.
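The exact Flip-U-Net layout is not given in this abstract, so the following is only a placeholder sketch of the task's input-output shape: a small PyTorch network mapping one reading from a hypothetical 15-sensor FMG band to an object class. The sensor count and class count are assumptions.

    import torch
    import torch.nn as nn

    # Stand-in for the Flip-U-Net (architecture details not reproduced here):
    # maps one FMG reading from 15 forearm force sensors to an object class.
    model = nn.Sequential(
        nn.Linear(15, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 6),                 # six hypothetical object classes
    )
    logits = model(torch.randn(1, 15))
    pred = logits.argmax(dim=1)           # predicted in-hand object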
- Published
- 2022
- Full Text
- View/download PDF
41. Shared Control With Efficient Subgoal Identification and Adjustment for Human–Robot Collaborative Tasks
- Author
-
Prabhakar R. Pagilla and Zongyao Jin
- Subjects
Automatic control ,Computer science ,business.industry ,Process (computing) ,Robotics ,Human–robot interaction ,Task (project management) ,Hyperrectangle ,Identification (information) ,Control and Systems Engineering ,Metric (mathematics) ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
In this article, we describe a novel method to facilitate subgoal identification from a robotic task demonstrated by a human operator. The method defines a unified metric to quantify the human operator's commands, which can be employed to effectively extract subgoal distributions and identify subgoals. We then address the problem of online subgoal adjustment for shared control (SC) applications, which is initiated by the human operator and is necessary to sustain performance in a dynamically changing environment. The adjustment is facilitated by designing a hyperrectangle around the subgoal, whose volume is obtained via a hyperbolic slope transition function (HSTF) based on the distance between subgoals. The adjustment actions within the hyperrectangle are encoded by a skill-weighted action integral. A practical implementation of the automatic control input is also provided, in which proper scaling for input blending is considered. The developed SC method is validated on a scaled hydraulic excavator platform with human operators. Experimental results are presented and discussed.
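The paper's HSTF is not reproduced in this abstract; a toy tanh-based stand-in that captures the stated behaviour (hyperrectangle volume growing smoothly with the distance between subgoals) could look like this, with every constant being illustrative rather than the paper's:

    import numpy as np

    def hstf(d, d0=0.5, k=8.0, v_min=0.1, v_max=1.0):
        # Smoothly grow the adjustment volume from v_min to v_max as the
        # inter-subgoal distance d increases past the threshold d0.
        return v_min + (v_max - v_min) * 0.5 * (1.0 + np.tanh(k * (d - d0)))

    for d in (0.2, 0.5, 1.0):
        print(round(hstf(d), 3))   # small volume near subgoals, larger far away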
- Published
- 2022
- Full Text
- View/download PDF
42. Fast User Adaptation for Human Motion Prediction in Physical Human–Robot Interaction
- Author
-
Hee-Seung Moon and Jiwon Seo
- Subjects
FOS: Computer and information sciences ,Control and Optimization ,Computer science ,Process (engineering) ,Biomedical Engineering ,Initialization ,Machine learning ,computer.software_genre ,Human–robot interaction ,Computer Science - Robotics ,Artificial Intelligence ,Adaptive system ,Hidden Markov model ,Adaptation (computer science) ,Artificial neural network ,business.industry ,Mechanical Engineering ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,Robot ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Robotics (cs.RO) ,computer - Abstract
Accurate prediction of human movements is required to enhance the efficiency of physical human-robot interaction. Behavioral differences across users are a crucial factor limiting the prediction of human motion. Although recent neural network-based modeling methods have improved prediction accuracy, most did not consider effective adaptation to different users, employing the same model parameters for all users. To address this insufficiently explored challenge, we introduce a meta-learning framework to facilitate rapid adaptation of the model to unseen users. In this study, we propose a model structure and a meta-learning algorithm specialized to enable fast user adaptation in predicting human movements in cooperative situations with robots. The proposed prediction model comprises shared and adaptive parameters, addressing the user's general and individual movements, respectively. Using only a small amount of data from an individual user, the adaptive parameters are adjusted to enable user-specific prediction through a two-step process: initialization via a separate network and adaptation via a few gradient steps. On a motion dataset of 20 users collaborating with a robotic device, the proposed method outperforms existing meta-learning and non-meta-learning baselines in predicting the movements of unseen users.
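A minimal sketch of the adaptation step only, under the assumption of frozen shared parameters and a small adaptive head tuned with a few gradient steps on the new user's data (layer sizes, step count, and learning rate are illustrative, not the paper's):

    import torch
    import torch.nn as nn

    shared = nn.Linear(12, 32)                      # general movement features (frozen)
    adaptive = nn.Linear(32, 6)                     # user-specific prediction head
    for p in shared.parameters():
        p.requires_grad_(False)

    opt = torch.optim.SGD(adaptive.parameters(), lr=0.01)
    x_user, y_user = torch.randn(16, 12), torch.randn(16, 6)   # small user sample
    for _ in range(5):                              # "a few gradient steps"
        pred = adaptive(torch.relu(shared(x_user)))
        loss = nn.functional.mse_loss(pred, y_user)
        opt.zero_grad()
        loss.backward()
        opt.step()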
- Published
- 2022
- Full Text
- View/download PDF
43. Designing human-robot collaboration (HRC) workspaces in industrial settings: A systematic literature review
- Author
-
Ana Margarida Pinto, Joana Santos, Ana Correia Simões, David Romero, and Sofia Pinheiro
- Subjects
Flexibility (engineering) ,Process management ,Workstation ,Computer science ,Process (engineering) ,Industrial and Manufacturing Engineering ,Human–robot interaction ,law.invention ,Systematic review ,Hardware and Architecture ,Control and Systems Engineering ,Multidisciplinary approach ,law ,Content analysis ,Robot ,Software - Abstract
In the pursuit of increased efficiency, productivity, and flexibility at production lines and their corresponding workstations, manufacturing companies have started to invest heavily in “collaborative workspaces”, where close interaction between humans and robots promises to achieve goals that neither can achieve alone. It is therefore necessary to know the contributions, recommendations, and guidelines that the literature presents for designing a manufacturing workplace in which humans and cobots interact to accomplish defined objectives. These aspects need to be explored in an integrated and multidisciplinary way to maximize human involvement in the decision chain and to promote wellbeing and quality of work. This paper presents a systematic literature review on designing human-robot collaboration (HRC) workspaces for humans and robots in industrial settings. The study covered 252 articles in international journals and conference proceedings published up to 2019; a detailed selection process led to the inclusion of 65 articles for further analysis. A framework representing the complexity levels of the influencing factors present in human-robot interaction (HRI) contexts was developed for the content analysis. Based on this framework, the guidelines and recommendations of the analysed articles are presented in three categories: Category 1, the first level of complexity, considers only one specific influencing factor in the HRI and is split into two sub-categories, human operator and technology; Category 2, the second level of complexity, includes recommendations and guidelines related to human-robot team performance, where several influencing factors are present in the HRI; and Category 3, the third level of complexity, presents recommendations and guidelines for more complex and holistic approaches to HRI. The literature offers contributions from several knowledge areas capable of informing the design of safe, ergonomic, sustainable, and healthy human-centred workplaces in which not only technical but also social and psychophysical aspects of collaboration are considered.
- Published
- 2022
- Full Text
- View/download PDF
44. Data-Driven Human-Robot Interaction Without Velocity Measurement Using Off-Policy Reinforcement Learning
- Author
-
Hamidreza Modares, Yongliang Yang, Donald C. Wunsch, Zihao Ding, and Rui Wang
- Subjects
Computer science ,Control engineering ,Filter (signal processing) ,Robot end effector ,Impedance parameters ,Human–robot interaction ,law.invention ,Task (project management) ,Artificial Intelligence ,Control and Systems Engineering ,law ,Control theory ,Robot ,Reinforcement learning ,Information Systems - Abstract
In this paper, we present a novel data-driven design method for the human-robot interaction (HRI) system, in which a given task is achieved by cooperation between the human and the robot. The presented HRI controller design is a two-level approach consisting of a task-oriented performance-optimization design and a plant-oriented impedance-controller design. The task-oriented design minimizes the human effort and guarantees perfect task tracking in the outer loop, while the plant-oriented design achieves the desired impedance from the human to the robot manipulator's end-effector in the inner loop. Data-driven reinforcement learning techniques are used for performance optimization in the outer loop to assign the optimal impedance parameters. In the inner loop, a velocity-free filter is designed to avoid the need for end-effector velocity measurement. On this basis, an adaptive controller is designed to achieve the desired impedance of the robot manipulator in task space. Simulations and experiments on a robot manipulator verify the efficacy of the presented HRI design framework.
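The inner-loop velocity-free filter is not specified in this abstract; one common surrogate for an unmeasured velocity is a first-order filtered ("dirty") derivative of position, sketched below purely for illustration of the idea, with the filter constant and time step as assumptions:

    import numpy as np

    def dirty_derivative(q, dt=0.001, lam=0.01):
        # Estimate velocity from sampled position q(t) without a velocity
        # sensor: low-pass filter the raw finite difference with time
        # constant lam (requires dt < lam for a stable discrete update).
        v_hat, out, q_prev = 0.0, [], q[0]
        for qi in q[1:]:
            raw = (qi - q_prev) / dt
            v_hat += (dt / lam) * (raw - v_hat)
            out.append(v_hat)
            q_prev = qi
        return np.array(out)

    t = np.arange(0, 1, 0.001)
    v_est = dirty_derivative(np.sin(2 * np.pi * t))   # tracks ~2*pi*cos(2*pi*t)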
- Published
- 2022
- Full Text
- View/download PDF
45. ToD4IR: A Humanised Task-Oriented Dialogue System for Industrial Robots
- Author
-
Chen Li, Xiaochun Zhang, Dimitrios Chrysostomou, and Hongji Yang
- Subjects
Artificial intelligence ,data collection ,Neural Networks ,User experience ,interactive systems ,General Computer Science ,Service robots ,Natural language processing ,General Engineering ,Human – Robot Interaction ,human-robot interaction ,Oral communication ,General Materials Science ,Electrical and Electronic Engineering ,Robots - Abstract
Despite the fact that task-oriented conversation systems have received much attention from the dialogue research community, only a handful have been studied in real-world manufacturing contexts with industrial robots. One stumbling block is the lack of a domain-specific discourse corpus for training such systems. Another difficulty is that earlier attempts to integrate natural language interfaces (such as chatbots) into the industrial sector have primarily focused on task completion rates, whereas dialogue systems designed for social robots prioritize user experience far more than those for industrial robots. To overcome these challenges, we provide the Industrial Robots Domain Wizard-of-Oz dataset (IRWoZ), a fully labelled discourse dataset covering four robotics domains. It delivers simulated discussions between shop-floor workers and industrial robots, comprising 401 dialogues, to promote language-assisted Human-Robot Interaction (HRI) in industrial settings. Small-talk concepts and human-to-human conversation strategies are included to support human-like answer generation and to provide a more natural and adaptable dialogue environment, increasing user experience and engagement. Finally, we propose and evaluate an end-to-end Task-oriented Dialogue system for Industrial Robots (ToD4IR) using two types of pre-trained backbone models, GPT-2 and GPT-Neo, on the IRWoZ dataset. We performed a series of trials to validate ToD4IR's performance in a real manufacturing context. Our experiments demonstrate that ToD4IR performs well on three downstream task-oriented dialogue tasks, i.e., dialogue state tracking, dialogue act generation, and response generation, on the IRWoZ dataset. Our source code for ToD4IR and the IRWoZ dataset are accessible at https://github.com/lcroy/ToD4IR for reproducible research.
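As a minimal illustration of the GPT-2 backbone only (not the ToD4IR pipeline, which adds fine-tuning on IRWoZ plus state-tracking and act-generation stages), a Hugging Face Transformers call that continues a shop-floor prompt looks like this; the prompt text is invented:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Worker: Robot, bring the torque wrench to station 3.\nRobot:"
    ids = tok(prompt, return_tensors="pt").input_ids
    # GPT-2 has no pad token, so reuse EOS to silence the generation warning.
    out = model.generate(ids, max_new_tokens=30, pad_token_id=tok.eos_token_id)
    print(tok.decode(out[0], skip_special_tokens=True))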
- Published
- 2022
- Full Text
- View/download PDF
46. What Norms Are Social Robots Reflecting? A Socio-Legal Exploration on HRI Developers
- Author
-
Tanqueray, Laetitia, Larsson, Stefan, Hakli, R, Mäkelä, P, and Seibt, J
- Subjects
Interaction Technologies ,Other Engineering and Technologies not elsewhere specified ,ethnography ,Social Robots ,Gender Studies ,human-robot interaction ,HRI ,sociology of law ,socio-legal robotics ,data feminism ,socio-legal data feminism ,Social Sciences Interdisciplinary ,Law and Society ,social norms - Abstract
By relying on theory from the sociology of law and data feminism, this study showcases the norms guiding development in human-robot interaction. This qualitative study consists of an ethnography of the HRI 2021 conference and expert interviews, which were merged and analysed using an ethnographic content analysis method in NVivo. The socio-legal data feminist lens makes it possible to pinpoint the lack of clear legal involvement, the reliance on the HRI community to drive development, and the normative impact this has on the overall development of social robots. This study aims to showcase not only the vital role of HRI developers but also the need for more critical scholars in this area.
- Published
- 2023
- Full Text
- View/download PDF
47. Ergonomic human-robot collaboration in industry: A review
- Author
-
Marta Lorenzini, Marta Lagomarsino, Luca Fortini, Soheil Gholami, and Arash Ajoudani
- Subjects
human-robot interaction ,industry ,ergonomics ,Artificial Intelligence ,collaborative robots ,human-robot collaboration ,human factors ,Computer Science Applications - Abstract
In the current industrial context, the importance of assessing and improving workers' health conditions is widely recognised. Both physical and psycho-social factors contribute to jeopardising comfort and well-being, increasing the occurrence of diseases and injuries and affecting workers' quality of life. Human-robot interaction and collaboration frameworks stand out among the possible solutions to prevent and mitigate workplace risk factors. The increasingly advanced control strategies and planning schemes featured by collaborative robots have the potential to foster fruitful and efficient coordination during the execution of hybrid tasks by meeting their human counterparts' needs and limits. To this end, a thorough and comprehensive evaluation of an individual's ergonomics, i.e., the direct effect of workload on the human psycho-physical state, must be taken into account. In this review article, we provide an overview of existing ergonomics assessment tools as well as the monitoring technologies available to drive and adapt a collaborative robot's behaviour. Preliminary attempts at ergonomic human-robot collaboration frameworks are presented next, and state-of-the-art limitations and challenges are discussed. Future trends and promising themes are finally highlighted, aiming to promote safety, health, and equality in workplaces worldwide.
- Published
- 2023
- Full Text
- View/download PDF
48. Can a robot lie? Young children's understanding of intentionality beneath false statements
- Author
-
Giulia Peretti, Federico Manzi, Cinzia Di Dio, Angelo Cangelosi, Paul L. Harris, Davide Massaro, and Antonella Marchetti
- Subjects
Settore M-PSI/04 - PSICOLOGIA DELLO SVILUPPO E PSICOLOGIA DELL'EDUCAZIONE ,children ,lie-mistake ,Developmental and Educational Psychology ,human–robot interaction ,intentionality understanding ,theory of mind - Published
- 2023
- Full Text
- View/download PDF
49. Determinants of Human-Robot Trust during Interactive Norm Teaching: Interactivity Study
- Author
-
Chi, Vivienne Bihe and Malle, Bertram F
- Subjects
FOS: Psychology ,Cognitive Psychology ,Human-Robot Trust ,Psychology ,Experimental Analysis of Behavior ,Social and Behavioral Sciences ,Human-Robot Interaction - Abstract
Does a human's form of interaction with a learning robot in a training setting (interactive teaching vs. observation and critique) predict greater or lesser teacher self-attribution and trust?
- Published
- 2023
- Full Text
- View/download PDF
50. Moral Justifications to Foster Human-Machine Trust
- Author
-
Malle, Bertram F, Phillips, Elizabeth, and Kim, Boyoung
- Subjects
FOS: Psychology ,human-robot interaction ,justification ,moral trust ,Psychology ,trust ,moral blame ,explanation ,Social and Behavioral Sciences - Abstract
AFOSR 2021-2024, Study 3 (DNR replication; toxic leak scenario added; mens rea justifications formulated in terms of belief and desire; Trust 1 measure added before the “should” recommendation, Trust 2 measure added after the agent decision, and Trust 3 measure added after the agent justification; two variants of Explanation and two variants of Justification also used). Human comparison conditions were added to both dilemmas, and a measure of mind perception was added at the end of the study.
- Published
- 2023
- Full Text
- View/download PDF