1,576 results for "Human Robot Interaction"
Search Results
202. Lend a Hand to Service Robots: Overcoming System Limitations by Asking Humans
- Author
- Schüssel, Felix, Walch, Marcel, Rogers, Katja, Honold, Frank, Weber, Michael, Jokinen, Kristiina, editor, and Wilcock, Graham, editor
- Published
- 2017
- Full Text
- View/download PDF
203. Keep It Simple and Sparse: Real-Time Action Recognition
- Author
- Fanello, Sean Ryan, Gori, Ilaria, Metta, Giorgio, Odone, Francesca, Escalante, Hugo Jair, Series editor, Guyon, Isabelle, Series editor, Escalera, Sergio, Series editor, and Athitsos, Vassilis, editor
- Published
- 2017
- Full Text
- View/download PDF
204. Robots: Asleep, Awake, Alone, and in Love
- Author
- Eckersall, Peter, Grehan, Helena, Scheer, Edward, Turner, Cathy, Series editor, Behrndt, Synne, Series editor, Eckersall, Peter, Grehan, Helena, and Scheer, Edward
- Published
- 2017
- Full Text
- View/download PDF
205. An Imitation Framework for Social Robots Based on Visual Input, Motion Sensation, and Instruction
- Author
- Falahi, Mohsen, Shamshirdar, Faraz, Heydari, Mohammad Hosein, Shangari, Taher Abbas, Zhang, Dan, editor, and Wei, Bin, editor
- Published
- 2017
- Full Text
- View/download PDF
206. Human Safety Index Based on Impact Severity and Human Behavior Estimation
- Author
- Garcia Ricardez, Gustavo Alfonso, Yamaguchi, Akihiko, Takamatsu, Jun, Ogasawara, Tsukasa, Zhang, Dan, editor, and Wei, Bin, editor
- Published
- 2017
- Full Text
- View/download PDF
207. A Neurophysiological Assessment of Multi-robot Control During NASA’s Pavilion Lake Research Project
- Author
- Blitch, John G., Kacprzyk, Janusz, Series editor, Savage-Knepshield, Pamela, editor, and Chen, Jessie, editor
- Published
- 2017
- Full Text
- View/download PDF
208. A Neurophysiological Examination of Multi-robot Control During NASA’s Extreme Environment Mission Operations Project
- Author
- Blitch, John G., Kacprzyk, Janusz, Series editor, Savage-Knepshield, Pamela, editor, and Chen, Jessie, editor
- Published
- 2017
- Full Text
- View/download PDF
209. Starting a Conversation by Multi-robot Cooperative Behavior
- Author
- Iio, Takamasa, Yoshikawa, Yuichiro, Ishiguro, Hiroshi, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Kheddar, Abderrahmane, editor, Yoshida, Eiichi, editor, Ge, Shuzhi Sam, editor, Suzuki, Kenji, editor, Cabibihan, John-John, editor, Eyssel, Friederike, editor, and He, Hongsheng, editor
- Published
- 2017
- Full Text
- View/download PDF
210. Robust Joint Visual Attention for HRI Using a Laser Pointer for Perspective Alignment and Deictic Referring
- Author
- Maravall, Darío, de Lope, Javier, Fuentes, Juan Pablo, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Ferrández Vicente, José Manuel, editor, Álvarez-Sánchez, José Ramón, editor, de la Paz López, Félix, editor, Toledo Moreo, Javier, editor, and Adeli, Hojjat, editor
- Published
- 2017
- Full Text
- View/download PDF
211. Sex with Robots for Love Free Encounters
- Author
- Hall, Lynne, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Cheok, Adrian David, editor, Devlin, Kate, editor, and Levy, David, editor
- Published
- 2017
- Full Text
- View/download PDF
212. A design of Human and overhead Robot Interaction (HoRI) framework for cooperative robotic applications in copper industry.
- Author
- Aivaliotis, P., Kaliakatsos-Georgopoulos, D., Papavasileiou, A., and Makris, S.
- Abstract
This paper deals with a Human and overhead Robot Interaction (HoRI) framework focused on enabling the interaction of humans and robots in shared workspaces involving both fixed and overhead robotic structures. The scope of this study is to design a safe human-robot interaction framework for the shopfloor, aiming to minimize ergonomic issues and increase overall productivity. Vision and safety sensors are utilized to ensure efficient and safe interaction between the operators and robots. The paper presents a conceptual implementation of the proposed framework in the copper industry. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
213. Predicting short-term next-active-object through visual attention and hand position.
- Author
- Jiang, Jingjing, Nan, Zhixiong, Chen, Hui, Chen, Shitao, and Zheng, Nanning
- Subjects
- *ARTIFICIAL neural networks, *HUMAN-robot interaction, *HAND, *DISTRIBUTION (Probability theory), *MACHINE learning
- Abstract
Human intention prediction is of great significance in many applications, such as human-robot interaction and intelligent rehabilitation robots. This paper studies the problem of short-term next-active-object prediction in egocentric images. The short-term next-active-object is the object that a human is going to interact with in the short-term future, an embodiment of human intention. Most current methods use object-centered cues, such as the deviation of object appearance change and the unique shape of the egocentric object trajectory, to predict the next-active-object. In this paper, inspired by the fact that human intention is also revealed by human-centered cues, we propose a deep neural network model that integrates cues from visual attention and hand positions to predict the next-active-object. First, the probability maps of visual attention and hand positions are constructed; then the probability distribution of the next-active-object is generated. We experimentally compare our method with several baseline methods on two datasets and confirm its effectiveness. In addition, ablation experiments are conducted, and crucial points concerning the next-active-object are discussed. [ABSTRACT FROM AUTHOR] (A code sketch of the probability-map fusion step follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
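The fusion step described in record 213 can be illustrated with a toy computation. This is a minimal sketch, not the paper's implementation: the elementwise product used to combine the two probability maps, and all names below, are assumptions.

    import numpy as np

    def next_active_object_scores(attention_map, hand_map, object_masks):
        """attention_map, hand_map: HxW probability maps; object_masks: list of HxW boolean masks."""
        fused = attention_map * hand_map        # one plausible way to combine the two cues
        fused /= fused.sum() + 1e-9             # renormalise into a distribution
        # Score each candidate object by the probability mass inside its mask.
        return [fused[mask].sum() for mask in object_masks]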
214. Generating Robotic Speech Prosody for Human Robot Interaction: A Preliminary Study.
- Author
- Lee, Jaeryoung and Kwak, Keun-Chang
- Subjects
- SOCIAL interaction, PROSODIC analysis (Linguistics), EMOTIONAL state, EMOTIONS, TELECOMMUNICATION systems, AUTOMATIC speech recognition
- Abstract
The use of affective speech in robotic applications has increased in recent years, especially regarding the development and study of emotional prosody for specific groups of people. The current work proposes a prosody-based communication system that accounts for the limited parameters found in speech recognition for particular users, such as the elderly. This work explored which types of voices were more effective for understanding presented information, and whether the affect of robot voices was reflected in the emotional states of listeners. Using the functions of a small humanoid robot, two experiments were conducted to assess comprehension level and affective reflection, respectively. University students participated in both tests. The results showed that affective voices helped the users understand the information, and that they felt corresponding negative emotions in conversations with negative voices. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
215. Assistive Robots for the Social Management of Health: A Framework for Robot Design and Human–Robot Interaction Research.
- Author
- Chita-Tegmark, Meia and Scheutz, Matthias
- Subjects
- ROBOT design & construction, SOCIAL robots, SOCIAL bonds, SOCIAL interaction, SOCIAL skills, HUMAN-robot interaction, SUCCESSIVE approximation analog-to-digital converters
- Abstract
There is a close connection between health and the quality of one's social life. Strong social bonds are essential for health and wellbeing, but often health conditions can detrimentally affect a person's ability to interact with others. This can become a vicious cycle resulting in further decline in health. For this reason, the social management of health is an important aspect of healthcare. We propose that socially assistive robots (SARs) could help people with health conditions maintain positive social lives by supporting them in social interactions. This paper makes three contributions, as detailed below. We develop a framework of social mediation functions that robots could perform, motivated by the special social needs that people with health conditions have. In this framework we identify five types of functions that SARs could perform: (a) changing how the person is perceived, (b) enhancing the social behavior of the person, (c) modifying the social behavior of others, (d) providing structure for interactions, and (e) changing how the person feels. We thematically organize and review the existing literature on robots supporting human–human interactions, in both clinical and non-clinical settings, and explain how the findings and design ideas from these studies can be applied to the functions identified in the framework. Finally, we point out and discuss challenges in designing SARs for supporting social interactions, and highlight opportunities for future robot design and HRI research on the mediator role of robots. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
216. Lexical vagueness handling using fuzzy logic in human robot interaction
- Author
- Guo, Xiao
- Subjects
- 006.3, G710 Speech and Natural Language Processing, natural language processing, fuzzy logic, lexical vagueness, human robot interaction
- Abstract
Lexical vagueness is a ubiquitous phenomenon in natural language. Most previous work in natural language processing (NLP) considers lexical ambiguity, rather than lexical vagueness, as the main problem in natural language understanding. Lexical vagueness is usually treated as a solution rather than a problem in natural language understanding, since precise information often cannot be provided in conversation. However, lexical vagueness is clearly an obstacle in human robot interaction (HRI), since robots are expected to understand their users' utterances precisely in order to provide reliable services. This research aims to develop novel lexical vagueness handling techniques that enable service robots to understand their users' utterances precisely and so provide reliable services. A novel integrated system to handle lexical vagueness is proposed, based on an in-depth understanding of lexical ambiguity and lexical vagueness: why they exist, how they are presented, what the differences between them are, and the mainstream techniques for handling each. The integrated system consists of two blocks: a block for lexical ambiguity handling and a block for lexical vagueness handling. The first block removes syntactic ambiguity and lexical ambiguity; the second then models and removes lexical vagueness. Experimental results show that robots endowed with the developed integrated system are able to understand their users' utterances and can therefore provide reliable services to their users.
- Published
- 2011
217. Design and Development of a Smart IoT-Based Robotic Solution for Wrist Rehabilitation
- Author
- Yassine Bouteraa, Ismail Ben Abdallah, Khaled Alnowaiser, Md Rasedul Islam, Atef Ibrahim, and Fayez Gebali
- Subjects
- wrist modeling, robotic rehabilitation, human robot interaction, Mechanical engineering and machinery, TJ1-1570
- Abstract
In this study, we present an IoT-based robot for wrist rehabilitation with a new protocol for determining the state of injured muscles as well as providing dynamic model parameters. In this model, the torque produced by the robot and the torque provided by the patient are determined and updated taking the constraints of fatigue into consideration. In the proposed control architecture, based on EMG signal extraction, a fuzzy classifier was designed and implemented to estimate muscle fatigue; based on this estimate, the patient's torque is updated during the rehabilitation session. The first step of the protocol calculates the subject-related parameters: axis offset, inertial parameters, passive stiffness, and passive damping. The second step determines the remaining components of the wrist model, including the interaction torque. The subject must perform the desired movements, providing the torque necessary to move the robot in the desired direction; in this case, the robot applies a resistive torque so that the torque produced by the patient can be calculated. After that, the protocol considers both the patient and the robot as active, and all exercises are performed accordingly. The developed robotics-based solution, including the proposed protocol, was tested on three subjects and showed promising results. (A sketch of the fuzzy fatigue-estimation idea follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
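Record 217's fuzzy fatigue classifier can be sketched with triangular membership functions over two common EMG fatigue indicators. The feature choice, memberships, and rule base below are assumptions for illustration; the paper's actual design is not reproduced here.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    def fatigue_level(mean_freq_drop, rms_rise):
        """Map two normalised EMG fatigue indicators to a [0, 1] fatigue score."""
        low  = min(tri(mean_freq_drop, -0.2, 0.0, 0.2), tri(rms_rise, -0.2, 0.0, 0.2))
        high = min(tri(mean_freq_drop,  0.1, 0.4, 0.7), tri(rms_rise,  0.1, 0.4, 0.7))
        # Weighted-average defuzzification over the two rule activations.
        return (low * 0.1 + high * 0.9) / (low + high + 1e-9)

A higher score would then scale down the torque demanded from the patient during the session.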
218. Fuzzy logic-based connected robot for home rehabilitation.
- Author
- Bouteraa, Yassine, Abdallah, Ismail Ben, Ibrahim, Atef, and Ahanger, Tariq Ahamed
- Subjects
- *HOME rehabilitation, *ELECTRONIC health records, *ELECTROMYOGRAPHY, *ROBOT control systems, *MUSCLE contraction, *COMPUTER vision, *FEATURE extraction, *PSYCHOLOGICAL feedback
- Abstract
In this paper, a robotic system dedicated to remote wrist rehabilitation is proposed as an Internet of Things (IoT) application. The system offers patients home rehabilitation. Since the physiotherapist and the patient are at different sites, the system guarantees that the physiotherapist controls and supervises the rehabilitation process and that the patient repeats the same gestures made by the physiotherapist. A human-machine interface (HMI) has been developed to allow the physiotherapist to remotely control the robot and supervise the rehabilitation process. Based on a computer vision system, physiotherapist gestures are sent to the robot in the form of control instructions. Wrist range of motion (RoM), the EMG signal, sensor current measurements, and streaming from the patient's environment are returned to the control station. The acquired data are displayed in the HMI and recorded in its database, which allows later monitoring of the patient's progress. During rehabilitation, the system follows muscle contraction through extraction of the electromyography (EMG) signal, and the patient's resistance through feedback from a current sensor. Feature extraction algorithms transform the raw EMG signal into relevant data reflecting muscle contraction. The solution incorporates a cascade fuzzy-based decision system to indicate the patient's pain. As a safety measure, when the pain exceeds a certain threshold, the robot stops the action even if the desired angle has not yet been reached. Information on the patient, the evolution of their state of health, and the activities followed is recorded, making it possible to provide an electronic health record. Experiments on 3 different subjects showed the effectiveness of the developed robotic solution. [ABSTRACT FROM AUTHOR] (A sketch of the threshold-based safety stop follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
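The safety behaviour described in record 218 amounts to a guard that overrides the exercise goal. A minimal sketch, assuming a hypothetical robot API (robot.stop, robot.move_towards) and an illustrative threshold:

    PAIN_THRESHOLD = 0.7   # assumed value; the paper's threshold is not given here

    def rehab_step(robot, pain_score, target_angle):
        """Advance one control step; abort if the fuzzy pain estimate is too high."""
        if pain_score > PAIN_THRESHOLD:
            robot.stop()              # safety overrides the exercise goal
            return "aborted"
        robot.move_towards(target_angle)
        return "moving"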
219. A Novel Speech to Mouth Articulation System for Realistic Humanoid Robots.
- Author
- Strathearn, Carl and Ma, Eunice Minhua
- Abstract
A significant ongoing issue in realistic humanoid robotics (RHRs) is inaccurate speech-to-mouth synchronisation. Even the most advanced robotic systems cannot authentically emulate the natural movements of the human jaw, lips, and tongue during verbal communication. These visual and functional irregularities have the potential to propagate the Uncanny Valley Effect (UVE) and reduce speech understanding in human-robot interaction (HRI). This paper outlines the development and testing of a novel Computer Aided Design (CAD) robotic mouth prototype with buccinator actuators for emulating the fluidic movements of the human mouth. The robotic mouth system incorporates a custom Machine Learning (ML) application that measures the acoustic qualities of speech synthesis (SS) and translates this data into servomotor triangulation for triggering jaw, lip, and tongue positions. The objective of this study is to improve current robotic mouth design and provide engineers with a framework for increasing the authenticity, accuracy, and communication capabilities of RHRs for HRI. The primary contributions of this study are the engineering of a robotic mouth prototype and the programming of a speech processing application that achieved 79.4% syllable accuracy, 86.7% lip synchronisation accuracy, and a 0.1 s speech-to-mouth articulation differential. [ABSTRACT FROM AUTHOR] (A toy audio-to-servo mapping follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
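As a toy illustration of the audio-to-actuator mapping problem record 219 addresses, one could drive a jaw servo from short-window speech energy. The real system uses a learned model over richer acoustic features; this linear map and its constants are assumptions.

    import numpy as np

    def jaw_angle(window, max_angle=25.0):
        """window: audio samples in [-1, 1]; returns a jaw opening in degrees."""
        rms = np.sqrt(np.mean(np.square(window)))
        return max_angle * min(rms / 0.3, 1.0)   # 0.3 ~ loud-speech RMS (assumption)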
220. Robot learning human stiffness regulation for hybrid manufacture
- Author
- Zeng, Chao, Yang, Chenguang, Chen, Zhaopeng, and Dai, Shi-Lu
- Published
- 2018
- Full Text
- View/download PDF
221. Predictive Scheduling of Collaborative Mobile Robots for Improved Crop-transport Logistics of Manually Harvested Crops
- Author
- Peng, Chen
- Subjects
- Robotics, Agriculture engineering, Mechanical engineering, Agricultural field navigation of mobile robot, Autonomous mobile robot, Co-robotic system design and implementation, Harvesting activity modeling, Human robot interaction, Multiple vehicle scheduling
- Abstract
Mechanizing the manual harvesting of fresh market fruits constitutes one of the biggest challenges to the sustainability of the fruit industry. During manual harvesting of some fresh-market crops like strawberries and table grapes, pickers spend significant amounts of time walking to carry full trays to a collection station at the edge of the field. A step toward increasing harvest automation for such crops is to deploy harvest-aid robots that transport the empty and full trays, thus increasing harvest efficiency by reducing pickers' non-productive walking times. Given the large sizes of commercial harvesting crews (e.g., strawberry harvesting in California involves crews of twenty to forty people) and the expected cost and complexity of deploying equally large numbers of robots, this dissertation explored an operational scenario in which a crew of pickers is served by a smaller team of robots. Thus, the robots are a shared resource, with each robot serving multiple pickers. If the robots are not properly scheduled, then robot sharing among the workers may introduce non-productive waiting delays between the time when a tray becomes full and a robot arrives to collect it. Reactive scheduling (e.g., "start traveling to a picker when the tray becomes full") is not efficient enough, because robots must traverse large distances to reach the pickers in the field, thus introducing long wait times. Predictive scheduling (e.g., "predict when and where a picker's tray will become full and dispatch a robot to start traveling there earlier, at an appropriate time") is better suited to this task, because it can reduce or eliminate pickers' waiting for robot travel. However, uncertainty is always present in any prediction, and can be detrimental for predictive scheduling algorithms that assume perfect information. Therefore, the main goal of this dissertation was to develop a predictive scheduling algorithm for the robotic team that incorporates prediction uncertainty, and to investigate the efficiency improvements in simulations and field experiments. In the first part of this dissertation, strawberry harvesting was modeled as a stochastic process and dynamic predictive scheduling was modeled under the assumption that, once a picker starts filling a tray (a stochastic event), the time and location when the tray becomes full - and a tray transport request is generated - are known exactly. The resulting scheduling is dynamic and deterministic, and we refer to it as 'deterministic predictive scheduling' to juxtapose it against stochastic predictive scheduling under uncertainty, which is addressed afterwards. Given perfect 'predictions', near-optimal dynamic scheduling was implemented to provide efficiency upper-bounds for stochastic predictive scheduling algorithms that incorporate uncertainty in the predicted requests. Robot-aided harvesting was simulated using manual-harvest data collected from a commercial picking crew. The simulation results showed that given a robot-picker ratio of 1:3 and robot travel speed of 1.5 m/s, the mean non-productive time was reduced by over 90% and the corresponding efficiency increased by more than 15% compared to all-manual harvesting. In the second part, the uncertainty in the predictions of tray-transport requests was incorporated into scheduling. This uncertainty is a result of stochastic picker performance, geospatial crop yield variation, and other random effects. Robot predictive scheduling under stochastic tray-transport requests was modeled and solved by an online stochastic scheduling algorithm, using the multiple scenario approach (MSA). The algorithm was evaluated using the calibrated simulator, and the effects of the uncertainty on harvesting efficiency were explored. The results showed that when the robot-to-picker ratio was 1:3 and the robot speed was 1.5 m/s, the non-productive time was reduced by approximately 70%, and the corresponding harvesting efficiency improved by more than 8.5% relative to all-manual harvesting. The last part of the dissertation presents the implementation and integration of the co-robotic harvest-aid system and its deployment during commercial strawberry harvesting. The evaluation experiments demonstrated that the proof-of-concept system was fully functional. The co-robots improved the mean harvesting efficiency by around 10% and reduced the mean non-productive time by 60%, when the robot-to-picker ratio was 1:3. The concepts developed in this dissertation can be applied to robotic harvest-aids for other manually harvested crops that involve substantial human-powered produce transport, as well as to in-field harvesting logistics for highly mechanized field crops that involve coordination of harvesters and autonomous transport trucks. (A sketch of the MSA dispatch idea follows this record.)
- Published
- 2021
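Record 221's multiple scenario approach can be sketched as consensus over sampled futures. This is a generic MSA skeleton, not the dissertation's implementation; sample_scenario, solve, and plan.first_action() are hypothetical stand-ins.

    from collections import Counter

    def msa_dispatch(robots, sample_scenario, solve, n_scenarios=30):
        """Pick the robot action that wins a vote across sampled request scenarios."""
        votes = Counter()
        for _ in range(n_scenarios):
            requests = sample_scenario()       # stochastic predictions of tray-full events
            plan = solve(robots, requests)     # cheap deterministic schedule for one scenario
            votes[plan.first_action()] += 1    # consensus over the first decision only
        return votes.most_common(1)[0][0]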
222. The design space for robot appearance and behaviour for social robot companions
- Author
- Walters, Michael L.
- Subjects
- 502.85, Human robot interaction, robot appearance, robot behaviour, human robot proximity, HR proxemic framework
- Abstract
To facilitate necessary task-based interactions, and to avoid annoying or upsetting people, a domestic robot will have to exhibit appropriate non-verbal social behaviour. Most current robots have the ability to sense and control the distance of people and objects in their vicinity. An understanding of human-robot proxemics and the associated non-verbal social behaviour is crucial for humans to accept robots as domestic companions or servants. This thesis therefore addressed the following hypothesis: attributes of robot appearance, behaviour, task context, and situation will affect the distances that people find comfortable between themselves and a robot. Initial exploratory Human-Robot Interaction (HRI) experiments replicated human-human studies of comfortable approach distances, with a mechanoid robot in place of one of the human interactors. It was found that most human participants respected the robot's interpersonal space, and there were systematic differences in participants' comfortable approach distances to robots with different voice styles. It was proposed that greater initial comfortable approach distances to the robot were due to perceived inconsistencies between the robot's overall appearance and voice style. To investigate these issues further, it was necessary to develop HRI experimental set-ups, a novel Video-based HRI (VHRI) trial methodology, trial data collection methods, and analytical methodologies. An exploratory VHRI trial then investigated human perceptions and preferences for robot appearance and non-verbal social behaviour. The methodological approach highlighted the holistic and embodied nature of robot appearance and behaviour. Findings indicated that people tend to rate a particular behaviour less favourably when the behaviour is not consistent with the robot's appearance. A live HRI experiment finally confirmed and extended these previous findings: multiple factors significantly affected participants' preferences for robot-to-human approach distances. There was a significant general tendency for participants to prefer either a tall humanoid robot or a short mechanoid robot, and it was suggested that this may be due to participants' internal or demographic factors. Participants' preferences for robot height and appearance both had significant effects on their preferences for live robot-to-human comfortable approach distances, irrespective of the robot type they actually encountered. The thesis confirms, for mechanoid and humanoid robots, results that have previously been found in the domain of human-computer interaction (cf. Reeves & Nass (1996)): people seem to automatically treat interactive artefacts socially. An original empirical human-robot proxemic framework is proposed in which the experimental findings of the study can be unified in the wider context of human-robot proxemics. This is seen as a necessary first step towards the desired end goal of creating and implementing a working robot proxemic system which can allow the robot to: a) exhibit socially acceptable spatial behaviour when interacting with humans, and b) interpret and gain additional valuable insight into a range of HRI situations from the relative proxemic behaviour of humans in the immediate area. Future work concludes the thesis.
- Published
- 2008
223. Machine Gaze: Self-Identification Through Play With a computer Vision-Based Projection and Robotics System
- Author
- RAY LC, Aaliyah Alcibar, Alejandro Baez, and Stefanie Torossian
- Subjects
- robotic art, human machine communication technology, projection mapping, computer vision, human robot interaction, child psychology, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Children begin to develop self-awareness when they associate images and abilities with themselves. Such "construction of self" continues throughout adult life as we constantly cycle through different forms of self-awareness, seeking to redefine ourselves. Modern technologies like screens and artificial intelligence threaten to alter our development of self-awareness, because children and adults are exposed to machines, tele-presences, and displays that increasingly become part of human identity. We use avatars, invent digital lives, and augment ourselves with digital imprints that depart from reality, making the development of self-identification adjust to digital technologies that blur the boundary between us and our devices. To empower children and adults to see themselves and artificially intelligent machines as separately aware entities, we created the persona of a salvaged supermarket security camera refurbished and enhanced with the power of computer vision to detect human faces and project them on a large-scale 3D face sculpture. The surveillance camera system moves its head to point to human faces at times, but at other times humans have to get its attention by moving into its vicinity, creating a dynamic where audiences attempt to see their own faces on the sculpture by gazing into the machine's eye. We found that audiences began attaining an understanding of machines that interpret our faces as separate from our identities, with their own agendas and agencies that show in the way they serendipitously interact with us. The machine-projected images of us are the machine's own interpretation rather than our own, distancing us from our digital analogs. In the accompanying workshop, participants learn about how computer vision works by putting on disguises in order to escape from an algorithm detecting them as the same person by analyzing their faces. Participants learn that their own agency affects how machines interpret them, gaining an appreciation for the way their own identities and machines' awareness of them can be separate entities that can be manipulated for play. Together, the installation and workshop empower children and adults to think beyond identification with digital technology and to recognize the machine's own interpretive abilities, which lie separate from human beings' own self-awareness.
- Published
- 2020
- Full Text
- View/download PDF
224. Repetitive Robot Behavior Impacts Perception of Intentionality and Gaze-Related Attentional Orienting
- Author
- Abdulaziz Abubshait and Agnieszka Wykowska
- Subjects
- social cognition, human robot interaction, gaze cueing, intentional stance, intention attribution, attention orienting, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Gaze behavior is an important social signal between humans as it communicates locations of interest. People typically orient their attention to where others look, as this informs about others' intentions and future actions. Studies have shown that humans can engage in similar gaze behavior with robots, but presumably more so when they adopt the intentional stance toward them (i.e., believing robot behaviors are intentional). In laboratory settings, the phenomenon of attending toward the direction of others' gaze has been examined with the gaze-cueing paradigm. While the gaze-cueing paradigm has been successful in investigating the relationship between adopting the intentional stance toward robots and attention orienting to gaze cues, it is unclear whether the repetitiveness of the paradigm influences adopting the intentional stance. Here, we examined whether the duration of exposure to repetitive robot gaze behavior in a gaze-cueing task has a negative impact on subjective attribution of intentionality. Participants performed a short, medium, or long face-to-face gaze-cueing paradigm with an embodied robot, while subjective ratings were collected before and after the interaction. Results show that participants in the long exposure condition had the smallest change in their intention attribution scores, if any, while those in the short exposure condition had a positive change in their intention attribution, indicating that participants attributed more intention to the robot after short interactions. The results also show that attention orienting to robot gaze cues was positively related to how much intention was attributed to the robot, but this relationship became more negative as the length of exposure increased. In contrast to the subjective ratings, the gaze-cueing effects (GCEs) increased as a function of the duration of exposure to repetitive behavior. The data suggest a tradeoff between the desired number of trials needed for observing various mechanisms of social cognition, such as GCEs, and the likelihood of adopting the intentional stance toward a robot. (The conventional GCE computation is sketched after this record.)
- Published
- 2020
- Full Text
- View/download PDF
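For concreteness, the gaze-cueing effect (GCE) referenced in record 224 is conventionally computed as the mean reaction-time difference between invalidly and validly cued trials; a larger positive value indicates stronger attentional orienting to the gaze cue. A one-line helper, offered as a sketch of the standard convention rather than the paper's exact analysis:

    def gaze_cueing_effect(rt_invalid, rt_valid):
        """Mean RT on invalid-cue trials minus mean RT on valid-cue trials (ms)."""
        return sum(rt_invalid) / len(rt_invalid) - sum(rt_valid) / len(rt_valid)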
225. The Influence of Distance and Lateral Offset of Follow Me Robots on User Perception
- Author
- Felix Wilhelm Siebert, Jacobe Klein, Matthias Rötting, and Eileen Roesler
- Subjects
- human robot interaction, proxemics, human following robots, affinity for technology, robot movement conventions, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Robots that are designed to work in close proximity to humans are required to move and act in a way that ensures social acceptance by their users. Hence, a robot's proximal behavior toward a human is a main concern, especially in human-robot interaction that relies on relatively close proximity. This study investigated how the distance and lateral offset of "Follow Me" robots influence how they are perceived by humans. To this end, a Follow Me robot was built and tested in a user study for a number of subjective variables. A total of 18 participants interacted with the robot, with the robot's lateral offset and distance varied in a within-subject design. After each interaction, participants were asked to rate the movement of the robot on the dimensions of comfort, expectancy conformity, human likeness, safety, trust, and unobtrusiveness. Results show that users generally prefer robot following distances in the social space, without a lateral offset. However, we found a main influence of affinity for technology: participants with a high affinity for technology preferred closer following distances than participants with low affinity for technology. The results of this study show the importance of user-adaptiveness in human-robot interaction. (A follow-controller sketch appears after this record.)
- Published
- 2020
- Full Text
- View/download PDF
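A proportional follow controller in the spirit of record 225's preferred configuration (social-space following distance, zero lateral offset) could look as follows. The gains, preferred distance, and sensing interface are assumptions, not the study's parameters.

    def follow_cmd(dist_to_person, lateral_offset, preferred_dist=1.5,
                   k_lin=0.8, k_ang=1.2):
        """Return (linear m/s, angular rad/s) commands for a follow-me robot."""
        v = k_lin * (dist_to_person - preferred_dist)   # close the range error
        w = -k_ang * lateral_offset                     # steer back directly behind the person
        return v, w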
226. How does the robot feel? Perception of valence and arousal in emotional body language
- Author
- Marmpena Mina, Lim Angelica, and Dahl Torbjørn S.
- Subjects
- social robots, human robot interaction, dimensional affect, robot emotion expression, Technology
- Abstract
Human-robot interaction in social robotics applications could be greatly enhanced by robotic behaviors that incorporate emotional body language. Using as our starting point a set of pre-designed, emotion-conveying animations created by professional animators for the Pepper robot, we seek to explore how humans perceive their affective content, and to increase their usability by annotating them with reliable labels of valence and arousal in a continuous interval space. We conducted an experiment with 20 participants who were presented with the animations and rated them in the two-dimensional affect space. An inter-rater reliability analysis was applied to support the aggregation of the ratings for deriving the final labels. The set of emotional body language animations with the labels of valence and arousal is available and can potentially be useful to other researchers as ground truth for behavioral experiments on robotic expression of emotion, or for the automatic selection of robotic emotional behaviors with respect to valence and arousal. To further utilize the collected data, we analyzed it with an exploratory approach and present some interesting trends in the human perception of Pepper's emotional body language that may be worth further investigation. (A toy reliability-then-aggregate sketch follows this record.)
- Published
- 2018
- Full Text
- View/download PDF
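The annotation pipeline in record 226 (reliability analysis, then aggregation into per-animation labels) can be illustrated with a toy computation. Cronbach's alpha with raters as items is one common reliability statistic; the paper's exact analysis and threshold are not reproduced, so treat the details below as assumptions.

    import numpy as np

    def cronbach_alpha(ratings):
        """ratings: (n_animations, n_raters) array for one affect dimension."""
        k = ratings.shape[1]
        item_var = ratings.var(axis=0, ddof=1).sum()
        total_var = ratings.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    def aggregate_labels(ratings, min_alpha=0.7):
        """Average raters into one label per animation, if ratings are consistent."""
        if cronbach_alpha(ratings) < min_alpha:
            raise ValueError("ratings not reliable enough to aggregate")
        return ratings.mean(axis=1)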
227. Implementation of a Brain-Computer Interface on a Lower-Limb Exoskeleton
- Author
- Can Wang, Xinyu Wu, Zhouyang Wang, and Yue Ma
- Subjects
- Brain-computer interface, human robot interaction, multi-modal robotic cognition, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In this paper, we propose to use a brain-computer interface (BCI) to control a lower-limb exoskeleton. The exoskeleton follows the wearer's motion intention through decoding of electroencephalography (EEG) signals and multi-modal cognition. Motion patterns such as standing up, sitting down, and walking forward can be performed. We implemented two types of BCIs. One is based on steady-state visual evoked potentials and uses canonical correlation analysis to extract the frequency the subject is focusing on. The other is based on motor imagery, where the common spatial patterns method is employed to extract features from the EEG signal; the features are then classified by a support vector machine to recognize the intention of the subject. We invited four healthy subjects to participate in the experiments, both off-line and online. The off-line experiments were used to train the classifiers, which were then used online to test the performance of the BCI-controlled exoskeleton system. The results showed a high accuracy rate in motion-intention classification tasks for both BCIs. (A sketch of CCA-based SSVEP frequency detection follows this record.)
- Published
- 2018
- Full Text
- View/download PDF
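The SSVEP branch in record 227 can be sketched with scikit-learn's CCA: correlate an EEG window against sine/cosine references at each candidate stimulus frequency and pick the best match. This is the standard textbook recipe, offered on the assumption that the paper follows it; window size and harmonic count are placeholders.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    def detect_frequency(eeg, fs, candidate_freqs, n_harmonics=2):
        """eeg: (n_samples, n_channels) window; returns the best-matching frequency (Hz)."""
        t = np.arange(eeg.shape[0]) / fs
        best_f, best_r = None, -1.0
        for f in candidate_freqs:
            # Reference set: sin/cos at the fundamental and its harmonics.
            refs = np.column_stack([g(2 * np.pi * f * h * t)
                                    for h in range(1, n_harmonics + 1)
                                    for g in (np.sin, np.cos)])
            cca = CCA(n_components=1).fit(eeg, refs)
            u, v = cca.transform(eeg, refs)
            r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]   # first canonical correlation
            if r > best_r:
                best_f, best_r = f, r
        return best_f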
228. A Human-Tracking Robot Using Ultra Wideband Technology
- Author
- Tao Feng, Yao Yu, Lin Wu, Yanru Bai, Ziang Xiao, and Zhen Lu
- Subjects
- Human robot interaction, mobile robots, ultra wideband technology, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In this paper, a target-person tracking method based on ultra-wideband (UWB) technology is proposed for the implementation of human-tracking robots. For such robots, the detection and calculation of the position and orientation of the target person play an essential role in implementing the human-following function. A modified hyperbolic positioning algorithm is devised to overcome the challenge of measurement errors. In addition, a modified virtual spring model is implemented in the robot to track the target person. One important advantage of such a UWB-based method over computer vision-based methods lies in its robustness to environmental conditions: computer vision methods can be very sensitive to lighting conditions, which makes them struggle in unconstrained outdoor environments. The robust performance of our method is illustrated by experimental results on a real-world robot. (A virtual-spring sketch follows this record.)
- Published
- 2018
- Full Text
- View/download PDF
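The virtual spring model named in record 228 treats the robot as if attached to the person by a spring-damper. A minimal planar sketch, with illustrative gains and rest length (the paper's modifications are not reproduced):

    import numpy as np

    def virtual_spring_cmd(robot_pos, person_pos, robot_vel,
                           k_spring=1.0, k_damp=0.6, rest_len=1.2):
        """Spring-damper pull toward the UWB-estimated person position."""
        d = np.asarray(person_pos, dtype=float) - np.asarray(robot_pos, dtype=float)
        dist = np.linalg.norm(d) + 1e-9
        force = k_spring * (dist - rest_len) * d / dist - k_damp * np.asarray(robot_vel)
        return force   # used as a desired acceleration / velocity command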
229. Human interaction behavior modeling using Generative Adversarial Networks.
- Author
- Nishimura, Yusuke, Nakamura, Yutaka, and Ishiguro, Hiroshi
- Subjects
- *HUMAN behavior models, *HUMAN-machine relationship, *PERSONAL assistants, *HUMAN behavior
- Abstract
Recently, considerable research has focused on personal assistant robots, and robots capable of rich human-like communication are expected. Among humans, non-verbal elements contribute to effective and dynamic communication. However, people use a wide range of diverse gestures, and a robot capable of expressing various human gestures has not been realized. In this study, we address human behavior modeling during interaction using a deep generative model. In the proposed method, to consider interaction motion, three factors, i.e., interaction intensity, time evolution, and time resolution, are embedded in the network structure. Subjective evaluation results suggest that the proposed method can generate high-quality human motions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
230. A fuzzy-based driver assistance system using human cognitive parameters and driving style information.
- Author
- Vasconez, Juan Pablo, Viscaino, Michelle, Guevara, Leonardo, and Auat Cheein, Fernando
- Subjects
- *DRIVER assistance systems, *TRAFFIC monitoring, *HUMAN-robot interaction, *TRAFFIC signs & signals, *HUMAN error
- Abstract
• A driver assistance system based on driver cognitive parameters and driving style is proposed.
• Deep neural network-based approaches are used to detect human cognitive parameters.
• Driving style assessment is based on traffic sign detection and vehicle speed analysis.
• A fuzzy-based approach combines the information about driver behavior.
Reducing the number of traffic accidents due to human error is an urgent need in several countries around the world. In this scenario, the use of human-robot interaction (HRI) strategies has recently been shown to be a feasible way to compensate for human limitations while driving. In this work, we propose an HRI system which uses the driver's cognitive factors and driving-style information to improve safety. To achieve this, deep neural network-based approaches are used to detect human cognitive parameters such as sleepiness, driver's age, and head posture. Additionally, driving-style information is obtained through speed analysis and external traffic information. Finally, a fuzzy-based decision-making stage is proposed to manage both the human cognitive information and the driving style, and to limit the maximum allowed speed of the vehicle. The results showed that we were able to detect human cognitive parameters such as sleepiness (63% to 88% accuracy), driver's age (80% accuracy), and head posture (90.42% to 97.86% accuracy), as well as driving style (87.8% average accuracy). Based on these results, the fuzzy-based architecture was able to limit the maximum allowed speed in different scenarios, reducing it from 50 km/h to 17 km/h. Moreover, the fuzzy-based method proved more sensitive to input changes than a previously published weighted-based inference method. [ABSTRACT FROM AUTHOR] (A crude speed-cap sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
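Record 230's decision stage can be caricatured as a mapping from normalised risk cues to a speed cap between the two bounds quoted in the abstract (50 km/h down to 17 km/h). The paper uses a fuzzy rule base; the weighted stand-in below, including the risk combination, is only an assumption.

    def max_allowed_speed(sleepiness, posture_risk, style_risk,
                          v_max=50.0, v_min=17.0):
        """All risk inputs in [0, 1]; returns a speed cap in km/h."""
        risk = max(sleepiness, 0.5 * (posture_risk + style_risk))   # assumed combination
        return v_max - (v_max - v_min) * min(risk, 1.0)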
231. Pragmatics in the False-Belief Task: Let the Robot Ask the Question!
- Author
- Baratgin, Jean, Dubois-Sage, Marion, Jacquet, Baptiste, Stilgenbauer, Jean-Louis, and Jamet, Frank
- Subjects
- PRAGMATICS, HUMANOID robots, MENTAL representation, ROBOTS, FOSTER home care
- Abstract
The poor performance of typically developing children younger than 4 in the first-order false-belief task "Maxi and the chocolate" is analyzed from the perspective of conversational pragmatics. An ambiguous question asked by an adult experimenter (perceived as a teacher) can receive different interpretations based on a search for relevance, by which children, according to their age, attribute different intentions to the questioner, within the limits of their own meta-cognitive knowledge. The adult experimenter tells the child the following object-transfer story: "Maxi puts his chocolate into the green cupboard before going out to play. In his absence, his mother moves the chocolate from the green cupboard to the blue one." The child must then predict where Maxi will pick up the chocolate when he returns. To the child, the question from an adult (a knowledgeable person) may seem surprising and can be understood as a question about his own knowledge of the world, rather than about Maxi's mental representations. In our study, without any other modification of the initial task, we disambiguate the context of the question by (1) replacing the adult experimenter with a humanoid robot presented as "ignorant" and "slow" but trying to learn, and (2) placing the child in the role of a "mentor" (the knowledgeable person). Sixty-two typical 3-year-old children completed the first-order false-belief task "Maxi and the chocolate," either with a human or with a robot. Results revealed a significantly higher success rate in the robot condition than in the human condition. Thus, young children seem to fail because of the pragmatic difficulty of the first-order task, which causes a difference of interpretation between the young child and the experimenter. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
232. Force quantification and simulation of pedicle screw tract palpation using direct visuo-haptic volume rendering.
- Author
- Zoller, Esther I., Faludi, Balázs, Gerig, Nicolas, Jost, Gregory F., Cattin, Philippe C., and Rauter, Georg
- Abstract
Purpose: We present a feasibility study for the visuo-haptic simulation of pedicle screw tract palpation in virtual reality, using an approach that requires no manual processing or segmentation of the volumetric medical data set. Methods: In a first experiment, we quantified the forces and torques present during the palpation of a pedicle screw tract in a real boar vertebra. We equipped a ball-tipped pedicle probe with a 6-axis force/torque sensor and a motion capture marker cluster. We simultaneously recorded the pose of the probe relative to the vertebra and measured the generated forces and torques during palpation. This allowed us to replay the recorded palpation movements in our simulator and to fine-tune the haptic rendering to approximate the measured forces and torques. In a second experiment, we asked two neurosurgeons to palpate a virtual version of the same vertebra in our simulator, while we logged the forces and torques sent to the haptic device. Results: In the experiments with the real vertebra, the maximum measured force along the longitudinal axis of the probe was 7.78 N and the maximum measured bending torque was 0.13 Nm. In an offline simulation of the motion of the pedicle probe recorded during the palpation of a real pedicle screw tract, our approach generated forces and torques that were similar in magnitude and progression to the measured ones. When surgeons tested our simulator, the distributions of the computed forces and torques were similar to the measured ones; however, higher forces and torques occurred more frequently. Conclusions: We demonstrated the suitability of direct visual and haptic volume rendering to simulate a specific surgical procedure. Our approach of fine-tuning the simulation by measuring the forces and torques that are prevalent while palpating a real vertebra produced promising results. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
233. Adaptive Personalized Multiple Machine Learning Architecture for Estimating Human Emotional States.
- Author
- Matsufuji, Akihiro, Sato-Shimokawara, Eri, and Yamaguchi, Toru
- Subjects
- *MACHINE learning, *HUMAN-robot interaction, *COMMUNICATION, *ARCHITECTURE, *SOCIAL interaction
- Abstract
Robots have the potential to facilitate the future education of all generations, particularly children. However, existing robots are limited in their ability to automatically perceive and respond to human emotional states. We hypothesize that such sophisticated models suffer from individual differences in human personality. We therefore propose a multi-characteristic model architecture that combines personalized machine learning models and utilizes the prediction score of each model. The architecture is formed with reference to an ensemble machine learning architecture. In this study, we present a method for calculating the weighted average in a multi-characteristic architecture using the similarities between a new sample and the trained characteristics. We estimated the degree of confidence during communication as a human internal state. Empirical results demonstrate that multi-model training on each person's information, to account for individual differences, provides improvements over a traditional machine learning system and insight into dealing with various individual differences. [ABSTRACT FROM AUTHOR] (A similarity-weighted ensemble sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
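The similarity-weighted combination described in record 233 can be sketched as follows. Models are assumed to be sklearn-like (exposing predict_proba), each paired with a prototype vector summarizing its training data; the inverse-distance similarity is an assumption, not the paper's measure.

    import numpy as np

    def ensemble_predict(sample, models, prototypes):
        """Weighted-average class distribution across person-specific models."""
        sample = np.asarray(sample, dtype=float)
        sims = np.array([1.0 / (1.0 + np.linalg.norm(sample - p)) for p in prototypes])
        weights = sims / sims.sum()                     # similarity-based weights
        probs = np.array([m.predict_proba(sample[None])[0] for m in models])
        return (weights[:, None] * probs).sum(axis=0)   # fused prediction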
234. Augmented Reality (AR) based framework for supporting human workers in flexible manufacturing.
- Author
- Lotsaris, Konstantinos, Fousekis, Nikos, Koukas, Spyridon, Aivaliotis, Sotiris, Kousi, Niki, Michalos, George, and Makris, Sotiris
- Abstract
This paper presents an Augmented Reality (AR) application that aims to facilitate the operator's work in an industrial human-robot collaboration environment with mobile robots. In such a flexible environment, with robots and humans working and moving in the same area, ease of communication between the two sides is critical and a prerequisite. The developed application provides the user with handy tools to interact with the mobile platform, give direct instructions to it, and receive information about the robot's and the broader system's state through an AR headset. Communication between the headset and the robot is achieved through a ROS-based system that interconnects the resources. The discussed tool has been deployed and tested in a use case inspired by the automotive industry, assisting operators during collaborative assembly tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
235. AR based robot programming using teaching by demonstration techniques.
- Author
- Lotsaris, Konstantinos, Gkournelos, Christos, Fousekis, Nikos, Kousi, Niki, and Makris, Sotiris
- Abstract
This paper presents an Augmented Reality tool for supporting the operator's interaction with robot hardware in production systems. We focus on the development of an AR application which allows the user to interact with a robotic arm and move it by demonstration. The application's purpose is to simplify and accelerate the industrial manufacturing process by introducing an easy and intuitive way of interacting with the hardware, without requiring special programming skills or long training time from the worker. The proposed software is developed for Microsoft's HoloLens Mixed Reality headset, integrated with ROS, and has been tested in a case study inspired by the automotive industry. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
236. The evolution of the curiosity rover sampling chain.
- Author
- Verma, Vandi, Carsten, Joseph, and Kuhn, Stephen
- Subjects
- GALE Crater (Mars), CURIOSITY, AGILE software development, MINERAL collecting, LABORATORIES
- Abstract
The Mars Science Laboratory (MSL) Curiosity rover landed in Gale crater in August of 2012 on its mission to explore Mt. Sharp as the first planetary rover to collect and analyze rock and regolith samples. On this new mission, sampling operations were conceived to be executed serially and in situ, on a "sample chain" along which sample would be collected, then processed, then delivered to sample analysis instruments, analyzed there, and then discarded so the chain could be repeated. This paper describes the evolution of this relatively simple chain into a richer sampling network, responding to science and engineering desires that came into focus only as the mission matured, scientific discoveries were made, and anomalies were encountered. The rover flight and ground system architectures retained significant heritage from past missions, while extending capabilities in anticipation of the need for adaptation. As evolution occurred, the architecture permitted nimble extension of sampling behavior without time‐consuming flight software updates or significant impact to daily operations. This paper presents the major components of this architecture and discusses some of the results of successful adaptation across thousands of Sols of Mars operations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
237. Webcam controlled robotic arm for persons with SSMI.
- Author
- Sharma, Vinay Krishna, Murthy, L.R.D., Singh Saluja, KamalPreet, Mollyn, Vimal, Sharma, Gourav, and Biswas, Pradipta
- Subjects
- *ARM physiology, *ROBOTICS equipment, *ALGORITHMS, *ANALYSIS of variance, *AUDIOVISUAL materials, *COMPUTER software, *COMPUTERS, *EYE movements, *GRAPHIC arts, *KINEMATICS, *MOVEMENT disorders, *PEOPLE with disabilities, *PROSTHETICS, *QUADRIPLEGIA, *RESEARCH funding, *SPEECH disorders, *STUDENTS, *TEXTILES, *USER interfaces, *WEARABLE technology, *ASSISTIVE technology, *TASK performance, *BODY movement, *AUGMENTED reality, *DESCRIPTIVE statistics, *ONE-way analysis of variance
- Abstract
BACKGROUND: People with severe speech and motor impairment (SSMI) often use a technique called eye pointing to communicate with the outside world. A parent, caretaker, or teacher holds a printed board in front of them and interprets their intentions by manually analyzing their eye gaze. This technique is often error-prone and time-consuming, and depends on a single caretaker. OBJECTIVE: We aimed to automate the eye-tracking process electronically using a commercially available tablet, computer, or laptop, without requiring any dedicated hardware for eye gaze tracking. The eye gaze tracker was used to develop a video see-through AR (augmented reality) display that controls a robotic device with eye gaze, deployed for a fabric printing task. METHODS: We undertook a user-centred design process and separately evaluated the webcam-based gaze tracker and the video see-through human-robot interaction with users with SSMI. We also report a user study on manipulating a robotic arm with the webcam-based eye gaze tracker. RESULTS: Using our bespoke eye gaze controlled interface, able-bodied users can select one of nine regions of the screen at a median of less than 2 s, and users with SSMI can do so at a median of 4 s. Using the eye gaze controlled human-robot AR display, users with SSMI could undertake a representative pick-and-drop task in an average of less than 15 s, and reach a randomly designated target within 60 s using a COTS eye tracker, or in an average time of 2 min using the webcam-based eye gaze tracker. CONCLUSION: The proposed system allows users with SSMI to manipulate physical objects without any dedicated eye gaze tracker. The novelty of the system lies in its non-invasiveness: earlier work mostly used glasses-based wearable trackers or head/face tracking, and no earlier work reported webcam-based eye tracking for controlling a robotic arm by users with SSMI. [ABSTRACT FROM AUTHOR] (A sketch of the nine-region selection mapping follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
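Once a gaze point is available, the nine-region selection task in record 237 reduces to quantizing screen coordinates onto a 3x3 grid. The gaze estimate itself is assumed to come from an external webcam tracker; this helper only shows the mapping.

    def gaze_to_region(gx, gy, screen_w, screen_h):
        """gx, gy: gaze point in pixels; returns a region index 0..8 (row-major 3x3)."""
        col = min(int(3 * gx / screen_w), 2)
        row = min(int(3 * gy / screen_h), 2)
        return 3 * row + col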
238. Interactive task learning via embodied corrective feedback.
- Author
- Appelgren, Mattias and Lascarides, Alex
- Abstract
This paper addresses a task in Interactive Task Learning (Laird et al. IEEE Intell Syst 32:6–21, 2017). The agent must learn to build towers which are constrained by rules, and whenever the agent performs an action which violates a rule the teacher provides verbal corrective feedback: e.g. “No, red blocks should be on blue blocks”. The agent must learn to build rule compliant towers from these corrections and the context in which they were given. The agent is not only ignorant of the rules at the start of the learning process, but it also has a deficient domain model, which lacks the concepts in which the rules are expressed. Therefore an agent that takes advantage of the linguistic evidence must learn the denotations of neologisms and adapt its conceptualisation of the planning domain to incorporate those denotations. We show that by incorporating constraints on interpretation that are imposed by discourse coherence into the models for learning (Hobbs in On the coherence and structure of discourse, Stanford University, Stanford, 1985; Asher et al. in Logics of conversation, Cambridge University Press, Cambridge, 2003), an agent which utilizes linguistic evidence outperforms a strong baseline which does not. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
239. Exploring Low-Cost Mobile Manipulation for Elder Care Within a Community Based Setting.
- Author
- Mucchiani, Caio, Cacchione, Pamela, Torres, Wilson, Johnson, Michelle J., and Yim, Mark
- Abstract
This paper identifies tasks that an affordable mobile-manipulator service robot could perform to benefit older adults' independence in a supportive apartment living facility, along with a series of tests validating the highest-ranked tasks. Previous deployments, briefly described here, used a mobile-only robotic base to encourage exercise through walking and hydration through water delivery, each followed by pain assessment. The current tests investigated the efficacy of mobile manipulation tasks by adapting a novel, low-cost telescopic robotic arm to the same mobile base, with aspects of human-robot interaction investigated through a physical interactive game with the older adults. All deployments took place at a Program of All-inclusive Care (PACE) center, and interactions were evaluated by two observers along with post-interaction surveys of the older adults. Previous work on elder-care robotics is discussed. Results of the mobile manipulation deployments are presented along with design guidelines. Future work includes the development of a new mobile manipulator capable of performing the investigated tasks with greater autonomy and efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
240. Singing Robots: How Embodiment Affects Emotional Responses to Non-Linguistic Utterances.
- Author
- Wolfe, Hannah, Peljhan, Marko, and Visell, Yon
- Abstract
Robots are often envisaged as embodied agents that might be able to intelligently and expressively communicate with humans. This could be due to their physical embodiment, their animated nature, or other factors, such as cultural associations. In this study, we investigated the emotional responses of humans to affective non-linguistic utterances produced by an embodied agent, with special attention to the way these responses depended on the nature of the embodiment and the extent to which the robot actively moved in proximity to the human. To this end, we developed a new singing robot platform, ROVER, that could interact with humans in its surroundings. We used affective sound design methods to endow ROVER with the ability to communicate through song, via musical, non-linguistic utterances that could, as we demonstrate, evoke emotional responses in humans. We indeed found that the embodiment of the computational agent had an effect on emotional responses. However, contrary to our expectations, we found singing computers to be more emotionally arousing than singing robots. Whether the robot moved or not did not affect arousal. The results may have implications for the design of affective non-speech audio displays for human-computer or human-robot interaction. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
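As a hedged illustration of affective non-linguistic sound design of the kind the abstract above describes, the sketch below maps arousal to tempo and pitch height and valence to contour direction, rendering short sine "chirps". All constants and mappings are assumptions chosen for illustration, not ROVER's actual synthesis pipeline.

    import numpy as np

    # Hypothetical affective utterance synthesis (not the ROVER code):
    # arousal raises pitch and shortens notes; valence tilts the contour.
    SR = 22050  # sample rate in Hz

    def utterance(valence, arousal, notes=3):
        """valence, arousal in [-1, 1]; returns a mono float32 signal."""
        base = 300 + 200 * arousal   # higher arousal -> higher base pitch
        dur = 0.25 - 0.10 * arousal  # higher arousal -> faster notes
        step = 60 * valence          # positive valence -> rising contour
        sig = []
        for i in range(notes):
            f = base + i * step
            t = np.linspace(0, dur, int(SR * dur), endpoint=False)
            env = np.hanning(t.size)  # smooth attack and decay per note
            sig.append(env * np.sin(2 * np.pi * f * t))
        return np.concatenate(sig).astype(np.float32)

    happy = utterance(valence=0.8, arousal=0.6)    # quick, rising chirps
    sad = utterance(valence=-0.7, arousal=-0.5)    # slow, falling chirps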
241. Incremental Learning of an Open-Ended Collaborative Skill Library.
- Author
-
Koert, Dorothea, Trick, Susanne, Ewerton, Marco, Lutter, Michael, and Peters, Jan
- Subjects
MACHINE learning, LIBRARY research, ROBOT programming, COLLABORATIVE learning, OLDER people, HUMAN mechanics
- Abstract
Intelligent assistive robots can potentially help maintain an elderly person's independence by supporting everyday life activities. However, the number of different, personalized activities to be supported renders pre-programming all of the respective robot behaviors prohibitively difficult. Instead, to cope with a continuous and potentially open-ended stream of cooperative tasks, new collaborative robot behaviors must be continuously learned and updated from demonstrations. To this end, we introduce an online learning method that incrementally builds a cooperative skill library of probabilistic interaction primitives. The resulting model chooses the appropriate robot response to a human movement, with the human's intention inferred from previously demonstrated movements. While existing batch-learning methods for movement primitives usually learn such skill libraries only once, for a pre-defined number of skills, our approach extends the skill library in an open-ended, online fashion from new incoming demonstrations. The proposed approach is evaluated on a low-dimensional benchmark task and in a collaborative scenario with a 7-DoF robot, where we also investigate the generalization of learned skills across subjects. [ABSTRACT FROM AUTHOR] (A simplified sketch of such an incrementally growing skill library follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
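The open-ended, incremental aspect of the skill library above can be sketched as follows. This simplified Python example is a diagonal-Gaussian stand-in, not the authors' probabilistic interaction primitives: each new demonstration either refines the most likely existing skill with a running mean-and-variance update, or opens a new skill when no existing one explains it well.

    import numpy as np

    # Simplified sketch (not the paper's ProMP-based method): each skill
    # is a running diagonal Gaussian over flattened demonstrations.
    class Skill:
        def __init__(self, demo):
            self.n = 1
            self.mean = demo.copy()
            self.var = np.ones_like(demo)  # wide init keeps likelihoods finite

        def update(self, demo):
            # Welford-style online mean/variance update.
            self.n += 1
            delta = demo - self.mean
            self.mean += delta / self.n
            self.var += (delta * (demo - self.mean) - self.var) / self.n

        def log_lik(self, demo):
            v = np.maximum(self.var, 1e-6)
            return -0.5 * np.sum((demo - self.mean) ** 2 / v + np.log(v))

    class SkillLibrary:
        def __init__(self, threshold):
            self.skills, self.threshold = [], threshold

        def add_demo(self, demo):
            demo = np.asarray(demo, dtype=float)
            if self.skills:
                scores = [s.log_lik(demo) for s in self.skills]
                best = int(np.argmax(scores))
                if scores[best] > self.threshold:
                    self.skills[best].update(demo)  # refine existing skill
                    return best
            self.skills.append(Skill(demo))  # open-ended: add a new skill
            return len(self.skills) - 1

    lib = SkillLibrary(threshold=-50.0)
    a, b = np.zeros(20), np.full(20, 5.0)
    print(lib.add_demo(a), lib.add_demo(a + 0.1), lib.add_demo(b))
    # -> 0 0 1: similar demos refine skill 0; the distant one opens skill 1

The threshold controls how eagerly the library grows; in the paper the analogous decision is made under the probabilistic interaction-primitive model rather than this toy likelihood test.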
242. Can Human-Inspired Learning Behaviour Facilitate Human–Robot Interaction?
- Author
-
Carfì, Alessandro, Villalobos, Jessica, Coronado, Enrique, Bruno, Barbara, and Mastrogiovanni, Fulvio
- Subjects
HUMAN-robot interaction, AUTONOMOUS robots, ROBOTS, BEHAVIOR, INTERPERSONAL relations, SHARED workspaces
- Abstract
The evolution of production systems for smart factories foresees a tight relationship between human operators and robots. Specifically, when robot task reconfiguration is needed, the operator must be given an easy and intuitive way to perform it. A useful tool for robot task reconfiguration is Programming by Demonstration (PbD), which allows human operators to teach a robot new tasks by showing it a number of examples. The article presents two studies investigating the role of the robot in PbD. A preliminary study compares standard PbD with human-human teaching and suggests that a collaborative robot should actively participate in the teaching process, as human practitioners typically do. The main study uses a Wizard of Oz approach to determine the effects of having a robot actively participate in the teaching process, specifically by controlling the end-effector. The results suggest that human-inspired active behaviour can make PbD more intuitive. [ABSTRACT FROM AUTHOR] (A minimal sketch contrasting passive and active PbD follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
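The passive-versus-active PbD contrast studied above can be caricatured in a few lines. In the hypothetical sketch below (the session interface and the linear-extrapolation heuristic are assumptions, not the authors' Wizard of Oz setup), a passive session only records taught end-effector poses, while an active session also proposes the operator's next waypoint, mimicking a learner who follows along.

    import numpy as np

    # Hypothetical sketch of passive vs. active Programming by
    # Demonstration; not the setup used in the paper's studies.
    class PbDSession:
        def __init__(self, active=False):
            self.active = active
            self.waypoints = []  # recorded end-effector positions

        def teach(self, pose):
            """Record one demonstrated pose (3-vector); in active mode,
            also return a proposed next pose by linear extrapolation."""
            self.waypoints.append(np.asarray(pose, dtype=float))
            if self.active and len(self.waypoints) >= 2:
                step = self.waypoints[-1] - self.waypoints[-2]
                return self.waypoints[-1] + step  # anticipated next pose
            return None  # passive mode: just record

    session = PbDSession(active=True)
    session.teach([0.0, 0.0, 0.2])
    print(session.teach([0.1, 0.0, 0.2]))  # robot proposes [0.2, 0.0, 0.2]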
243. A Learn by Demonstration Approach for Closed-Loop, Robust, Anthropomorphic Grasp Planning
- Author
-
Liarokapis, Minas V., Bechlioulis, Charalampos P., Boutselis, George I., Kyriakopoulos, Kostas J., Ferre, Manuel, Series editor, Ernst, Marc O., Series editor, Wing, Alan, Series editor, Bianchi, Matteo, editor, and Moscatelli, Alessandro, editor
- Published
- 2016
- Full Text
- View/download PDF
244. Naturalistic Human-Robot Interaction Design for Control of Unmanned Ground Vehicles
- Author
-
Soo, John Kok Tiong, Tan, Angela Li Sin, Ho, Andrew Si Yong, Diniz Junqueira Barbosa, Simone, Series editor, Chen, Phoebe, Series editor, Du, Xiaoyong, Series editor, Filipe, Joaquim, Series editor, Kara, Orhun, Series editor, Liu, Ting, Series editor, Kotenko, Igor, Series editor, Sivalingam, Krishna M., Series editor, Washio, Takashi, Series editor, and Stephanidis, Constantine, editor
- Published
- 2016
- Full Text
- View/download PDF
245. Personalizing Intelligent Systems and Robots with Human Motion Data
- Author
-
Venture, Gentiane, Baddoura, Ritta, Kawashima, Yuta, Kawashima, Noritaka, Yabuki, Takumi, Siciliano, Bruno, Series editor, Khatib, Oussama, Series editor, Inaba, Masayuki, editor, and Corke, Peter, editor
- Published
- 2016
- Full Text
- View/download PDF
246. Interactive Semantic Mapping: Experimental Evaluation
- Author
-
Gemignani, Guglielmo, Nardi, Daniele, Bloisi, Domenico Daniele, Capobianco, Roberto, Iocchi, Luca, Siciliano, Bruno, Series editor, Khatib, Oussama, Series editor, Hsieh, M. Ani, editor, and Kumar, Vijay, editor
- Published
- 2016
- Full Text
- View/download PDF
247. Engineering the Arts
- Author
-
Herath, Damith, Kroos, Christian, Powers, David M.W., Series editor, Herath, Damith, editor, Kroos, Christian, editor, and Stelarc, editor
- Published
- 2016
- Full Text
- View/download PDF
248. Analysis of Stress and Strain in Head Based Control of Collaborative Robots—A Literature Review
- Author
-
Nelles, Jochen, Kohns, Susanne, Spies, Julia, Brandl, Christopher, Mertens, Alexander, Schlick, Christopher M., Kacprzyk, Janusz, Series editor, Goonetilleke, Ravindra, editor, and Karwowski, Waldemar, editor
- Published
- 2016
- Full Text
- View/download PDF
249. Development of a Nao Humanoid Robot Able to Play Tic-Tac-Toe Game on a Tactile Tablet
- Author
-
Calvo-Varela, Luis, Regueiro, Carlos V., Canzobre, David S., Iglesias, Roberto, Kacprzyk, Janusz, Series editor, Reis, Luís Paulo, editor, Moreira, António Paulo, editor, Lima, Pedro U., editor, Montano, Luis, editor, and Muñoz-Martinez, Victor, editor
- Published
- 2016
- Full Text
- View/download PDF
250. YuMi, Come and Play with Me! A Collaborative Robot for Piecing Together a Tangram Puzzle
- Author
-
Kirschner, David, Velik, Rosemarie, Yahyanejad, Saeed, Brandstötter, Mathias, Hofbaur, Michael, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Ronzhin, Andrey, editor, Rigoll, Gerhard, editor, and Meshcheryakov, Roman, editor
- Published
- 2016
- Full Text
- View/download PDF