44 results for '"robot teaching"'
Search Results
2. Semantic constraints to represent common sense required in household actions for multimodal learning-from-observation robot.
- Author
-
Ikeuchi, Katsushi, Wake, Naoki, Sasabuchi, Kazuhiro, and Takamatsu, Jun
- Subjects
- *
COMMON sense , *HOUSEHOLDS , *WORK environment , *ROBOTS , *REINFORCEMENT learning - Abstract
The learning-from-observation (LfO) paradigm allows a robot to learn how to perform actions by observing human actions. Previous research in top-down learning-from-observation has mainly focused on the industrial domain, which consists only of the real physical constraints between a manipulated tool and the robot's working environment. To extend this paradigm to the household domain, which consists of imaginary constraints derived from human common sense, we introduce the idea of semantic constraints, which are represented similarly to the physical constraints by defining an imaginary contact with an imaginary environment. By studying the transitions between contact states under physical and semantic constraints, we derive a necessary and sufficient set of task representations that provides the upper bound of the possible task set. We then apply the task representations to analyze various actions in top-rated household YouTube videos and real home cooking recordings, classify frequently occurring constraint patterns into physical, semantic, and multi-step task groups, and determine a subset that covers standard household actions. Finally, we design and implement task models corresponding to the task representations in this subset, with daemon functions that collect the parameters needed to perform the corresponding household actions. Our results provide promising directions for incorporating common sense into the robot teaching literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Modeling Adaptive Expression of Robot Learning Engagement and Exploring Its Effects on Human Teachers.
- Author
-
Shuai Ma, Mingfei Sun, and Xiaojuan Ma
- Subjects
GAZE ,MACHINE learning ,DEEP reinforcement learning ,REINFORCEMENT learning ,ARTIFICIAL intelligence ,AUTONOMOUS robots ,EYE tracking ,ROBOT programming - Published
- 2023
- Full Text
- View/download PDF
4. UniRoVE: Unified Robot Virtual Environment Framework.
- Author
-
Zafra Navarro, Alberto, Rodriguez Juan, Javier, Igelmo García, Victor, Ruiz Zúñiga, Enrique, and Garcia-Rodriguez, Jose
- Subjects
ROBOTS ,ROBOT control systems ,ROBOTICS ,INDUSTRIAL robots ,SATISFACTION ,AUTHENTIC learning ,ROBOT programming - Abstract
With robotics applications playing an increasingly significant role in our daily lives, it is crucial to develop effective methods for teaching and understanding their behavior. However, limited access to physical robots in educational institutions and companies poses a significant obstacle for many individuals. To overcome this barrier, a novel framework that combines realistic robot simulation and intuitive control mechanisms within a virtual reality environment is presented. By accurately emulating the physical characteristics and behaviors of various robots, this framework offers an immersive and authentic learning experience. Through an intuitive control interface, users can interact naturally with virtual robots, facilitating the acquisition of practical robotics skills. In this study, a qualitative assessment to evaluate the effectiveness and user satisfaction with the framework is conducted. The results highlighted its usability, realism, and educational value. Specifically, the framework bridges the gap between theoretical knowledge and practical application in robotics, enabling users to gain hands-on experience and develop a deeper understanding of robot behavior and control strategies. Compared to existing approaches, the framework provides a more accessible and effective alternative for interacting with robots, particularly for individuals with limited physical access to such devices. In conclusion, the study presents a comprehensive framework that leverages virtual reality technology to enhance the learning and training process in robotics. By combining realistic simulations and intuitive controls, this framework represents a significant advancement in providing an immersive and effective learning environment. The positive user feedback obtained from the study reinforces the value and potential of the framework in facilitating the acquisition of essential robotics skills. 
Ultimately, this work contributes to flattening the robotics learning curve and promoting broader access to robotics education. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Text-driven object affordance for guiding grasp-type recognition in multimodal robot teaching.
- Author
-
Wake, Naoki, Saito, Daichi, Sasabuchi, Kazuhiro, Koike, Hideki, and Ikeuchi, Katsushi
- Abstract
In robot teaching, the grasping strategies taught to robots by users are critical information, because these strategies contain the implicit knowledge necessary to successfully perform a series of manipulations; however, limited practical knowledge exists on how to utilize linguistic information for supporting grasp-type recognition in multimodal teaching. This study focused on the effects of text-driven object affordance—a prior distribution of grasp types for each object—on image-based grasp-type recognition. To this end, we created the datasets of first-person grasping-hand images labeled with grasp types and object names and tested if the object affordance enhanced the performance of image-based recognition. We evaluated two scenarios with real and illusory objects to be grasped, considering a teaching condition in mixed reality, where the lack of visual object information can make image-based recognition challenging. The results show that object affordance guided the image-based recognition in two scenarios, that is, increasing the recognition accuracy by (1) excluding the unlikely grasp types from the candidates and (2) enhancing the likely grasp types. Additionally, the “enhancing effect” was more pronounced with greater grasp-type bias for each object in a test dataset. These results indicate the effectiveness of object affordance for guiding grasp-type recognition in multimodal robot teaching applications. The contributions of this study are (1) demonstrating the effectiveness of object affordance in guiding grasp-type recognition both with and without the real objects in images, (2) demonstrating the conditions under which the merits of object affordance are pronounced, and (3) providing a dataset of first-person grasping images labeled with possible grasp types for each object. [ABSTRACT FROM AUTHOR]
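The affordance-guided recognition this abstract describes, a prior distribution over grasp types re-weighting image-based scores, can be sketched as follows. This is an illustrative Python sketch: the grasp-type labels, example probabilities, and the multiplicative fusion rule are assumptions for exposition, not the authors' exact method.

```python
import numpy as np

# Hypothetical grasp-type taxonomy; the paper's actual label set differs.
GRASP_TYPES = ["power", "precision", "lateral", "hook"]

def fuse_with_affordance(image_probs, affordance_prior, eps=1e-9):
    """Re-weight image-based grasp-type scores by a text-driven
    object-affordance prior (a distribution over grasp types), then
    renormalize. A near-zero prior excludes unlikely grasp types;
    a peaked prior enhances likely ones."""
    fused = np.asarray(image_probs) * (np.asarray(affordance_prior) + eps)
    return fused / fused.sum()

# Example: the image model is unsure, but "cup" (assumed) affords
# mostly power grasps, so the prior tips the decision.
image_probs = np.array([0.35, 0.30, 0.20, 0.15])
cup_prior   = np.array([0.70, 0.20, 0.10, 0.00])
fused = fuse_with_affordance(image_probs, cup_prior)
best = GRASP_TYPES[int(np.argmax(fused))]
```

This captures both effects the abstract reports: excluding candidates (the zero-prior "hook" type is suppressed) and enhancing likely ones (the peaked "power" entry grows after renormalization).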
- Published
- 2023
- Full Text
- View/download PDF
6. An HTC Vive-Based Teaching and Programming System for Virtual Industrial Robots.
- Author
-
何汉武, 余秋硕, 聂 晖, 何明桐, and 李晋芳
- Subjects
INDUSTRIAL robots ,ROBOT programming ,SYNTAX (Grammar) ,USER interfaces ,VIRTUAL reality ,ROBOTS - Abstract
Copyright of Journal of Guangdong University of Technology is the property of Journal of Guangdong University of Technology and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
7. Interactive robot teaching based on finger trajectory using multimodal RGB-D-T-data
- Author
-
Yan Zhang, Richard Fütterer, and Gunther Notni
- Subjects
multimodal image processing ,RGB-D-T-data ,point cloud processing ,finger trajectory recognition ,robot teaching ,meshless finite difference solution ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
The concept of Industry 4.0 is changing industrial manufacturing patterns, which are becoming more efficient and more flexible. In response to this tendency, efficient robot teaching approaches that avoid complex programming have become a popular research direction. We therefore propose an interactive finger-touch-based robot teaching scheme using multimodal 3D image (color (RGB), thermal (T), and point cloud (3D)) processing. The heat trace left by touching the object surface is analyzed across the multimodal data in order to precisely identify the true hand/object contact points. These identified contact points are used to calculate the robot path directly. To optimize the identification of the contact points, we propose a calculation scheme using a number of anchor points, which are first predicted by hand/object point cloud segmentation. Subsequently, a probability density function is defined to calculate the prior probability distribution of the true finger trace. The temperature in the neighborhood of each anchor point is then dynamically analyzed to calculate the likelihood. Experiments show that the trajectories estimated by our multimodal method have significantly better accuracy and smoothness than those obtained by analyzing only the point cloud and a static temperature distribution.
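The prior-times-likelihood scoring of anchor points described in this abstract can be sketched as follows. This is a toy Python illustration: the Gaussian forms, the ambient temperature, and the bandwidths are invented assumptions, not the authors' model.

```python
import numpy as np

def contact_scores(anchors, trace_pts, temps,
                   ambient=22.0, sigma_d=5.0, sigma_t=4.0):
    """Score candidate anchor points as prior * likelihood:
    prior  - Gaussian in the distance to the nearest point of the
             predicted finger trace (from point cloud segmentation);
    likelihood - grows with the warm residue left by a real touch,
             measured against an assumed ambient temperature."""
    anchors = np.asarray(anchors, float)   # (N, 3) candidate points
    trace   = np.asarray(trace_pts, float) # (M, 3) predicted trace
    temps   = np.asarray(temps, float)     # (N,)  local temperatures (degC)
    d = np.min(np.linalg.norm(anchors[:, None] - trace[None], axis=-1), axis=1)
    prior = np.exp(-0.5 * (d / sigma_d) ** 2)
    like = 1.0 - np.exp(-0.5 * ((temps - ambient).clip(min=0) / sigma_t) ** 2)
    post = prior * like
    return post / post.sum() if post.sum() > 0 else post

# Example: only the anchor that is both warm and on the predicted
# trace should dominate; a warm off-trace anchor is down-weighted.
scores = contact_scores(
    anchors=[[0, 0, 0], [10, 0, 0], [0, 20, 0]],
    trace_pts=[[0, 0, 0], [5, 0, 0], [10, 0, 0]],
    temps=[30.0, 23.0, 30.0],
)
```

The point of combining both cues is visible here: neither temperature alone (anchor 3 is warm) nor geometry alone (anchor 2 is on the trace) identifies the true contact.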
- Published
- 2023
- Full Text
- View/download PDF
8. Metrics to evaluate human teaching engagement from a robot's point of view
- Author
-
Novanda, Ori
- Subjects
629.8 ,robot teaching ,engagement evaluation ,effort measurement ,human-robot interaction ,humanoid robot ,multimodal interaction ,input modality preference - Abstract
This thesis was motivated by a study of how robots can be taught by humans, with an emphasis on allowing persons without programming skills to teach robots. The focus of this thesis was to investigate what criteria could or should be used by a robot to evaluate whether a human teacher is (or potentially could be) a good teacher in robot learning by demonstration; in effect, choosing the teacher who can maximize the benefit to the robot in learning by imitation/demonstration. The study approached this topic by taking a technology snapshot in time to see if a representative example of research-laboratory robot technology is capable of assessing teaching quality. With this snapshot, the study evaluated how humans observe teaching quality, in order to establish measurement metrics that can be transferred as rules or algorithms that are beneficial from a robot's point of view. To evaluate teaching quality, the study looked at the teacher-student relationship from a human-human interaction perspective. Two factors were considered important in defining a good teacher: engagement and immediacy. The study reviewed further literature on the detailed elements of engagement and immediacy, and also examined physical effort as a possible metric for measuring the level of engagement of the teachers. An investigatory experiment was conducted to evaluate which modality participants prefer to employ when a robot can be taught using voice, gesture demonstration, or physical manipulation. The findings from this experiment suggested that the participants had no preference in terms of human effort for completing the task. However, there was a significant difference in human enjoyment preferences of input modality and a marginal difference in the robot's perceived ability to imitate. 
A main experiment was conducted to study the detailed elements that might be used by a robot in identifying a 'good' teacher. It consisted of two sub-experiments: the first recorded the teacher's activities, and the second analysed how humans evaluate the perception of engagement when assessing another human teaching a robot. The results suggested that in human teaching of a robot (human-robot interaction), human evaluators also look for some of the immediacy cues that occur in human-human interaction when evaluating engagement.
- Published
- 2017
9. UniRoVE: Unified Robot Virtual Environment Framework
- Author
-
Alberto Zafra Navarro, Javier Rodriguez Juan, Victor Igelmo García, Enrique Ruiz Zúñiga, and Jose Garcia-Rodriguez
- Subjects
virtual reality ,industrial robots ,collaborative robots ,robot teaching ,human–robot interaction ,immersive learning ,Mechanical engineering and machinery ,TJ1-1570 - Abstract
With robotics applications playing an increasingly significant role in our daily lives, it is crucial to develop effective methods for teaching and understanding their behavior. However, limited access to physical robots in educational institutions and companies poses a significant obstacle for many individuals. To overcome this barrier, a novel framework that combines realistic robot simulation and intuitive control mechanisms within a virtual reality environment is presented. By accurately emulating the physical characteristics and behaviors of various robots, this framework offers an immersive and authentic learning experience. Through an intuitive control interface, users can interact naturally with virtual robots, facilitating the acquisition of practical robotics skills. In this study, a qualitative assessment to evaluate the effectiveness and user satisfaction with the framework is conducted. The results highlighted its usability, realism, and educational value. Specifically, the framework bridges the gap between theoretical knowledge and practical application in robotics, enabling users to gain hands-on experience and develop a deeper understanding of robot behavior and control strategies. Compared to existing approaches, the framework provides a more accessible and effective alternative for interacting with robots, particularly for individuals with limited physical access to such devices. In conclusion, the study presents a comprehensive framework that leverages virtual reality technology to enhance the learning and training process in robotics. By combining realistic simulations and intuitive controls, this framework represents a significant advancement in providing an immersive and effective learning environment. The positive user feedback obtained from the study reinforces the value and potential of the framework in facilitating the acquisition of essential robotics skills. 
Ultimately, this work contributes to flattening the robotics learning curve and promoting broader access to robotics education.
- Published
- 2023
- Full Text
- View/download PDF
10. Teaching of a Welding Robot : An intuitive method using virtual reality tracking devices
- Author
-
Peters, Jannik
- Abstract
In this research, the use of the SteamVR tracking system as a teaching method for industrial robots was investigated, in particular, how it can make the setup of welding applications more intuitive. To that end, an application was developed that handles the recording of the teaching data and the control of the robot, allowing fast setup times of only a few minutes. Tests were conducted and a static accuracy of 10 mm was determined at the TCP, which is not sufficient for welding. Further investigation of the tracking system showed directional dependencies, a slow dynamic response of the tracking, which can add another 10 mm of error, and deviations in the pose determination of between 3 and 20 mm, making this tracking setup usable only for applications where no precision is needed.
- Published
- 2024
11. User-Friendly Intuitive Teaching Tool for Easy and Efficient Robot Teaching in Human-Robot Collaboration
- Author
-
Do, Hyunmin, Choi, Taeyong, Park, Dong Il, Kim, Hwi-su, Park, Chanhun, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Strand, Marcus, editor, Dillmann, Rüdiger, editor, Menegatti, Emanuele, editor, and Ghidoni, Stefano, editor
- Published
- 2019
- Full Text
- View/download PDF
12. Contact Web Status Presentation for Freehand Grasping in MR-based Robot-teaching.
- Author
-
Daichi Saito, Naoki Wake, Kazuhiro Sasabuchi, Hideki Koike, and Katsushi Ikeuchi
- Subjects
PREHENSION (Physiology) ,DEPTH perception ,MIXED reality ,HAPTIC devices - Abstract
Presenting realistic grasping interaction with virtual objects in mixed reality (MR) is an important issue in promoting the use of MR, for example in the robot-teaching domain. However, the intended interaction is difficult to achieve in MR due to the difficulty of depth perception and the lack of haptic feedback. To make intended grasping interactions in MR easier to achieve, we propose visual cues (a contact web) that represent the contact state between the user's hand and an MR object. To evaluate the effect of the proposed method, we performed two experiments. The first experiment determines the grasp type and object to be used in the evaluation of the second experiment, and the second experiment measures the time taken to complete grasping tasks with the object. Both objective and subjective evaluations show that the proposed visual cues yielded a significant reduction in the time required to complete the task. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
13. Offline Direct Teaching for a Robotic Manipulator in the Computational Space.
- Author
-
Makita, Satoshi, Sasaki, Takuya, and Urakawa, Tatsuhiro
- Subjects
MANIPULATORS (Machinery) ,ROBOTICS ,VIRTUAL reality ,AUGMENTED reality ,INDUSTRIAL productivity - Abstract
This paper proposes a robot teaching method using augmented and virtual reality technologies. Robot teaching is essential for robots to accomplish several tasks in industrial production. Although there are various approaches to perform motion planning for robot manipulation, robot teaching is still required for precision and reliability. Online teaching, in which a physical robot moves in the real space to obtain the desired motion, is widely performed because of its ease and reliability. However, actual robot movements are required. In contrast, offline teaching can be accomplished entirely in the computational space, and it requires constructing the robot's surroundings as computer graphic models. Additionally, planar displays do not provide sufficient information on 3D scenes. Our proposed method can be employed as offline teaching, but the operator can manipulate the robot intuitively using a head-mounted device and the specified controllers in the virtual 3D space. We demonstrate two approaches for robot teaching with augmented and virtual reality technologies and show some experimental results. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. A Robot Teaching Method Based on Motion Tracking with Particle Filter
- Author
-
Huang, Yanjiang, Xie, Jie, Zhou, Haopeng, Zheng, Yanglong, Zhang, Xianmin, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Huang, YongAn, editor, Wu, Hao, editor, Liu, Honghai, editor, and Yin, Zhouping, editor
- Published
- 2017
- Full Text
- View/download PDF
15. UniRoVE: Unified Robot Virtual Environment Framework
- Author
-
Universidad de Alicante. Departamento de Tecnología Informática y Computación, Zafra Navarro, Alberto, Rodríguez Juan, Javier, Igelmo García, Victor, Ruiz Zúñiga, Enrique, and Garcia-Rodriguez, Jose
- Abstract
With robotics applications playing an increasingly significant role in our daily lives, it is crucial to develop effective methods for teaching and understanding their behavior. However, limited access to physical robots in educational institutions and companies poses a significant obstacle for many individuals. To overcome this barrier, a novel framework that combines realistic robot simulation and intuitive control mechanisms within a virtual reality environment is presented. By accurately emulating the physical characteristics and behaviors of various robots, this framework offers an immersive and authentic learning experience. Through an intuitive control interface, users can interact naturally with virtual robots, facilitating the acquisition of practical robotics skills. In this study, a qualitative assessment to evaluate the effectiveness and user satisfaction with the framework is conducted. The results highlighted its usability, realism, and educational value. Specifically, the framework bridges the gap between theoretical knowledge and practical application in robotics, enabling users to gain hands-on experience and develop a deeper understanding of robot behavior and control strategies. Compared to existing approaches, the framework provides a more accessible and effective alternative for interacting with robots, particularly for individuals with limited physical access to such devices. In conclusion, the study presents a comprehensive framework that leverages virtual reality technology to enhance the learning and training process in robotics. By combining realistic simulations and intuitive controls, this framework represents a significant advancement in providing an immersive and effective learning environment. The positive user feedback obtained from the study reinforces the value and potential of the framework in facilitating the acquisition of essential robotics skills. Ultimately, this work contributes to flattening the robotics learning curve and promoting broader access to robotics education.
- Published
- 2023
16. A Shared Control Interface for Online Teleoperated Teaching of Combined High- and Low-level Skills
- Author
-
Rots, Astrid
- Abstract
We propose a novel shared control interface that enables teleoperated teaching of both high-level decision-making skills and low-level impedance modulation skills using a single haptic device. In the proposed method, high-level teaching is achieved by repurposing the haptic device to remotely modify Behaviour Trees (BTs), allowing human operators to guide decision-making. Repurposing of the haptic device is achieved by exploiting its degrees of freedom for different functionalities. Low-level skill teaching involves an impedance command interface, which is used to command endpoint stiffness by manipulating a 3D virtual stiffness ellipsoid with the haptic device. Both teaching modes are connected: a newly demonstrated low-level skill appears in the BT at a user-specified index. Control is shared between the human and the autonomous system at both the high and the low level. At the higher level, the human can change the BT online, while ongoing execution of the low-level actions within the behaviour tree remains uninterrupted. During low-level teaching, shared control is implemented between the robotic motion skill and the human-demonstrated stiffness. To provide a proof of concept and demonstrate the main features of the proposed interface, we performed several experiments in a teleoperation setup operating a remote shelf-stocker robot in a supermarket environment. A predefined BT encodes high-level decisions for a pick-and-place task. The impedance command interface is evaluated in a "peg-in-hole"-like task of placing a product on a cluttered shelf. Ultimately, the proposed interface can facilitate teleoperation-based Learning from Demonstration for the transfer of both high- and low-level skills in an integrated manner.
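The stiffness-ellipsoid command described in this abstract has a compact standard form: the operator sets the ellipsoid's principal directions and radii, and the commanded endpoint stiffness matrix is K = R diag(k) Rᵀ. The sketch below is illustrative Python with invented values, not code or parameters from the thesis.

```python
import numpy as np

def stiffness_from_ellipsoid(R, radii):
    """Build a symmetric positive-definite endpoint stiffness matrix
    from an ellipsoid: columns of R are the principal directions,
    radii are the stiffness magnitudes along them (N/m)."""
    R = np.asarray(R, float)
    return R @ np.diag(radii) @ R.T

# Example: stiff along x, compliant along y and z, axes aligned with
# the world frame (values are illustrative).
K = stiffness_from_ellipsoid(np.eye(3), [800.0, 150.0, 150.0])
# Restoring force for a 1 cm position error along x and y.
force = K @ np.array([0.01, 0.01, 0.0])
```

Scaling one ellipsoid axis with the haptic device then directly scales the restoring force along that direction, which is what makes the ellipsoid an intuitive handle for demonstrating impedance.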
- Published
- 2023
17. Automatic teaching of a robotic remote laser 3D processing system based on an integrated laser-triangulation profilometry
- Author
-
Matija Jezeršek, Matjaž Kos, Hubert Kosler, and Janez Možina
- Subjects
edge detection ,laser triangulation ,remote laser processing ,robot teaching ,three-dimensional measurement ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
One of the key challenges in robotic remote laser 3D processing (RL3DP) is to achieve high accuracy for the laser's working trajectory relative to the features of the workpiece. This paper presents a novel RL3DP system with automatic 3D teaching functionality for precise and rapid determination of the working trajectory; the system comprises a robot manipulator, a 3D scanning head, a fibre laser, and an off-axis positioned camera. The 3D measurement is based on laser triangulation with laser-stripe illumination using the laser's pilot beam and the scanning head. The experimental results show that the system has a precision better than 70 μm and 120 μm along the lateral and vertical directions, respectively, inside the measuring range of 100 × 100 mm. The teaching time is 30 times shorter than that of a visual teaching procedure. Such a system can therefore lead to large cost reductions for modern production lines with constant changes to product geometries and functionalities.
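The laser-triangulation principle behind this kind of profilometry reduces, in its simplest textbook form, to one equation: a laser offset by baseline b from the camera lights a point, the point images at offset x on the sensor, and depth follows as z = f·b/x. The parameters below are illustrative, not the calibrated values of the paper's system.

```python
def triangulate_depth(x_mm, f_mm, baseline_mm):
    """Depth z = f * b / x for the idealized geometry where the laser
    beam is parallel to the camera's optical axis; x is the imaged
    spot offset on the sensor, f the focal length, b the baseline."""
    if x_mm <= 0:
        raise ValueError("spot offset must be positive")
    return f_mm * baseline_mm / x_mm

# Example with assumed values: f = 16 mm, b = 100 mm, x = 2 mm.
z = triangulate_depth(x_mm=2.0, f_mm=16.0, baseline_mm=100.0)
```

Because z varies as 1/x, the depth sensitivity dz/dx = -f·b/x² grows quadratically as x shrinks, which is why sub-pixel stripe localization is what makes the tens-of-microns precision reported above achievable.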
- Published
- 2017
- Full Text
- View/download PDF
18. Not Your Cup of Tea? How Interacting With a Robot Can Increase Perceived Self-efficacy in HRI and Evaluation.
- Author
-
Rosenthal-von der Pütten, Astrid M., Bock, Nikolai, and Brockmann, Katharina
- Subjects
HUMAN-robot interaction ,SELF-efficacy ,CUSTOMIZATION ,UNCERTAINTY ,GERIATRIC psychology - Abstract
The goal of this work is to explore the influence of do-it-yourself customization of a robot on technologically experienced students' and inexperienced elderly users' perceived self-efficacy in HRI, uncertainty, and evaluation of the robot and interaction. We introduce the Self-Efficacy in HRI Scale and present two experimental studies. In study 1 (students, n=60) we found that any interaction with the robot increased self-efficacy, regardless of whether this interaction involved customization or not. Moreover, individual increases in self-efficacy predicted more positive evaluations. In a second study with elderly users (n=60) we could not replicate the general positive effect of the interaction on self-efficacy. Again, we did not find the hypothesized stronger effect of customization on self-efficacy, nor did we find the relationship between self-efficacy increase and evaluation. We discuss limitations of the setting and implications for questionnaire design for elderly participants. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
19. Developing Robot Motions by Simulated Touch Sensors
- Author
-
Dalla Libera, Fabio, Minato, Takashi, Ishiguro, Hiroshi, Pagello, Enrico, Menegatti, Emanuele, Carbonell, Jaime G., editor, Siekmann, Jörg, editor, Carpin, Stefano, editor, Noda, Itsuki, editor, Pagello, Enrico, editor, Reggiani, Monica, editor, and von Stryk, Oskar, editor
- Published
- 2008
- Full Text
- View/download PDF
20. Not your cup of tea? How interacting with a robot can increase perceived self-efficacy in HRI and technology acceptance.
- Subjects
HUMAN-robot interaction ,SELF-efficacy ,ROBOTS ,TECHNOLOGY Acceptance Model ,INTERPERSONAL relations - Abstract
The goal of this work is to explore the influence of do-it-yourself customization of a robot on technologically experienced students' and inexperienced elderly users' perceived self-efficacy in HRI, uncertainty, and technology acceptance. We introduce the Self-Efficacy in HRI Scale and present two experimental studies. In study 1 (students, n=60) we found that any interaction with the robot increased self-efficacy, regardless of whether this interaction involved customization or not. Moreover, individual increases in self-efficacy predicted more positive evaluations. In a second study with elderly users (n=60) we could replicate the general positive effect of the interaction on self-efficacy. Again, we did not find the hypothesized stronger effect of customization on self-efficacy, nor did we find that relationship between self-efficacy increase and evaluation. We discuss limitations of the setting and implications for questionnaire design for elderly participants. [ABSTRACT FROM AUTHOR]
- Published
- 2017
21. Robot teaching by teleoperation based on visual interaction and extreme learning machine.
- Author
-
Xu, Yang, Yang, Chenguang, Zhong, Junpei, Wang, Ning, and Zhao, Lijun
- Subjects
- *
MACHINE learning , *REMOTE control , *ROBUST control , *HUMAN-robot interaction , *HUMAN-computer interaction - Abstract
Compared with traditional robot teaching methods, teleoperation lets robots learn various human-like skills in a more efficient and natural manner. In this paper, we propose a teleoperation method based on human-robot interaction (HRI) that mainly uses visual information. With only one teleoperation session, the robot can reproduce a trajectory. There is a certain error between this trajectory and the optimal trajectory, caused by the human demonstrator or the robot, so we use an extreme learning machine (ELM) based algorithm to transfer the demonstrator's motions to the robot. To verify the method, we use a Microsoft Kinect V2 to capture the human body motion and the hand state, according to which a Baxter robot in the Virtual Robot Experimentation Platform (V-REP) is controlled. Through learning and training by the ELM, the robot in V-REP can complete a certain task autonomously, and the real robot can reproduce this trajectory well. The experimental results show that the developed method achieves satisfactory performance. [ABSTRACT FROM AUTHOR]
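An ELM of the kind mentioned in this abstract is a single-hidden-layer network whose input weights are random and fixed; only the output weights are learned, in closed form, by (regularized) least squares. The sketch below is a generic NumPy illustration on toy data, not the paper's implementation; dimensions and the regression target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine regressor."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(size=(n_in, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)          # fixed random biases
        self.beta = None                            # learned output weights

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y, reg=1e-6):
        H = self._hidden(X)
        # Ridge-regularized least squares:
        # beta = (H^T H + reg I)^-1 H^T Y
        self.beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy use: learn a smooth 1-D mapping from noisy demonstration samples.
X = np.linspace(-1, 1, 200)[:, None]
Y = np.sin(3 * X) + 0.01 * rng.normal(size=X.shape)
model = ELM(1, 50).fit(X, Y)
err = float(np.abs(model.predict(X) - np.sin(3 * X)).mean())
```

Because training is a single linear solve rather than iterative backpropagation, an ELM can be retrained quickly from a handful of demonstrations, which is the property that makes it attractive for online teaching by teleoperation.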
- Published
- 2018
- Full Text
- View/download PDF
22. A novel force feedback model for virtual robot teaching of belt lapping.
- Author
-
Li, Jing-Rong, Ni, Jian-Long, Xie, Hai-Long, and Wang, Qing-Hui
- Subjects
- *
VIRTUAL machine systems , *COMPUTER systems , *ROBOT programming , *COMPUTER programming , *ELASTICITY (Economics) - Abstract
Virtual offline robot teaching provides a flexible and more economical robot programming solution for belt lapping. Due to the elasticity of the components, the applied force directly influences the polishing result, and the corresponding reaction force sensed by the user affects his or her decision on the next operation. However, most of the simulation work reported so far focuses mainly on the machining effect of belt lapping rather than the contact force. Therefore, this paper proposes a novel force feedback model for virtual robot teaching of belt lapping. By analyzing the force conditions at different process stages, three kinds of forces, namely natural force, colliding force, and resistance force, are defined to provide users with continuous force feedback during a virtual lapping process. To validate the model, a comparative study is conducted between the actual force measured during the belt lapping process and that simulated by the proposed force model. Moreover, evaluators conduct virtual lapping of a mechanical part using the prototype developed. Both the quantitative experiments and the user evaluations validate that the force model provides users with realistic force feedback and a better ergonomic feeling of immersion. The evaluators are also positive about the flexibility and efficiency of using the prototype for virtual robot teaching of belt lapping processes. [ABSTRACT FROM AUTHOR]
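A staged force law of the general shape the abstract describes (no force in free motion, a speed-dependent transient at contact, then elastic resistance during lapping) can be sketched as below. The stage threshold, stiffness, and damping values are invented for illustration; the paper's actual model and parameters are not reproduced here.

```python
def feedback_force(penetration_mm, approach_speed_mm_s,
                   k_contact=5.0, c_impact=0.8, transition_mm=0.2):
    """Illustrative three-stage force-feedback law (units: N, mm, mm/s).
    natural:    no contact, no force;
    colliding:  brief transient dominated by approach speed;
    resistance: elastic belt deflection dominates during lapping."""
    if penetration_mm <= 0.0:
        return 0.0                                          # natural stage
    if penetration_mm < transition_mm:
        return c_impact * approach_speed_mm_s + k_contact * penetration_mm
    return k_contact * penetration_mm                       # resistance stage

# Example: a fast approach produces a larger colliding force than the
# steady resistance at the same shallow penetration.
f_collide = feedback_force(0.1, approach_speed_mm_s=10.0)
f_resist  = feedback_force(1.0, approach_speed_mm_s=0.0)
```

A real haptic implementation would also smooth the transition between stages to avoid a force discontinuity; the piecewise form here is kept deliberately minimal.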
- Published
- 2017
- Full Text
- View/download PDF
23. Solpen: An Accurate 6-DOF Positioning Tool for Vision-Guided Robotics
- Author
-
Trung-Son Le, Quoc-Viet Tran, Xuan-Loc Nguyen, and Chyi-Yeu Lin
- Subjects
Computer Networks and Communications ,Hardware and Architecture ,Control and Systems Engineering ,Signal Processing ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Electrical and Electronic Engineering ,target tracking ,bundle adjustment ,optimization ,image processing ,human-computer interaction ,robot teaching - Abstract
A robot trajectory teaching system with a vision-based positioning pen, which we call Solpen, is developed to generate six-degree-of-freedom (6-DoF) pose paths for vision-guided robotics applications such as welding, cutting, painting, or polishing. It achieves millimeter-level dynamic accuracy within a one-meter working distance from the camera. The system is simple, requiring only a 2D camera and printed ArUco markers that are hand-glued onto 31 surfaces of the 3D-printed Solpen. Image processing techniques are implemented to remove noise, sharpen the edges of the ArUco images, and enhance the contrast of the ArUco edge intensity generated by pyramid reconstruction. In addition, the least-squares method is used to optimize parameters for the center pose of the truncated icosahedron and the vector of the Solpen tip. Dynamic experiments conducted with a ChArUco board to verify the pen's performance in isolation show that the developed system is robust within its working range and achieves a minimum axis accuracy of approximately 0.8 mm.
- Published
- 2022
- Full Text
- View/download PDF
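The least-squares estimation of the Solpen-tip vector mentioned above can be illustrated with a standard pivot-style calibration (the paper's exact formulation may differ). While the tip rests on one fixed point, every observed body pose (R_i, t_i) satisfies R_i p + t_i = q, where p is the tip offset in the pen's frame and q is the fixed world point; stacking these equations gives one linear least-squares problem. The synthetic poses below are made up purely to check the solver.

```python
import numpy as np

def calibrate_tip(rotations, translations):
    """Pivot-style least-squares tip calibration: solve R_i @ p + t_i = q
    for the tip offset p (body frame) and pivot point q (world frame)."""
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(rotations, translations)):
        A[3*i:3*i+3, :3] = R           # coefficients of p
        A[3*i:3*i+3, 3:] = -np.eye(3)  # coefficients of q
        b[3*i:3*i+3] = -t
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]

# Synthetic check: a known (made-up) tip offset, pivoting about a fixed point.
rng = np.random.default_rng(0)
p_true = np.array([0.0, 0.0, 0.12])   # hypothetical 12 cm tip offset
q_true = np.array([0.3, -0.1, 0.5])
Rs, ts = [], []
for _ in range(20):
    axis = rng.normal(size=3); axis /= np.linalg.norm(axis)
    ang = rng.uniform(0.1, 0.8)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(ang) * K + (1 - np.cos(ang)) * (K @ K)  # Rodrigues
    Rs.append(R); ts.append(q_true - R @ p_true)
p_est, q_est = calibrate_tip(Rs, ts)
```

With noise-free synthetic poses the solver recovers the offset exactly; with real marker detections the residual of the same system indicates calibration quality.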
24. Pragmatic Frames for Teaching and Learning in Human-Robot interaction: Review and Challenges.
- Author
-
Vollmer, Anna-Lisa, Wrede, Britta, Rohlfing, Katharina J., Oudeyer, Pierre-Yves, Boucenna, Sofiane, and Lallée, Stéphane
- Subjects
HUMAN-robot interaction ,SOCIAL skills ,DEVELOPMENTAL psychology - Abstract
One of the big challenges in robotics today is to learn from human users who are inexperienced in interacting with robots yet are often used to teaching skills flexibly to other humans, and to children in particular. A potential route toward natural and efficient learning and teaching in Human-Robot Interaction (HRI) is to leverage the social competences of humans and the underlying interactional mechanisms. In this perspective, this article discusses the importance of pragmatic frames as flexible interaction protocols that provide important contextual cues, enabling learners to infer new action or language skills and teachers to convey these cues. After defining and discussing the concept of pragmatic frames, grounded in decades of research in developmental psychology, we study a selection of HRI work in the literature which has focused on learning-teaching interaction, and analyze the interactional and learning mechanisms that were used in the light of pragmatic frames. This allows us to show that many of the works have already used basic elements of the pragmatic-frames machinery in practice, though not always explicitly. However, we also show that pragmatic frames have so far been used in a very restricted way compared to how they are used in human-human interaction, and argue that this has been an obstacle preventing robust, natural multi-task learning and teaching in HRI. In particular, we explain that two central features of human pragmatic frames, mostly absent from existing HRI studies, are that (1) social peers use rich repertoires of frames, potentially combined together, to convey and infer multiple kinds of cues; and (2) new frames can be learnt continually, building on existing ones, and guiding the interaction toward higher levels of complexity and expressivity. To conclude, we give an outlook on future research directions, describing the key challenges that need to be solved to leverage pragmatic frames for robot learning and teaching.
[ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
25. First Application of Robot Teaching in an Existing Industry 4.0 Environment: Does It Really Work?
- Author
-
Weiss, Astrid, Huber, Andreas, Minichberger, Jürgen, and Ikeda, Markus
- Subjects
INDUSTRIAL robots ,HUMAN-robot interaction ,ROBOTS - Abstract
This article reports three case studies on the usability and acceptance of an industrial robotic prototype in the context of human-robot cooperation. The three case studies were conducted in the framework of a two-year project named AssistMe, which aims at developing different means of interaction for programming and using collaborative robots in a user-centered manner. Together with two industrial partners and a technological partner, two different application scenarios were implemented and studied with an off-the-shelf robotic system. The operators worked with the robotic prototype in laboratory conditions (two days), in a factory context (one day) and in an automotive assembly line (three weeks). In the article, the project and procedures are described in detail, including the quantitative and qualitative methodology. Our results show that close human-robot cooperation in the industrial context needs adaptive pacing mechanisms in order to avoid a change of working routines for the operators, and that an off-the-shelf robotic system is still limited in terms of usability and acceptance. The touch panel, which is needed for controlling the robot, had a negative impact on the overall user experience. It creates a further intermediate layer between the user, the robot and the workpiece, and potentially leads to a decrease in productivity. Finally, workers regularly expressed the fear of being replaced by an improved robotic system, which adds an additional anthropocentric dimension to the discussion of human-robot cooperation, smart factories and the upcoming Industry 4.0. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
26. Diseño e implementación de una interfaz basada en RA para la interacción humano-robot usando Microsoft HoloLens 2
- Author
-
Valera Fernández, Ángel, Rossmann, Jürgen, Atanasyan, Alexander, Universitat Politècnica de València. Departamento de Ingeniería de Sistemas y Automática - Departament d'Enginyeria de Sistemes i Automàtica, Universitat Politècnica de València. Escuela Técnica Superior de Ingenieros Industriales - Escola Tècnica Superior d'Enginyers Industrials, Serra Pérez, Marina, Valera Fernández, Ángel, Rossmann, Jürgen, Atanasyan, Alexander, Universitat Politècnica de València. Departamento de Ingeniería de Sistemas y Automática - Departament d'Enginyeria de Sistemes i Automàtica, Universitat Politècnica de València. Escuela Técnica Superior de Ingenieros Industriales - Escola Tècnica Superior d'Enginyers Industrials, and Serra Pérez, Marina
- Abstract
[ES] Despite the ever-growing number of robot applications and the lowering of the barriers to entry into robotics in general, robot programming often remains cumbersome and time-consuming. Programming collaborative industrial robots is a challenging task, as it requires a worker with skills in robotics. This translates into increased expenses for companies, since the robots must be reprogrammed whenever they are installed on a different production line; for this reason, they are normally used only in large-scale production. Researchers are evaluating various approaches to reduce training time and increase ease of use, so that workers with ordinary skills can carry out the programming task easily. Due to the nature of the task, the emerging media of Virtual Reality (VR) and Augmented Reality (AR) appear to be the most suitable and intuitive tools for this work. They build on spatial augmented reality, cognition, and multimodal input and output. As hand tracking becomes more ubiquitous in readily available VR and MR systems, it allows new interaction paradigms for human-machine interaction to be explored. In the scope of this thesis, hand tracking facilitates a faster and more intuitive pose definition for creating the trajectories that must be taught to the robot manipulator. Scenarios in which the objects to be manipulated are not stationary constitute a particular challenge for programming tasks. The task of this thesis includes the implementation of one or more interaction paradigms and their comparison with classical teaching methods. The teaching process is carried out for a UR10 manipulator in a robotic work cell scenario with a pick-and-place task.
To this end, the present work makes use of the work cell with the robot, a mixed-reality head-mounted display (HoloLens 2) and a 3D simulation system., [EN] Robot programming often remains cumbersome and time-consuming. For this reason, researchers are evaluating various approaches to reduce training time and increase user-friendliness, so that ordinary-skilled workers are able to perform the programming task easily. Due to the nature of the task, the emerging media of Virtual Reality (VR) and Augmented Reality (AR) appear to be the best-suited and most intuitive tools in these endeavors. This thesis’ task includes the implementation of a new interaction paradigm and its comparison to classical teaching methods. The teaching process is carried out for a UR10 manipulator in a robotic work cell scenario with a pick-and-place task. For this, the present work makes use of the work cell with the robot, a Mixed Reality (MR) Head-Mounted Display (HMD) — HoloLens 2 — and a 3D simulation system, for which a Digital Twin (DT) of the UR10 already exists.
- Published
- 2021
27. POMDP based robot teaching for high precision assembly in manufacturing automation.
- Author
-
Cheng, Hongtai, Chen, Heping, and Liu, Yong
- Abstract
Robot teaching is a necessary process when setting up a robot for a new task, transferring it from a simulation environment to the real environment, or improving the robot's performance on a new batch of workpieces. Typically, the robot teaching process is performed by a human with a teach pendant, which increases operational cost and reduces efficiency. In this paper we present a framework for autonomous robot teaching using one camera without hand-eye calibration. A mobile robot with a camera acts as an “adult” performing teaching tasks in a production line. To overcome the problems caused by a single uncalibrated vision sensor, the concept of a “View Cone” is utilized to provide effective observations of the underlying metric information. A geometrical model of the “View Cone” is built to describe the relationship between block properties and the underlying metric information. The robot teaching process is modeled as a Partially Observable Markov Decision Process (POMDP) and solved by the Successive Approximation of the Reachable Space under Optimal Policies (SARSOP) algorithm. Simulations were performed and the results verify the effectiveness of the proposed method. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
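At the core of any POMDP formulation like the one above is the belief update, a Bayes filter over hidden states; SARSOP then plans over such beliefs with point-based value iteration. A minimal sketch with a made-up two-state example follows (the transition and observation probabilities are invented, not the paper's).

```python
import numpy as np

def belief_update(b, T, O, a, o):
    """Bayes filter over hidden states.
    T[a][s, s2] = P(s2 | s, a);  O[a][s2, o] = P(o | s2, a)."""
    predicted = b @ T[a]          # prediction step: push belief through dynamics
    bp = predicted * O[a][:, o]   # correction step: weight by observation likelihood
    return bp / bp.sum()          # renormalize to a probability distribution

# Hypothetical 2-state example: workpiece "aligned" (0) vs "misaligned" (1).
T = {0: np.array([[0.9, 0.1],
                  [0.3, 0.7]])}    # action 0 tends to restore alignment
O = {0: np.array([[0.8, 0.2],
                  [0.25, 0.75]])}  # camera observation is noisy

b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, T, O, a=0, o=0)   # belief after seeing "looks aligned"
```

The belief stays a probability vector after every update, which is what lets a POMDP solver act on what the robot believes rather than on what a single uncalibrated observation reports.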
28. Towards Robot teaching based on Virtual and Augmented Reality Concepts.
- Author
-
Ennakr, Said, Domingues, Christophe, Benchikh, Laredj, Otmane, Samir, and Mallem, Malik
- Subjects
- *
ROBOTICS , *VIRTUAL reality , *COMPUTER simulation , *MICROELECTROMECHANICAL systems , *INDUSTRIAL productivity - Abstract
A complex system is a system made up of a great number of entities in local and simultaneous interaction. Its design requires the collaboration of engineers from various complementary specialties, so new design methods must be invented. Indeed, industry currently loses much time between the moment a product model is designed and the moment it is serially produced on factory lines. This production is generally ensured by automated and, increasingly often, robotized means. A lead time is thus necessary for adapting the automation and the robots to work on a new product model. In this context we launched a study based on the principle of mechatronic design in Augmented Reality and Virtual Reality. This new approach will bring solutions to problems encountered in many application areas, as well as to problems caused by the distance separating vehicle design offices from their production sites. This new approach will minimize the discrepancies between the design model and the real prototype. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
29. Diseño e implementación de una interfaz basada en RA para la interacción humano-robot usando Microsoft HoloLens 2
- Author
-
Serra Pérez, Marina
- Subjects
Augmented Reality ,Programación de robots ,Pose definitions ,Gaze tracking ,INGENIERIA DE SISTEMAS Y AUTOMATICA ,Seguimiento de la mirada ,Diseño de experiencia de usuario industrial ,Seguimiento de las manos ,Máster Universitario en Ingeniería Industrial-Màster Universitari en Enginyeria Industrial ,HoloLens ,Industrial UX design ,Realidad aumentada ,Robot teaching ,Definición de posición ,Hand tracking - Abstract
[ES] Despite the ever-growing number of robot applications and the lowering of the barriers to entry into robotics in general, robot programming often remains cumbersome and time-consuming. Programming collaborative industrial robots is a challenging task, as it requires a worker with skills in robotics. This translates into increased expenses for companies, since the robots must be reprogrammed whenever they are installed on a different production line; for this reason, they are normally used only in large-scale production. Researchers are evaluating various approaches to reduce training time and increase ease of use, so that workers with ordinary skills can carry out the programming task easily. Due to the nature of the task, the emerging media of Virtual Reality (VR) and Augmented Reality (AR) appear to be the most suitable and intuitive tools for this work. They build on spatial augmented reality, cognition, and multimodal input and output. As hand tracking becomes more ubiquitous in readily available VR and MR systems, it allows new interaction paradigms for human-machine interaction to be explored. In the scope of this thesis, hand tracking facilitates a faster and more intuitive pose definition for creating the trajectories that must be taught to the robot manipulator. Scenarios in which the objects to be manipulated are not stationary constitute a particular challenge for programming tasks. The task of this thesis includes the implementation of one or more interaction paradigms and their comparison with classical teaching methods. The teaching process is carried out for a UR10 manipulator in a robotic work cell scenario with a pick-and-place task.
To this end, the present work makes use of the work cell with the robot, a mixed-reality head-mounted display (HoloLens 2) and a 3D simulation system, for which a digital twin of the UR10 and an initial interface for the HoloLens HMD already exist. Building on state-of-the-art research in the field, existing approaches will be combined, or new ones designed and implemented, to explore: Which AR input modalities and methods are the most promising for the given task? Which metrics are most useful for evaluating the positive impact of these methods? What is the most efficient teaching workflow using these methods? Does an experienced robot programmer benefit from them? Is the precision provided sufficient for the teaching task? Can a novice learn to program a robot faster using AR, and can errors be avoided more easily? In what ways can state-of-the-art AR technology be improved to suit such tasks even better? To assess the function and effectiveness of the system, a concrete evaluation scenario must be selected as a final step, and a brief user test will be carried out., [EN] Robot programming often remains cumbersome and time-consuming. For this reason, researchers are evaluating various approaches to reduce training time and increase user-friendliness, so that ordinary-skilled workers are able to perform the programming task easily. Due to the nature of the task, the emerging media of Virtual Reality (VR) and Augmented Reality (AR) appear to be the best-suited and most intuitive tools in these endeavors. This thesis’ task includes the implementation of a new interaction paradigm and its comparison to classical teaching methods. The teaching process is carried out for a UR10 manipulator in a robotic work cell scenario with a pick-and-place task.
For this, the present work makes use of the work cell with the robot, a Mixed Reality (MR) Head-Mounted Display (HMD) — HoloLens 2 — and a 3D simulation system, for which a Digital Twin (DT) of the UR10 already exists.
- Published
- 2021
30. First Application of Robot Teaching in an Existing Industry 4.0 Environment: Does It Really Work?
- Author
-
Astrid Weiss, Andreas Huber, Jürgen Minichberger, and Markus Ikeda
- Subjects
human-robot interaction ,Industry 4.0 ,case study ,field test ,robot teaching ,Social sciences (General) ,H1-99 - Abstract
This article reports three case studies on the usability and acceptance of an industrial robotic prototype in the context of human-robot cooperation. The three case studies were conducted in the framework of a two-year project named AssistMe, which aims at developing different means of interaction for programming and using collaborative robots in a user-centered manner. Together with two industrial partners and a technological partner, two different application scenarios were implemented and studied with an off-the-shelf robotic system. The operators worked with the robotic prototype in laboratory conditions (two days), in a factory context (one day) and in an automotive assembly line (three weeks). In the article, the project and procedures are described in detail, including the quantitative and qualitative methodology. Our results show that close human-robot cooperation in the industrial context needs adaptive pacing mechanisms in order to avoid a change of working routines for the operators, and that an off-the-shelf robotic system is still limited in terms of usability and acceptance. The touch panel, which is needed for controlling the robot, had a negative impact on the overall user experience. It creates a further intermediate layer between the user, the robot and the workpiece, and potentially leads to a decrease in productivity. Finally, workers regularly expressed the fear of being replaced by an improved robotic system, which adds an additional anthropocentric dimension to the discussion of human-robot cooperation, smart factories and the upcoming Industry 4.0.
- Published
- 2016
- Full Text
- View/download PDF
31. Teaching robot companions: the role of scaffolding and event structuring.
- Author
-
Otero, Nuno, Saunders, Joe, Dautenhahn, Kerstin, and Nehaniv, Chrystopher L.
- Subjects
- *
HUMAN-robot interaction , *SCAFFOLDED instruction , *ROBOT design & construction , *COGNITIVE learning , *EXPERIMENTAL programs , *INTERACTION model (Communication) , *TEACHING methods , *TEACHERS , *STUDENTS - Abstract
For robots to be more capable interaction partners they will necessarily need to adapt to the needs and requirements of their human companions. One way that the human could aid this adaptation may be by teaching the robot new ways of doing things by physically demonstrating different behaviours and tasks such that the robot learns new skills by imitating the learnt behaviours in appropriate contexts. In human-human teaching, the concept of scaffolding describes the process whereby the teacher guides the pupil to new competence levels by exploiting and extending existing competencies. In addition, the idea of event structuring can be used to describe how the teacher highlights important moments in an overall interaction episode. Scaffolding and event structuring robot skills in this way may be an attractive route in achieving robot adaptation; however, there are many ways in which a particular behaviour might be scaffolded or structured and the interaction process itself may have an effect on the robot's resulting performance. Our overall research goal is to understand how to design an appropriate human-robot interaction paradigm where the robot will be able to intervene and elicit knowledge from the human teacher in order to better understand the taught behaviour. In this article we examine some of these issues in two exploratory human-robot teaching scenarios. The first considers task structuring from the robot's viewpoint by varying the way in which a robot is taught. The experimental results illustrate that the way in which teaching is carried out, and primarily how the teaching steps are decomposed, has a critical effect on the efficiency of human teaching and the effectiveness of robot learning. The second experiment studies the problem from the human's viewpoint in an attempt to study the human teacher's spontaneous levels of event segmentation when analysing their own demonstrations of a routine home task to a robot. 
The results suggest the existence of some individual differences regarding the level of granularity spontaneously considered for the task segmentation and for those moments in the interaction which are viewed as most important. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
32. Solpen: An Accurate 6-DOF Positioning Tool for Vision-Guided Robotics.
- Author
-
Le, Trung-Son, Tran, Quoc-Viet, Nguyen, Xuan-Loc, and Lin, Chyi-Yeu
- Subjects
ROBOTICS ,SINGLE-degree-of-freedom systems ,IMAGE denoising ,IMAGE processing ,ROBOT vision ,ROBOTIC welding ,LEAST squares ,CAMERAS - Abstract
A robot trajectory teaching system with a vision-based positioning pen, which we call Solpen, is developed to generate six-degree-of-freedom (6-DoF) pose paths for vision-guided robotics applications such as welding, cutting, painting, or polishing. It achieves millimeter-level dynamic accuracy within a one-meter working distance from the camera. The system is simple, requiring only a 2D camera and printed ArUco markers that are hand-glued onto 31 surfaces of the 3D-printed Solpen. Image processing techniques are implemented to remove noise, sharpen the edges of the ArUco images, and enhance the contrast of the ArUco edge intensity generated by pyramid reconstruction. In addition, the least-squares method is used to optimize parameters for the center pose of the truncated icosahedron and the vector of the Solpen tip. Dynamic experiments conducted with a ChArUco board to verify the pen's performance in isolation show that the developed system is robust within its working range and achieves a minimum axis accuracy of approximately 0.8 mm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. 'Teach Me–Show Me'—End-User Personalization of a Smart Home and Companion Robot
- Author
-
Joe Saunders, Kheng Lee Koay, Kerstin Dautenhahn, Dag Sverre Syrdal, Nathan W. Burke, School of Computer Science, Science & Technology Research Institute, Centre for Computer Science and Informatics Research, and Adaptive Systems
- Subjects
0209 industrial biotechnology ,Ubiquitous robot ,Personal robot ,Computer Networks and Communications ,Computer science ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,Human Factors and Ergonomics ,02 engineering and technology ,computer.software_genre ,Robot learning ,Robot personalisation ,Personalization ,020901 industrial engineering & automation ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,activity recognition ,robot teaching ,Social robot ,Multimedia ,System usability scale ,robot learning ,Mobile robot ,Autonomous robot ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,robot companion ,Signal Processing ,020201 artificial intelligence & image processing ,computer - Abstract
Care issues and costs associated with an increasing elderly population are becoming a major concern for many countries. The use of assistive robots in ‘smart-home’ environments has been suggested as a possible partial solution to these concerns. One of the many challenges faced is the personalisation of the robot to meet the changing needs of the elderly person over time. One approach is to allow the elderly person, or their carers or relatives, to teach the robot both to recognise activities in the smart home and to carry out behaviours in response to these activities, the overriding premise being that such teaching is both intuitive and ‘non-technical’. As part of a European project researching and evaluating these issues, a commercially available autonomous robot has been deployed in a fully sensorised but otherwise ordinary suburban house. Occupants of the house are equipped with a non-technical teaching and learning system. This paper details the design approach to the teaching, learning, robot and smart home systems as an integrated unit, and presents results from an evaluation of the teaching component and a preliminary evaluation of the learning component in a Human-Robot Interaction experiment. Results from this evaluation indicated that participants overall found this approach to personalisation useful and easy to use, and felt that they would be capable of using it in a real-life situation both for themselves and for others. However, there were also some salient individual differences within the sample.
- Published
- 2016
- Full Text
- View/download PDF
34. Metrics to Evaluate Human Teaching Engagement From a Robot's Point of View
- Author
-
Novanda, Ori
- Subjects
robot teaching ,human-robot interaction ,effort measurement ,humanoid robot ,engagement evaluation ,multimodal interaction ,input modality preference - Abstract
This thesis was motivated by a study of how robots can be taught by humans, with an emphasis on allowing persons without programming skills to teach robots. The focus of this thesis was to investigate what criteria could or should be used by a robot to evaluate whether a human teacher is (or potentially could be) a good teacher in robot learning by demonstration; in effect, choosing the teacher that can maximize the benefit to the robot when learning by imitation/demonstration. The study approached this topic by taking a technology snapshot in time to see whether a representative example of research-laboratory robot technology is capable of assessing teaching quality. With this snapshot, the study evaluated how humans observe teaching quality in an attempt to establish measurement metrics that can be transferred as rules or algorithms beneficial from a robot's point of view. To evaluate teaching quality, the study looked at the teacher-student relationship from a human-human interaction perspective. Two factors were considered important in defining a good teacher: engagement and immediacy. The study then reviewed further literature on the detailed elements of engagement and immediacy, and also examined physical effort as a possible metric for measuring a teacher's level of engagement. An investigatory experiment was conducted to evaluate which modality participants prefer to employ in teaching a robot when it can be taught using voice, gesture demonstration, or physical manipulation. The findings from this experiment suggested that the participants appeared to have no preference in terms of human effort for completing the task. However, there was a significant difference in human enjoyment preferences of input modality, and a marginal difference in the robot's perceived ability to imitate.
A main experiment was conducted to study the detailed elements that might be used by a robot in identifying a “good” teacher. It was conducted in two sub-experiments: the first recorded the teacher's activities, and the second analysed how humans evaluate the perception of engagement when assessing another human teaching a robot. The results suggested that in human teaching of a robot (human-robot interaction), humans (the evaluators) also look for immediacy cues that occur in human-human interaction when evaluating engagement.
- Published
- 2018
35. The use of motion tracking for teaching of paint robots
- Author
-
Haus, Eivind Sandve and Mossige, Morten
- Subjects
robot teaching ,automated painting ,Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550 [VDP] ,informasjonsteknologi ,signalbehandling ,HTC Vive ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,paint robots ,kybernetikk ,motion tracking ,ABB ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
Master's thesis in Automation and signal processing
Most industrial production processes require parts to be painted, both for aesthetic and quality purposes. The painting process is often automated, with the benefits of better paint coverage, less waste of paint, high consistency between batches and reduced production time. One way to automate this process is to mount a spray-painting gun on an industrial robot. The robot then moves along a preprogrammed path and covers the object in paint. Before it can be used, the robot must be programmed: the path the robot should follow must be determined, along with when it should spray paint. A new paint program must be made every time the product is changed or a new part needs to be painted. This project has developed an intuitive and efficient way of creating such paint programs. The approach is to use the fluid motion of the human arm and the experience of painters to create the robot's path, by means of motion tracking technology. The operator holds a hand-held device which is tracked in three-dimensional space; when the operator moves the device as if painting the object, the traced path is recorded and used to create the paint program. The virtual reality system HTC Vive has been used for the motion tracking. It has shown promising results in precision and reliability in the past, which have been confirmed by this project. The project has implemented an interface to retrieve information from the HTC Vive system. Two approaches to paint program creation have been developed. The first approach records the path independently of the robot and creates the paint program, which is then loaded onto the robot and can be used to paint the object. The second approach uses motion tracking to control the robot in real time: while the operator controls the robot, the object is painted and the path is recorded.
The path is then used to create the paint program, which can be loaded onto the robot to paint new objects. Both of the implemented solutions have shown promising results. They allow the user to create simple paint programs in a matter of minutes with little to no training needed. There are some limitations when creating more advanced paint programs, especially with regard to determining the correct orientation of the paint gun tool. More work would be necessary to make the implemented solutions ready for commercial use.
- Published
- 2018
36. Automatic teaching of robotic remote laser 3D processing systems based on laser triangulation
- Author
-
Matjaž Kos, Hubert Kosler, Matija Jezeršek, and Janez Možina
- Subjects
daljinska laserska obrada ,davanje uputa robotu ,detekcija rubova ,laserska triangulacija ,trodimenzijska mjerenja ,0209 industrial biotechnology ,Engineering ,edge detection ,laser triangulation ,remote laser processing ,robot teaching ,three-dimensional measurement ,Laser triangulation ,02 engineering and technology ,01 natural sciences ,Edge detection ,law.invention ,010309 optics ,020901 industrial engineering & automation ,Optics ,law ,0103 physical sciences ,Computer vision ,business.industry ,General Engineering ,Laser ,Three dimensional measurement ,lcsh:TA1-2040 ,Artificial intelligence ,Profilometer ,lcsh:Engineering (General). Civil engineering (General) ,business - Abstract
One of the key challenges in robotic remote laser 3D processing (RL3DP) is to achieve high accuracy for the laser's working trajectory relative to the features of the workpiece. This paper presents a novel RL3DP system with an automatic 3D teaching functionality for the precise and rapid determination of the working trajectory; the system comprises a robot manipulator, a 3D scanning head, a fibre laser and an off-axis positioned camera. The 3D measurement is based on laser triangulation with laser-stripe illumination using the laser's pilot beam and scanning head. The experimental results show that the system has a precision better than 70 μm and 120 μm along the lateral and vertical directions, respectively, inside a measuring range of 100 × 100 mm. The teaching time is 30 times shorter than with a visual teaching procedure. Therefore, such a system can lead to large cost reductions for modern production lines that face constant changes to the products' geometries and functionalities.
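The stripe-based measurement rests on basic triangulation geometry: a change in surface height shifts the laser stripe sideways in the off-axis camera image. A sketch under simplified assumptions (constant magnification, small-displacement approximation, and parameter values that are purely illustrative, not taken from the paper):

```python
import math

def stripe_height(shift_px, pixel_size_mm, magnification, tri_angle_deg):
    """Height of a surface point from the lateral shift of a laser stripe.

    For a triangulation angle theta between the laser sheet and the
    camera viewing direction, the stripe shift on the object is
    shift_px * pixel_size / magnification, and the height follows from
    simple trigonometry. All names and values here are illustrative.
    """
    shift_obj = shift_px * pixel_size_mm / magnification
    return shift_obj / math.tan(math.radians(tri_angle_deg))

# e.g. a 20-pixel stripe shift, 5 um pixels, 0.1x magnification, 45 deg:
h = stripe_height(20, 0.005, 0.1, 45.0)
print(h)  # 1.0 mm, since tan(45 deg) = 1
```

The same relation explains the anisotropic precision reported above: the vertical (height) resolution is the lateral resolution divided by tan(theta), so shallow triangulation angles degrade the vertical figure first.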
- Published
- 2017
- Full Text
- View/download PDF
37. Interactive Null Space Control for Intuitively Interpretable Reconfiguration of Redundant Manipulators
- Author
-
Nico Mansfeld, Alexander Dietrich, Sami Haddadin, and Fabian Beck
- Subjects
Flexibility (engineering) ,0209 industrial biotechnology ,Robot Teaching ,Computer science ,Property (programming) ,Work (physics) ,Control reconfiguration ,02 engineering and technology ,Object (computer science) ,Task (computing) ,020901 industrial engineering & automation ,Redundancy ,Human–computer interaction ,Null Space ,0202 electrical engineering, electronic engineering, information engineering ,Robot ,020201 artificial intelligence & image processing ,Control (linguistics) - Abstract
Kinematic redundancy is a characteristic and beneficial property of today's collaborative robots, as it enhances the flexibility and dexterity of the system. While the robot is manipulating an object, it is often necessary to kinematically reconfigure it, for example when it obstructs the human. For this, internal or so-called null space motions can be carried out which do not affect the main task. In general, it is desirable that the human coworker can anticipate how the robot will move at any time. For null space motions, however, this is typically not the case, as they are non-intuitive and not suitable for interaction. In this work, we develop intuitive null space interaction behaviors for redundant manipulators, where the human can easily guide the robot. We want to provide users with a tool that is straightforward to implement and solves real-world problems effectively. Two practical applications for an eight- and a ten-DOF robot demonstrate the performance of the proposed method.
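The core of any null space motion is the projector N = I - J⁺J, which maps an arbitrary joint velocity into a motion that leaves the task velocity unchanged. A toy sketch for a single-row task Jacobian, where the pseudoinverse reduces to J⁺ = Jᵀ/(JJᵀ) (the paper's interaction controller is considerably richer; this only demonstrates the projection idea with made-up numbers):

```python
def null_space_velocity(J, qdot_des):
    """Project a desired joint velocity into the null space of a 1xN
    task Jacobian J, so the resulting motion does not disturb the task.

    Computes qdot = (I - J+ J) qdot_des with J+ = J^T / (J J^T) for a
    single-row Jacobian, without any linear-algebra library.
    """
    jjt = sum(x * x for x in J)                 # J J^T (a scalar here)
    j_pinv = [x / jjt for x in J]               # J+ as a column vector
    task_vel = sum(J[i] * qdot_des[i] for i in range(len(J)))  # J qdot_des
    return [qdot_des[i] - j_pinv[i] * task_vel for i in range(len(J))]

# Planar example: after projection, the task velocity J.qdot is zero,
# so the joints move without displacing the end effector in the task.
J = [0.8, 0.3]
qdot = null_space_velocity(J, [1.0, -2.0])
print(sum(J[i] * qdot[i] for i in range(2)))  # ~0: task unaffected
```

An interactive scheme like the one in the paper would feed the human's guiding force (mapped to a desired joint velocity) through this projector, so pushing the elbow reconfigures the arm without moving the grasped object.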
- Published
- 2017
38. Pragmatic Frames for Teaching and Learning in Human-Robot interaction: Review and Challenges
- Author
-
Pierre-Yves Oudeyer, Britta Wrede, Anna-Lisa Vollmer, Katharina J. Rohlfing, Flowing Epigenetic Robots and Systems (Flowers), Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Cognitive Interaction Technology [Bielefeld] (CITEC), Universität Bielefeld = Bielefeld University, University of Paderborn, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Unité d'Informatique et d'Ingénierie des Systèmes (U2IS), and École Nationale Supérieure de Techniques Avancées (ENSTA Paris)-École Nationale Supérieure de Techniques Avancées (ENSTA Paris)
- Subjects
0209 industrial biotechnology ,Computer science ,Biomedical Engineering ,02 engineering and technology ,Review ,Machine learning ,computer.software_genre ,Robot learning ,Human–robot interaction ,Developmental robotics ,language learning ,lcsh:RC321-571 ,cognitive ,human-robot interaction ,020901 industrial engineering & automation ,[INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG] ,Artificial Intelligence ,Human–computer interaction ,0202 electrical engineering, electronic engineering, information engineering ,[INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO] ,human–robot interaction ,Action learning ,lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry ,robot teaching ,pragmatic frames ,developmental robotics ,business.industry ,robot learning ,Language acquisition ,Social learning ,social learning ,action learning ,cognitive developmental robotics ,Action (philosophy) ,frames ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Cognitive robotics ,computer ,pragmatic ,Neuroscience - Abstract
One of the big challenges in robotics today is to learn from human users who are inexperienced in interacting with robots, yet are often used to teaching skills flexibly to other humans, and to children in particular. A potential route toward natural and efficient learning and teaching in Human-Robot Interaction (HRI) is to leverage the social competences of humans and the underlying interactional mechanisms. In this perspective, this article discusses the importance of pragmatic frames as flexible interaction protocols that provide important contextual cues, enabling learners to infer new action or language skills and teachers to convey these cues. After defining and discussing the concept of pragmatic frames, grounded in decades of research in developmental psychology, we study a selection of HRI work in the literature which has focused on learning–teaching interaction and analyze the interactional and learning mechanisms that were used in the light of pragmatic frames. This allows us to show that many of the works have already used in practice, but not always explicitly, basic elements of the pragmatic frames machinery. However, we also show that pragmatic frames have so far been used in a very restricted way as compared to how they are used in human–human interaction, and argue that this has been an obstacle preventing robust natural multi-task learning and teaching in HRI. In particular, we explain that two central features of human pragmatic frames, mostly absent from existing HRI studies, are that (1) social peers use rich repertoires of frames, potentially combined together, to convey and infer multiple kinds of cues; (2) new frames can be learnt continually, building on existing ones, and guiding the interaction toward higher levels of complexity and expressivity.
To conclude, we give an outlook on the future research direction describing the relevant key challenges that need to be solved for leveraging pragmatic frames for robot learning and teaching.
- Published
- 2016
- Full Text
- View/download PDF
39. A Design of a Teaching Mode for an Upper Limb Therapy Robot
- Author
-
Harris, Jason and Abdullah, Hussein
- Subjects
body regions ,robot teaching ,rehabilitation robot ,linear generalization ,therapy robot ,human activities ,stroke - Abstract
Stroke is an age-related illness with significant individual and societal impacts. The long-term impacts associated with many strokes can be mitigated with timely rehabilitation. Therapy robots have been introduced to these programs in an effort to reduce the economic burden to society and to improve the level of care provided to stroke survivors. The purpose of this thesis is to develop a teaching mode for an upper limb therapy robot. The system will allow physiotherapists to interact with the therapy robot without the need for any specialized industrial training. At the same time, the system will reduce the data associated with patient movements, easing the requirements on the robot's safety and motion systems. The proposed system was successfully validated using a laboratory-scale industrial robot and a standalone motion control system consisting of commercially available AC servo motors and a motion controller, with both generated and recorded paths.
- Published
- 2013
40. Robust dataglove mapping for recording human hand postures
- Author
-
Jan Steffen, Helge Ritter, and Jonathan Maycock
- Subjects
Engineering ,business.industry ,Wired glove ,Hand postures ,Thumb ,Object (computer science) ,Dataglove ,Cross coupled ,medicine.anatomical_structure ,medicine ,Robot ,Robot teaching ,Computer vision ,Artificial intelligence ,Focus (optics) ,business - Abstract
We present a novel dataglove mapping technique based on parameterisable models that handle both the cross-coupled sensors of the fingers and thumb, and the under-specified abduction sensors of the fingers. Our focus is on realistically reproducing the posture of the hand as a whole, rather than on accurate fingertip positions. The method proposed in this paper is a vision-free, object-free dataglove mapping and calibration method that has been successfully used in robot manipulation tasks.
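As a point of comparison, the simplest dataglove mapping is a per-sensor linear calibration from two reference postures; the parameterisable models above extend this baseline to cross-coupled and under-specified sensors. A hypothetical sketch of the baseline, with invented sensor readings:

```python
def calibrate_sensor(raw_flat, raw_fist, angle_flat=0.0, angle_fist=90.0):
    """Linear raw-value -> joint-angle mapping from two reference postures.

    A common baseline for dataglove calibration: record the raw bend-sensor
    reading with the hand flat and with a closed fist, then interpolate
    linearly between them. Sensor values and the 0-90 degree range are
    illustrative assumptions, not the paper's method.
    """
    gain = (angle_fist - angle_flat) / (raw_fist - raw_flat)
    offset = angle_flat - gain * raw_flat
    return lambda raw: gain * raw + offset

to_angle = calibrate_sensor(raw_flat=120, raw_fist=840)
print(to_angle(480))  # halfway between the references -> 45 degrees
```

This per-sensor approach breaks down exactly where the paper focuses: thumb sensors whose readings depend on several joints at once cannot be calibrated independently like this.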
- Published
- 2011
41. Robust Tracking of Human Hand Postures for Robot Teaching
- Author
-
Jan Steffen, Helge Ritter, Jonathan Maycock, and Robert Haschke
- Subjects
Ground truth ,Engineering ,Inverse kinematics ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Kinematics ,Wired glove ,Manual Interaction databases ,Thumb ,Tracking (particle physics) ,Replication (computing) ,medicine.anatomical_structure ,medicine ,Robot ,Robot teaching ,Computer vision ,Artificial intelligence ,Hand tracking ,business - Abstract
Creating manual interaction databases, which aid the replication of dexterous capabilities with anthropomorphic robot hands by capturing how humans perform complex manipulation tasks, requires the capability to record and analyze large amounts of manual interaction sequences. To this end we have studied and compared three mappings from captured human hand motion data to a simulated model, which allow for robust and accurate real-time hand posture tracking. We evaluate the effectiveness of these mappings and discuss their pros and cons in various real-world scenarios. The first method is based on dataglove data and aims for direct gauging of hand joints. The other two methods utilize a VICON motion tracking system which monitors markers placed on all finger segments. Here we compare two approaches: a direct computation of hand postures from angles between adjacent markers, and an iterative inverse kinematics approach to optimally reproduce fingertip positions. For a quantitative evaluation, we employ a “calibration objects” technique to obtain a reliable ground truth of task-relevant hand posture data.
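The direct marker-angle mapping mentioned above can be sketched as the angle between the two segment vectors meeting at a joint marker (the marker layout and names are illustrative, not the paper's exact scheme):

```python
import math

def joint_angle(m_prox, m_joint, m_dist):
    """Flexion angle at a finger joint from three marker positions:
    one marker on the proximal segment, one at the joint, one on the
    distal segment. Returns the angle (in degrees) between the two
    segment vectors; marker placement here is a simplifying assumption.
    """
    u = [m_prox[i] - m_joint[i] for i in range(3)]
    v = [m_dist[i] - m_joint[i] for i in range(3)]
    dot = sum(u[i] * v[i] for i in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# A right angle between the two segments:
print(joint_angle((1, 0, 0), (0, 0, 0), (0, 1, 0)))  # ~90 degrees
```

In contrast, the iterative inverse kinematics variant compared in the paper fits all joint angles at once to reproduce fingertip positions, trading per-joint directness for global consistency of the posture.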
- Published
- 2011
- Full Text
- View/download PDF
42. Studies on Interactive Knowledge Acquisition and Reuse for Teaching Industrial Robots
- Author
-
Wang, Lei, 椹木, 哲夫, 松原, 厚, and 西脇, 眞二
- Subjects
robot teaching ,interactive knowledge acquisition - Published
- 2010
43. Developing robot motions by simulated touch sensors
- Author
-
Dalla Libera, F., Minato, T., Ishiguro, H., Pagello, E., and Menegatti, E.
- Subjects
Touch ,Motion development ,Robot teaching ,Simulation - Abstract
Touch is a very powerful but little-studied means of communication in human-robot interaction. Nonetheless, many robots are not equipped with touch sensors, because it is often difficult to place such sensors over the robot's surface, or simply because the main task of the robot does not require them. We propose an approach that allows developing motions for a real humanoid robot by touching its 3D representation. This simulated counterpart can be equipped with touch sensors that are not physically available, and allows the user to interact with a robot moving in slow play, which is not possible in the real world due to the changes in the dynamics. The developed interface, employing simulated touch sensors, allows inexperienced users to program robot movements in an intuitive way without any modification of the robot's hardware. Thanks to this tool we can also study how humans employ touch for communication. We then report how simulation can be used to study the user dependence of touch instructions, ensuring that all subjects are in exactly the same conditions.
- Published
- 2008
- Full Text
- View/download PDF
44. Studies on Interactive Knowledge Acquisition and Reuse for Teaching Industrial Robots
- Author
-
Wang, Lei
- Published
- 2010