822 results for "Programming by demonstration"
Search Results
2. Unifying Skill-Based Programming and Programming by Demonstration through Ontologies
- Author
-
Eiband, Thomas, Lay, Florian, Nottensteiner, Korbinian, and Lee, Dongheui
- Published
- 2024
- Full Text
- View/download PDF
3. Motion capture and AR based programming by demonstration for industrial robots using handheld teaching device
- Author
-
Guoliang Liu, Wenlei Sun, and Pinwen Li
- Subjects
Robots, Motion capture, AR, Programming by demonstration, Medicine, Science
- Abstract
In the field of industrial robots, efficient and convenient programming methods have been a hot research topic. In recent years, immersive simulation technology has developed rapidly in many fields, opening new horizons for the development of industrial robots. This paper presents an interactive Programming by Demonstration (PbD) system for industrial robots based on HTC Vive laser-scan motion capture and HoloLens augmented reality (AR). A portable Handheld Teaching Device (HTD) and its calibration algorithm are designed for the system. The portable HTD, tracked by the laser motion-capture system, can be viewed as an AR robot end-effector for teaching paths, while the AR robot is simulated in real time during programming. Since the focus of programming is for the robot to reproduce the operator's actions at the same position in space, multi-system registration methods are proposed to determine the relationships between the robot, motion-capture, and virtual-robot systems. A path-planning algorithm is also proposed to convert the captured raw path points into robot-executable code. Unskilled operators can easily perform complex programming using the HTD, and the skills of experienced workers can be quickly learned by robots through the system.
- Published
- 2024
- Full Text
- View/download PDF
4. Motion capture and AR based programming by demonstration for industrial robots using handheld teaching device.
- Author
-
Liu, Guoliang, Sun, Wenlei, and Li, Pinwen
- Subjects
INDUSTRIALISM, ROBOT programming, INDUSTRIALIZATION, SKILLED labor, ROBOTS, INDUSTRIAL robots, MOTION capture (Human mechanics)
- Abstract
In the field of industrial robots, efficient and convenient programming methods have been a hot research topic. In recent years, immersive simulation technology has developed rapidly in many fields, opening new horizons for the development of industrial robots. This paper presents an interactive Programming by Demonstration (PbD) system for industrial robots based on HTC Vive laser-scan motion capture and HoloLens augmented reality (AR). A portable Handheld Teaching Device (HTD) and its calibration algorithm are designed for the system. The portable HTD, tracked by the laser motion-capture system, can be viewed as an AR robot end-effector for teaching paths, while the AR robot is simulated in real time during programming. Since the focus of programming is for the robot to reproduce the operator's actions at the same position in space, multi-system registration methods are proposed to determine the relationships between the robot, motion-capture, and virtual-robot systems. A path-planning algorithm is also proposed to convert the captured raw path points into robot-executable code. Unskilled operators can easily perform complex programming using the HTD, and the skills of experienced workers can be quickly learned by robots through the system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Learning human actions from complex manipulation tasks and their transfer to robots in the circular factory.
- Author
-
Zaremski, Manuel, Handwerker, Blanca, Dreher, Christian R. G., Leven, Fabian, Schneider, David, Roitberg, Alina, Stiefelhagen, Rainer, Neumann, Gerhard, Heizmann, Michael, Asfour, Tamim, and Deml, Barbara
- Subjects
INDUSTRIAL robots, HUMAN behavior, LEARNING, DEEP learning, FACTORY location
- Abstract
Copyright of Automatisierungstechnik is the property of De Gruyter and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
6. AI-Powered Human-Centred Robot Interactions: Challenges in Human-Robot Collaboration Across Diverse Industrial Scenarios
- Author
-
Fraile, Francisco, Akkaladevi, Sharath Chandra, Siciliano, Bruno, Series Editor, Khatib, Oussama, Series Editor, Antonelli, Gianluca, Advisory Editor, Fox, Dieter, Advisory Editor, Harada, Kensuke, Advisory Editor, Hsieh, M. Ani, Advisory Editor, Kröger, Torsten, Advisory Editor, Kulic, Dana, Advisory Editor, Park, Jaeheung, Advisory Editor, Secchi, Cristian, editor, and Marconi, Lorenzo, editor
- Published
- 2024
- Full Text
- View/download PDF
7. On the Development of Programming by Demonstration Environment for Human-Robot Collaboration in a Furniture Painting Cell
- Author
-
Lario, Joan, Fraile, Francisco, Ioana, Emima, Blanes, Francisco, Siciliano, Bruno, Series Editor, Khatib, Oussama, Series Editor, Antonelli, Gianluca, Advisory Editor, Fox, Dieter, Advisory Editor, Harada, Kensuke, Advisory Editor, Hsieh, M. Ani, Advisory Editor, Kröger, Torsten, Advisory Editor, Kulic, Dana, Advisory Editor, Park, Jaeheung, Advisory Editor, Secchi, Cristian, editor, and Marconi, Lorenzo, editor
- Published
- 2024
- Full Text
- View/download PDF
8. Exploitation of Similarities in Point Clouds for Simplified Robot Programming by Demonstration
- Author
-
Möhl, Philipp, Ikeda, Markus, Hofmann, Michael, Pichler, Andreas, Siciliano, Bruno, Series Editor, Khatib, Oussama, Series Editor, Antonelli, Gianluca, Advisory Editor, Fox, Dieter, Advisory Editor, Harada, Kensuke, Advisory Editor, Hsieh, M. Ani, Advisory Editor, Kröger, Torsten, Advisory Editor, Kulic, Dana, Advisory Editor, Park, Jaeheung, Advisory Editor, Secchi, Cristian, editor, and Marconi, Lorenzo, editor
- Published
- 2024
- Full Text
- View/download PDF
9. Dynamic Via-points and Improved Spatial Generalization for Online Trajectory Generation with Dynamic Movement Primitives.
- Author
-
Sidiropoulos, Antonis and Doulgeri, Zoe
- Abstract
Dynamic Movement Primitives (DMP) have found remarkable applicability and success in various robotic tasks, mainly owing to their generalization, modulation, and robustness properties. However, the spatial generalization of DMP can be problematic in some cases, leading to excessive overscaling and, in turn, large velocities and accelerations. While other DMP variants have been proposed in the literature to tackle this issue, they can also exhibit excessive overscaling, as we show in this work. Moreover, incorporating intermediate points (via-points) to adjust the DMP trajectory, whether to account for the geometry of objects related to the task or to avoid or push aside objects that obstruct it, is not addressed by the current DMP literature. In this work we tackle these open issues by proposing an improved online spatial generalization that remedies the shortcomings of the classical DMP generalization and, moreover, allows the incorporation of dynamic via-points. This is achieved by designing an online adaptation scheme for the DMP weights that provably minimizes the distance from the demonstrated acceleration profile, retaining the shape of the demonstration subject to dynamic via-point and initial/final state constraints. Extensive comparative simulations with the classical and other DMP variants are conducted, while experimental results validate the practical usefulness and efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
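The record above builds on the classical Dynamic Movement Primitive formulation. For orientation, here is a minimal sketch of a classical 1-D discrete DMP (illustrative only, not the authors' improved generalization scheme; all function names and parameter values are invented here):

```python
import numpy as np

def train_dmp(y_demo, dt, n_basis=20, alpha=25.0, beta=6.25, alpha_x=4.0):
    """Fit forcing-term weights to a 1-D demonstrated trajectory (sketch)."""
    T = len(y_demo)
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    y0, g = y_demo[0], y_demo[-1]
    x = np.exp(-alpha_x * dt * np.arange(T))             # canonical phase, 1 -> 0
    # acceleration not explained by the spring-damper is the forcing target
    f_target = ydd - alpha * (beta * (g - y_demo) - yd)
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centres in phase
    h = n_basis ** 1.5 / c                                  # basis widths
    psi = np.exp(-h * (x[:, None] - c) ** 2)                # (T, n_basis) activations
    # locally weighted regression for each basis weight
    w = np.array([(x * psi[:, i]) @ f_target / ((x ** 2 * psi[:, i]).sum() + 1e-10)
                  for i in range(n_basis)])
    return dict(w=w, c=c, h=h, y0=y0, g=g, alpha=alpha, beta=beta, alpha_x=alpha_x)

def rollout(dmp, dt, steps, g=None):
    """Integrate the DMP forward; a new goal g spatially shifts the motion."""
    g = dmp["g"] if g is None else g
    y, yd, x = dmp["y0"], 0.0, 1.0
    traj = []
    for _ in range(steps):
        psi = np.exp(-dmp["h"] * (x - dmp["c"]) ** 2)
        f = (psi @ dmp["w"]) * x / (psi.sum() + 1e-10)   # phase-gated forcing term
        ydd = dmp["alpha"] * (dmp["beta"] * (g - y) - yd) + f
        yd += ydd * dt
        y += yd * dt
        x += -dmp["alpha_x"] * x * dt                    # canonical system decay
        traj.append(y)
    return np.array(traj)
```

Replaying with a new goal `g` illustrates the spatial generalization the abstract discusses; this sketch deliberately omits the goal-dependent scaling of the forcing term whose overscaling the paper analyzes.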
10. User Study to Validate the Performance of an Offline Robot Programming Method That Enables Robot-Independent Kinesthetic Instruction through the Use of Augmented Reality and Motion Capturing.
- Author
-
Müller, Fabian, Koch, Michael, and Hasse, Alexander
- Subjects
MOTION capture (Cinematography), ROBOT programming, MOTION capture (Human mechanics), AUGMENTED reality, INDUSTRIALISM, HAPTIC devices
- Abstract
The paper presents a novel offline programming (OLP) method based on programming by demonstration (PbD), validated through a user study. PbD is a programming method that involves physical interaction with robots, and kinesthetic teaching (KT) is a commonly used online programming method in industry. However, online programming methods consume significant robot resources, limiting the speed advantages of PbD and emphasizing the need for an offline approach. The method presented here, based on KT, uses a virtual representation instead of a physical robot, allowing programming independently of the working environment. It employs haptic input devices to teach a simulated robot in augmented reality and uses automatic path planning. A benchmarking test was conducted to standardize equipment, procedures, and evaluation techniques for comparing different PbD approaches. The results indicate a 47% decrease in programming time compared to traditional KT methods in established industrial systems. Although the accuracy is not yet at the level of industrial systems, users showed rapid improvement, confirming the learnability of the system. User feedback on perceived workload and ease of use was positive. In conclusion, this method has potential for industrial use due to its learnability, reduction in robot downtime, and applicability across different robot sizes and types. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. Robot programming by demonstration with a monocular RGB camera
- Author
-
Wang, Kaimeng and Tang, Te
- Published
- 2023
- Full Text
- View/download PDF
12. Robot Programming from a Single Demonstration for High Precision Industrial Insertion.
- Author
-
Wang, Kaimeng, Fan, Yongxiang, and Sakuma, Ichiro
- Subjects
ROBOT programming, OBJECT tracking (Computer vision), INDUSTRIAL robots, HUMAN cloning, ROBOTS, PRIOR learning
- Abstract
We propose a novel approach for robotic industrial insertion tasks using the Programming by Demonstration technique. Our method allows robots to learn a high-precision task by observing a human demonstration once, without requiring any prior knowledge of the object. We introduce an Imitated-to-Finetuned approach that generates imitated approach trajectories by cloning the human hand's movements and then fine-tunes the goal position with a visual servoing approach. To identify features on the object used in visual servoing, we model object tracking as a moving-object detection problem, separating each demonstration video frame into a moving foreground, which includes the object and the demonstrator's hand, and a static background. A hand keypoint estimation function is then used to remove the redundant features on the hand. The experiment shows that the proposed method enables robots to learn precision industrial insertion tasks from a single human demonstration. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
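The moving-object-detection step described in the abstract above can be illustrated with a generic median-background model (a common baseline sketch, not the authors' pipeline; `foreground_mask` and its threshold are invented here):

```python
import numpy as np

def foreground_mask(frames, threshold=25):
    """Separate moving foreground from static background by differencing.

    frames: (N, H, W) uint8 grayscale video. The per-pixel median over time
    estimates the static background; pixels far from it are foreground.
    Returns an (N, H, W) boolean mask.
    """
    background = np.median(frames, axis=0)
    diff = np.abs(frames.astype(np.int16) - background.astype(np.int16))
    return diff > threshold
```

In a PbD setting such a mask would keep the moving hand and manipulated object while discarding the static scene, before hand keypoints are filtered out.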
13. User Study to Validate the Performance of an Offline Robot Programming Method That Enables Robot-Independent Kinesthetic Instruction through the Use of Augmented Reality and Motion Capturing
- Author
-
Fabian Müller, Michael Koch, and Alexander Hasse
- Subjects
robot programming, programming by demonstration, motion capture, augmented reality, performance evaluation, user study, Mechanical engineering and machinery, TJ1-1570
- Abstract
The paper presents a novel offline programming (OLP) method based on programming by demonstration (PbD), validated through a user study. PbD is a programming method that involves physical interaction with robots, and kinesthetic teaching (KT) is a commonly used online programming method in industry. However, online programming methods consume significant robot resources, limiting the speed advantages of PbD and emphasizing the need for an offline approach. The method presented here, based on KT, uses a virtual representation instead of a physical robot, allowing programming independently of the working environment. It employs haptic input devices to teach a simulated robot in augmented reality and uses automatic path planning. A benchmarking test was conducted to standardize equipment, procedures, and evaluation techniques for comparing different PbD approaches. The results indicate a 47% decrease in programming time compared to traditional KT methods in established industrial systems. Although the accuracy is not yet at the level of industrial systems, users showed rapid improvement, confirming the learnability of the system. User feedback on perceived workload and ease of use was positive. In conclusion, this method has potential for industrial use due to its learnability, reduction in robot downtime, and applicability across different robot sizes and types.
- Published
- 2024
- Full Text
- View/download PDF
14. Industrial robot programming by demonstration using stereoscopic vision and inertial sensing
- Author
-
de Souza, João Pedro C., Amorim, António M., Rocha, Luís F., Pinto, Vítor H., and Moreira, António Paulo
- Published
- 2022
- Full Text
- View/download PDF
15. Collaborative programming of robotic task decisions and recovery behaviors.
- Author
-
Eiband, Thomas, Willibald, Christoph, Tannert, Isabel, Weber, Bernhard, and Lee, Dongheui
- Subjects
ROBOT programming, KNOWLEDGE representation (Information theory), ROBOTICS, DECISION making, ROBOTS
- Abstract
Programming by demonstration is reaching industrial applications, allowing non-experts to teach new tasks without writing code manually. However, a certain level of complexity, such as online decision making or the definition of recovery behaviors, still requires experts using conventional programming methods. Even so, experts cannot foresee all possible faults in a robotic application. To address this, we present a framework in which user and robot collaboratively program a task involving online decision making and recovery behaviors. A task-graph is created that represents a production task and possible alternative behaviors: nodes represent start, end, or decision states, and links define actions for execution. This graph can be incrementally extended through autonomous anomaly detection, which prompts the user to add knowledge for a specific recovery action. Besides our proposed approach, we introduce two alternative approaches for programming recovery behaviors and compare all approaches extensively in a user study involving 21 subjects. The study revealed the strengths of our framework and analyzed how users act when adding knowledge to the robot. Our findings support using a framework with task-graph-based knowledge representation and autonomous anomaly detection not only for initiating recovery actions but particularly for transferring them to a robot. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
16. Contextual Programming of Collaborative Robots
- Author
-
Huang, Chien-Ming, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Degen, Helmut, editor, and Reinerman-Jones, Lauren, editor
- Published
- 2020
- Full Text
- View/download PDF
17. Erfassung und Interpretation menschlicher Handlungen für die Programmierung von Robotern in der Produktion (Capturing and Interpreting Human Actions for Programming Robots in Production).
- Author
-
Dreher, Christian R. G., Zaremski, Manuel, Leven, Fabian, Schneider, David, Roitberg, Alina, Stiefelhagen, Rainer, Heizmann, Michael, Deml, Barbara, and Asfour, Tamim
- Subjects
ELECTRIC motors, GAZE, REMANUFACTURING, ROBOT hands, HUMAN activity recognition, ROBOTS
- Abstract
Copyright of Automatisierungstechnik is the property of De Gruyter and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
18. A spatial information inference method for programming by demonstration of assembly tasks by integrating visual observation with CAD model
- Author
-
Zhou, Zhongxiang, Ji, Liang, Xiong, Rong, and Wang, Yue
- Published
- 2020
- Full Text
- View/download PDF
19. Progressive Automation of Repetitive Tasks Involving both Translation and Rotation
- Author
-
Dimeas, Fotios, Doulgeri, Zoe, Ceccarelli, Marco, Series Editor, Hernandez, Alfonso, Editorial Board Member, Huang, Tian, Editorial Board Member, Velinsky, Steven A., Editorial Board Member, Takeda, Yukio, Editorial Board Member, Corves, Burkhard, Editorial Board Member, Aspragathos, Nikos A., editor, Koustoumpardis, Panagiotis N., editor, and Moulianitis, Vassilis C., editor
- Published
- 2019
- Full Text
- View/download PDF
20. Teaching and Learning How to Program Without Writing Code
- Author
-
Adam, Michel, Daoud, Moncef, Frison, Patrice, Howlett, Robert James, Series Editor, Jain, Lakhmi C., Series Editor, Rocha, Álvaro, editor, and Serrhini, Mohammed, editor
- Published
- 2019
- Full Text
- View/download PDF
21. DupRobo: Interactive Robotic Autocompletion of Physical Block-Based Repetitive Structure
- Author
-
Chen, Taizhou, Wu, Yi-Shiun, Zhu, Kening, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Lamas, David, editor, Loizides, Fernando, editor, Nacke, Lennart, editor, Petrie, Helen, editor, Winckler, Marco, editor, and Zaphiris, Panayiotis, editor
- Published
- 2019
- Full Text
- View/download PDF
22. iRoPro: An interactive Robot Programming Framework.
- Author
-
Liang, Ying Siu, Pellier, Damien, Fiorino, Humbert, and Pesty, Sylvie
- Subjects
ROBOT programming, PUBLIC demonstrations, GRAPHICAL user interfaces, HUMAN-robot interaction, EDUCATIONAL background, PLANNING techniques
- Abstract
The great diversity of end-user tasks ranging from manufacturing environments to personal homes makes pre-programming robots for general purpose applications extremely challenging. In fact, teaching robots new actions from scratch that can be reused for previously unseen tasks remains a difficult challenge and is generally left up to robotics experts. In this work, we present iRoPro, an interactive Robot Programming framework that allows end-users with little to no technical background to teach a robot new reusable actions. We combine Programming by Demonstration and Automated Planning techniques to allow the user to construct the robot's knowledge base by teaching new actions by kinesthetic demonstration. The actions are generalised and reused with a task planner to solve previously unseen problems defined by the user. We implement iRoPro as an end-to-end system on a Baxter Research Robot to simultaneously teach low- and high-level actions by demonstration that the user can customise via a Graphical User Interface to adapt to their specific use case. To evaluate the feasibility of our approach, we first conducted pre-design experiments to better understand the user's adoption of involved concepts and the proposed robot programming process. We compare results with post-design experiments, where we conducted a user study to validate the usability of our approach with real end-users. Overall, we showed that users with different programming levels and educational backgrounds can easily learn and use iRoPro and its robot programming process. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. Autonomous assembly planning of demonstrated skills with reinforcement learning in simulation.
- Author
-
De Winter, Joris, El Makrini, Ilias, Van de Perre, Greet, Nowé, Ann, Verstraten, Tom, and Vanderborght, Bram
- Subjects
INDUSTRIAL robots, HUMAN-robot interaction, REINFORCEMENT learning
- Abstract
Industrial robots used to assemble customized products in small batches require frequent reprogramming. With this work, we aim to reduce programming complexity by autonomously finding the fastest assembly plans without any collisions with the environment. First, a digital twin of the robot uses a gym in simulation to learn which assembly skills (programmed by demonstration) are physically possible (i.e., collision-free). Only from this reduced solution space does the physical twin look for the fastest assembly plans. Experiments show that the system indeed converges to the fastest assembly plans. Moreover, pre-training in simulation drastically reduces the number of interactions before convergence compared to learning directly on the physical robot. This two-step procedure allows the robot to autonomously find correct and fast assembly sequences without any additional human input or mismanufactured products. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
24. A Bayesian tracker for synthesizing mobile robot behaviour from demonstration.
- Author
-
Magnenat, Stéphane and Colas, Francis
- Subjects
ROBOT programming, MOBILE robots, ROBOTS, ROBOTICS
- Abstract
Programming robots often involves expert knowledge of both the robot itself and the task to execute. An alternative to direct programming is for a human to show examples of the task execution and have the robot perform the task based on these examples, in a scheme known as learning or programming from demonstration. We propose and study a generic and simple learning-from-demonstration framework. Our approach is to combine the demonstrated commands according to the similarity between the demonstrated sensory trajectories and the current replay trajectory. This tracking is performed solely on sensor values and time and completely dispenses with the usually expensive step of precomputing an internal model of the task. We analyse the behaviour of the proposed model in several simulated conditions and test it on two different robotic platforms. We show that it can reproduce different capabilities with a limited number of meta-parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
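The core idea in the record above, blending demonstrated commands by sensory similarity, can be sketched very compactly (a generic illustration, not the paper's Bayesian tracker; `blend_command` and the Gaussian kernel are invented here):

```python
import numpy as np

def blend_command(sensor_now, demo_sensors, demo_commands, sigma=0.5):
    """Blend demonstrated commands, weighted by sensory similarity.

    demo_sensors:  (T, d) sensor readings recorded during demonstration
    demo_commands: (T, m) commands issued at those readings
    The command at replay time is the similarity-weighted average of the
    demonstrated commands, so no internal task model is precomputed.
    """
    d2 = np.sum((demo_sensors - sensor_now) ** 2, axis=1)  # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                   # Gaussian similarity
    w = w / w.sum()
    return w @ demo_commands
```

With a small `sigma` the replay effectively snaps to the single most similar demonstrated state; larger values interpolate smoothly across the demonstration.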
25. See What I See: Enabling User-Centric Robotic Assistance Using First-Person Demonstrations.
- Author
-
Wang, Yeping, Ajaykumar, Gopika, and Huang, Chien-Ming
- Subjects
ROBOTICS, TASK performance, HUMAN-robot interaction, ROBOT programming, CEREBRAL dominance
- Abstract
We explore first-person demonstration as an intuitive way of producing task demonstrations to facilitate user-centric robotic assistance. First-person demonstration directly captures the human experience of task performance via head-mounted cameras and naturally includes productive viewpoints for task actions. We implemented a perception system that parses natural first-person demonstrations into task models consisting of sequential task procedures, spatial configurations, and unique task viewpoints. We also developed a robotic system capable of interacting autonomously with users as it follows previously acquired task demonstrations. To evaluate the effectiveness of our robotic assistance, we conducted a user study contextualized in an assembly scenario; we sought to determine how assistance based on a first-person demonstration (user-centric assistance) versus that informed only by the cover image of the official assembly instruction (standard assistance) may shape users' behaviors and overall experience when working alongside a collaborative robot. Our results show that participants felt that their robot partner was more collaborative and considerate when it provided user-centric assistance than when it offered only standard assistance. Additionally, participants were more likely to exhibit unproductive behaviors, such as using their non-dominant hand, when performing the assembly task without user-centric assistance. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. Robust One-Shot Robot Programming by Demonstration Using Entity-Based Resources
- Author
-
Orendt, Eric M., Riedl, Michael, Henrich, Dominik, Ceccarelli, Marco, Series editor, Corves, Burkhard, Advisory editor, Takeda, Yukio, Advisory editor, Ferraresi, Carlo, editor, and Quaglia, Giuseppe, editor
- Published
- 2018
- Full Text
- View/download PDF
27. A Reconfigurable Pick-Place System Under Robot Operating System
- Author
-
Ding, Cheng, Wu, Jianhua, Xiong, Zhenhua, Liu, Chao, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Chen, Zhiyong, editor, Mendes, Alexandre, editor, Yan, Yamin, editor, and Chen, Shifeng, editor
- Published
- 2018
- Full Text
- View/download PDF
28. An Intuitive Robot Learning from Human Demonstration
- Author
-
Ogenyi, Uchenna Emeoha, Zhang, Gongyue, Yang, Chenguang, Ju, Zhaojie, Liu, Honghai, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Chen, Zhiyong, editor, Mendes, Alexandre, editor, Yan, Yamin, editor, and Chen, Shifeng, editor
- Published
- 2018
- Full Text
- View/download PDF
29. Accuracy and Repeatability Tests on HoloLens 2 and HTC Vive.
- Author
-
Soares, Inês, Sousa, Ricardo B., Petry, Marcelo, and Paulo Moreira, António
- Subjects
AUGMENTED reality, VIRTUAL reality, ACCURACY, HUMAN-robot interaction, INDUSTRIAL robots
- Abstract
Augmented and virtual reality have been experiencing rapid growth in recent years, but there is still no deep knowledge of their capabilities or of the fields in which they could be explored. In that sense, this paper presents a study on the accuracy and repeatability of Microsoft's HoloLens 2 (augmented reality device) and the HTC Vive (virtual reality device), using an OptiTrack system as ground truth. For the HoloLens 2, the method used was hand tracking, whereas for the HTC Vive the tracked object was the system's hand controller. A series of tests in different scenarios and situations was performed to explore what could influence the measurements. The HTC Vive obtained results in the millimeter range, while the HoloLens 2 produced less accurate measurements (around 2 cm). Although the difference may seem considerable, the fact that the HoloLens 2 tracked the user's hand rather than the system's controller had a major impact. The results are a significant step for the ongoing project of developing a human-robot interface for programming an industrial robot by demonstration using extended reality, which, based on our data, shows great potential to succeed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
30. Learning to Avoid Obstacles With Minimal Intervention Control
- Author
-
Anqing Duan, Raffaello Camoriano, Diego Ferigo, Yanlong Huang, Daniele Calandriello, Lorenzo Rosasco, and Daniele Pucci
- Subjects
programming by demonstration, reinforcement learning, obstacle avoidance, humanoid robots, minimal intervention control, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Programming by demonstration has received much attention, as it offers a general framework that allows robots to efficiently acquire novel motor skills from a human teacher. Traditional imitation learning focuses on either Cartesian or joint space, which can be inappropriate in situations where both spaces are equally important (e.g., writing or striking tasks), so hybrid imitation learning of skills in both Cartesian and joint spaces simultaneously has been studied recently. However, an important issue that often arises in dynamic or unstructured environments is overlooked: how can a robot avoid obstacles? In this paper, we address the problem of obstacle avoidance in the context of hybrid imitation learning. Specifically, we tackle three subproblems: (i) designing a proper potential field to bypass obstacles, (ii) guaranteeing that joint limits are respected when adjusting trajectories while avoiding obstacles, and (iii) determining proper control commands for robots such that potential human-robot interaction is safe. By solving these subproblems, the robot is capable of generalizing observed skills to new situations featuring obstacles in a feasible and safe manner. The effectiveness of the proposed method is validated through a toy example as well as a real transportation experiment on the iCub humanoid robot.
- Published
- 2020
- Full Text
- View/download PDF
31. Design and Optimization of a BCI-Driven Telepresence Robot Through Programming by Demonstration
- Author
-
Berdakh Abibullaev, Amin Zollanvari, Batyrkhan Saduanov, and Tohid Alizadeh
- Subjects
Brain-Computer interfaces, telepresence, programming by demonstration, EEG, event-related potentials, humanoid robots, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Improving the quality of life of people with severe motor paralysis has a significant impact on restoring their functional independence in performing activities of daily living (ADL). Telepresence is a subfield of robotic assistance in which the human plays the role of an operator, sending high-level instructions to an assistive robot while receiving sensory feedback. However, for severely motor-impaired people, conventional interaction modalities may not be suitable due to their complete paralysis. Thus, designing alternative ways of interaction, such as Brain-Computer Interfaces (BCI), is essential for telepresence capability. We propose a novel framework that integrates a BCI system and a humanoid robot to develop a brain-controlled telepresence system with multimodal control features. In particular, the low-level control is executed by Programming by Demonstration (PbD) models, and the higher-level cognitive commands are produced by a BCI system to perform vital ADLs. The presented system is based on real-time decoding of attention-modulated neural responses elicited in brain electroencephalographic signals and on generating multiple control commands. As a result, the system allows a user to interact with a humanoid robot while receiving auditory and visual feedback from the robot's sensors. We validated our system with ten subjects in a realistic scenario. The experimental results show the feasibility of the approach for the design of a telepresence robot with high BCI decoding performance.
- Published
- 2019
- Full Text
- View/download PDF
32. Automated and Assisted Authoring of Serious Game Scenarios
- Author
-
Duval, Yohan, Reymonet, Axel, Thomas, Jérôme, Panzoli, David, Plantec, Jean-Yves, Jessel, Jean-Pierre, Kacprzyk, Janusz, Series editor, Pal, Nikhil R., Advisory editor, Bello Perez, Rafael, Advisory editor, Corchado, Emilio S., Advisory editor, Hagras, Hani, Advisory editor, Kóczy, László T., Advisory editor, Kreinovich, Vladik, Advisory editor, Lin, Chin-Teng, Advisory editor, Lu, Jie, Advisory editor, Melin, Patricia, Advisory editor, Nedjah, Nadia, Advisory editor, Nguyen, Ngoc Thanh, Advisory editor, Wang, Jun, Advisory editor, Auer, Michael E., editor, Guralnick, David, editor, and Uhomoibhi, James, editor
- Published
- 2017
- Full Text
- View/download PDF
33. Movement Recognition and Cooperative Task Synthesis Through Hierarchical Database Search
- Author
-
Deniša, Miha, Ude, Aleš, Kacprzyk, Janusz, Series editor, Pal, Nikhil R., Advisory editor, Bello Perez, Rafael, Advisory editor, Corchado, Emilio S., Advisory editor, Hagras, Hani, Advisory editor, Kóczy, László T., Advisory editor, Kreinovich, Vladik, Advisory editor, Lin, Chin-Teng, Advisory editor, Lu, Jie, Advisory editor, Melin, Patricia, Advisory editor, Nedjah, Nadia, Advisory editor, Nguyen, Ngoc Thanh, Advisory editor, Wang, Jun, Advisory editor, Rodić, Aleksandar, editor, and Borangiu, Theodor, editor
- Published
- 2017
- Full Text
- View/download PDF
34. Programming IoT Devices by Demonstration Using Mobile Apps
- Author
-
Li, Toby Jia-Jun, Li, Yuanchun, Chen, Fanglin, Myers, Brad A., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Barbosa, Simone, editor, Markopoulos, Panos, editor, Paternò, Fabio, editor, Stumpf, Simone, editor, and Valtolina, Stefano, editor
- Published
- 2017
- Full Text
- View/download PDF
35. Generating Block-Structured Parallel Process Models by Demonstration.
- Author
-
Lekić, Julijana, Milićev, Dragan, Stanković, Dragan, and Pankowska, Malgorzata
- Subjects
PARALLEL processing ,PROCESS mining ,GRAPHICAL user interfaces ,USER interfaces - Abstract
Programming by demonstration (PBD) is a technique which allows end users to create, modify, and extend programs by demonstrating what the program is supposed to do. Although the ideal of general-purpose programming by demonstration or by example has been rejected as practically unrealistic, this approach has found applications and shown potential when limited to specific narrow domains. In this paper, an original method of applying the principles of programming by demonstration to the interactive construction of block-structured parallel business process models in the area of process mining (PM) is presented. A technique and tool that enable interactive process mining and incremental discovery of process models are described. The idea is based on the following principle: using a demonstrational user interface, a user demonstrates scenarios of execution of parallel business process activities, and the system produces a generalized process model specification. A modified process mining technique with the α|| algorithm applied to weakly complete event logs is used for creating parallel business process models by demonstration. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
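The α-algorithm family behind entry 35 derives ordering relations between activities from an event log; two activities are considered parallel when they appear in both orders across traces. The following is a minimal, illustrative sketch of that footprint computation in Python (not the authors' α|| implementation, which additionally handles weakly complete logs):

```python
def footprint(log):
    """Basic alpha-algorithm ordering relations from an event log.

    log: list of traces, each a sequence of activity names.
    Returns a dict mapping (a, b) to '->' (causal), '<-', '||' (parallel),
    or '#' (a and b never directly follow each other).
    """
    direct, acts = set(), set()
    for trace in log:
        acts.update(trace)
        for a, b in zip(trace, trace[1:]):
            direct.add((a, b))  # a is directly followed by b in some trace
    rel = {}
    for a in acts:
        for b in acts:
            ab, ba = (a, b) in direct, (b, a) in direct
            if ab and ba:
                rel[(a, b)] = '||'
            elif ab:
                rel[(a, b)] = '->'
            elif ba:
                rel[(a, b)] = '<-'
            else:
                rel[(a, b)] = '#'
    return rel

# Two demonstrated traces in which activities b and c are interleaved
log = [['a', 'b', 'c', 'd'], ['a', 'c', 'b', 'd']]
rel = footprint(log)
print(rel[('b', 'c')])  # '||' -- b and c are detected as parallel
```

On these two traces the relation between b and c comes out parallel because each directly follows the other in some trace, which is exactly the evidence a demonstrational interface would collect from repeated user scenarios.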
36. Kinesthetic Guidance Utilizing DMP Synchronization and Assistive Virtual Fixtures for Progressive Automation.
- Author
-
Papageorgiou, Dimitrios, Dimeas, Fotios, Kastritsi, Theodora, and Doulgeri, Zoe
- Subjects
- *
ROBOT programming , *AUTOMATION , *IMPEDANCE control , *SYNCHRONIZATION , *HUMAN-robot interaction , *TORQUE , *MANIPULATORS (Machinery) - Abstract
SUMMARY: The progressive automation framework allows the seamless transition of a robot from kinesthetic guidance to autonomous operation during programming by demonstration of discrete motion tasks. This is achieved by the synergetic action of dynamic movement primitives (DMPs), virtual fixtures, and variable impedance control. The proposed DMPs encode the demonstrated trajectory and synchronize with the current demonstration from the user, so that the generated reference motion follows the human's demonstration. The proposed virtual fixtures assist the user in repeating the learned kinematic behavior but allow penetration, so that the user can modify the learned trajectory if needed. The tracking error, in combination with the interaction forces and torques, is used by a variable stiffness strategy to adjust the progressive automation level and transition the leading role between the human and the robot. An energy tank approach is utilized to apply the designed controller and to prove the passivity of the overall control method. An experimental evaluation of the proposed framework is presented for a pick-and-place task, and results show that the transition to autonomous mode is achieved within a few demonstrations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
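The DMPs referenced in entry 36 reduce, at their core, to a critically damped spring system pulled toward a goal and modulated by a learned forcing term. A minimal, hedged sketch (plain Euler integration, forcing term optional and omitted here; this is the textbook discrete DMP, not the authors' synchronized variant):

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.001, alpha=25.0, beta=6.25, ax=1.0, f=None):
    """Integrate a discrete dynamic movement primitive toward goal g.

    f: optional forcing function f(x) learned from a demonstration;
    with f=None the DMP reduces to a critically damped spring toward g.
    """
    y, z, x = y0, 0.0, 1.0  # position, scaled velocity, canonical phase
    traj = [y]
    for _ in range(int(tau / dt)):
        fx = f(x) if f is not None else 0.0
        dz = (alpha * (beta * (g - y) - z) + fx) / tau
        dy = z / tau
        dx = -ax * x / tau   # canonical system: phase decays toward 0
        z += dz * dt
        y += dy * dt
        x += dx * dt
        traj.append(y)
    return np.array(traj)

traj = dmp_rollout(y0=0.0, g=1.0)
print(traj[-1])  # converges close to the goal g = 1.0
```

With alpha = 25 and beta = alpha/4 the spring is critically damped, which is why the rollout settles at the goal without oscillation even before any forcing term is learned.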
37. Joining Force of Human Muscular Task Planning With Robot Robust and Delicate Manipulation for Programming by Demonstration.
- Author
-
Wang, Fei, Zhou, Xingqun, Wang, Jianhui, Zhang, Xing, He, Zhenquan, and Song, Bo
- Abstract
Recently, programming by demonstration (PbD) has received much attention for its capacity for fast programming, with increasing demands in the robot manipulation area, especially in industrial applications. However, one of the biggest challenges of PbD is the high-fidelity recognition of the demonstrator's finger motions, especially in environments with uncertainties, which limits the efficiency and accuracy of PbD. In this article, inspired by human dexterity, a novel PbD approach using an implicit muscular task planning strategy is presented to extract features from the arms' gross movements and the hands' fine motions during the demonstrator's operation. Furthermore, we integrate a deep reinforcement learning control method that further improves the manipulations' adaptive ability in unknown or dynamic environments. The experimental results show that our proposed approach can deal with relatively complex assembly tasks with a success rate of more than 67% within a fit tolerance of 4.2 mm by one-shot demonstration. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
38. Trajectory reconstruction for robot programming by demonstration.
- Author
-
Elhachemi Amar, Reda Hanifi, Benchikh, Laredj, Dermeche, Hakima, Bachir, Ouamri, and Ahmed-Foitih, Zoubir
- Subjects
ROBOT programming ,ROBOT motion ,MOTION capture (Human mechanics) ,ARTIFICIAL hands ,VIRTUAL reality ,ROBOT hands ,MANIPULATORS (Machinery) ,SPLINES - Abstract
The reproduction of hand movements by a robot remains difficult, and conventional learning methods cannot faithfully recreate these movements, especially when the number of crossing points is very large. Programming by Demonstration offers a better opportunity to solve this problem by tracking the user's movements with a motion capture system and creating a robot program to reproduce the performed tasks. This paper presents a trajectory-level Programming by Demonstration system for the reproduction of hand/tool movements by a manipulator robot; this was realized by tracking the user's movement with ARToolKit and reconstructing the trajectories using constrained cubic splines. The results obtained with the constrained cubic spline were compared with cubic spline interpolation. Finally, the obtained trajectories were simulated in a virtual environment on the Puma 600 robot. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
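Entry 38 contrasts constrained cubic splines with ordinary cubic spline interpolation for trajectory reconstruction. A quick way to see the difference is to compare a standard cubic spline with a shape-preserving interpolant such as SciPy's PCHIP, used here as a stand-in for the paper's constrained spline (both avoid overshoot between waypoints, which matters when the waypoints are captured robot poses):

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Sparse waypoints captured from a hand-tracking marker (synthetic example)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # a step-like motion

natural = CubicSpline(t, x)                 # classic cubic spline: may overshoot
shape_preserving = PchipInterpolator(t, x)  # monotone between waypoints

ts = np.linspace(0.0, 4.0, 401)
print(natural(ts).max(), shape_preserving(ts).max())
# The shape-preserving reconstruction stays within the data range [0, 1]
```

For a robot trajectory, staying inside the convex hull of the demonstrated waypoints is a safety property, which is the practical argument for constrained splines over the classic formulation.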
39. Towards easy setup of robotic assembly tasks.
- Author
-
Sloth, Christoffer, Kramberger, Aljaž, and Iturrate, Iñigo
- Subjects
- *
ROBOTIC assembly , *FLEXIBLE manufacturing systems , *SURGICAL robots , *MEDICAL robotics , *TASKS , *ROBOTS - Abstract
There is a growing need for adaptive robotic assembly systems that are fast to set up and reprogram when new products are introduced. The World Robot Challenge at World Robot Summit 2018 was centered around the challenge of setting up a flexible robotic assembly system aiming at changeover times below 1 day. This paper presents a method for programming robotic assembly tasks, initiated in connection with the World Robot Challenge, that enables fast and easy setup of robotic insertion tasks. We propose to program assembly tasks by demonstration, but instead of using the taught behavior directly, the demonstration is merged with assembly primitives to increase robustness. In contrast to other programming by demonstration approaches, we perform not just one demonstration but a sequence of four sub-demonstrations that are used to extract the desired robot trajectory in addition to parameters for the assembly primitive. The proposed assembly strategy is compared to a standard dynamic movement primitive, and experiments show that it increases robustness towards pose uncertainties and significantly reduces the applied forces during execution of the assembly task. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
40. Programming by Demonstration
- Author
-
Ikeuchi, Katsushi, editor
- Published
- 2021
- Full Text
- View/download PDF
41. Robot programming by demonstration using teleoperation through imitation
- Author
-
Jha, Abhishek and Chiddarwar, Shital S.
- Published
- 2017
- Full Text
- View/download PDF
42. Programming Robots by Demonstration Using Augmented Reality
- Author
-
Inês Soares, Marcelo Petry, and António Paulo Moreira
- Subjects
augmented reality ,collaborative robots ,industrial robots ,programming by demonstration ,Chemical technology ,TP1-1185 - Abstract
The world is living through the fourth industrial revolution, marked by the increasing intelligence and automation of manufacturing systems. Nevertheless, some tasks are too complex or too expensive to be fully automated; it would be more efficient if machines were able to work with the human, not only sharing the same workspace but also acting as useful collaborators. A possible solution to that problem lies in human–robot interaction systems, and in understanding the applications where they can be helpful and the challenges they face. This work proposes the development of an industrial prototype of a human–machine interaction system based on Augmented Reality, whose objective is to enable an industrial operator without any programming experience to program a robot. The system itself is divided into two parts: the tracking system, which records the operator's hand movement, and the translator system, which writes the program to be sent to the robot that will execute the task. To demonstrate the concept, the user drew geometric figures, and the robot was able to replicate the recorded operator's path.
- Published
- 2021
- Full Text
- View/download PDF
43. Beating-Time Gestures Imitation Learning for Humanoid Robots
- Author
-
Denis Amelynck, Pieter-Jan Maes, Jean-Pierre Martens, and Marc Leman
- Subjects
programming by demonstration ,cubic spline regression ,dynamic time warping ,beating-time gestures ,Technology
Beating-time gestures are movement patterns of the hand swaying along with music, thereby indicating accented musical pulses. The spatiotemporal configuration of these patterns makes it difficult to analyse and model them. In this paper we present an innovative modelling approach based upon imitation learning, or Programming by Demonstration (PbD). Our approach, based on Dirichlet Process Mixture Models, Hidden Markov Models, Dynamic Time Warping, and non-uniform cubic spline regression, is particularly innovative as it handles spatial and temporal variability by generating a generalised trajectory from a set of periodically repeated movements. Although not within the scope of our study, our procedures may be implemented for controlling the movement behaviour of robots and avatar animations in response to music.
- Published
- 2017
- Full Text
- View/download PDF
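Entry 43 relies on Dynamic Time Warping to align repetitions of a gesture performed at different tempi before generalising them. The textbook DTW distance, shown here as an illustrative sketch rather than the authors' pipeline, is a small dynamic program:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two repetitions of the same gesture, one played back more slowly
fast = np.sin(np.linspace(0, 2 * np.pi, 50))
slow = np.sin(np.linspace(0, 2 * np.pi, 80))
print(dtw(fast, slow), dtw(fast, -slow))
# Warping absorbs the tempo difference; the mirrored gesture stays far away
```

Because the warping path can stretch either sequence, the tempo-shifted repetition scores a much smaller distance than the mirrored one, which is exactly the temporal-variability handling the abstract describes.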
44. Force control design for robots based on correlation in human demonstration data
- Author
-
Yasuhiko FUKUMOTO, Natsuki YAMANOBE, Weiwei WAN, and Kensuke HARADA
- Subjects
industrial robot ,force control ,programming by demonstration ,cross-correlation ,optimization ,assembly ,flexible object ,ring part ,Mechanical engineering and machinery ,TJ1-1570 ,Engineering machinery, tools, and implements ,TA213-215 - Abstract
This paper proposes a method to construct a force control law for industrial robots using the normalized cross-correlation (NCC) of human demonstration data. Conventionally there are two solutions: one based on human demonstration data, and the other based on numerical optimization. The former gives the force control parameters efficiently, but the parameters may make the robot unstable. The latter can maximize performance, but it requires an enormous amount of trial and error. Taking into account the merits of each method, we combine the two approaches. In the proposed method, the force control laws, as well as the orientation of the coordinate system, are determined so as to maximize the NCC of the human demonstration data. Then, the parameters included in the force control law are optimized by the downhill simplex method. The proposed method was applied to a ring-shaped rubber packing assembly task and realized the assembly with human-like performance. Moreover, comparative experiments confirmed that the proposed method can construct appropriate force control laws.
- Published
- 2019
- Full Text
- View/download PDF
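The pipeline in entry 44 (score candidate controllers by normalized cross-correlation against the demonstration, then refine the parameters with the downhill simplex method) can be sketched as follows; the signals and the toy "simulator", including the parameter names k and d, are synthetic illustrations, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def ncc(x, y):
    """Normalized cross-correlation of two equal-length signals."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

# Synthetic demonstration: the force response the controller should reproduce
t = np.linspace(0.0, 1.0, 200)
demo = np.exp(-3.0 * t) * np.sin(10.0 * t)

def simulate(params):
    k, d = params  # stand-in for running a parameterized force controller
    return np.exp(-d * t) * np.sin(k * t)

# Downhill simplex (Nelder-Mead) maximizes NCC by minimizing its negative
res = minimize(lambda p: -ncc(demo, simulate(p)), x0=[8.0, 2.0],
               method='Nelder-Mead')
print(res.x)  # best-fit (k, d)
```

NCC is invariant to the amplitude of the response, so the optimizer matches the *shape* of the demonstrated force profile, which mirrors the paper's motivation for using it as the fitting criterion.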
45. Accuracy and Repeatability Tests on HoloLens 2 and HTC Vive
- Author
-
Inês Soares, Ricardo B. Sousa, Marcelo Petry, and António Paulo Moreira
- Subjects
programming by demonstration ,virtual reality ,augmented reality ,accuracy ,repeatability ,Technology ,Science - Abstract
Augmented and virtual reality have experienced rapid growth in recent years, but there is still no deep knowledge of their capabilities and of the fields in which they could be explored. In that sense, this paper presents a study on the accuracy and repeatability of Microsoft's HoloLens 2 (augmented reality device) and the HTC Vive (virtual reality device), using an OptiTrack system as ground truth. For the HoloLens 2, the method used was hand tracking, whereas for the HTC Vive the tracked object was the system's hand controller. A series of tests in different scenarios and situations was performed to explore what could influence the measurements. The HTC Vive obtained results in the millimeter range, while the HoloLens 2 revealed less accurate measurements (around 2 cm). Although the difference may seem considerable, the fact that the HoloLens 2 was tracking the user's hand and not a controller had a huge impact. The results are a significant step for the ongoing project of developing a human–robot interface for programming an industrial robot by demonstration using extended reality, which shows great potential to succeed based on our data.
- Published
- 2021
- Full Text
- View/download PDF
46. A Programming by Demonstration with Least Square Support Vector Machine for Manipulators
- Author
-
Zhao, Jingdong, Li, Chongyang, Jiang, Zainan, Liu, Hong, Goebel, Randy, Series Editor, Tanaka, Yuzuru, Series Editor, Wahlster, Wolfgang, Series Editor, Siekmann, Joerg, Founding Editor, Liu, Honghai, editor, Kubota, Naoyuki, editor, Zhu, Xiangyang, editor, and Dillmann, Rüdiger, editor
- Published
- 2015
- Full Text
- View/download PDF
47. An Object-Centric Paradigm for Robot Programming by Demonstration
- Author
-
Huang, Di-Wei, Katz, Garrett E., Langsfeld, Joshua D., Oh, Hyuk, Gentili, Rodolphe J., Reggia, James A., Goebel, Randy, Series editor, Tanaka, Yuzuru, Series editor, Wahlster, Wolfgang, Series editor, Schmorrow, Dylan D., editor, and Fidopiastis, Cali M., editor
- Published
- 2015
- Full Text
- View/download PDF
48. Can Human-Inspired Learning Behaviour Facilitate Human–Robot Interaction?
- Author
-
Carfì, Alessandro, Villalobos, Jessica, Coronado, Enrique, Bruno, Barbara, and Mastrogiovanni, Fulvio
- Subjects
HUMAN-robot interaction ,AUTONOMOUS robots ,ROBOTS ,BEHAVIOR ,INTERPERSONAL relations ,SHARED workspaces - Abstract
The evolution of production systems for smart factories foresees a tight relation between human operators and robots. Specifically, when robot task reconfiguration is needed, the operator must be provided with an easy and intuitive way to perform it. A useful tool for robot task reconfiguration is Programming by Demonstration (PbD), which allows human operators to teach a robot new tasks by showing it a number of examples. The article presents two studies investigating the role of the robot in PbD. A preliminary study compares standard PbD with human–human teaching and suggests that a collaborative robot should actively participate in the teaching process, as human practitioners typically do. The main study uses a Wizard of Oz approach to determine the effects of having a robot actively participate in the teaching process, specifically by controlling the end-effector. The results suggest that human-inspired active behaviour can lead to a more intuitive PbD. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
49. A human inspired handover policy using Gaussian Mixture Models and haptic cues.
- Author
-
Sidiropoulos, Antonis, Psomopoulou, Efi, and Doulgeri, Zoe
- Subjects
GAUSSIAN mixture models ,STATE feedback (Feedback control systems) ,DYNAMICAL systems ,HUMAN-robot interaction - Abstract
A handover strategy is proposed that aims at natural and fluent robot-to-human object handovers. For the approaching phase, a globally asymptotically stable dynamical system (DS) is utilized, trained from human demonstrations and exploiting the existence of mirroring in the human wrist motion. The DS operates in the robot task space, thus achieving independence from the robot platform and encapsulating the position and orientation of the human wrist within a single DS. It is proven that the motion generated by such a DS, having as its target the current wrist pose of the receiver's hand, is bounded and converges to the previously unknown handover location. Haptic cues based on load estimates at the robot giver ensure full transfer of the object load before grip release. The proposed strategy is validated in simulations and in experiments in real settings. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
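Demonstration-trained models like the one in entry 49 are often built with Gaussian Mixture Models paired with Gaussian Mixture Regression: fit a joint GMM over (time, position) from the demonstrations, then condition on time to generate motion. A compact, hedged sketch on synthetic data (scikit-learn's GaussianMixture and a hand-rolled conditional; this is generic GMR, not the authors' stable-DS model):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic "wrist trajectory" demonstrations: position y over time t
rng = np.random.default_rng(0)
t = np.tile(np.linspace(0.0, 1.0, 100), 5)          # 5 repeated demonstrations
y = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
data = np.column_stack([t, y])

gmm = GaussianMixture(n_components=5, random_state=0).fit(data)

def gmr(t_query):
    """Gaussian mixture regression: E[y | t] under the fitted joint GMM."""
    # responsibility of each component for the query time
    h = np.array([w * np.exp(-0.5 * (t_query - m[0]) ** 2 / c[0, 0])
                  / np.sqrt(c[0, 0])
                  for w, m, c in zip(gmm.weights_, gmm.means_,
                                     gmm.covariances_)])
    h /= h.sum()
    # per-component conditional mean of y given t
    cond = [m[1] + c[1, 0] / c[0, 0] * (t_query - m[0])
            for m, c in zip(gmm.means_, gmm.covariances_)]
    return float(h @ np.array(cond))

print(gmr(0.25))  # close to sin(pi/2) = 1 for this synthetic data
```

Querying the regression along t yields a smooth generalized trajectory from the noisy repetitions, which is the property that makes GMM/GMR a common backbone for demonstration-trained motion models.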
50. Improving Web Automation Tools through UI Context and Demonstration
- Author
-
Krosnick, Rebecca
- Subjects
- web automation, programming by demonstration, user interfaces, end-user programming, human-computer interaction
- Abstract
User interface (UI) automation allows people to perform UI tasks programmatically and can be helpful for computer or smartphone tasks that are tedious, repetitive, or inaccessible. UI automation works by programmatically mimicking a user's interactions on a UI, for example clicking a button or typing into a text field. Traditionally people create UI automation macros by writing code, which requires programming expertise and familiarity with UI technologies. Researchers have explored direct manipulation interfaces and programming-by-demonstration (PBD) to make creating UI automation more accessible for people with less programming experience. With PBD, the user provides demonstrations of how they want their program to behave in a small set of scenarios, and the system then infers a generalized program. Since demonstrations are inherently ambiguous, a key challenge of PBD is in correctly inferring the user's intent and effectively communicating those inferences back to the user. In this thesis, I address important challenges in authoring UI automation macros by leveraging user-provided demonstrations and parameters, and structural patterns in the UI to infer generalized automation; and in understanding UI automation macros by (a) highlighting selected elements on the target UI, (b) visualizing high-level behavior through sequences of actions and UIs visited, (c) visualizing generalizations through color-coding UI elements and grouping corresponding UIs, and (d) providing feedback on validity and uniqueness of element selection logic. First, I conducted two studies observing how programmers write automation code. One of the key challenges participants experienced was in identifying appropriate UI element selection logic. Next, I designed two programming-by-demonstration systems, ParamMacros and ScrapeViz, that enable users to create automation macros without writing code. 
Users provide demonstrations of which UI elements they want to click or scrape, and these systems leverage structural patterns in the website DOM to infer generalized automation. ParamMacros supports parameterized macros (powered by user-provided parameters), while ScrapeViz supports distributed hierarchical web scraping macros. ScrapeViz also provides visual tools to help users understand automation behavior in the context of the page source and across different UI pages. This thesis contributes insights into the challenges users face in creating UI automation macros, along with no-code authoring tools and visual understanding tools that promise to make UI automation accessible to a wider audience.
- Published
- 2024
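The selector-inference step that entry 50 describes (turning a few demonstrated clicks into logic that matches all similar elements) can be illustrated with a toy generalizer over simplified DOM paths; the path syntax here is invented for the example and is not ParamMacros' actual representation:

```python
import re

def generalize(paths):
    """Infer a generalized selector from demonstrated element paths.

    Path segments that agree across all demonstrations are kept literally;
    where they differ, the positional index is replaced with a wildcard,
    mimicking how PBD tools turn a few clicks into "all rows".
    """
    split = [p.split('>') for p in paths]
    out = []
    for segs in zip(*split):
        if len(set(segs)) == 1:
            out.append(segs[0])                       # identical in all demos
        else:
            out.append(re.sub(r'\[\d+\]', '[*]', segs[0]))  # generalize index
    return '>'.join(out)

# Two demonstrations: the user clicked the first cell of rows 2 and 5
demos = ['table>tr[2]>td[1]', 'table>tr[5]>td[1]']
print(generalize(demos))  # table>tr[*]>td[1]
```

The ambiguity the thesis highlights shows up even in this toy: from two demonstrations alone, the system cannot know whether the user meant "every row" or "rows 2 and 5 specifically", which is why such tools must communicate their inferences back to the user.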