7 results for "Sim2real transfer"
Search Results
2. Motion Planning via Reinforcement Learning
- Author
-
Mulla, Awies Mohammad
- Subjects
Robotics, Lipschitz continuity, Motion Planning, Reinforcement Learning, Sim2Real Transfer - Abstract
This thesis focuses on motion planning for autonomous mobile robots using reinforcement learning. In recent years, Reinforcement Learning (RL) has shown remarkable potential in training autonomous agents to navigate complex environments. For autonomous ground vehicles, a fundamental challenge is seamless navigation amidst obstacles. Our work develops and evaluates a reinforcement learning agent, implemented with the Proximal Policy Optimization (PPO) algorithm, that guides an Ackermann-drive car toward a goal while avoiding obstacles. The key innovation lies in fusing car states and depth images through various neural network architectures to extract the features that drive the RL agent's decision-making. Motion planning also involves an optimality criterion for the planned path, which depends on the state space. Our work takes the time to reach the goal as the optimality criterion, controlled through the hyperparameters of the formulated reward function. Finally, we deploy our formulation on an f1-tenth car with close to no tuning during sim2real transfer: the network trained in simulation is deployed on the vehicle as-is. The policy was stabilized by regularizing the trained network, targeting its Lipschitz continuity.
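The stabilization idea above (bounding how sharply the trained network can react to input changes) can be sketched as a Lipschitz-based regularizer. This is an illustrative toy, not the thesis's actual implementation; the function names and the penalty form are assumptions, and the bound shown (product of per-layer spectral norms) is only a standard upper bound for ReLU MLPs:

```python
import numpy as np

def spectral_norm(W, iters=20):
    """Estimate the largest singular value of W via power iteration."""
    v = np.random.default_rng(0).normal(size=W.shape[1])
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def lipschitz_upper_bound(weights):
    """For a ReLU MLP, the product of per-layer spectral norms
    upper-bounds the network's Lipschitz constant."""
    bound = 1.0
    for W in weights:
        bound *= spectral_norm(W)
    return bound

def lipschitz_penalty(weights, target=1.0):
    """Hypothetical regularization term pushing the bound toward `target`;
    added to the policy loss, it discourages an overly sensitive policy."""
    return max(0.0, lipschitz_upper_bound(weights) - target) ** 2
```

A penalty of this shape is zero whenever the estimated bound is already below the target, so well-behaved networks are left untouched.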
- Published
- 2024
3. DiSECt: a differentiable simulator for parameter inference and control in robotic cutting.
- Author
-
Heiden, Eric, Macklin, Miles, Narang, Yashraj, Fox, Dieter, Garg, Animesh, and Ramos, Fabio
- Subjects
DAMAGE models, VERTICAL motion, FRACTIONS, ROBOTICS, FINITE element method, LATERAL loads, SOFT robotics - Abstract
Robotic cutting of soft materials is critical for applications such as food processing, household automation, and surgical manipulation. As in other areas of robotics, simulators can facilitate controller verification, policy learning, and dataset generation. Moreover, differentiable simulators can enable gradient-based optimization, which is invaluable for calibrating simulation parameters and optimizing controllers. In this work, we present DiSECt: the first differentiable simulator for cutting soft materials. The simulator augments the finite element method with a continuous contact model based on signed distance fields, as well as a continuous damage model that inserts springs on opposite sides of the cutting plane and allows them to weaken until zero stiffness, enabling crack formation. Through various experiments, we evaluate the performance of the simulator. We first show that the simulator can be calibrated to match resultant forces and deformation fields from a state-of-the-art commercial solver and real-world cutting datasets, with generality across cutting velocities and object instances. We then show that Bayesian inference can be performed efficiently by leveraging the differentiability of the simulator, estimating posteriors over hundreds of parameters in a fraction of the time of derivative-free methods. Next, we illustrate that control parameters in the simulation can be optimized to minimize cutting forces via lateral slicing motions. Finally, we conduct experiments on a real robot arm equipped with a slicing knife to infer simulation parameters from force measurements. By optimizing the slicing motion of the knife, we show on fruit cutting scenarios that the average knife force can be reduced by more than 40% compared to a vertical cutting motion. We publish code and additional materials on our project website at https://diff-cutting-sim.github.io.
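The continuous damage model described above can be pictured with a toy update rule: virtual springs spanning the cutting plane lose stiffness in proportion to how far their strain exceeds a threshold, and a spring clamped at zero no longer resists the knife. This is only a sketch in the spirit of the abstract, not DiSECt's actual formulation; the function name and parameter values are assumptions:

```python
import numpy as np

def weaken_springs(stiffness, strain, threshold=0.1, rate=5.0, dt=0.01):
    """Continuous damage law: each spring's stiffness decays at a rate
    proportional to the strain excess over `threshold`, clamped at zero.
    A zero-stiffness spring models local crack formation."""
    excess = np.maximum(strain - threshold, 0.0)
    return np.maximum(stiffness - rate * excess * dt, 0.0)
```

Because the update is built from elementwise `max` operations rather than a hard on/off fracture switch, it stays (sub)differentiable, which is the property a differentiable simulator needs for gradient-based calibration.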
- Published
- 2023
4. Reproducible Pruning System on Dynamic Natural Plants for Field Agricultural Robots
- Author
-
Katyara, Sunny, Ficuciello, Fanny, Caldwell, Darwin G., Chen, Fei, Siciliano, Bruno, Siciliano, Bruno, Series Editor, Khatib, Oussama, Series Editor, Antonelli, Gianluca, Advisory Editor, Fox, Dieter, Advisory Editor, Harada, Kensuke, Advisory Editor, Hsieh, M. Ani, Advisory Editor, Kröger, Torsten, Advisory Editor, Kulic, Dana, Advisory Editor, Park, Jaeheung, Advisory Editor, Saveriano, Matteo, editor, Renaudo, Erwan, editor, Rodríguez-Sánchez, Antonio, editor, and Piater, Justus, editor
- Published
- 2021
5. Transfer learning as an enabler of the intelligent digital twin.
- Author
-
Maschler, Benjamin, Braun, Dominik, Jazdi, Nasser, and Weyrich, Michael
- Abstract
Digital Twins have been described as beneficial in many areas, such as virtual commissioning, fault prediction or reconfiguration planning. Equipping Digital Twins with artificial intelligence functionalities can greatly expand those beneficial applications or open up altogether new areas of application, among them cross-phase industrial transfer learning. In the context of machine learning, transfer learning represents a set of approaches that enhance learning new tasks based upon previously acquired knowledge. Here, knowledge is transferred from one lifecycle phase to another in order to reduce the amount of data or time needed to train a machine learning algorithm. Looking at common challenges in developing and deploying industrial machinery with deep learning functionalities, embracing this concept would offer several advantages: Using an intelligent Digital Twin, learning algorithms can be designed, configured and tested in the design phase, before the physical system exists and real data can be collected. Once real data becomes available, the algorithms must merely be fine-tuned, significantly speeding up commissioning and reducing the probability of costly modifications. Furthermore, the Digital Twin's simulation capabilities make it practically feasible to virtually inject rare faults in order to train an algorithm's response, or to use reinforcement learning, e.g. to teach a robot. This article presents several cross-phase industrial transfer learning use cases utilizing intelligent Digital Twins. A real cyber-physical production system consisting of an automated welding machine and an automated guided vehicle equipped with a robot arm is used to illustrate the respective benefits.
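The cross-phase transfer pattern described above (pre-train in the design phase on simulated data, then fine-tune once scarce real data arrives) can be sketched with a deliberately tiny linear model; everything here, including the synthetic "twin" and "real" datasets, is an assumed illustration, not the article's setup:

```python
import numpy as np

def fit_linear(X, y, w=None, lr=0.1, steps=200):
    """Plain gradient descent on mean squared error; starting from an
    existing `w` is what makes this a warm-started fine-tune."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])

# Design phase: plentiful, cheap data from the Digital Twin's simulation.
X_sim = rng.normal(size=(500, 3))
y_sim = X_sim @ w_true + rng.normal(scale=0.1, size=500)
w_pre = fit_linear(X_sim, y_sim)

# Commissioning: only a small batch of real measurements, and the real
# plant behaves slightly differently from the twin (shifted coefficients).
X_real = rng.normal(size=(20, 3))
y_real = X_real @ (w_true + 0.1) + rng.normal(scale=0.1, size=20)
w_fine = fit_linear(X_real, y_real, w=w_pre.copy(), steps=20)
```

The fine-tune needs an order of magnitude fewer steps and samples than training from scratch, which is exactly the commissioning speed-up the article argues for.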
- Published
- 2021
6. Surrogate empowered Sim2Real transfer of deep reinforcement learning for ORC superheat control.
- Author
-
Lin, Runze, Luo, Yangyang, Wu, Xialai, Chen, Junghui, Huang, Biao, Su, Hongye, and Xie, Lei
- Subjects
DEEP reinforcement learning, RANKINE cycle, INDUSTRIAL wastes, MATHEMATICAL optimization, VIRTUAL prototypes, HEAT recovery - Abstract
The Organic Rankine Cycle (ORC) is widely used in industrial waste heat recovery due to its simple structure and easy maintenance. However, in the context of smart manufacturing in the process industry, traditional model-based optimization control methods are unable to adapt to the varying operating conditions of the ORC system or to sudden changes in operating modes. Deep reinforcement learning (DRL) has significant advantages under uncertainty, as it achieves control objectives directly by interacting with the environment, without requiring an explicit model of the controlled plant. Nevertheless, applying DRL directly to physical ORC systems presents unacceptable safety risks, and its generalization performance under model-plant mismatch is insufficient to support ORC control requirements. Therefore, this paper proposes a Sim2Real transfer learning-based DRL control method for ORC superheat control, which aims to provide a new simple, feasible, and user-friendly solution for energy system optimization control. Experimental results show that the proposed method greatly improves the training speed of DRL in ORC control problems and, through Sim2Real transfer, solves the agent's generalization problem under multiple operating conditions.
• A practical and user-friendly method, DRL Sim2Real transfer learning, is proposed.
• A surrogate-based virtual prototype for pre-training is used to ensure safety.
• Limitations of traditional model-based optimization control methods are addressed.
• Closed-loop data is used for VP modeling to enhance computational efficiency.
• The significance of the level of exploration during Sim2Real transfer is emphasized.
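The workflow the abstract describes (pre-train safely on a surrogate model, then transfer to the plant with a reduced level of exploration) can be illustrated with a tabular toy, far simpler than the paper's DRL controller; the chain environment, slip probability, and all names here are assumptions for illustration only:

```python
import numpy as np

N_STATES, GOAL = 5, 4
rng = np.random.default_rng(0)

def surrogate_step(s, a):
    """Cheap surrogate model: deterministic left/right moves on a chain."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), GOAL)
    return s2, float(s2 == GOAL), s2 == GOAL

def real_step(s, a):
    """'Real' plant: same chain, but actions occasionally slip,
    standing in for model-plant mismatch."""
    if rng.random() < 0.2:
        return s, 0.0, False
    return surrogate_step(s, a)

def q_learning(step_fn, Q, episodes, eps, alpha=0.5, gamma=0.9):
    """Tabular eps-greedy Q-learning; `eps` is the level of exploration."""
    for _ in range(episodes):
        s = 0
        for _ in range(30):
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = step_fn(s, a)
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
            s = s2
            if done:
                break
    return Q

# Pre-train with high exploration where mistakes are free (the surrogate)...
Q = q_learning(surrogate_step, np.zeros((N_STATES, 2)), episodes=200, eps=0.3)
# ...then transfer the learned values and fine-tune on the plant with
# low exploration, so the deployed controller stays close to safe behavior.
Q = q_learning(real_step, Q, episodes=50, eps=0.05)
```

The split between the two `eps` values mirrors the abstract's point that the level of exploration during Sim2Real transfer matters: exploration is spent on the surrogate, not on the physical system.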
- Published
- 2024
7. Des robots qui voient : apprentissage de comportements guidés par la vision (Robots that see: learning vision-guided behaviors)
- Author
-
Pashevich, Alexander, Laboratoire Jean Kuntzmann (LJK), Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes (UGA)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP ), Université Grenoble Alpes (UGA), Université Grenoble Alpes [2020-....], and Cordelia Schmid
- Subjects
Apprentissage profond, Apprentissage par renforcement, Sim2real transfer, Natural language processing, Reinforcement learning, [INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO], Deep learning, Robotics, Robotique, Traitement automatique du langage naturel - Abstract
Recently, vision and learning made significant progress that could improve robot control policies for complex environments. In this thesis, we introduce novel methods for learning robot control that improve the state of the art on challenging tasks. We also propose a novel approach to the task of learning control in dynamic environments guided by natural language.

Data availability is one of the major challenges for learning-based methods in robotics. While collecting a dataset from real robots is expensive and limits scalability, simulators provide an attractive alternative. Policies learned in simulation, however, usually do not transfer well to real scenes due to the domain gap between real and synthetic data. We propose a method that enables task-independent policy learning for real robots using only synthetic data. We demonstrate that our approach achieves excellent results on a range of real-world manipulation tasks.

Learning-based approaches can solve complex tasks directly from camera images but require non-trivial domain-specific knowledge for their supervision. This thesis introduces two novel methods for learning visually guided control policies given a limited amount of supervision. First, we propose a reinforcement learning approach that learns to combine skills using neither intermediate rewards nor complete task demonstrations. Second, we propose a new method to solve a task specified with a solution example, employing a novel disassembly procedure. While using no real images for training, we demonstrate the versatility of our methods in challenging real-world settings including temporary occlusions and dynamic scene changes.

Interaction and navigation defined by natural language instructions in dynamic environments pose significant challenges for learning-based methods. To handle long sequences of subtasks, we propose a novel method based on a multimodal transformer that encodes the full history of observations and actions. We also propose to leverage synthetic instructions as intermediate representations to improve understanding of complex human instructions.

For all the contributions, we validate our approaches against strong baselines and show that they outperform previous state-of-the-art methods.
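Encoding "the full history of observations and actions" typically means flattening an episode into one interleaved token sequence under a causal attention mask. The sketch below shows only that preprocessing idea; the function names are hypothetical and this is not the thesis's actual architecture:

```python
import numpy as np

def interleave_history(observations, actions):
    """Flatten an episode into one sequence [o_0, a_0, o_1, a_1, ...]
    so a transformer can attend over the entire history at once."""
    seq = []
    for o, a in zip(observations, actions):
        seq.append(("obs", o))
        seq.append(("act", a))
    return seq

def causal_mask(n):
    """Lower-triangular mask: token i may attend only to tokens <= i,
    so each action is predicted from past observations and actions."""
    return np.tril(np.ones((n, n), dtype=bool))
```

The interleaving keeps the temporal order explicit, while the mask prevents the model from conditioning on future steps during training.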
- Published
- 2021
Discovery Service for Jio Institute Digital Library