8 results for "Andrea F. Daniele"
Search Results
2. Inferring Compact Representations for Efficient Natural Language Understanding of Robot Instructions.
- Author
- Siddharth Patki, Andrea F. Daniele, Matthew R. Walter, and Thomas M. Howard
- Published
- 2019
- Full Text
- View/download PDF
3. A Multiview Approach to Learning Articulated Motion Models.
- Author
- Andrea F. Daniele, Thomas M. Howard, and Matthew R. Walter
- Published
- 2017
- Full Text
- View/download PDF
4. Navigational Instruction Generation as Inverse Reinforcement Learning with Neural Machine Translation.
- Author
- Andrea F. Daniele, Mohit Bansal, and Matthew R. Walter
- Published
- 2017
- Full Text
- View/download PDF
5. Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents
- Author
- Emilio Frazzoli, Gianmarco Bernasconi, Amaury Camus, Bhairav Mehta, Matthew R. Walter, Liam Paull, Andrea Censi, Jacopo Tani, Rohit Suri, Andrea F. Daniele, Aleksandar Petrov, Anthony Courchesne, and Tomasz Zaluska
- Subjects
- Computer science, Robotics (cs.RO), Machine Learning (cs.LG), Benchmarking, Remote evaluation, Software engineering, Robot, Artificial intelligence
- Abstract
As robotics matures and increases in complexity, it is more necessary than ever that robot autonomy research be reproducible. Compared to other sciences, benchmarking autonomy poses specific challenges, such as the complexity of the software stacks, the variability of the hardware, and the reliance on data-driven techniques, amongst others. In this paper, we describe a new concept for reproducible robotics research that integrates development and benchmarking, so that reproducibility is obtained "by design" from the beginning of the research/development process. We first provide the overall conceptual objectives to achieve this goal and then a concrete instance that we have built: the DUCKIENet. One of the central components of this setup is the Duckietown Autolab, a remotely accessible standardized setup that is itself also relatively low-cost and reproducible. When evaluating agents, careful definition of interfaces allows users to choose among local versus remote evaluation using simulation, logs, or remote automated hardware setups (a toy sketch of this interface idea follows this record). We validate the system by analyzing the repeatability of experiments conducted using the infrastructure and show that there is low variance across different robot hardware and across different remote labs.
IROS 2020; code available at https://github.com/duckietown
- Published
- 2020
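The central engineering idea in this abstract is a single agent-evaluation interface with interchangeable backends: simulation, logs, or a remote automated lab. Below is a minimal Python sketch of that pattern; every class, method, and URL is a hypothetical illustration, not the actual Duckietown/DUCKIENet API.

```python
# Minimal sketch (NOT the actual Duckietown API): one evaluation interface,
# interchangeable backends. All names here are hypothetical.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from statistics import mean, pvariance


@dataclass
class EpisodeResult:
    score: float  # e.g., a lane-following performance metric


class EvaluationBackend(ABC):
    """One interface, many implementations: sim, logs, or a remote Autolab."""

    @abstractmethod
    def run_episode(self, agent) -> EpisodeResult: ...


class SimulationBackend(EvaluationBackend):
    def run_episode(self, agent) -> EpisodeResult:
        # A real backend would step a simulator; we return a fixed score.
        return EpisodeResult(score=0.92)


class RemoteAutolabBackend(EvaluationBackend):
    def __init__(self, lab_url: str):
        self.lab_url = lab_url  # hypothetical endpoint of a standardized lab

    def run_episode(self, agent) -> EpisodeResult:
        # A real backend would ship the agent container to the remote lab.
        return EpisodeResult(score=0.89)


def evaluate(agent, backend: EvaluationBackend, episodes: int = 5):
    scores = [backend.run_episode(agent).score for _ in range(episodes)]
    # Low variance across backends and labs is the paper's repeatability claim.
    return mean(scores), pvariance(scores)


if __name__ == "__main__":
    agent = object()  # stand-in for a submitted agent
    for backend in (SimulationBackend(), RemoteAutolabBackend("https://lab.example")):
        m, v = evaluate(agent, backend)
        print(type(backend).__name__, f"mean={m:.3f} variance={v:.4f}")
```

The design point is that reproducibility "by design" falls out of the interface: because agents never touch a backend directly, the same submission can be re-run in simulation, against logs, or on remote hardware without modification.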
6. A Multiview Approach to Learning Articulated Motion Models
- Author
- Thomas M. Howard, Matthew R. Walter, and Andrea F. Daniele
- Subjects
- Computer science, Kinematics, Motion (physics), Multimodal learning, Human–computer interaction, Feature (computer vision), Graphical model, Representation (mathematics), Natural language
- Abstract
In order for robots to operate effectively in homes and workplaces, they must be able to manipulate the articulated objects common within environments built for and by humans. Kinematic models provide a concise representation of these objects that enables deliberate, generalizable manipulation policies. However, existing approaches to learning these models rely upon visual observations of an object's motion, and are subject to the effects of occlusions and feature sparsity. Natural language descriptions provide a flexible and efficient means by which humans can provide complementary information in a weakly supervised manner suitable for a variety of different interactions (e.g., demonstrations and remote manipulation). In this paper, we present a multimodal learning framework that incorporates both vision and language information acquired in situ to estimate the structure and parameters that define kinematic models of articulated objects. The visual signal takes the form of an RGB-D image stream that opportunistically captures object motion in an unprepared scene. Accompanying natural language descriptions of the motion constitute the linguistic signal. We model linguistic information using a probabilistic graphical model that grounds natural language descriptions to their referent kinematic motion. By exploiting the complementary nature of the vision and language observations, our method infers correct kinematic models for various multiple-part objects on which the previous state-of-the-art, vision-only system fails. We evaluate our multimodal learning framework on a dataset comprising a variety of household objects, and demonstrate a 23% improvement in model accuracy over the vision-only baseline. (A toy sketch of the vision-language fusion idea follows this record.)
- Published
- 2019
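The fusion step can be illustrated with a toy example: a vision-based likelihood over candidate kinematic model types (revolute, prismatic, rigid) is combined with a language-grounded likelihood, and the highest-posterior model wins. The paper uses a probabilistic graphical model over grounded language; the keyword grounding and all numbers below are illustrative assumptions only.

```python
# Toy fusion of vision and language evidence for kinematic model selection.
# The likelihoods and keyword grounding are illustrative, not the paper's model.
MODEL_TYPES = ["revolute", "prismatic", "rigid"]


def vision_likelihood(rgbd_motion_features):
    # Stand-in for a likelihood estimated from RGB-D feature tracks.
    # Occlusion and feature sparsity can leave this nearly uninformative.
    return {"revolute": 0.40, "prismatic": 0.35, "rigid": 0.25}


def language_likelihood(description):
    # Toy grounding: keywords in the description favor certain model types.
    cues = {"rotat": "revolute", "swing": "revolute",
            "slid": "prismatic", "pull out": "prismatic"}
    scores = {m: 1.0 for m in MODEL_TYPES}
    for cue, model in cues.items():
        if cue in description.lower():
            scores[model] += 2.0
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}


def fuse(rgbd_motion_features, description):
    vis = vision_likelihood(rgbd_motion_features)
    lang = language_likelihood(description)
    # Assuming the two signals are conditionally independent given the model,
    # the posterior is proportional to their product (uniform prior).
    post = {m: vis[m] * lang[m] for m in MODEL_TYPES}
    z = sum(post.values())
    return {m: p / z for m, p in post.items()}


posterior = fuse(None, "the cabinet door swings open on a hinge")
print(max(posterior, key=posterior.get), posterior)  # -> revolute, ...
```

Even when the vision term is nearly flat (as under occlusion), the language term can break the tie, which is the complementarity the abstract describes.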
7. The AI Driving Olympics at NeurIPS 2018
- Author
- A. Kirsten Bowser, Jacopo Tani, Matthew R. Walter, Breandan Considine, Andrea Censi, Ruslan Hristov, Gianmarco Bernasconi, Andrea F. Daniele, Julian Zilly, Liam Paull, Emilio Frazzoli, Claudio Ruch, Florian Golemo, Bhairav Mehta, Manfred Diaz, Jan Hakenberg, Sunil Mallya, Sergio Escalera, and Ralf Herbrich
- Subjects
- Computer science, Robotics (cs.RO), Deep learning, Reinforcement learning, Embodied cognition, Human–computer interaction, Artificial intelligence
- Abstract
Despite recent breakthroughs, the ability of deep learning and reinforcement learning to outperform traditional approaches to control physically embodied robotic agents remains largely unproven. To help bridge this gap, we present the "AI Driving Olympics" (AI-DO), a competition with the objective of evaluating the state of the art in machine learning and artificial intelligence for mobile robotics. Based on the simple and well-specified autonomous driving and navigation environment called "Duckietown," the AI-DO includes a series of tasks of increasing complexity, from simple lane following to fleet management. For each task, we provide tools for competitors to use in the form of simulators, logs, code templates, baseline implementations, and low-cost access to robotic hardware. We evaluate submissions in simulation online, on standardized hardware environments, and finally at the competition event. The first AI-DO, AI-DO 1, took place at the Neural Information Processing Systems (NeurIPS) conference in December 2018. In this paper, we describe AI-DO 1, including the motivation and design objectives, the challenges, the provided infrastructure, an overview of the approaches of the top submissions, and a frank assessment of what worked well and what needs improvement. The results of AI-DO 1 highlight the need for better benchmarks, which are lacking in robotics, as well as improved mechanisms to bridge the gap between simulation and reality. (A toy sketch of a lane-following submission loop follows this record.)
© Springer Nature Switzerland AG 2020. In: The NeurIPS '18 Competition: From Machine Learning to Intelligent Conversations (The Springer Series on Challenges in Machine Learning), ISBN 978-3-030-29135-8, ISBN 978-3-030-29134-1
- Published
- 2019
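To make the "code templates" the abstract mentions concrete, here is a minimal sketch of what a lane-following submission loop might look like. The environment is a stub standing in for a Duckietown simulator, and the observation/action conventions (lateral offset d and heading phi in; speed and steering out) are assumptions, not the actual AI-DO interfaces.

```python
# Sketch of a lane-following agent loop against a gym-style stub environment.
# StubDuckietownEnv is a stand-in; it is NOT the real Duckietown simulator.
import random


class StubDuckietownEnv:
    """Gym-style stub: observations are a lane-pose estimate (d, phi)."""

    def reset(self):
        self.t = 0
        return (random.uniform(-0.1, 0.1), random.uniform(-0.2, 0.2))

    def step(self, action):
        speed, steering = action
        self.t += 1
        obs = (random.uniform(-0.1, 0.1), random.uniform(-0.2, 0.2))
        reward = speed - abs(obs[0])  # toy reward: go fast, stay centered
        done = self.t >= 100
        return obs, reward, done, {}


def pd_lane_follower(obs, k_d=4.0, k_phi=2.0):
    # Baseline-style PD controller on lateral offset d and heading phi.
    d, phi = obs
    steering = -k_d * d - k_phi * phi
    return (0.5, steering)  # (speed, steering)


env = StubDuckietownEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    obs, reward, done, _ = env.step(pd_lane_follower(obs))
    total += reward
print(f"episode return: {total:.2f}")
```

A baseline of this shape gives entrants a working reference point, and the same agent function can then be run in simulation, on logs, or on the low-cost hardware the competition provides.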
8. Navigational Instruction Generation as Inverse Reinforcement Learning with Neural Machine Translation
- Author
- Andrea F. Daniele, Matthew R. Walter, and Mohit Bansal
- Subjects
- Computer science, Machine translation, Natural language generation, Human–robot interaction, Robot kinematics, Robotics (cs.RO), Computation and Language (cs.CL), Machine Learning (cs.LG), Artificial Intelligence (cs.AI), Natural language
- Abstract
Modern robotics applications that involve human-robot interaction require robots to be able to communicate with humans seamlessly and effectively. Natural language provides a flexible and efficient medium through which robots can exchange information with their human partners. Significant advancements have been made in developing robots capable of interpreting free-form instructions, but less attention has been devoted to endowing robots with the ability to generate natural language. We propose a navigational guide model that enables robots to generate natural language instructions that allow humans to navigate a priori unknown environments. We first decide which information to share with the user according to their preferences, using a policy trained from human demonstrations via inverse reinforcement learning. We then "translate" this information into a natural language instruction using a neural sequence-to-sequence model that learns to generate free-form instructions from natural language corpora. We evaluate our method on a benchmark route instruction dataset and achieve a BLEU score of 72.18% when compared to human-generated reference instructions. We additionally conduct navigation experiments with human participants that demonstrate that our method generates instructions that people follow as accurately and easily as those produced by humans. (A toy sketch of this two-stage pipeline follows this record.)
- Published
- 2016
- Full Text
- View/download PDF
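The abstract describes a two-stage pipeline: an IRL-trained policy first selects which route information to convey, and a neural sequence-to-sequence model then "translates" it into a free-form instruction. The sketch below mirrors that structure with toy stand-ins (a salience threshold in place of the learned policy, a template realizer in place of the seq2seq decoder); the route data and all names are hypothetical.

```python
# Toy two-stage instruction generation: content selection, then realization.
# Both stages are stand-ins for the paper's IRL policy and seq2seq model.
ROUTE = [
    {"action": "turn left", "landmark": "the sofa", "salience": 0.9},
    {"action": "go straight", "landmark": "a plain wall", "salience": 0.2},
    {"action": "turn right", "landmark": "the easel", "salience": 0.8},
]


def select_content(route, threshold=0.5):
    # Stage 1 stand-in: the IRL-trained policy scores which facts to mention;
    # here a fixed salience threshold plays that role.
    return [step for step in route if step["salience"] >= threshold]


def realize(steps):
    # Stage 2 stand-in: a template realizer in place of the seq2seq decoder.
    clauses = [f"{s['action']} at {s['landmark']}" for s in steps]
    return "First " + ", then ".join(clauses) + "."


print(realize(select_content(ROUTE)))
# -> First turn left at the sofa, then turn right at the easel.
```

Separating "what to say" from "how to say it" is the key design choice: the selection policy can match user preferences learned from demonstrations while the realizer stays free to generate fluent surface forms.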