12 results for "Andrea Cherubini"
Search Results
2. Point Clouds With Color: A Simple Open Library for Matching RGB and Depth Pixels from an Uncalibrated Stereo Pair
- Author
-
Jordan Nowak, Jean-Pierre Daures, Philippe Fraisse, Andrea Cherubini, Interactive Digital Humans (IDH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM), and Clinique Médicale Beausoleil
- Subjects
Source code, Pixel, Matching (graph theory), Computer science, Distortion (optics), Point cloud, [INFO.INFO-CV] Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], Set (abstract data type), [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], RGB color model, Robot, Computer vision, Artificial intelligence
- Abstract
International audience; Present-day robots often rely, for visual perception, on a pair of cameras: one for color and one for depth. While for integrated RGB-D cameras the manufacturer takes care of aligning the two images, this is not done when two commercial cameras are coupled (e.g., on the Pepper robot) without prior calibration. In this article, we present a simple open library for reconstructing the 3D position of RGB pixels without knowing the parameters of the two cameras. The library requires a simple preliminary calibration step based on pixel-to-pixel matching, and then automatically reconstructs 3D colored point clouds from a given set of pixels in the RGB image. The source code is available at https://github.com/jordan-nowak/OpenHSML.
- Published
- 2021
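The back-projection at the heart of such RGB-depth matching can be sketched with a standard pinhole model. This is a generic illustration, not the OpenHSML interface; the intrinsic parameters (fx, fy, cx, cy) and the depth value are assumed for the example.

```python
# Generic pinhole back-projection of a depth pixel to a 3D point, as used
# when fusing a color and a depth camera. Illustrative sketch only: the
# intrinsics (fx, fy, cx, cy) are assumed values, and this is not the
# OpenHSML interface.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Return the 3D point (in the camera frame) seen at pixel (u, v)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: the principal point at 1.5 m depth maps to (0, 0, 1.5).
point = backproject(320, 240, 1.5, 525.0, 525.0, 320.0, 240.0)
```

Once the preliminary pixel-to-pixel matching links an RGB pixel to its depth pixel, attaching the RGB value to the back-projected point yields a colored point cloud.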
3. A framework for intuitive collaboration with a mobile manipulator
- Author
-
Andrea Cherubini, Aïcha Fonte, Benjamin Navarro, Philippe Fraisse, Gérard Poisson, Interactive Digital Humans (IDH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM), Département Images, Robotique, Automatique et Signal [Orléans] (IRAUS), Laboratoire pluridisciplinaire de recherche en ingénierie des systèmes, mécanique et énergétique (PRISME), Université d'Orléans (UO)-Institut National des Sciences Appliquées - Centre Val de Loire (INSA CVL), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Université d'Orléans (UO)-Institut National des Sciences Appliquées - Centre Val de Loire (INSA CVL), Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA), and ANR-14-CE27-0016,SISCob,Capteur de Sécurité intelligente pour la Cobotique(2014)
- Subjects
Engineering, Physical human-robot interaction, Mobile manipulator, Control engineering, Robotics and Control, Redundancy Resolution, Angular deviation, Redundancy (engineering), [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Robot, [INFO.INFO-HC] Computer Science [cs]/Human-Computer Interaction [cs.HC], Human operator, Omnidirectional antenna
- Abstract
International audience; In this paper, we present a control strategy that enables intuitive physical human-robot collaboration with mobile manipulators equipped with an omnidirectional base. When interacting with a human operator, intuitiveness of operation is a major concern. To this end, we propose a redundancy resolution that keeps the mobile base fixed when the arm works locally, and moves it only when the robot approaches a set of constraints: distance to singular poses, minimum manipulability, distance to objects, and angular deviation. Experimental results with a Kuka LWR4 arm mounted on a Neobotix MPO700 mobile base validate the proposed approach.
- Published
- 2017
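The "move the base only near constraints" behavior described above can be sketched with a smooth activation weight. The smoothstep profile and the thresholds below are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of the redundancy-resolution idea: the omnidirectional base stays
# fixed while the arm operates far from its constraints, and is
# progressively recruited as a constraint measure (e.g., manipulability)
# degrades. The smoothstep activation and the thresholds are illustrative
# assumptions, not the paper's formulation.

def base_activation(measure, safe, critical):
    """Weight in [0, 1]: 0 while measure >= safe (base locked),
    1 once measure <= critical (base fully active), smooth in between."""
    if measure >= safe:
        return 0.0
    if measure <= critical:
        return 1.0
    t = (safe - measure) / (safe - critical)
    return 3.0 * t * t - 2.0 * t ** 3  # smoothstep: C1-continuous blend
```

The weight would then scale the base's contribution in the redundancy resolution, so base motion fades in continuously as a constraint is approached.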
4. Tentacle-based moving obstacle avoidance for omnidirectional robots with visibility constraints
- Author
-
Robin Passama, Andrea Cherubini, Abdellah Khelloufi, Nouara Achour, Interactive Digital Humans (IDH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM), Laboratoire de Robotique Parallélisme Electroénergétique [Alger] (LRPE), and Université des Sciences et de la Technologie Houari Boumediene [Alger] (USTHB)
- Subjects
Visibility (geometry), Mobile robot, Field of view, [SPI.AUTO] Engineering Sciences [physics]/Automatic, Task (project management), Robot control, Obstacle avoidance, [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Robot, Computer vision, Artificial intelligence, Omnidirectional antenna
- Abstract
International audience; This paper presents a tentacle-based obstacle avoidance scheme for omnidirectional mobile robots that must satisfy visibility constraints during navigation. The navigation task consists of driving the robot towards a visual target in the presence of static or moving obstacles. The target is acquired by an on-board camera, while the obstacles surrounding the robot are sensed by laser range scanners. To perform this task, the robot must avoid the obstacles while keeping the target in its field of view. The approach is validated in both simulated and real experiments.
- Published
- 2017
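A minimal sketch of tentacle-style selection: candidate motion arcs are scored for clearance, and among the collision-free ones the arc best aligned with the target bearing is chosen. The (heading, clearance) data layout is an assumption for illustration; the paper's visibility constraint is not modeled here.

```python
# Minimal sketch of tentacle-based selection: each candidate arc
# ("tentacle") carries a heading and the clearance measured along it by the
# range scanners; collision-prone arcs are discarded and the remaining arc
# closest to the target bearing is chosen. The (heading, clearance)
# representation is an illustrative assumption.

def pick_tentacle(tentacles, target_heading):
    """tentacles: iterable of (heading, clearance); clearance <= 0 means the
    arc collides. Returns the best admissible heading, or None to stop."""
    free = [(h, c) for h, c in tentacles if c > 0.0]
    if not free:
        return None  # no admissible tentacle: the robot must stop
    return min(free, key=lambda hc: abs(hc[0] - target_heading))[0]
```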
5. Walking pattern generators designed for physical collaboration
- Author
-
Pierre-Brice Wieber, Don Joven Agravante, Alexander Sherikov, Abderrahmane Kheddar, Andrea Cherubini, Interactive Digital Humans (IDH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM), Modelling, Simulation, Control and Optimization of Non-Smooth Dynamical Systems (BIPOP), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Laboratoire Jean Kuntzmann (LJK ), Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP )-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019])-Centre National de la Recherche Scientifique (CNRS)-Université Grenoble Alpes [2016-2019] (UGA [2016-2019]), Joint Robotics Laboratory [Japan] (CNRS-AIST JRL), National Institute of Advanced Industrial Science and Technology (AIST)-Centre National de la Recherche Scientifique (CNRS), Centre National de la Recherche Scientifique (CNRS)-National Institute of Advanced Industrial Science and Technology (AIST), Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS), and Joint Robotics Laboratory (CNRS-AIST JRL )
- Subjects
Engineering, Model-predictive control, Control engineering, Physical interaction, Reduced model, Model predictive control, Human-humanoid physical interaction, Humanoid walking, [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Simulation, Humanoid robot
- Abstract
International audience; This paper addresses the design of humanoid walking pattern generators for physical collaboration. A particular use case is a humanoid robot helping a human to carry a large and/or heavy object. To do this, we construct a reduced model that takes physical interaction into account. This model is used in a model predictive control framework to generate separate behaviors for acting as a follower or as a leader. The approach is validated both in simulation and on the HRP-4 humanoid robot.
- Published
- 2016
6. An integrated framework for humanoid embodiment with a BCI
- Author
-
Pierre Gergondet, Andrea Cherubini, Damien Petit, Abderrahmane Kheddar, Joint Robotics Laboratory [Japan] (CNRS-AIST JRL), National Institute of Advanced Industrial Science and Technology (AIST)-Centre National de la Recherche Scientifique (CNRS), Interactive Digital Humans (IDH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), and Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)
- Subjects
Personal robot, Engineering, Social robot, Mobile robot, Mobile robot navigation, [SPI.AUTO] Engineering Sciences [physics]/Automatic, Robot control, Articulated robot, Robot, Computer vision, Artificial intelligence, Humanoid robot
- Abstract
International audience; This paper presents a framework to embody a user (e.g., a disabled person) in a humanoid robot controlled by means of a brain-computer interface (BCI). With our framework, the robot can interact with the environment or assist its user. The low frequency and accuracy of the BCI commands are compensated by vision tools, such as object recognition and mapping techniques, as well as by shared-control approaches. As a result, the proposed framework offers intuitive, safe, and accurate robot navigation towards an object or a person. The generic aspect of the framework is demonstrated by two complex experiments, in which the user controls the robot to serve him a drink and to raise his own arm.
- Published
- 2015
7. Human-humanoid joint haptic table carrying task with height stabilization using vision
- Author
-
Abderrahmane Kheddar, Don Joven Agravante, Antoine Bussy, Andrea Cherubini, Interactive Digital Humans (IDH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM), Joint Robotics Laboratory [France] (CNRS-AIST JRL), National Institute of Advanced Industrial Science and Technology (AIST)-Centre National de la Recherche Scientifique (CNRS), European Project: 288533,EC:FP7:ICT,FP7-ICT-2011-7,ROBOHOW.COG(2012), Centre National de la Recherche Scientifique (CNRS)-National Institute of Advanced Industrial Science and Technology (AIST), Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS), and Joint Robotics Laboratory (CNRS-AIST JRL )
- Subjects
Engineering, Physical Human-Robot Interaction, Visual servoing, Robot control, Control theory, Trajectory, [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Torque, Computer vision, Artificial intelligence, Human and humanoid skills/cognition/interaction, Pose, Simulation, Humanoid robot, Haptic technology
- Abstract
International audience; In this paper, a first step is taken towards using vision in human-humanoid haptic joint actions, which are characterized by physical interaction throughout the execution of a common goal. Because of this, most of the focus has been on force/torque-based control. However, force/torque information is not rich enough for some tasks. Here, a particular case is addressed: height stabilization during table carrying. To achieve this, a visual servoing controller generates a reference trajectory for the impedance controller. The control law design is fully described, along with important considerations for the vision algorithm and a framework that makes pose estimation robust during the carrying task. We demonstrate all this in an experiment where a human and the HRP-2 humanoid jointly transport a beam, using combined force and vision data to adjust the interaction impedance while keeping the beam horizontal.
- Published
- 2013
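The vision-impedance coupling described above can be sketched in one dimension: the visual servoing loop supplies a height reference, and an impedance law lets measured force deviate the motion compliantly around it. Gains, time step, and signals below are illustrative assumptions, not the tuned controller of the paper.

```python
# One-dimensional sketch of the coupling: a vision-generated height
# reference z_ref feeds an impedance law m*z'' + b*z' + k*(z - z_ref) = f,
# so an external force f deviates the motion compliantly around the visual
# trajectory. Gains and time step are illustrative assumptions.

def impedance_step(z, dz, z_ref, f_ext, m=5.0, b=40.0, k=200.0, dt=0.01):
    """One semi-implicit Euler step of the impedance dynamics."""
    ddz = (f_ext - b * dz - k * (z - z_ref)) / m
    dz = dz + ddz * dt
    z = z + dz * dt
    return z, dz

# With no external force, the height settles on the visual reference.
z, dz = 0.0, 0.0
for _ in range(2000):  # 20 s of simulated time
    z, dz = impedance_step(z, dz, 0.75, 0.0)
```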
8. Multimodal control for human-robot cooperation
- Author
-
André Crosnier, Philippe Fraisse, Robin Passama, Arnaud Meline, Andrea Cherubini, Interactive Digital Humans (IDH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM), and Keith, François
- Subjects
Ubiquitous robot, Engineering, Personal robot, Social robot, Mobile robot, Visual servoing, Robot learning, Human-Robot Interaction, Mobile robot navigation, Robot control, Human-computer interaction, [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Computer vision, Artificial intelligence
- Abstract
JTCF Novel Technology Paper Award for Amusement Culture finalist. Associated video: http://www.youtube.com/watch?v=1Ei8uS9hgnQ; International audience; For intuitive human-robot collaboration, the robot must quickly adapt to human behavior. To this end, we propose a multimodal sensor-based control framework that enables a robot to recognize human intention and adapt its control strategy accordingly. Our approach is marker-less, relies on a Kinect and an on-board camera, and is based on a unified task formalism. We validate it in a mock-up industrial scenario, in which human and robot collaborate to insert screws in a flank.
- Published
- 2013
9. A redundancy-based approach for obstacle avoidance in mobile robot navigation
- Author
-
François Chaumette and Andrea Cherubini
- Subjects
Robot kinematics, Occupancy grid mapping, Computer science, Mobile robot, Motion control, Visual servoing, Mobile robot navigation, Obstacle avoidance, Redundancy (engineering), Robot, Computer vision, Artificial intelligence
- Abstract
In this paper, we propose a framework for visual navigation with simultaneous obstacle avoidance. The obstacles are modeled using a vortex potential field derived from an occupancy grid. Kinematic redundancy guarantees that obstacle avoidance and navigation are achieved concurrently, and the whole scheme is purely sensor-based. The problem is solved in both an obstacle-free and a dangerous context, and the control law is smoothed in intermediate situations. In a series of simulations, we show that with our framework a robot can replay a taught visual path while avoiding collisions, even in the presence of visual occlusions.
- Published
- 2010
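The vortex field mentioned above can be sketched for a single occupied grid cell: where a classical repulsive field pushes the robot straight away from the obstacle, the vortex field circulates around it. The 1/r^2 magnitude profile and the gain below are illustrative assumptions.

```python
# Sketch of a vortex potential field: instead of a repulsive force pointing
# away from an occupied grid cell, the vortex field circulates around it,
# steering the robot round the obstacle rather than pushing it back. The
# 1/r^2 magnitude profile and the gain are illustrative assumptions.

def vortex_velocity(robot, cell, gain=1.0, ccw=True):
    """2D velocity induced at point 'robot' by occupied grid cell 'cell'."""
    dx, dy = robot[0] - cell[0], robot[1] - cell[1]
    r2 = dx * dx + dy * dy
    if r2 == 0.0:
        return (0.0, 0.0)  # degenerate: robot exactly on the cell
    s = gain / r2
    # rotate the repulsive direction by +/- 90 degrees: tangential field
    return (-dy * s, dx * s) if ccw else (dy * s, -dx * s)
```

Summing the contributions of all occupied cells gives the avoidance velocity; being tangential, it steers around obstacles instead of driving the robot into them.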
10. An extended policy gradient algorithm for robot task learning
- Author
-
Luca Iocchi, Francesca Giannone, Pier Francesco Palamara, Andrea Cherubini, Dipartimento di Informatica e Sistemistica [Rome], and Università degli Studi di Roma 'La Sapienza' = Sapienza University [Rome]
- Subjects
Computer science, Active learning (machine learning), Robot performance, Online machine learning, Machine learning, Motion control, Optimal parameters, Robot learning, Rate of convergence, [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Robot, Reinforcement learning, Artificial intelligence, International conferences
- Abstract
International audience; In real-world robotic applications, many factors, both at low level (e.g., vision and motion control parameters) and at high level (e.g., the behaviors), determine the quality of the robot's performance. Thus, for many tasks, robots require fine tuning of the parameters in the implementation of behaviors and basic control actions, as well as in strategic decisional processes. In recent years, machine learning techniques have been used to find optimal parameter sets for different behaviors. However, a drawback of learning techniques is time consumption: in practical applications, methods designed for physical robots must be effective with small amounts of data. In this paper, we present a method for concurrently learning the best strategy and the optimal parameters, by extending the policy gradient reinforcement learning algorithm. The results of our experimental work, in a simulated environment and on a real robot, show a very high convergence rate.
- Published
- 2007
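A toy sketch of the underlying policy-gradient parameter search: the gradient of a return function is estimated by finite differences over perturbed parameter sets, then a step is taken uphill. The quadratic return below is a stand-in for costly evaluations on a physical robot; step sizes are assumptions, and the paper's concurrent strategy learning is not reproduced.

```python
# Toy sketch of policy-gradient parameter optimization: the gradient of an
# unknown return function is estimated by finite differences over perturbed
# parameter sets, then the parameters step uphill. The quadratic return
# below stands in for evaluations on a physical robot; eps and lr are
# illustrative assumptions.

def fd_policy_gradient_step(theta, evaluate, eps=0.05, lr=0.1):
    """One finite-difference gradient-ascent step on parameter list theta."""
    grad = []
    for i in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[i] += eps
        dn[i] -= eps
        grad.append((evaluate(up) - evaluate(dn)) / (2.0 * eps))
    return [t + lr * g for t, g in zip(theta, grad)]

# Stand-in return function, maximized at (1, -2).
reward = lambda p: -(p[0] - 1.0) ** 2 - (p[1] + 2.0) ** 2
theta = [0.0, 0.0]
for _ in range(200):
    theta = fd_policy_gradient_step(theta, reward)
```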
11. Dual-arm robotic manipulation of flexible cables
- Author
-
Benjamin Navarro, André Crosnier, Philippe Fraisse, Jihong Zhu, Andrea Cherubini, Interactive Digital Humans (IDH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), and Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)
- Subjects
Computer science, Robot manipulator, Deformation, [SPI.AUTO] Engineering Sciences [physics]/Automatic, Task (computing), Control theory, [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Robot, Manipulator, Fourier series
- Abstract
International audience; Deforming a cable into a desired (reachable) shape is a trivial task for a human, even without knowing the internal dynamics of the cable. This paper proposes a framework for cable shape manipulation with multiple robot manipulators. The shape is parameterized by a Fourier series, and a local deformation model of the cable is estimated online from the shape parameters. Using this deformation model, a velocity control law is applied to the robots to deform the cable into the desired shape. Experiments on a dual-arm manipulator validate the framework.
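The Fourier parameterization of the cable shape can be sketched as follows. The planar profile and the coefficient layout are assumptions for illustration; the paper's online deformation model and velocity controller are not reproduced.

```python
import math

# Sketch of the shape parameterization: a planar cable profile y(s),
# s in [0, 1], is approximated by a truncated Fourier series, so the whole
# shape is summarized by a short coefficient vector. The coefficient layout
# [a0, a1, b1, a2, b2, ...] is an illustrative assumption.

def cable_shape(coeffs, s):
    """Evaluate y(s) = a0 + sum_k a_k cos(2 pi k s) + b_k sin(2 pi k s)."""
    y = coeffs[0]
    for k in range(1, (len(coeffs) - 1) // 2 + 1):
        a, b = coeffs[2 * k - 1], coeffs[2 * k]
        y += a * math.cos(2.0 * math.pi * k * s)
        y += b * math.sin(2.0 * math.pi * k * s)
    return y
```

A desired shape is then simply a target coefficient vector, and a controller can drive the estimated coefficients towards it.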
12. Dual-Arm Relative Tasks Performance Using Sparse Kinematic Control
- Author
-
Damien Sallé, André Crosnier, Philippe Fraisse, Sonny Tarbouriech, Andrea Cherubini, Benjamin Navarro, Interactive Digital Humans (IDH), Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM)-Centre National de la Recherche Scientifique (CNRS)-Université de Montpellier (UM), TECNALIA France, and Tecnalia
- Subjects
Robot kinematics, Computer science, Control engineering, Kinematics, Task (project management), [SPI.AUTO] Engineering Sciences [physics]/Automatic, Task analysis, Robot, [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Control
- Abstract
International audience; To make production lines more flexible, dual-arm robots are good candidates for deployment in autonomous assembly units. In this paper, we propose a sparse kinematic control strategy that minimizes the number of joints actuated for a coordinated task between two arms. The strategy is based on a hierarchical sparse QP architecture. We present experimental results that highlight the capability of this architecture to produce sparser motions (for an assembly task) than those obtained with standard controllers.
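Why a sparsity-inducing objective moves fewer joints can be seen in a two-joint toy case: for a single task equation xdot = j1*q1 + j2*q2, the minimum-Euclidean-norm solution actuates both joints, while the minimum-l1 solution actuates only the most effective one. This stands in for the paper's hierarchical sparse QP, which is not reproduced here.

```python
# Toy illustration of sparse kinematic control: for one task equation
# xdot = j1*q1 + j2*q2, the least-squares (l2) solution moves both joints,
# while the l1-minimal solution moves only the joint with the larger
# Jacobian entry (a vertex of the underlying linear program). This two-joint
# case is an illustrative stand-in for a hierarchical sparse QP.

def l2_solution(j1, j2, xdot):
    """Minimum Euclidean-norm joint velocities (right pseudoinverse)."""
    n2 = j1 * j1 + j2 * j2
    return (j1 * xdot / n2, j2 * xdot / n2)

def l1_solution(j1, j2, xdot):
    """Minimum |q1| + |q2| joint velocities: only the joint with the larger
    Jacobian entry moves."""
    if abs(j1) >= abs(j2):
        return (xdot / j1, 0.0)
    return (0.0, xdot / j2)
```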