177 results for "autonomous robots"
Search Results
2. UAS-Borne Radar for Autonomous Navigation and Surveillance Applications
- Author
-
Christos Milias, Rasmus B. Andersen, Bilal Muhammad, Jes T. B. Kristensen, Pavlos I. Lazaridis, Zaharias D. Zaharis, Albena Mihovska, and Dan D. S. Hermansen
- Subjects
UAS-borne radar, Radar, Autonomous robots, Mechanical Engineering, Radar antennas, Automotive Engineering, Radar applications, drone detection, Autonomous navigation, Airborne radar, Drones, Radar detection, Computer Science Applications
- Abstract
The autonomous navigation of UAS requires, among other capabilities, detect-and-avoid as a prerequisite for enabling wide-ranging applications, including the transportation of goods and people. This article presents the design, implementation, and experimental results of a UAS-borne radar system for detecting drones. The applications of the proposed system include not only detect-and-avoid systems for safe and autonomous navigation of unmanned aircraft systems but also airborne surveillance of malicious drones in controlled or restricted airspace for mitigating security and privacy threats. The system performance in terms of maximum detection range is evaluated through field tests. The experimental results show that the proposed UAS-borne radar can detect a DJI Phantom 4 and a DJI Matrice 600 Pro at maximum distances of 440 and 500 meters, respectively. The article also provides insights into system implementation and integration aspects, discusses future research directions, and stresses the need for standardization efforts to benchmark the required performance levels for UAS-borne radars.
- Published
- 2023
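The maximum detection ranges reported in the abstract above are governed by the classical monostatic radar range equation. As general background (not taken from the article; the function name and all parameter values are illustrative), a minimal sketch:

```python
import math

def radar_max_range(p_t, gain, wavelength, rcs, p_min):
    """Classical radar range equation solved for the maximum detection range:
    R_max = (P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * P_min)) ** 0.25.
    All quantities in SI units; gain is linear (not dB)."""
    return (p_t * gain ** 2 * wavelength ** 2 * rcs
            / ((4 * math.pi) ** 3 * p_min)) ** 0.25
```

Because R_max scales with the radar cross-section as sigma^(1/4), a physically smaller target such as the Phantom 4 would be detected at a somewhat shorter range than the larger Matrice 600 Pro, consistent with the 440 m versus 500 m figures above.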
3. Underwater 3D Scanner to Counteract Refraction: Calibration and Experimental Results
- Author
-
Josep Forest, Miguel Castillón, and Pere Ridao Rodriguez
- Subjects
Submersibles, Vehicles submergibles, Lectors òptics, Control and Systems Engineering, Autonomous robots, Robots autònoms, Optical scanners, Three-dimensional imaging, Visualització tridimensional (Informàtica), Three-dimensional display systems, Electrical and Electronic Engineering, Imatgeria tridimensional, Computer Science Applications
- Abstract
Underwater 3-D laser scanners are an essential type of sensor used by unmanned underwater vehicles (UUVs) for operations such as inspection, navigation, and object recognition and manipulation. This article presents a novel 3-D laser scanner, which uses a 2-axis mirror to project straight lines into the water by compensating for refraction-related distortions. This is achieved by projecting optimally curved lines, so that the refraction they undergo when entering the water transforms them into straight lines. The relevance of this approach lies in the fact that 3-D triangulation using planes is noticeably faster than using elliptic cones. The goal of this work is twofold: first, to prove that refraction-related distortions can in practice be compensated for by using a 2-axis mirror, and second, to present a simple calibration algorithm that only needs to compute the coefficients of polynomial functions. To the best of the authors’ knowledge, the prototype presented in this article is the first laser line scanner that actively counteracts the refraction of the projected light in the context of underwater robotics.
- Published
- 2022
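The refraction compensation described in the abstract above rests on Snell's law at the air-water interface. A minimal geometric sketch (not the authors' calibration code; the refractive indices and function names are assumptions):

```python
import math

N_AIR, N_WATER = 1.000, 1.333   # assumed refractive indices

def refract_angle(theta_air):
    """Angle (radians) of the ray in water for a given incidence angle in air,
    via Snell's law: n_air * sin(theta_air) = n_water * sin(theta_water)."""
    return math.asin(N_AIR * math.sin(theta_air) / N_WATER)

def water_hit_x(theta_air, interface_depth, target_depth):
    """Horizontal displacement of a ray that leaves the origin at theta_air
    from the vertical, crosses the air-water interface at interface_depth,
    and continues down to target_depth."""
    theta_w = refract_angle(theta_air)
    return (interface_depth * math.tan(theta_air)
            + (target_depth - interface_depth) * math.tan(theta_w))
```

Projecting a straight underwater line then amounts to inverting this mapping per point, i.e. searching for the in-air mirror angle whose refracted ray lands on the desired coordinate; the article approximates that correction with fitted polynomial coefficients.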
4. MROS: runtime adaptation for robot control architectures
- Author
-
Darko Bozhinoski, Mario Garzon Oviedo, Nadia Hammoudeh Garcia, Harshavardhan Deshpande, Gijs van der Hoorn, Jon Tjerngren, Andrzej Wąsowski, Carlos Hernandez Corbato, and Publica
- Subjects
FOS: Computer and information sciences, models-at-runtime, control architecture, Computer Science Applications, Software Engineering (cs.SE), Human-Computer Interaction, autonomous robots, Computer Science - Robotics, Computer Science - Software Engineering, Hardware and Architecture, Control and Systems Engineering, ontologies, Self-adaptive systems, Robotics (cs.RO), Software
- Abstract
Known attempts to build autonomous robots rely on complex control architectures, often implemented with the Robot Operating System (ROS) platform. Runtime adaptation is needed in these systems to cope with component failures and with contingencies arising from dynamic environments; otherwise, these affect the reliability and quality of the mission execution. Existing proposals on how to build self-adaptive systems in robotics usually require a major re-design of the control architecture and rely on complex tools unfamiliar to the robotics community. Moreover, they are hard to reuse across applications. This paper presents MROS: a model-based framework for run-time adaptation of robot control architectures based on ROS. MROS uses a combination of domain-specific languages to model architectural variants and capture mission quality concerns, together with an ontology-based implementation of the MAPE-K and meta-control visions for run-time adaptation. The experimental results obtained by applying MROS to two realistic ROS-based robotic demonstrators show the benefits of our approach in terms of the quality of the mission execution, and MROS' extensibility and reusability across robotic applications.
- Published
- 2022
5. Mensch-Roboter-Kollaboration in der Kommissionierung
- Author
-
Christoph H. Glock, Thomas Semar, Minqi Zhang, Eric H. Grosse, and Sven Winkelhaus
- Subjects
021103 operations research, Computer science, Strategy and Management, 05 social sciences, 0211 other engineering and technologies, General Engineering, Autonome Roboter, 02 engineering and technology, Management Science and Operations Research, Human Robot Collaboration, Kommissionierung, Logistik 4.0, Mensch-Roboter-Kooperation, Logistics 4.0, 0502 economics and business, Order Picking, Autonomous Robots, Simulation, 050203 business & management
- Abstract
In most companies, order picking involves a high share of manual labor and is very time- and cost-intensive as well as error-prone. Autonomous robots have been developed to automate this process. Collaborative order picking is a modularized solution that combines the advantages of manual and robot-supported picking. This contribution systematizes collaborative order picking and develops a simulation model to highlight its potential.
- Published
- 2021
6. Autonomous Robot Twin System for Room Acoustic Measurements
- Author
-
Ville Pulkki, Götz Georg, Abraham Martinez Ornelas, Sebastian J. Schlecht, Communication Acoustics: Spatial Sound and Psychoacoustics, Dept Signal Process and Acoust, Department of Media, Aalto-yliopisto, and Aalto University
- Subjects
room acoustic measurements, autonomous robots, sound field analysis, Computer science, Acoustics, General Engineering, Room impulse response, Spatial Audio, Autonomous robot, Room acoustics, Music, Room Acoustics
- Abstract
While room acoustic measurements can accurately capture the sound field of real rooms, they are usually time-consuming and tedious if many positions need to be measured. Therefore, this contribution presents the Autonomous Robot Twin System for Room Acoustic Measurements (ARTSRAM) to autonomously capture large sets of room impulse responses with variable sound source and receiver positions. The proposed implementation of the system consists of two robots, one of which is equipped with a loudspeaker, while the other is equipped with a microphone array. Each robot contains collision sensors, enabling it to move autonomously within the room. The robots move according to a random walk procedure to ensure high variability between measured positions. A tracking system provides position data matching the respective measurements. After outlining the robot system, this paper presents a validation in which anechoic responses of the robots are presented and the movement paths resulting from the random walk procedure are investigated. Additionally, the quality of the obtained room impulse responses is demonstrated with a sound field visualization. In summary, the evaluation of the robot system indicates that large sets of diverse and high-quality room impulse responses can be captured with the system in an automated way. Such large sets of measurements will benefit research in the fields of room acoustics and acoustic virtual reality.
- Published
- 2021
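The random-walk measurement procedure mentioned in the ARTSRAM abstract can be illustrated with a heavily simplified model (hypothetical, not the authors' code: the room is an empty rectangle and the collision sensors are replaced by a bounds check):

```python
import math
import random

def random_walk(steps, room_w=6.0, room_h=4.0, step_len=0.5, seed=0):
    """Generate a sequence of measurement positions inside a rectangular room.
    A candidate step in a random direction is rejected (the robot retries) if
    it would leave the room, mimicking a collision-sensor stop-and-retry."""
    rng = random.Random(seed)
    x, y = room_w / 2, room_h / 2              # start in the middle of the room
    path = [(x, y)]
    while len(path) < steps:
        ang = rng.uniform(0.0, 2.0 * math.pi)  # random heading
        nx = x + step_len * math.cos(ang)
        ny = y + step_len * math.sin(ang)
        if 0.0 <= nx <= room_w and 0.0 <= ny <= room_h:  # "collision" check
            x, y = nx, ny
            path.append((x, y))
    return path
```

Each rejected candidate step models the robot stopping at an obstacle and retrying in a new direction; the accepted positions are where impulse responses would be measured.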
7. Exploiting the Internet Resources for Autonomous Robots in Agriculture
- Author
-
Luis Emmi, Roemi Fernández, Pablo Gonzalez-de-Santos, Matteo Francia, Matteo Golfarelli, Giuliano Vitali, Hendrik Sandmann, Michael Hustedt, Merve Wollweber, European Commission, Emmi, Luis Alfredo, Fernández, Roemi, González-de-Santos, Pablo, Francia, Matteo, Vitali, Giuliano, Hustedt, Michael, and Wollweber, Merve
- Subjects
Artificial intelligence, IoT, Precision agriculture, Autonomous robots, Cloud computing, Plant Science, Agronomy and Crop Science, precision agriculture, autonomous robots, artificial intelligence, cloud computing, Food Science
- Abstract
Special Issue "Robots and Autonomous Machines for Agriculture Production". Autonomous robots in the agri-food sector are increasing yearly, promoting the application of precision agriculture techniques. The same applies to online services and techniques implemented over the Internet, such as the Internet of Things (IoT) and cloud computing, which make big data, edge computing, and digital twin technologies possible. Developers of autonomous vehicles understand that autonomous robots for agriculture must take advantage of these Internet techniques to strengthen their usability. This integration can be achieved using different strategies, and existing tools can facilitate it by providing benefits for developers and users. This study presents an architecture to integrate the different components of an autonomous robot with access to the cloud, taking advantage of the services provided for data storage, scalability, accessibility, data sharing, and data analytics. In addition, the study reveals the advantages of integrating new technologies into autonomous robots that can bring significant benefits to farmers. The architecture is based on the Robot Operating System (ROS), a collection of software applications for communication among subsystems, and FIWARE (Future Internet WARE), a framework of open-source components that accelerates the development of intelligent solutions. To validate and assess the proposed architecture, this study focuses on a specific example of an innovative weeding application with laser technology in agriculture. The robot controller is distributed between the robot hardware, which provides real-time functions, and the cloud, which provides access to online resources. Analyzing the resulting characteristics, such as transfer speed, latency, response and processing time, and response status based on requests, enabled a positive assessment of the use of ROS and FIWARE for integrating autonomous robots and the Internet. This article is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 101000256.
- Published
- 2023
8. TWINBOT: Autonomous Underwater Cooperative Transportation
- Author
-
Roger Pi, Patryk Cieslak, Pere Ridao, Pedro J. Sanz, and Agencia Estatal de Investigación
- Subjects
0209 industrial biotechnology, Vehicles submergibles, cooperative manipulation, General Computer Science, Computer science, Robots autònoms, autonomous underwater intervention, task priority control, 02 engineering and technology, Kinematics, Remotely operated underwater vehicle, 01 natural sciences, Submersibles, 020901 industrial engineering & automation, Autonomous robots, Communication bandwidth, Torque, General Materials Science, Underwater, 0105 earth and related environmental sciences, Robot kinematics, 010505 oceanography, General Engineering, cooperative robots, Control engineering, Autonomous underwater intervention, Task analysis, Robot, lcsh:Electrical engineering. Electronics. Nuclear engineering, lcsh:TK1-9971
- Abstract
Underwater inspection, maintenance, and repair operations are nowadays performed using Remotely Operated Vehicles (ROVs) deployed from dynamic-positioning vessels, which have high daily operational costs. During the last twenty years, the research community has been making an effort to design new Intervention Autonomous Underwater Vehicles (I-AUVs), which could, in the near future, replace the ROVs and significantly decrease these costs. Until now, the experimental work using I-AUVs has been limited to a few single-vehicle interventions, including object search and recovery, valve turning, and hot stab operations. More complex scenarios, such as the transportation of large and heavy objects, usually require the cooperation of multiple agents. Moreover, using small autonomous vehicles requires consideration of their limited load capacity and limited manipulation force/torque capabilities. Following the idea of multi-agent systems, in this paper we propose a possible solution: using a group of cooperating I-AUVs, thus sharing the load and optimizing the stress exerted on the manipulators. Specifically, we tackle the problem of transporting a long pipe. The presented ideas are based on a decentralized Task-Priority kinematic control algorithm adapted for the highly limited communication bandwidth available underwater. The pipe is transported following a sequence of poses. A path-following algorithm computes the desired velocities for the robots’ end-effectors, and the on-board controllers ensure tracking of these setpoints, taking into account the geometry of the pipe and the vehicles’ limitations. The utilized algorithms and their practical implementation are discussed in detail and validated through extensive simulations and experimental trials performed in a test tank using two 8-DOF I-AUVs. This work was supported in part by the TWINBOT (''TWIN ROBOTS FOR COOPERATIVE UNDERWATER INTERVENTION MISSIONS'') project funded by the Spanish Ministry of Economy, Industry, and Competitiveness under Grant DPI2017-86372-C3; in part by the Valencian Government (CIRTESU project) under Grant IDIFEDER/2018/013; and in part by the Secretaria d’Universitats i Recerca del Departament d’Economia i Coneixement de la Generalitat de Catalunya under Grant 2019FI_B_00812.
- Published
- 2021
9. Fetch Mobile Manipulator – Programming Link Lab’s First Personal Robot; Investigation of Sociotechnical Relationships between Autonomous Robots and Humans
- Subjects
autonomous robots, fetch mobile manipulator, human-robot, sociotechnical relationships
- Abstract
People perform mundane tasks on a daily basis: stacking chairs, organizing rooms, erasing whiteboards, folding laundry, and so on. These tasks all hold great importance, but they take up a lot of time and effort, and they have a lasting effect on human beings: mundane tasks have been shown to cause psychological distress. One solution to this problem is to develop autonomous robots that can accurately perform simple human tasks. As technology evolves and improves rapidly, we are capable of building such robots with high precision to handle mundane activities. The focus of the technical project is to program the Fetch mobile manipulator, an autonomous robot, to perform a simple, mundane task such as erasing a whiteboard. When building autonomous robots, we always need to consider the effects they will have on their environments. Every new technology that is released tends to have a great impact on people’s daily lives. Therefore, the project will also focus on the effect of introducing autonomous robots in differing environments. The goal is to learn from research on how robots affect their environments to ensure proper considerations when designing the program for Fetch. Autonomous robots are being built and researched to help humans with their daily tasks. As technology evolves, many tasks in manufacturing plants have already been automated, from building a car to making and packaging a box of ice cream. However, many warehouse and office tasks are still done by human workers, which can become tedious and tiring. This project aims to create a robot that can complete daily tasks in the Link Lab. To achieve this end goal, the Fetch mobile manipulator, a high-performance robot, is used. The Fetch robot has many commercial applications, the primary one being shelf picking. 
With its selection of sensors, the robot can accurately avoid obstacles, navigate, and manipulate its environment. In addition to the sensors, the Fetch comes equipped with a robotic arm, which can be used for manipulating its surroundings. Using the Robot Operating System (ROS), the project programs the Fetch to autonomously navigate through the Link Lab’s arena without colliding with any static or dynamic obstacles. The end goal of the project is to program the Fetch to approach a whiteboard and completely erase it. Within the next few years, the Fetch should become the first human-like assistant in the Link Lab. This portfolio also explores the effect of introducing autonomous robots into home and office areas on humans, and vice versa. The posed research question is: what do engineers need to consider when building autonomous robots for all environments? With the recent surge in technology, many robots and voice assistants have been released. These robots are advertised as devices that will ease certain tasks for their users, but do they really improve people’s lives? This cause-and-effect relationship is what is explored in this paper. The Interactive Sociotechnical Analysis (ISTA) framework is used to analyze these relationships, as it has previously been used to examine the relationships around a robot deployed in a hospital. It is expected that mistakes are being made by both consumers and producers with regard to using and developing autonomous robots. Sometimes humans make mistakes in operating or even understanding these technologies, which leads to failures. Sometimes manufacturers do not anticipate all the issues that could arise with their products, which ends up hurting users’ productivity. I will investigate whether such issues arise, who causes them, and what can be done in the future to improve satisfaction with introducing autonomy into people’s daily lives. 
By working on both projects simultaneously, I can apply the knowledge gained from each to the other. For instance, from the case studies and the final conclusion about what engineers need to consider when designing autonomous robots, I learn what to ensure in my own robot’s design and functions as it interacts with the Link Lab. Moreover, I am able to witness how the Fetch moves and localizes itself in the lab and to see how people react to the robot as it moves around. The two projects go hand in hand and provide a level of perspective that would not be achieved if they were worked on separately.
- Published
- 2022
10. PanoraMIS: An ultra-wide field of view image dataset for vision-based robot-motion estimation
- Author
-
Houssem Eddine Benseddik, Guillaume Caron, Fabio Morbidi, Modélisation, Information et Systèmes - UR UPJV 4290 (MIS), and Université de Picardie Jules Verne (UPJV)
- Subjects
0209 industrial biotechnology, Computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Field of view, 02 engineering and technology, Catadioptric system, image-based localization, 020901 industrial engineering & automation, Documentation, omnidirectional vision, Artificial Intelligence, 0202 electrical engineering, electronic engineering, information engineering, [INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO], Computer vision, Electrical and Electronic Engineering, Visual odometry, Ground truth, business.industry, Applied Mathematics, Mechanical Engineering, Deep learning, [INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], autonomous robots, Panoramic cameras, Modeling and Simulation, Robot, 020201 artificial intelligence & image processing, The Internet, Artificial intelligence, business, visual-inertial odometry, Software
- Abstract
This paper presents a new dataset of ultra-wide field of view images with accurate ground truth, called PanoraMIS. The dataset covers a large spectrum of panoramic cameras (catadioptric, twin-fisheye), robotic platforms (wheeled, aerial, and industrial robots), and testing environments (indoors and outdoors), and it is well suited to rigorously validate novel image-based robot-motion estimation algorithms, including visual odometry, visual SLAM, and deep learning-based methods. PanoraMIS and the accompanying documentation are publicly available on the Internet for the entire research community.
- Published
- 2020
11. RoboPlanner: a pragmatic task planning framework for autonomous robots
- Author
-
Balamuralidhar Purushotaman and Ajay Kattepur
- Subjects
uncertain environments, execution failures, lcsh:Computer engineering. Computer hardware, runtime changes, Computer science, Cognitive Neuroscience, execution monitor, Business system planning, logistics supply chains, lcsh:TK7885-7895, Experimental and Cognitive Psychology, unforeseen obstacles, simulation framework, lcsh:Computer applications to medicine. Medical informatics, mobile robots, Artificial Intelligence, roboplanner, abstract automated planning systems, warehousing, Orchestration (computing), Motion planning, retail warehousing, path planning, pragmatic task planning framework, cognitive autonomy, deliberative robotic planning, runtime robotic executions, robotic automation, logistics, adaptive deployments, business.industry, intelligent robots, Control reconfiguration, Mobile robot, industrial deployments including manufacturing, Automation, mobile pick & delivery robots, Computer Science Applications, simulated execution framework, pragmatic integration, autonomous robots, Knowledge base, lcsh:R858-859.7, Robot, Computer Vision and Pattern Recognition, business, Software engineering
- Abstract
Robotic automation has proliferated across industrial deployments including manufacturing, retail warehousing, and logistics supply chains. In order for robots to advance to the next stage of cognitive autonomy, a robust framework for planning, execution, and adaptation is needed. While there have been advances in abstract automated planning systems, they are still ill-suited for runtime robotic executions, which take place in uncertain environments. In this study, the authors provide a deliberative robotic planning and simulated execution framework called RoboPlanner that provides a pragmatic integration of automated planning, orchestration, and adaptive deployments. This is coupled with an execution monitor and a plan repair module that allows reconfiguration of template actions under runtime changes. Structured rules for re-planning in the case of state changes, unforeseen obstacles, or execution failures are provided. The authors demonstrate their simulation framework on a realistic example of mobile pick & delivery robots in Industry 4.0 warehouses that plan, execute, adapt, and re-plan in sync with a knowledge base.
- Published
- 2020
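The plan-execute-monitor-repair cycle that the RoboPlanner abstract describes can be illustrated in a simplified grid-world form (the framework itself uses ontologies and template actions; everything below, including the BFS planner, is an illustrative stand-in):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 (free) / 1 (obstacle) cells."""
    rows, cols = len(grid), len(grid[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                               # no plan exists

def execute_with_replanning(grid, start, goal):
    """Follow a plan step by step; if the next cell turns out to be blocked
    (an unforeseen obstacle), re-plan from the current position."""
    pos, trace = start, [start]
    plan = bfs_path(grid, pos, goal)
    while pos != goal and plan:
        nxt = plan[plan.index(pos) + 1]
        if grid[nxt[0]][nxt[1]]:              # execution monitor: blocked!
            plan = bfs_path(grid, pos, goal)  # plan repair
            continue
        pos = nxt
        trace.append(pos)
    return trace
```

If a cell on the current plan turns out to be blocked at execution time, the monitor triggers a re-plan from the robot's current position, a toy version of the structured re-planning rules described above.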
12. Garment manipulation dataset for robot learning by demonstration through a virtual reality framework
- Author
-
Arnau Boix-Granell, Sergi Foix, Carme Torras, European Commission, European Research Council, Agencia Estatal de Investigación (España), Ministerio de Ciencia, Innovación y Universidades (España), Institut de Robòtica i Informàtica Industrial, and Universitat Politècnica de Catalunya. ROBiri - Grup de Percepció i Manipulació Robotitzada de l'IRI
- Subjects
Realitat virtual, Image classification, Robots autònoms, Virtual reality framework, Virtual reality, Gesture recognition, Automation::Robots [Classificació INSPEC], Human in the loop, Autonomous robots, Cloth folding dataset, Data acquisition, Computer vision, Learning by demonstration, Edge detection, Informàtica::Robòtica [Àrees temàtiques de la UPC], Garment manipulation, Pose estimation
- Abstract
Being able to teach complex capabilities, such as folding garments, to a bi-manual robot is a very challenging task, which is often tackled using learning-from-demonstration datasets. The few garment folding datasets available nowadays to the robotics research community are either gathered from human demonstrations or generated through simulation. The former have the huge problem of perceiving human action and transferring it to the dynamic control of the robot, while the latter require coding human motion into the simulator in open loop, resulting in far-from-realistic movements. In this article, we present a reduced but very accurate dataset of human cloth folding demonstrations. The dataset is collected through a novel virtual reality (VR) framework we propose, based on Unity’s 3D platform and the use of an HTC Vive Pro system. The framework is capable of simulating very realistic garments while allowing users to interact with them in real time through handheld controllers. By doing so, and thanks to the immersive experience, our framework closes the gap between the human and robot perception-action loops, while simplifying data capture and resulting in more realistic samples. This work was developed in the context of the project CLOTHILDE ("CLOTH manIpulation Learning from DEmonstrations"), which has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 741930) and is also supported by the BURG project PCI2019-103447 funded by MCIN/AEI/10.13039/501100011033 and by the European Union.
- Published
- 2022
13. An Architectural Approach for Enabling and Developing Cooperative Behaviour in Diverse Autonomous Robots
- Author
-
Simo Linkola, Niko Mäkitalo, Tomi Laurinen, Anna Kantosalo, Tomi Männistö, Scandurra, P, Galster, M, Mirandola, R, Weyns, D, Department of Computer Science, and Empirical Software Engineering research group
- Subjects
Robot software architecture, Peer modeling, Robot cooperation, Ontology-based reasoning, Autonomous robots, 113 Computer and information sciences
- Abstract
The paper introduces an architecture for robot-to-robot cooperation which takes into consideration how situational context, augmented with peer modeling, fosters the identification of cooperation opportunities and cooperation planning. The presented architecture allows developing, training, testing, and deploying dynamic cooperation solutions for diverse autonomous robots using ontology-based reasoning. The architecture operates in three different worlds: the Real World with real robots, a 3D Virtual World that emulates the real environments and robots, and an abstract Block World that enables developing and studying large-scale cooperation scenarios. We describe an assessment practice for our architecture and cooperation procedures, based on scenarios implemented in all three worlds, and provide initial results of stress testing the cooperation procedures in the Block World. Moreover, as the core part of our architecture can operate in all three worlds, development of robot cooperation with the architecture can regularly accommodate insights gained from experimenting and testing in one world as improvements in another. We report our insights from developing the architecture and cooperation procedures as additional research outcomes.
- Published
- 2022
14. A Perspective on Lifelong Open-Ended Learning Autonomy for Robotics through Cognitive Architectures
- Author
-
Alejandro Romero, Francisco Bellas, and Richard J. Duro
- Subjects
Cognitive architectures, Lifelong learning, Autonomous robots, Open-ended learning, Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics, and Optics, Analytical Chemistry
- Abstract
This paper addresses the problem of achieving lifelong open-ended learning autonomy in robotics and how different cognitive architectures provide functionalities that support it. To this end, we analyze a set of well-known cognitive architectures from the literature, considering the different components they address and how they implement them. Among the main functionalities taken as relevant for lifelong open-ended learning autonomy are the fact that architectures must contemplate learning, and the availability of contextual memory systems, motivations, or attention. Additionally, we try to establish which of them were actually applied to real robot scenarios. It transpires that, in their current form, none of them are completely ready to address this challenge, but some of them do provide indications of the paths to follow in some of the aspects they contemplate. For lifelong open-ended learning autonomy, the main required components would be motivational systems that allow finding domain-dependent goals from general internal drives, contextual long-term memory systems that allow associative learning and retrieval of knowledge, and robust learning systems. Nevertheless, other components, such as attention mechanisms or representation management systems, would greatly facilitate operation in complex domains. This work was funded by the Ministerio de Ciencia e Innovación (PID2021-126220OB-I00), the Xunta de Galicia (EDC431C-2021/39), and the Consellería de Cultura, Educación, Formación Profesional e Universidades (ED431G 2019/01).
- Published
- 2023
15. Route Planning for Autonomous Mobile Robots Using a Reinforcement Learning Algorithm
- Author
-
Fatma M. Talaat, Abdelhameed Ibrahim, El-Sayed M. El-Kenawy, Abdelaziz A. Abdelhamid, Amel Ali Alhussan, Doaa Sami Khafaga, and Dina Ahmed Salem
- Subjects
autonomous robots, routing algorithm, collision avoidance, reinforcement learning, mobile robots, Control and Optimization, Control and Systems Engineering
- Abstract
This research proposes a new robotic system technique aimed specifically at settings such as hospitals or emergency situations, where prompt action and preserving human life are crucial. Our framework largely focuses on the precise and prompt delivery of medical supplies or medication inside a defined area while avoiding collisions between robots or with other obstacles. The suggested route planning algorithm (RPA), based on reinforcement learning, makes medical services effective by gathering and sending data between robots and human healthcare professionals, while humans are kept out of the patients’ area. Three key modules make up the RPA: (i) the Robot Finding Module (RFM), (ii) the Robot Charging Module (RCM), and (iii) the Route Selection Module (RSM). Using autonomous systems such as the RPA in places where people would otherwise need to gather is essential, particularly in the medical field, as it could reduce the risk of spreading viruses and thereby save thousands of lives. The simulation results using the proposed framework show flexible and efficient movement of the robots compared to conventional methods under various environments. The RSM is contrasted with leading state-of-the-art topology routing options. The RSM’s primary benefit is the much-reduced calculation and updating of routing tables. In contrast to earlier algorithms, the RSM produces a lower AQD. The RSM is hence an appropriate algorithm for real-time systems.
- Published
- 2022
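The abstract does not detail the RPA's learning rule, so as generic background, here is a tabular Q-learning route planner on a small grid with obstacles (all names, rewards, and hyperparameters are made up for illustration, not taken from the paper):

```python
import random

def train_route(grid_w=4, grid_h=4, goal=(3, 3),
                obstacles=frozenset({(1, 1), (2, 1)}),
                episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning for grid route planning with collision avoidance.
    Reward: +10 on reaching the goal, -5 for bumping a wall/obstacle, -1 per step."""
    rng = random.Random(seed)
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    Q = {}

    def q(s, a):
        return Q.get((s, a), 0.0)

    def step(s, a):
        """Environment model: collisions leave the robot in place."""
        nxt = (s[0] + a[0], s[1] + a[1])
        if nxt in obstacles or not (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h):
            return s, -5.0
        return nxt, (10.0 if nxt == goal else -1.0)

    for _ in range(episodes):
        s = (0, 0)
        while s != goal:
            # epsilon-greedy action selection
            a = rng.choice(actions) if rng.random() < eps else max(actions, key=lambda b: q(s, b))
            nxt, r = step(s, a)
            # standard Q-learning update
            Q[(s, a)] = q(s, a) + alpha * (r + gamma * max(q(nxt, b) for b in actions) - q(s, a))
            s = nxt

    # Greedy rollout of the learned policy from the start cell.
    s, route = (0, 0), [(0, 0)]
    while s != goal and len(route) < 50:
        nxt, _ = step(s, max(actions, key=lambda b: q(s, b)))
        if nxt == s:
            break           # greedy action bumps an obstacle: give up
        s = nxt
        route.append(s)
    return route
```

After training, a greedy rollout of the learned Q-table yields a collision-free route to the goal; a real deployment would add charging and robot-assignment logic akin to the RCM and RFM modules mentioned above.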
16. Loop Closure using laser 2D
- Author
-
Pujol Badell, Sergi, Vallvé, Joan, Vallvé Navarro, Joan, Puig Cayuela, Vicenç, and Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial
- Subjects
Adaptive control systems -- Design and construction ,Robot vision -- Mathematical models -- Software ,Autonomous robots ,Robots autònoms ,Visió artificial (Robòtica) -- Models matemàtics -- Programari ,Sistemes adaptatius -- Disseny i construcció ,Informàtica::Robòtica [Àrees temàtiques de la UPC] - Abstract
Master's thesis presented at the Universitat Politècnica de Catalunya, Master's in Automatic Control and Robotics, 2021-10-08. A fundamental necessity in mobile robotics is the robot's ability to know its own location. In many applications, a map of the environment is not available, or it lacks the accuracy or the information necessary for robot localization. In these cases, it is necessary to build a map simultaneously (SLAM). In applications where the GPS signal is not reliable, such as indoor environments, localization has to be based on estimating the robot's position from information about its own movement. This estimation is obtained in local coordinates, which implies that a drift in the estimation accumulates error over time. To reduce the drift effects, the robot needs the capacity to recognize places it has already visited and readjust the localization. This search for loop closures provides global reference points to the localization system. This field is already widely studied, but finding an equilibrium between efficiency and robustness, so as to work in large environments, is still a challenge. The aim of this work is to equip the WOLF SLAM library with a state-of-the-art loop-closure method for 2D LIDAR sensors. The WOLF SLAM library is integrated with ROS and provides an interesting environment for managing SLAM and localization problems based on graph-SLAM. In this work, we first performed a review of the state of the art for loop closure, which led us to FALKO, an open-source project focused on loop closure that implements feature detection and scene matching using descriptors. Furthermore, the FALKO loop-closure algorithm has been integrated together with an Iterative Closest Point (ICP) method for robustness and accuracy; the ICP used here is a point-to-line metric optimized for range-finder scan matching. The integration and the developed algorithms have been tested with experimental data from a real robot.
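The abstract pairs FALKO loop-closure candidates with ICP refinement using a point-to-line metric. The sketch below is the simpler point-to-point 2D ICP variant, shown only to illustrate the iterative match-and-align structure; it is an assumption-laden sketch, not the WOLF/FALKO implementation.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2D point-to-point ICP (the thesis uses a point-to-line metric;
    this keeps the simpler metric for clarity).

    Returns (R, t) such that src @ R.T + t approximates dst.
    """
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # 1) nearest-neighbour correspondences (brute force)
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # 2) closed-form rigid alignment via SVD of the cross-covariance
        mu_s, mu_d = src.mean(0), nn.mean(0)
        H = (src - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        # 3) apply and accumulate the transform
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

For small initial misalignments (as after a good FALKO match) the nearest-neighbour correspondences are correct and the alignment converges in very few iterations.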
- Published
- 2021
17. Rapid SARS-CoV-2 Inactivation in a Simulated Hospital Room Using a Mobile and Autonomous Robot Emitting Ultraviolet-C Light
- Author
-
Cristina Lorca-Oró, Jordi Vila, Patricia Pleguezuelos, Júlia Vergara-Alert, Jordi Rodon, Natàlia Majó, Sergio López, Joaquim Segalés, Francesc Saldaña-Buesa, Maria Visa-Boladeras, Andreu Veà-Baró, Josep Maria Campistol, Xavier Abad, Producció Animal, and Sanitat Animal
- Subjects
Exponential viral-load reduction ,SARS-CoV-2 ,Ultraviolet Rays ,viruses ,virus diseases ,COVID-19 ,Sterilization ,exponential viral-load reduction ,Robotics ,Hospitals ,body regions ,UV-C radiation ,autonomous robots ,Infectious Diseases ,AcademicSubjects/MED00290 ,Virus inactivation ,Autonomous robots ,Major Article ,virus inactivation ,Immunology and Allergy ,Humans ,SARS-CoV-2 inactivation - Abstract
The spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) since 2019 has made mask-wearing, physical distancing, hygiene, and disinfection complementary measures to control virus transmission. With health facilities especially in mind, we evaluated the efficacy of a UV-C autonomous robot in inactivating SARS-CoV-2 desiccated on potentially contaminated surfaces. The ASSUM (autonomous sanitary sterilization ultraviolet machine) robot was used in an experimental box simulating a hospital intensive care unit room. Desiccated SARS-CoV-2 samples were exposed to UV-C in 2 independent runs of 5, 12, and 20 minutes. Residual virus was eluted from surfaces and viral titration was carried out in Vero E6 cells. ASSUM inactivated SARS-CoV-2 with ≥ 99.91% to ≥ 99.99% titer reduction at UV-C exposures of 12 minutes or longer and a minimum distance of 100 cm between the device and the desiccated samples. This study demonstrates that the ASSUM UV-C device is able to inactivate SARS-CoV-2 within a few minutes. The virucidal capacity of an autonomous and mobile UV-C light robot was experimentally evaluated at different exposure times (5, 12, and 20 minutes) and locations (≥ 1 meter from the UV-C source), simulating a high viral load potentially present in hospital-room facilities.
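The reported titer reductions can be restated as log10 reductions, the unit commonly used for disinfection efficacy. A small helper makes the arithmetic explicit; this conversion is standard and not specific to the study.

```python
import math

def log10_reduction(percent_reduction):
    """Convert a percentage titer reduction to a log10 reduction factor.

    E.g. a 99.99 % reduction leaves a 1e-4 surviving fraction, i.e. 4 log10.
    """
    surviving = 1.0 - percent_reduction / 100.0
    return -math.log10(surviving)
```

By this measure, the paper's ≥ 99.91% to ≥ 99.99% range corresponds to roughly 3 to 4 log10 reductions.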
- Published
- 2021
18. Análise de risco para a cooperação entre o condutor e sistema de controle de veículos autônomos
- Author
-
Garcia Bedoya, Olmer, 1983, Ferreira, Janito Vaqueiro, 1961, Meirelles, Pablo Siqueira, Nóbrega, Eurípedes Guilherme de Oliveira, Wolf, Denis Fernando, Nascimento Júnior, Cairo Lúcio, Universidade Estadual de Campinas. Faculdade de Engenharia Mecânica, Programa de Pós-Graduação em Engenharia Mecânica, and UNIVERSIDADE ESTADUAL DE CAMPINAS
- Subjects
Controle preditivo ,Veículos autônomos ,Autonomous robots ,Engenharia automotiva ,Analysis of trajectory ,Análise de trajetória ,Predictive control ,Automotive engineering - Abstract
Advisor: Janito Vaqueiro Ferreira. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
Abstract: This work studies strategies for cooperation between the driver and the path control system of an autonomous vehicle through risk analysis. First, architectures of autonomous vehicles are studied based on the layers of perception, planning, and control. An architecture that includes interaction with the driver is proposed for VILMA01 (First Intelligent Vehicle of the Autonomous Mobility Laboratory), together with its hardware and software architectures. Second, the path control layer is presented, which manipulates the degrees of freedom of the vehicle (steering, braking, and acceleration) to bring it to the desired position at each instant of time; the dynamic models of the vehicle and the steering system are used to apply a model predictive control technique. Next, the path planning layer is presented, which determines where the vehicle should go according to the perception and the mission. This work presents the reactive part, also known as local path planning, in which the desired path, represented in a curvilinear space, is selected based on intrinsic and extrinsic risk indicators of each candidate path. With the planning and control layers defined, a method is proposed to estimate the trajectory desired by the driver during cooperative control, allowing a decision to be made, based on a risk analysis, to condition the cooperative control or the cooperative planning. Finally, several experimental tests on VILMA01 validate the automation and instrumentation of the platform and the concepts of cooperative control and cooperative planning.
Doctorate in Mechanical Engineering (Solid Mechanics and Mechanical Design). Funding: CAPES 14864126; CNPq 141276/2012-6.
- Published
- 2021
19. A Modeling Tool for Reconfigurable Skills in ROS
- Author
-
Mario Garzon Oviedo, Darko Bozhinoski, Esther Aguado, Ricardo Sanz, Carlos Hernández, and Andrzej Wasowski
- Subjects
Domain-specific language ,self adaptive systems ,business.industry ,Computer science ,Patrolling ,Design tool ,Maintainability ,Semantic reasoner ,Reuse ,autonomous robots ,Robot ,ontologies ,domain specific language ,Software engineering ,business ,Adaptation (computer science) ,ROS2 tool - Abstract
Known attempts to build autonomous robots rely on complex control architectures, often implemented with the Robot Operating System (ROS). The implementation of adaptable architectures is very often ad hoc and quickly becomes cumbersome and expensive. Ontologies have been adopted as reusable solutions that support complex runtime reasoning for robot adaptation. While the usage of ontologies significantly increases system reuse and maintainability, it requires additional effort from application developers to translate requirements into formal rules that an ontological reasoner can use. In this paper, we present a design tool that facilitates the specification of reconfigurable robot skills. Based on the specified skills, we generate corresponding runtime models for self-adaptation that can be directly deployed to a running robot that uses an ontology-based reasoning approach. We demonstrate the applicability of the tool on a real robot performing a patrolling mission at a university campus.
- Published
- 2021
20. Linking physical objects to their digital twins via fiducial markers designed for invisibility to humans
- Author
-
Rijeesh Kizhakidathazhath, Danqing Liu, Mathew Schwartz, Yong Geng, Hakam Agha, Gabriele Lenzini, Jan P. F. Lagerwall, European Commission - EC [sponsor], Fonds National de la Recherche - FnR [sponsor], and Office of Naval Research Global [sponsor]
- Subjects
construction ,Invisibility ,building information modeling ,Computer science ,Materials Science (miscellaneous) ,FOS: Physical sciences ,02 engineering and technology ,Systems and Control (eess.SY) ,Space (commercial competition) ,Condensed Matter - Soft Condensed Matter ,010402 general chemistry ,ENCODE ,01 natural sciences ,Electrical Engineering and Systems Science - Systems and Control ,localization ,Biomaterials ,fiducial markers ,Human–computer interaction ,digital twin ,Autonomous robots ,FOS: Electrical engineering, electronic engineering, information engineering ,Leverage (statistics) ,Construction ,cholesteric liquid crystals ,Perspective (graphical) ,Building information modeling ,Cholesteric liquid crystals ,Mobile robot ,021001 nanoscience & nanotechnology ,Digital twin ,0104 chemical sciences ,Surfaces, Coatings and Films ,autonomous robots ,Localization ,Mathematics [G03] [Physical, chemical, mathematical & earth Sciences] ,Soft Condensed Matter (cond-mat.soft) ,Augmented reality ,Mathématiques [G03] [Physique, chimie, mathématiques & sciences de la terre] ,0210 nano-technology ,Fiducial marker ,Fiducial markers - Abstract
The ability to label and track physical objects that are assets in digital representations of the world is foundational to many complex systems. Simple yet powerful methods such as bar codes and QR codes have been highly successful, e.g. in the retail space, but the lack of security, limited information content and impossibility of seamless integration with the environment have prevented large-scale linking of physical objects to their digital twins. This paper proposes to link digital assets created through Building Information Modeling (BIM) with their physical counterparts using fiducial markers whose patterns are defined by Cholesteric Spherical Reflectors (CSRs), selective retroreflectors produced using liquid crystal self-assembly. The markers leverage the ability of CSRs to encode information that is easily detected and read with computer vision while remaining practically invisible to the human eye. We analyze the potential of a CSR-based infrastructure from the perspective of BIM, critically reviewing the outstanding challenges in applying this new class of functional materials, and we discuss extended opportunities arising in assisting autonomous mobile robots to reliably navigate human-populated environments, as well as in augmented reality. (30 pages, 8 figures. This paper is an interdisciplinary topical review on the use of Cholesteric Spherical Reflectors to make fiducial markers, visible to robots but not humans, to link digital and physical twins. The authors are from fields including design, materials science, security and computer science, and physics.)
- Published
- 2021
- Full Text
- View/download PDF
21. A distributed approach to robust control of multi-robot systems
- Author
-
Hesuan Hu, Yuan Zhou, Yang Liu, Shang-Wei Lin, Zuohua Ding, and School of Computer Science and Engineering
- Subjects
0209 industrial biotechnology ,Computer science ,Reliability (computer networking) ,Control engineering ,02 engineering and technology ,Discrete-event Systems ,Motion (physics) ,Term (time) ,020901 industrial engineering & automation ,Control and Systems Engineering ,Software deployment ,0202 electrical engineering, electronic engineering, information engineering ,Computer science and engineering [Engineering] ,Robot ,020201 artificial intelligence & image processing ,Autonomous Robots ,State (computer science) ,Motion planning ,Electrical and Electronic Engineering ,Robust control - Abstract
Motion planning of multi-robot systems has been extensively investigated. Many proposed approaches assume that all robots are reliable. However, robots with a priori known levels of reliability may be used in applications to account for: (1) the cost in terms of unit price per robot type, and (2) the cost in terms of robot wear in long-term deployment. In the former case, higher reliability comes at a higher price, while in the latter, replacement may cost more than periodic repairs (e.g., buses, trams, and subways). In this study, we investigate robust control of multi-robot systems, such that the number of robots affected by the failed ones is minimized: the failure of a robot should only affect the motion of robots that collide directly with the failed one. We assume that the robots in a system are divided into reliable and unreliable ones, and that each robot has a predetermined and closed path along which it executes persistent tasks. By modeling each robot's motion as a labeled transition system, we propose two distributed robust control algorithms: one for reliable robots and one for unreliable ones. The algorithms guarantee that wherever an unreliable robot fails, only the robots whose state spaces contain the failed state are blocked. Theoretical analysis shows that the proposed algorithms are practically operative. Simulations with seven robots demonstrate the effectiveness of our algorithms.
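The guarantee stated above, that only robots whose state spaces contain the failed state are blocked, can be phrased as a simple membership check over each robot's closed path. The sketch below illustrates the property only; it is not the paper's distributed algorithm.

```python
def blocked_robots(paths, failed_robot, failed_state):
    """Robots blocked when `failed_robot` fails at `failed_state`.

    paths: dict robot -> collection of states on its predetermined closed
    path (a toy stand-in for the labeled-transition-system state space).
    Under the paper's guarantee, exactly the robots whose state spaces
    contain the failed state are affected.
    """
    return sorted(r for r, path in paths.items()
                  if r != failed_robot and failed_state in path)
```

Robots whose paths never visit the failed state keep moving; only those sharing that state must wait.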
- Published
- 2018
22. Scene understanding for autonomous robots operating in indoor environments
- Author
-
Hernández Silva, Alejandra Carolina, Barber Castaño, Ramón Ignacio, Martínez Mozos, Óscar, Universidad Carlos III de Madrid. Departamento de Ingeniería de Sistemas y Automática, and UC3M. Departamento de Ingeniería de Sistemas y Automática
- Subjects
Scene recognition ,Autonomous robots ,Semantic labeling ,Scene understanding ,Robótica e Informática Industrial ,Object recognition ,Searching strategies - Abstract
International Doctorate mention. The idea of having robots among us is not new. Great efforts are continually made to replicate human intelligence, with the vision of having robots perform different activities, including hazardous, repetitive, and tedious tasks. Research has demonstrated that robots are good at many tasks that are hard for us, mainly in terms of precision, efficiency, and speed. However, some tasks that humans do without much effort are challenging for robots. Robots in domestic environments in particular are far from fulfilling some tasks satisfactorily, mainly because these environments are unstructured, cluttered, and subject to a variety of environmental conditions. This thesis addresses the problem of scene understanding in the context of autonomous robots operating in everyday human environments. It is developed under the HEROITEA research project, which aims to develop a robot system to assist elderly people in domestic environments. Our main objective is to develop methods that allow robots to acquire more information from the environment and progressively build knowledge that improves their performance on high-level robotic tasks. Scene understanding is a broad research topic and is considered a complex task due to the multiple sub-tasks involved. In this thesis, we focus on three sub-tasks: object detection, scene recognition, and semantic segmentation of the environment.
First, we implement methods to recognize objects in real indoor environments, applying machine learning techniques that incorporate uncertainties as well as more modern techniques based on deep learning. Beyond detecting objects, it is essential to comprehend the scene in which they occur. For this reason, we propose an approach for scene recognition that considers the influence of the detected objects in the prediction process. We demonstrate that the existing objects and their relationships can improve the inference about the scene class. We also consider that a scene recognition model can benefit from the advantages of other models, and we propose a multi-classifier model for scene recognition based on weighted voting schemes. Experiments carried out in real-world indoor environments demonstrate that an adequate combination of independent classifiers yields a more robust and precise model for scene recognition. Moreover, to increase a robot's understanding of its surroundings, we propose a new region-based division of the environment to build a useful representation of it. Object and scene information is integrated in a probabilistic fashion, generating a semantic map of the environment containing meaningful regions within each room. The proposed system has been assessed in simulated and real-world domestic scenarios, demonstrating its ability to generate consistent environment representations.
Lastly, full knowledge of the environment can enhance more complex robotic tasks, so this thesis also studies how complete knowledge of the environment influences the robot's performance in high-level tasks. To do so, we select an essential task: searching for objects. This mundane task can be considered a precondition for many complex robotic tasks such as fetching and carrying, manipulation, and responding to user requirements. The execution of these activities by service robots requires full knowledge of the environment to perform each task efficiently. In this thesis, we propose two searching strategies that consider prior information, the semantic representation of the environment, and the relationships between known objects and the type of scene.
All our developments are evaluated in simulated and real-world environments, integrated with other systems, and operated on real platforms, demonstrating their feasibility for implementation in real scenarios and, in some cases, outperforming other approaches. We also demonstrate how our representation of the environment can boost the performance of more complex robotic tasks compared to more standard environment representations.
Doctoral Program in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. Committee: President: Carlos Balaguer Bernaldo de Quirós; Secretary: Fernando Matía Espada; Member: Klaus Strobl.
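The weighted-voting multi-classifier scheme for scene recognition described above can be sketched generically: each classifier contributes its class scores scaled by a per-classifier weight (for example, its validation accuracy). The function below is an illustrative assumption, not the thesis code.

```python
def weighted_vote(predictions, weights):
    """Combine per-classifier class scores with a weighted sum.

    predictions: list of dicts mapping scene label -> confidence in [0, 1]
    weights:     one weight per classifier (e.g. its validation accuracy)
    Returns (winning label, combined score dict).
    """
    scores = {}
    for pred, w in zip(predictions, weights):
        for label, conf in pred.items():
            scores[label] = scores.get(label, 0.0) + w * conf
    return max(scores, key=scores.get), scores
```

With this scheme, a stronger classifier can overrule a weaker one even when the weaker one is individually more confident.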
- Published
- 2021
23. Recurrent and convolutional neural networks for deep terrain classification by autonomous robots
- Author
-
Roberto Marani, Fabio Vulpi, Giulio Reina, and Annalisa Milella
- Subjects
Terrain classification ,Computer science ,business.industry ,Mechanical Engineering ,Feature vector ,010401 analytical chemistry ,Pattern recognition ,Terrain ,04 agricultural and veterinary sciences ,Autonomous robots ,Deep-learning ,Vehicle-terrain interaction ,01 natural sciences ,Convolutional neural network ,0104 chemical sciences ,Support vector machine ,Recurrent neural network ,040103 agronomy & agriculture ,0401 agriculture, forestry, and fisheries ,Robot ,A priori and a posteriori ,Spectrogram ,Artificial intelligence ,business - Abstract
The future challenge for field robots is to increase the level of autonomy towards long-distance (>1 km) and long-duration (>1 h) applications. One of the key technologies is the ability to accurately estimate the properties of the traversed terrain in order to optimize onboard control strategies and energy-efficient path planning, ensuring safety and avoiding possible immobilization conditions that would lead to mission failure. Two main hypotheses are put forward in this research. The first is that terrain can be effectively detected by relying exclusively on the measurement of quantities that pertain to the robot-ground interaction, i.e., on proprioceptive signals; no visual or depth information is required. The second is that artificial deep neural networks can provide an accurate and robust solution to the problem of classifying different terrain types. Under these hypotheses, sensory signals are classified as time series directly by a Recurrent Neural Network, or by a Convolutional Neural Network in the form of higher-level features or spectrograms resulting from additional processing. In both cases, results obtained from real experiments show comparable or better performance than a standard Support Vector Machine, with the additional advantage of not requiring an a priori definition of the feature space.
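As an illustration of the preprocessing the abstract describes for the CNN branch, the sketch below turns a proprioceptive time series into a magnitude spectrogram via a short-time Fourier transform. The window and hop sizes are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

def spectrogram_features(signal, fs, win=64, hop=32):
    """Magnitude spectrogram of a proprioceptive signal (e.g. vertical
    acceleration), the kind of 2D input a CNN terrain classifier consumes.

    Returns (freqs, S) where S has shape (freq_bins, time_frames).
    """
    window = np.hanning(win)                       # taper to reduce leakage
    frames = [np.abs(np.fft.rfft(signal[s:s + win] * window))
              for s in range(0, len(signal) - win + 1, hop)]
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)       # bin -> Hz mapping
    return freqs, np.array(frames).T
```

A pure vibration tone shows up as a ridge at the corresponding frequency bin, which is the kind of terrain-dependent signature the classifier learns.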
- Published
- 2021
- Full Text
- View/download PDF
24. Docking of Non-Holonomic AUVs in Presence of Ocean Currents: A Comparative Survey
- Author
-
Narcis Palomeras, Pere Ridao, Patryk Cieslak, and Joan Esteba
- Subjects
0209 industrial biotechnology ,Vehicles submergibles ,General Computer Science ,Computer science ,Monte Carlo method ,Robots autònoms ,02 engineering and technology ,ocean currents ,01 natural sciences ,Docking ,non-holonomic ,Submersibles ,020901 industrial engineering & automation ,Docking (dog) ,Autonomous robots ,General Materials Science ,Electrical and Electronic Engineering ,Underwater ,AUV ,0105 earth and related environmental sciences ,010505 oceanography ,Holonomic ,General Engineering ,Collision ,TK1-9971 ,Metric (mathematics) ,Robot ,Electrical engineering. Electronics. Nuclear engineering ,Docking station ,Marine engineering - Abstract
This paper presents a comparative study of docking algorithms intended for non-holonomic autonomous underwater vehicles docking in funnel-shaped docking stations and operating under the influence of ocean currents. While descriptive surveys have already been reported in the literature, our goal is to compare the most relevant algorithms through realistic Monte Carlo simulations to provide insight into their performance. To this aim, a new numerical performance indicator is proposed which, based on the geometry of the manoeuvre, characterizes a successful or unsuccessful docking, providing a metric for comparison. The experimental study is carried out using hardware-in-the-loop simulation by means of the Stonefish simulator, including the dynamic/hydrodynamic model of the Sparus II AUV, models of all internal and external sensors, and the collision geometry representing the docking station. This work was supported by the European Union's Horizon 2020 Research and Innovation Program through the ATLANTIS ("The Atlantic Testing Platform for Maritime Robotics: New Frontiers for Inspection and Maintenance of Offshore Energy Infrastructures") project under Grant 871571.
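The paper's geometric performance indicator is not specified in the abstract; the toy sketch below only illustrates the Monte Carlo idea of estimating funnel-entry success under a random cross-track current, with a deliberately crude kinematic model. Every constant and the success criterion here are assumptions, not the paper's simulator or metric.

```python
import random

def docking_success_rate(trials=10000, funnel_radius=0.5,
                         current_sigma=0.3, rejection=0.8,
                         approach_time=10.0, seed=0):
    """Toy Monte Carlo estimate of funnel-entry success.

    The controller is assumed to reject a fixed fraction of a Gaussian
    cross-track current; the residual drift accumulates over the approach
    and a run succeeds if the final lateral offset fits inside the funnel.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        current = rng.gauss(0.0, current_sigma)              # m/s, cross-track
        offset = abs((1.0 - rejection) * current) * approach_time
        if offset <= funnel_radius:
            hits += 1
    return hits / trials
```

Sweeping parameters such as `current_sigma` in a loop like this is the general pattern behind Monte Carlo comparisons of docking algorithms.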
- Published
- 2021
25. Underwater Object Recognition Using Point-Features, Bayesian Estimation and Semantic Information
- Author
-
Nuno Gracias, Pere Ridao, Khadidja Himri, and Agencia Estatal de Investigación
- Subjects
pipeline detection ,0209 industrial biotechnology ,3D object recognition ,Computer science ,Robots autònoms ,Point cloud ,maintenance and repair ,02 engineering and technology ,lcsh:Chemical technology ,Biochemistry ,Article ,Analytical Chemistry ,laser scanner ,underwater environment ,semantic information ,multi-object tracking ,020901 industrial engineering & automation ,Histogram ,Autonomous robots ,point clouds ,0202 electrical engineering, electronic engineering, information engineering ,Feature descriptor ,Feature (machine learning) ,lcsh:TP1-1185 ,Reconeixement de formes (Informàtica) ,Electrical and Electronic Engineering ,inspection ,Instrumentation ,AUV ,Bayes estimator ,business.industry ,Cognitive neuroscience of visual object recognition ,Pattern recognition ,Pattern recognition systems ,Atomic and Molecular Physics, and Optics ,semantic segmentation ,Bayesian probabilities ,Joint compatibility branch and bound ,Video tracking ,JCBB ,020201 artificial intelligence & image processing ,Artificial intelligence ,autonomous manipulation ,business ,global descriptors - Abstract
This paper proposes a 3D object recognition method for non-coloured point clouds using point features. The method is intended for application scenarios such as Inspection, Maintenance and Repair (IMR) of industrial sub-sea structures composed of pipes and connecting objects (such as valves, elbows and R-Tee connectors). The recognition algorithm uses an a priori database of partial views of the objects, stored as point clouds. The recognition pipeline has five stages: (1) plane segmentation, (2) pipe detection, (3) semantic object segmentation and detection, (4) feature-based object recognition and (5) Bayesian estimation. To apply the Bayesian estimation, an object tracking method based on a new Interdistance Joint Compatibility Branch and Bound (IJCBB) algorithm is proposed. The paper studies how recognition performance depends on: (1) the point feature descriptor used, (2) the use (or not) of Bayesian estimation and (3) the inclusion of semantic information about the objects' connections. The methods are tested using an experimental dataset containing laser scans and Autonomous Underwater Vehicle (AUV) navigation data. The best results are obtained using the Clustered Viewpoint Feature Histogram (CVFH) descriptor, achieving recognition rates of 51.2%, 68.6% and 90%, respectively, clearly showing the advantages of using Bayesian estimation (an 18% increase) and of including semantic information (a further 21% increase).
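Stage (5) of the pipeline is a recursive Bayesian update of object-class probabilities across tracked views; the generic update step looks like the sketch below. The class names and likelihood values are illustrative, not from the paper.

```python
def bayes_update(prior, likelihoods):
    """One recursive Bayes step over object-class hypotheses.

    prior:        dict class -> P(class) accumulated from previous views
    likelihoods:  dict class -> P(observed descriptor match | class)
    Returns the normalized posterior; a tiny floor avoids zeroing a class
    that a single noisy view fails to match.
    """
    post = {c: prior[c] * likelihoods.get(c, 1e-9) for c in prior}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}
```

Repeating the update over successive views concentrates the probability mass on the consistently matched class, which is how per-view recognition errors get averaged out.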
- Published
- 2021
26. A Novel Remote Visual Inspection System for Bridge Predictive Maintenance
- Author
-
Alessandro Galdelli, Mariapaola D’Imperio, Gabriele Marchello, Adriano Mancini, Massimiliano Scaccia, Michele Sasso, Emanuele Frontoni, and Ferdinando Cannella
- Subjects
General Earth and Planetary Sciences ,bridge monitoring ,visual inspection ,multi-camera system ,autonomous robots ,artificial intelligence - Abstract
Predictive maintenance of infrastructure is currently a hot topic. Its importance is proportional to the damage that results from an infrastructure's collapse. Bridges, dams and tunnels are placed at the top of the severity scale because their failure can cause loss of life. Traditional inspection methods are not objective, are tied to the inspector's experience and require human presence on site. To overcome the limits of current technologies and methods, the authors of this paper developed a unique new concept: a remote visual inspection system for predictive maintenance of infrastructure such as bridges. It is based on the fusion of advanced robotic technologies with Automated Visual Inspection, which guarantees objective results, a high level of safety and low processing time.
- Published
- 2022
27. Explorer51 - Indoor Mapping, Discovery, and Navigation for an Autonomous Mobile Robot; Investigation of Social Pressures on the Evolution of Machine Learning and Autonomous Robot Systems
- Subjects
systems engineering ,autonomous robots ,explorer51 ,robot systems - Abstract
The perceptions and evolution of autonomous robots and advanced machine learning by individuals involved in rescue and relief efforts have contributed to noticeable disparities in faith in the technology. As part of our year-long project, my Capstone team developed an autonomous robot to perform actions of our choosing, including navigating and mapping an indoor space, identifying targets, and communicating necessary information for future improvement. The technology is notable in that it raises questions about the needs of the two major groups involved in a typical autonomous robot system: the engineers in charge of the technology and the victims who receive the actual care in some capacity. My team's work on the robot may bring to light some of the engineers' objectives while designing a robot system, but it does little to represent those receiving support in times of need. Analysis of the strengths, challenges, opportunities, and threats (SCOT analysis) of autonomous robot systems in society would provide the necessary framework to outline the different perspectives of the two stakeholder groups. Considering the ethics of care in technology may also be useful for exploring how responsibility is attributed when addressing the needs of those offering relief and those receiving it. My chosen research method is the delivery of a survey and analysis of its results, along with comparison to relevant documents on sentiment toward autonomous robots. The preliminary survey results should provide empirical evidence regarding public attitudes toward autonomous robots, with which I can reveal and study trends that may or may not have been documented in previous research. I expect to highlight the differences between the needs and requirements of the two chosen stakeholder groups, and to understand the levels of contribution from both groups in the development of autonomous technology.
The implications of my findings may elucidate how the decision-makers (engineers) of the sociotechnical system choose to develop autonomous robots, and how they could do so in ways that mitigate public fear of the technology and better satisfy the desires of those in need.
- Published
- 2020
- Full Text
- View/download PDF
28. Mind the ground: A power spectral density-based estimator for all-terrain rovers
- Author
-
Annalisa Milella, Giulio Reina, Antonio Leanza, Arcangelo Messina, Reina, G., Leanza, A., Milella, A., and Messina, A.
- Subjects
FOS: Computer and information sciences ,Rough-terrain vehicle ,Computer science ,Feature vector ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Terrain ,Systems and Control (eess.SY) ,02 engineering and technology ,terrain unevenness estimation ,01 natural sciences ,Electrical Engineering and Systems Science - Systems and Control ,Computer Science - Robotics ,Autonomous robots ,FOS: Electrical engineering, electronic engineering, information engineering ,0202 electrical engineering, electronic engineering, information engineering ,Range (statistics) ,Power spectral density analysis ,Computer vision ,Sensitivity (control systems) ,Electrical and Electronic Engineering ,Instrumentation ,High-level mapping ,Terrain unevenness estimation ,business.industry ,Applied Mathematics ,Power spectral density analysi ,020208 electrical & electronic engineering ,010401 analytical chemistry ,Rough-terrain vehicles ,Estimator ,Spectral density ,Condensed Matter Physics ,0104 chemical sciences ,Autonomous robot ,Harshness ,Artificial intelligence ,business ,high-level mapping ,Robotics (cs.RO) ,Stereo camera - Abstract
There is growing interest in new sensing technologies and processing algorithms to increase the level of driving automation towards self-driving vehicles. The challenge for autonomy is especially difficult in the negotiation of uncharted scenarios, including natural terrain. This paper proposes a method for terrain unevenness estimation based on the power spectral density (PSD) of the surface profile as measured by exteroceptive sensing, that is, by a common onboard range sensor such as a stereoscopic camera. Using these measurements, the proposed estimator can evaluate terrain on-line during normal operations. PSD-based analysis provides insight not only into the magnitude of irregularities, but also into how these irregularities are distributed across wavelengths. A feature vector can be defined to classify roughness, which proves a powerful statistical tool for characterizing a given terrain fingerprint while showing limited sensitivity to vehicle tilt rotations. First, the theoretical foundations behind the PSD-based estimator are presented. Then, the system is validated in the field using an all-terrain rover operating on various natural surfaces. Its potential for automatic ground harshness estimation and, in general, for the development of driving assistance systems is shown.
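As an illustration of the quantity the estimator above operates on, the following sketch computes a one-sided periodogram of a synthetic surface profile. The profile, the sampling interval and the plain periodogram estimator are assumptions for illustration, not the paper's exact processing chain.

```python
# Illustrative sketch only: a one-sided periodogram of an evenly sampled
# surface height profile, the raw input to PSD-based unevenness analysis.
import numpy as np

def profile_psd(profile, dx):
    """Return spatial frequencies (cycles/m) and a one-sided PSD estimate of
    a surface height profile sampled every dx metres."""
    z = profile - np.mean(profile)          # remove the mean elevation (DC)
    n = len(z)
    spectrum = np.fft.rfft(z)
    freqs = np.fft.rfftfreq(n, d=dx)
    psd = (np.abs(spectrum) ** 2) * dx / n  # periodogram estimate
    return freqs, psd

# Synthetic terrain: a 10 cm-amplitude undulation every 5 m plus fine
# roughness, over a 20 m profile sampled every 5 cm.
x = np.arange(0, 20, 0.05)
profile = 0.10 * np.sin(2 * np.pi * 0.2 * x) + 0.01 * np.sin(2 * np.pi * 4 * x)
freqs, psd = profile_psd(profile, dx=0.05)

# The PSD peak lands near the dominant spatial frequency (0.2 cycles/m),
# showing how the spectrum separates long undulations from fine roughness.
print(freqs[np.argmax(psd)])
```

The paper's feature vector would then be built from such spectral content, e.g. from band-wise energies, rather than from the raw profile.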
- Published
- 2020
- Full Text
- View/download PDF
29. Reinforcement learning for robotic assisted tasks
- Author
-
Civit Bertran, Aniol, Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial, Alenyà Ribas, Guillem, Angulo Bahón, Cecilio, Alenyà, Guillem, and Angulo, Cecilio
- Subjects
Imitation Learning ,Machine Learning ,Compliance control ,Deep Learning ,Enginyeria mecànica [Àrees temàtiques de la UPC] ,Autonomous robots ,Robots autònoms ,Activities of daily living ,Dynamic Movement Primitives ,Assistive Robots ,Reinforcement Learning - Abstract
The number of people who depend on assistance for Activities of Daily Living is growing, an implicit consequence of the increase in population and lifespan brought by modern technological progress. For this reason, Assistive Robotics is a field that is actively researched and gradually deployed. In this project, two algorithms for training robots to perform such tasks, Reinforcement Learning and Imitation Learning, are studied and compared. The main objective is to learn the benefits, drawbacks and suitability of each for a given task. For the experimentation, the Assistive Gym software is used to train the networks with Reinforcement Learning and to perform the movements with Dynamic Movement Primitives learned from a demonstration. The environment consists of a Sawyer robot cleaning the right arm of a human lying on a bed. The potential of each methodology is tested in different scenarios, with the user's arm straight or partially flexed. Furthermore, small extensions that improve execution of the task are implemented for the Dynamic Movement Primitives to test performance and similarity to the demonstration. Both methodologies are capable of performing the desired task successfully. The model trained with Reinforcement Learning is more suitable for an individual user who will use the robot long-term, because of the training time it requires. The Imitation Learning algorithm is faster to teach, and its trajectory and velocities are easily adapted to the preferences of the user; it is more suitable where there is a high turnover of users requiring urgent, short attention, such as clinics or hospitals.
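The Dynamic Movement Primitives mentioned above follow the standard second-order "damped spring pulled to a goal" formulation. The sketch below is a generic one-dimensional rollout with the demonstration-learned forcing term set to zero for brevity; the gains, time step and duration are illustrative assumptions, not the project's configuration.

```python
# Hedged sketch of a single DMP transformation system (standard formulation,
# not the project's code). With the forcing term f set to zero, the system
# reduces to a critically damped spring that converges to the goal g; a
# learned f shapes the path taken on the way there.
def dmp_rollout(y0, g, steps=200, dt=0.01, alpha=25.0):
    beta = alpha / 4.0                      # critical damping choice
    y, dy = y0, 0.0
    for _ in range(steps):
        f = 0.0                             # forcing term (learned from demo)
        ddy = alpha * (beta * (g - y) - dy) + f
        dy += ddy * dt                      # explicit Euler integration
        y += dy * dt
    return y

# Start at 0, goal at 1: the rollout settles at the goal.
print(round(dmp_rollout(y0=0.0, g=1.0), 3))
```

In imitation learning, the forcing term is fit so the rollout reproduces the demonstrated trajectory while keeping the guaranteed convergence to the (possibly re-specified) goal.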
- Published
- 2020
30. Colloidal Robotics: autonomous propulsion and navigation of active particles
- Author
-
Dou, Yong
- Subjects
Engineering ,Physics ,Autonomous robots ,Robotics ,Robots - Abstract
Colloidal robots are colloid-scale (nm to μm) machines capable of automatically carrying out programmed actions for complex tasks. Because of their promising applications in engineering and medicine, colloidal robotics has attracted much recent research interest, of both theoretical and technological relevance. However, many open challenges remain, such as increasing actuation efficiency and achieving high-level tasks (e.g., autonomous navigation). This dissertation focuses on developing new actuation mechanisms and designing autonomous navigation strategies for colloidal robots, with both experimental and computational efforts. First, the motivation, background and recent research advances in colloidal robotics are reviewed. In Chapter 2, a high-efficiency actuation method called contact charge electrophoresis (CCEP) is introduced to propel dielectric-metallic Janus colloidal particles. The autonomous propulsion of Janus particles shows that particle asymmetries can be used to direct the motions of colloidal robots. Beyond single-particle propulsion, Chapter 3 shows that the motions of multiple colloidal particles can be coupled and synchronized via electrostatic interactions to generate traveling waves. The results of Chapter 3 suggest that simple energy inputs can coordinate complex motions for colloidal robots. Then, inspired by the symmetry-guided motion of active particles in Chapter 2, Chapter 4 shows how autonomous navigation can be achieved by designing an active particle's geometry and its stimulus response. Chapter 4 describes a strategy by which colloidal particles sense stimuli in the environment via shape-shifting; the feedback loop between sensing and motion enables them to achieve positive or negative chemotaxis-like navigation.
To experimentally realize the navigation behaviors introduced in Chapter 4, Chapter 5 describes a magnetically driven colloidal robot system that shows navigation behaviors (uphill and downhill) on a slope by rationally programming the external magnetic field. Chapter 6 highlights future research directions and potential applications of colloidal robots.
- Published
- 2020
- Full Text
- View/download PDF
31. Decentralized Reinforcement Learning of Robot Behaviors
- Author
-
David Leonardo Leottau, Javier Ruiz-del-Solar, and Robert Babuska
- Subjects
Scheme (programming language) ,0209 industrial biotechnology ,Linguistics and Language ,Decentralized control ,Computer science ,SCARA ,02 engineering and technology ,Language and Linguistics ,020901 industrial engineering & automation ,Empirical research ,Artificial Intelligence ,Autonomous robots ,Reinforcement learning ,0202 electrical engineering, electronic engineering, information engineering ,computer.programming_language ,business.industry ,Multi-agent system ,Multi-agent systems ,Robotics ,Decentralised system ,Distributed artificial intelligence ,Robot ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer - Abstract
A multi-agent methodology is proposed for Decentralized Reinforcement Learning (DRL) of individual behaviors in problems where multi-dimensional action spaces are involved. When using this methodology, sub-tasks are learned in parallel by individual agents working toward a common goal. In addition to proposing this methodology, three specific multi-agent DRL approaches are considered: DRL-Independent, DRL Cooperative-Adaptive (CA), and DRL-Lenient. These approaches are validated and analyzed with an extensive empirical study using four different problems: 3D Mountain Car, SCARA Real-Time Trajectory Generation, Ball-Dribbling in humanoid soccer robotics, and Ball-Pushing using differential drive robots. The experimental validation provides evidence that DRL implementations show better performance and faster learning times than their centralized counterparts, while using fewer computational resources. The DRL-Lenient and DRL-CA algorithms achieve the best final performances on the four tested problems, outperforming their DRL-Independent counterparts. Furthermore, the benefits of DRL-Lenient and DRL-CA are more noticeable as the problem complexity increases and the centralized scheme becomes intractable given the available computational resources and training time.
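The DRL-Independent idea (one learner per action dimension, all receiving the same global reward) can be miniaturized as follows. This toy grid world, its reward and all hyperparameters are our own assumptions for illustration, not the paper's benchmarks; each agent's Q-table is indexed only by its own coordinate, which is the decentralization at issue.

```python
# Toy sketch of decentralized RL: two independent Q-learners, one per action
# dimension, drive a 2D point to a goal using a single shared reward.
import random

random.seed(0)
ACTIONS = [-1, 0, 1]
GOAL = (3, 3)

def train(episodes=500, alpha=0.5, eps=0.2, gamma=0.9):
    # One Q-table per agent, over that agent's own coordinate (0..6) only.
    q = [{(s, a): 0.0 for s in range(7) for a in ACTIONS} for _ in range(2)]
    for _ in range(episodes):
        pos = [0, 0]
        for _ in range(12):
            acts = []
            for i in range(2):                 # epsilon-greedy per agent
                if random.random() < eps:
                    acts.append(random.choice(ACTIONS))
                else:
                    acts.append(max(ACTIONS, key=lambda a: q[i][(pos[i], a)]))
            new = [min(6, max(0, pos[i] + acts[i])) for i in range(2)]
            # Shared reward: negative Manhattan distance to the goal.
            r = -(abs(new[0] - GOAL[0]) + abs(new[1] - GOAL[1]))
            for i in range(2):                 # independent Q-learning updates
                best_next = max(q[i][(new[i], a)] for a in ACTIONS)
                q[i][(pos[i], acts[i])] += alpha * (
                    r + gamma * best_next - q[i][(pos[i], acts[i])])
            pos = new
    return q

q = train()
# Greedy rollout: each agent steers its own coordinate toward the goal.
pos = [0, 0]
for _ in range(12):
    pos = [min(6, max(0, pos[i] + max(ACTIONS,
           key=lambda a, i=i: q[i][(pos[i], a)]))) for i in range(2)]
print(pos)  # both coordinates should settle near the goal (3, 3)
```

The other agent's contribution to the shared reward acts as noise in each learner's update, which is exactly the coordination problem the paper's lenient and cooperative-adaptive variants address.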
- Published
- 2018
32. Immunized Token-Based Approach for Autonomous Deployment of Multiple Mobile Robots in Burnt Area
- Author
-
Weizhi Ran, Pengfei Liu, Sulemana Nantogma, Zhang Yu, and Yang Xu
- Subjects
business.industry ,Computer science ,Artificial immune system ,Science ,Distributed computing ,Multi-agent system ,Mobile robot ,robots deployment ,Security token ,immune algorithm ,autonomous robots ,sensor networks ,Robustness (computer science) ,Software deployment ,General Earth and Planetary Sciences ,Wireless ,multi-agent systems ,business ,Wireless sensor network - Abstract
Collaborative exploration, sensing and communication in previously unknown environments with high network latency, such as outer space, battlefields and disaster-hit areas, are promising multi-agent applications. When disasters such as large fires or natural disasters occur, previously established networks might be destroyed or incapacitated. In these cases, multiple autonomous mobile robots (AMR) or autonomous unmanned ground vehicles carrying wireless devices and/or thermal sensors can be deployed to create end-to-end communication and sensing coverage to support rescue efforts or assess the severity of damage. However, a fundamental problem is how to rapidly deploy these mobile agents in such complex and dynamic environments. The uncertainties introduced by the operational environment and the wide range of scheduling problems involved make solving them as a whole challenging. In this paper, we present an efficient decentralized approach for practical mobile agent deployment in unknown, burnt or disaster-hit areas. Specifically, we propose an approach that combines methods from Artificial Immune Systems (AIS) with special token-message passing for a team of interconnected AMRs to decide who, when and how to act during the deployment process. A distributed scheme is adopted, in which each AMR makes its movement decisions based on its local observation and a special token it receives from its neighbors. Empirical evidence of the robustness and effectiveness of the proposed approach is demonstrated through simulation.
- Published
- 2021
33. Autonomous and Safe Navigation of Mobile Robots in Vineyard with Smooth Collision Avoidance
- Author
-
Arpit Rawankar, Yohei Hoshino, Abhijeet Ravankar, and Ankit A. Ravankar
- Subjects
Computer science ,Agriculture (General) ,feature extraction ,Real-time computing ,Feature extraction ,Mobile robot ,Plant Science ,Field (computer science) ,S1-972 ,autonomous robots ,Component (UML) ,Assisted GPS ,Robot ,collision avoidance ,navigation ,Agronomy and Crop Science ,Collision avoidance ,Smoothing ,vineyard robots ,Food Science - Abstract
In recent years, autonomous robots have extensively been used to automate several vineyard tasks. Autonomous navigation is an indispensable component of such field robots. Autonomous and safe navigation has been well studied in indoor environments and many algorithms have been proposed. However, unlike structured indoor environments, vineyards pose special challenges for robot navigation. Particularly, safe robot navigation is crucial to avoid damaging the grapes. In this regard, we propose an algorithm that enables autonomous and safe robot navigation in vineyards. The proposed algorithm relies on data from a Lidar sensor and does not require a GPS. In addition, the proposed algorithm can avoid dynamic obstacles in the vineyard while smoothing the robot’s trajectories. The curvature of the trajectories can be controlled, keeping a safe distance from both the crop and the dynamic obstacles. We have tested the algorithm in both a simulation and with robots in an actual vineyard. The results show that the robot can safely navigate the lanes of the vineyard and smoothly avoid dynamic obstacles such as moving people without abruptly stopping or executing sharp turns. The algorithm performs in real-time and can easily be integrated into robots deployed in vineyards.
- Published
- 2021
34. Monocular Height Estimation Method with 3 Degree-Of-Freedom Compensation of Road Unevennesses
- Author
-
Kenjiro Yamamoto and Alex Masuo Kaneko
- Subjects
Estimation ,Monocular ,Physics and Astronomy (miscellaneous) ,lcsh:T ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,lcsh:Technology ,Compensation (engineering) ,autonomous robots ,Control theory ,Management of Technology and Innovation ,3 DOF compensation ,lcsh:Q ,monocular camera ,lcsh:Science ,Engineering (miscellaneous) ,height estimation - Abstract
Height estimation of objects is valuable information for the locomotion of autonomous robots and vehicles. Even though sensors such as stereo cameras have been applied in these systems, cost and processing time have been motivating solutions with monocular cameras. This research proposes two new methods: (i) height estimation of objects using only a monocular camera, based on flat-surface constraints, and (ii) 3 degree-of-freedom compensation of errors caused by roll, pitch and yaw variations of the camera when applying the Flat Surface Model. Experiments outdoors with the KITTI benchmark data (4997 frames and 436 objects) improved the accuracy of the estimated heights from a maximum error of 1.51 m to 1.12 m and reduced the number of estimation failures by a factor of 4, proving the validity and effectiveness of the proposed method.
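The flat-surface constraint behind method (i) admits a compact geometric sketch for an ideal pinhole camera: if the object stands on flat ground, its range follows from the image row of its ground-contact point, and its height from similar triangles. This is our own minimal derivation with illustrative numbers (camera height, focal length, horizon row), not the paper's calibration or its 3-DOF compensation.

```python
# Sketch of monocular height estimation under a flat-ground assumption,
# for an ideal pinhole camera with no roll/pitch/yaw error.

def object_height(H, f, v0, v_base, v_top):
    """Estimate the height of an object standing on flat ground.

    H:      camera height above the ground plane (m)
    f:      focal length (pixels)
    v0:     image row of the horizon
    v_base: image row of the object's contact point with the ground
    v_top:  image row of the object's top
    """
    distance = f * H / (v_base - v0)        # range along the ground plane
    return distance * (v_base - v_top) / f  # similar triangles

# Camera 1.5 m above a flat road, f = 700 px, horizon at row 360; an object
# whose base is at row 460 and top at row 300.
h = object_height(H=1.5, f=700.0, v0=360.0, v_base=460.0, v_top=300.0)
print(round(h, 2))  # -> 2.4 (metres)
```

Any roll, pitch or yaw of the camera shifts `v0` and the measured rows, which is precisely the error source method (ii) compensates.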
- Published
- 2017
35. Planning of Autonomous and Mobile Robots in Dynamic Environments
- Author
-
Neuber, Daniel
- Subjects
Planning ,Autonomer Roboter ,Autonomous Robots ,Mobiler Roboter ,Robotic ,Dynamic Planning - Published
- 2019
- Full Text
- View/download PDF
36. Invisible Machinery in Function, Not Form: User Expectations of Domestic Use Humanoid Robots
- Author
-
Carpenter, Julie, Davis, Joan M., Erwin-Stewart, Norah, Lee, Tiffany R., Bransford, John, and Vye, Nancy
- Subjects
autonomous robots ,technology ,agents - Abstract
This study examines people’s expectations about humanoid robots in terms of existing beliefs, opinions and responses to video stimulus of human-robot interactions. A robot intended for home use triggers many feelings in people including fascination, unease, fear and curiosity. A humanoid robot – one designed with humanlike morphology, behaviours, speech or other anthropomorphic indicators – brings additional issues to the human-robot interaction (HRI) scenario. In order to get a full idea of participants’ opinions and reactions, we applied post-video viewing questionnaires, physiological measures to test for arousal level and post-video semi-structured interviews to nineteen participants. We expected users to be interested in the robots but also to bring distinct culturally embedded expectations about robots in general. Users described their concerns about robot ownership and interaction, demonstrating emerging patterns of common issues.
- Published
- 2019
- Full Text
- View/download PDF
37. Trusting Robocop: Gender-Based Effects on Trust of an Autonomous Robot
- Author
-
Thy Vo, Darci M. Gallimore, Kevin T. Wynne, Sean Mahoney, and Joseph B. Lyons
- Subjects
business.industry ,lcsh:BF1-990 ,trust in automation ,trust ,gender-based effects ,Home use ,Public opinion ,Autonomous robot ,autonomous robots ,lcsh:Psychology ,Trustworthiness ,Human–computer interaction ,gender ,Psychology ,Robot ,business ,individual differences ,General Psychology ,Original Research ,security robots - Abstract
Little is known regarding public opinion of autonomous robots. Trust of these robots is a pertinent topic as this construct relates to one’s willingness to be vulnerable to such systems. The current research examined gender-based effects of trust in the context of an autonomous security robot. Participants (N = 200; 63% male) viewed a video depicting an autonomous guard robot interacting with humans using Amazon’s Mechanical Turk. The robot was equipped with a non-lethal device to deter non-authorized visitors and the video depicted the robot using this non-lethal device on one of the three humans in the video. However, the scenario was designed to create uncertainty regarding who was at fault – the robot or the human. Following the video, participants rated their trust in the robot, perceived trustworthiness of the robot, and their desire to utilize similar autonomous robots in several different contexts that varied from military use to commercial use to home use. The results of the study demonstrated that females reported higher trust and perceived trustworthiness of the robot relative to males. Implications for the role of individual differences in trust of robots are discussed.
- Published
- 2019
38. Action Recognition Using Single-Pixel Time-Of-Flight Detection
- Author
-
Sergio Escalera, Sergey Omelkov, Andreas Valdmann, Egils Avots, Heli Valtna-Lukner, Ikechukwu Ofodile, Albert Clapés, Kerttu Maria Peensoo, Gholamreza Anbarjafari, Sandhra-Mirella Valdma, Cagri Ozcinar, Ahmed Helmi, and HKÜ, Mühendislik Fakültesi, Elektirik Elektronik Mühendisliği Bölümü
- Subjects
Computer science ,Robots autònoms ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,time-of-flight ,Article ,Task (project management) ,Autonomous robots ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,Computer vision ,lcsh:Science ,TRACE (psycholinguistics) ,Sequence ,action recognition ,Repetition (rhetorical device) ,business.industry ,Time-of-flight mass spectrometry ,Detector ,single pixel single photon image acquisition ,020207 software engineering ,Object (computer science) ,lcsh:QC1-999 ,Recurrent neural network ,Action (philosophy) ,Espectrometria de masses de temps de vol ,020201 artificial intelligence & image processing ,lcsh:Q ,Artificial intelligence ,business ,lcsh:Physics - Abstract
Action recognition is a challenging task that plays an important role in many robotic systems, which highly depend on visual input feeds. However, due to privacy concerns, it is important to find a method which can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject's privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. Such a data trace for one action contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves on average 96.47% accuracy on the actions of walking forward, walking backwards, sitting down, standing up and waving a hand, using a recurrent neural network.
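The data flow described above (a sequence of one-dimensional voltage traces fed to a recurrent network that outputs an action class) can be sketched at the shape level. All dimensions and weights below are assumed and untrained; this shows only how such traces could move through a simple Elman-style recurrence into a per-action softmax, not the paper's architecture.

```python
# Shape-level sketch: a sequence of single-pixel voltage traces -> recurrent
# layer -> softmax over action classes. Random, untrained weights.
import numpy as np

rng = np.random.default_rng(0)
T, D, H, C = 50, 128, 32, 5      # timesteps, samples per trace, hidden, actions

Wx = rng.normal(0, 0.1, (H, D))  # input-to-hidden weights
Wh = rng.normal(0, 0.1, (H, H))  # hidden-to-hidden (recurrent) weights
Wo = rng.normal(0, 0.1, (C, H))  # hidden-to-class logits

def classify(traces):
    """traces: (T, D) array, one 1-D voltage trace per time step."""
    h = np.zeros(H)
    for x in traces:                       # Elman-style recurrence over time
        h = np.tanh(Wx @ x + Wh @ h)
    logits = Wo @ h
    p = np.exp(logits - logits.max())      # numerically stable softmax
    return p / p.sum()

# A stand-in recording: in practice each row would be a measured trace.
probs = classify(rng.normal(size=(T, D)))
print(probs.shape)                         # one probability per action class
```

Training would then fit `Wx`, `Wh`, `Wo` (or a gated variant such as an LSTM/GRU) on labelled recordings of the five actions.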
- Published
- 2019
39. Stonefish: An Advanced Open-Source Simulation Tool Designed for Marine Robotics, With a ROS Interface
- Author
-
Patryk Cieslak
- Subjects
Robots -- Programació ,Computer science ,Software tool ,Robots -- Programming ,Robots autònoms ,02 engineering and technology ,Robots submarins ,01 natural sciences ,Rendering (computer graphics) ,010309 optics ,Software ,Autonomous robots ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Mobile robots ,Robots -- Sistemes de control ,Underwater ,Vision algorithms ,Digital computer simulation ,Robots -- Control Systems ,business.industry ,020207 software engineering ,Robotics ,Simulació per ordinador digital ,Open source ,Robots mòbils ,Underwater robots ,Systems engineering ,Artificial intelligence ,Actuator ,business - Abstract
The marine robotics community is lacking a high-quality simulator for scientific research, especially when it comes to testing control and vision algorithms in realistic underwater intervention tasks. All of the solutions used today are either outdated or try to combine different software tools, which often results in bad performance, stability issues and a lack of important features. This paper presents a new software tool, focused on, but not limited to, simulation of intervention autonomous underwater vehicles (I-AUV). It delivers advanced hydrodynamics based on actual geometry, simulation of underwater sensors and actuators, as well as realistic rendering of the underwater environment and ocean surface. It consists of a library written in C++ and a Robot Operating System (ROS) package. Patryk Cieślak has received funding from the European Community H2020 Programme under the Marie Skłodowska-Curie grant agreement no. 750063 and under the EU Marine Robots project grant agreement no. 731103.
- Published
- 2019
- Full Text
- View/download PDF
40. A Fully-Autonomous Aerial Robot for Search and Rescue Applications in Indoor Environments using Learning-Based Techniques
- Author
-
Hriday Bavle, Carlos Sampedro, Pascual Campoy, Paloma de la Puente, Adrian Carrio, Alejandro Rodriguez-Ramos, Ministerio de Economía y Competitividad (España), Sampedro, Carlos [0000-0003-2414-2284], Rodríguez-Ramos, Alejandro [0000-0002-9710-7810], Bavle, Hriday [0000-0002-1732-0647], Puente, Paloma de la [0000-0002-8652-0300], Campoy, Pascual [0000-0002-9894-2009], Sampedro, Carlos, Rodríguez-Ramos, Alejandro, Bavle, Hriday, Puente, Paloma de la, and Campoy, Pascual
- Subjects
0209 industrial biotechnology ,Search and rescue ,Computer science ,Real-time computing ,Robótica e Informática Industrial ,Image-based visual servoing ,02 engineering and technology ,Visual servoing ,Convolutional neural network ,Industrial and Manufacturing Engineering ,Electrical & electronics engineering [C06] [Engineering, computing & technology] ,020901 industrial engineering & automation ,Artificial Intelligence ,Autonomous robots ,Reinforcement learning ,11. Sustainability ,Electrical and Electronic Engineering ,Ingénierie électrique & électronique [C06] [Ingénierie, informatique & technologie] ,business.industry ,Mechanical Engineering ,Deep learning ,Supervised learning ,Robotics ,13. Climate action ,Control and Systems Engineering ,Robot ,Electrónica ,Artificial intelligence ,business ,Software - Abstract
Search and Rescue (SAR) missions represent an important challenge in the robotics research field as they usually involve scenarios of an exceedingly variable nature which require a high level of autonomy and versatile decision-making capabilities. This challenge becomes even more relevant in the case of aerial robotic platforms owing to their limited payload and computational capabilities. In this paper, we present a fully-autonomous aerial robotic solution for executing complex SAR missions in unstructured indoor environments. The proposed system is based on the combination of a complete hardware configuration and a flexible system architecture which allows the execution of high-level missions in a fully unsupervised manner (i.e. without human intervention). In order to obtain flexible and versatile behaviors from the proposed aerial robot, several learning-based capabilities have been integrated for target recognition and interaction. The target recognition capability includes a supervised learning classifier based on a computationally-efficient Convolutional Neural Network (CNN) model trained for target/background classification, while the capability to interact with the target for rescue operations introduces a novel Image-Based Visual Servoing (IBVS) algorithm which integrates a recent deep reinforcement learning method named Deep Deterministic Policy Gradients (DDPG). In order to train the aerial robot for performing IBVS tasks, a reinforcement learning framework has been developed, which integrates a deep reinforcement learning agent (e.g. DDPG) with a Gazebo-based simulator for aerial robotics. The proposed system has been validated in a wide range of simulation flights, using Gazebo and PX4 Software-In-The-Loop, and real flights in cluttered indoor environments, demonstrating the versatility of the proposed system in complex SAR missions. This work was supported by the Spanish Ministry of Science (Project DPI2014-60139-R).
- Published
- 2019
41. Flight planning in multi-unmanned aerial vehicle systems: Nonconvex polygon area decomposition and trajectory assignment
- Author
-
Cristina Barrado, Esther Salamí, Enric Pastor, Georgy Skorobogatov, Universitat Politècnica de Catalunya. Doctorat en Ciència i Tecnologia Aeroespacials, Universitat Politècnica de Catalunya. Departament d'Arquitectura de Computadors, and Universitat Politècnica de Catalunya. ICARUS - Intelligent Communications and Avionics for Robust Unmanned Aerial Systems
- Subjects
0209 industrial biotechnology ,Vehicles autònoms ,Computer science ,Unmanned aerial system ,Robots autònoms ,Real-time computing ,Autonomous vehicles ,0211 other engineering and technologies ,lcsh:TK7800-8360 ,Trajectory planning ,02 engineering and technology ,lcsh:QA75.5-76.95 ,Multi-UAV ,Task (project management) ,020901 industrial engineering & automation ,Artificial Intelligence ,Autonomous robots ,Decomposition (computer science) ,021101 geological & geomatics engineering ,lcsh:Electronics ,Unmanned aerial vehicle ,Enginyeria aeroespacial ,Computer Science Applications ,Aerospace engineering ,Coverage path planning ,Flight planning ,Polygon ,Trajectory ,lcsh:Electronic computers. Computer science ,Aeronàutica i espai [Àrees temàtiques de la UPC] ,Software - Abstract
Nowadays, it is quite common to have one unmanned aerial vehicle (UAV) working on a task, but having a team of UAVs is still rare. One of the problems that prevent us from using teams of UAVs more frequently is flight planning. In this work, we present the first open-source solution (https://pypi.org/project/pode/) for splitting any complex area into multiple parts. The area of interest can be convex or nonconvex and can include any number of no-flight zones. Four solutions, based on the algorithm of Hert and Lumelsky, are tested with the aim of improving the compactness of the partitions. We also show how the shape of the partitions influences flight performance in a real-case scenario. This work was supported by Ministerio de Economía, Industria y Competitividad, and Gobierno de España under grants/award numbers BES-2017-079798 and TRA2016-77012-R.
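The elementary step behind such area decomposition, cutting a region into parts of prescribed area, can be illustrated in miniature. This sketch is not the pode package or the Hert-Lumelsky algorithm: it only splits a convex polygon into two equal-area halves with a vertical cut, found by bisecting on the cut's x-coordinate.

```python
# Minimal illustration of equal-area polygon splitting with a vertical cut.

def shoelace_area(poly):
    """Area of a simple polygon given as a list of (x, y) vertices."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def clip_left(poly, x_cut):
    """Sutherland-Hodgman clip of a polygon against the half-plane x <= x_cut."""
    out = []
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        inside1, inside2 = x1 <= x_cut, x2 <= x_cut
        if inside1:
            out.append((x1, y1))
        if inside1 != inside2:              # the edge crosses the cut line
            t = (x_cut - x1) / (x2 - x1)
            out.append((x_cut, y1 + t * (y2 - y1)))
    return out

def equal_area_cut(poly, iters=60):
    """x-coordinate of the vertical line that halves the polygon's area."""
    lo, hi = min(x for x, _ in poly), max(x for x, _ in poly)
    target = shoelace_area(poly) / 2.0
    for _ in range(iters):                  # bisection: left area grows with x
        mid = (lo + hi) / 2.0
        if shoelace_area(clip_left(poly, mid)) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Right triangle with legs 4 and 3; the halving cut sits at x = 4 - 2*sqrt(2).
triangle = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
x_cut = equal_area_cut(triangle)
print(round(x_cut, 3))
```

Multi-part decomposition for a UAV team repeats such cuts with per-vehicle area targets, and then handles nonconvexity and no-flight zones, which is where the real algorithmic difficulty lies.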
- Published
- 2021
42. Fleets of Robots for Environmentally Safe Pest Control in Agriculture
- Author
-
George Kaplanis, César Fernández-Quintanilla, Pablo Gonzalez-de-Santos, Michael Brandstoetter, Manuel Pérez-Ruiz, Francisca López-Granados, Jaime del Cerro, Constantino Valero, Gilles Rabatel, Benoit Debilde, Slobodanka Tomic, Stefania Pedrazzi, Marco Vieri, Andrea Peruzzi, Angela Ribeiro, Gonzalo Pajares, CENTER FOR AUTOMATION AND ROBOTICS UPM CSIC MADRID ESP, Partenaires IRSTEA, Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture (IRSTEA)-Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture (IRSTEA), INSTITUTE OF AGRICULTURAL SCIENCES CSIC MADRID ESP, Instituto de Agricultura Sostenible - Institute for Sustainable Agriculture (IAS CSIC), Consejo Superior de Investigaciones Científicas [Madrid] (CSIC), COGVIS SOFTWARE AND CONSULTING GMBH VIENNA AUT, FTW FORSCHUNGSZENTRUM TELEKOMMUNIKATION WIEN GMBH VIENNA AUT, CYBERBOTICS SARL LAUSANNE CHE, UNIVERSITA DI PISA ITA, UNIVERSIDAD COMPLUTENSE DE MADRID ESP, TROPICAL S.A. ATHENS GRC, University of Sevilla, TECHNICAL UNIVERSITY OF MADRID ESP, UNIVERSITA DEGLI STUDI DI FIRENZE FLORENCE ITA, Information – Technologies – Analyse Environnementale – Procédés Agricoles (UMR ITAP), Institut national de recherche en sciences et technologies pour l'environnement et l'agriculture (IRSTEA)-Institut national d’études supérieures agronomiques de Montpellier (Montpellier SupAgro), Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement (Institut Agro)-Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement (Institut Agro), CASE NEW HOLLAND INDUSTRIAL ZEDELGEM BEL, European Commission, and Consejo Superior de Investigaciones Científicas (España)
- Subjects
Engineering ,Fleets of robots ,Agrochemical ,Robótica e Informática Industrial ,Agricultural robots ,Autonomous robots ,Multi-robot systems ,Pest control ,Agricultural and Biological Sciences (all) ,Context (language use) ,02 engineering and technology ,Ingeniería Industrial ,Pest control, Agricultural robots, Autonomous robots, Fleets of robots, Multi-robot systems ,0202 electrical engineering, electronic engineering, information engineering ,AUTONOMOUS ROBOTS ,Production (economics) ,PEST CONTROL ,2. Zero hunger ,Telecomunicaciones ,Agricultural and Biological Sciences(all) ,business.industry ,Agricultura ,Environmental engineering ,04 agricultural and veterinary sciences ,Environmental economics ,Weed control ,Variety (cybernetics) ,FLEETS OF ROBOTS ,Agriculture ,AGRICULTURAL ROBOTS ,[SDE]Environmental Sciences ,040103 agronomy & agriculture ,Food processing ,MULTI-ROBOT SYSTEMS ,0401 agriculture, forestry, and fisheries ,Electrónica ,020201 artificial intelligence & image processing ,General Agricultural and Biological Sciences ,business - Abstract
Feeding the growing global population requires an annual increase in food production. This requirement suggests an increase in the use of pesticides, which represents an unsustainable chemical load for the environment. To reduce pesticide input and preserve the environment while maintaining the necessary level of food production, the efficiency of the relevant processes must be drastically improved. Within this context, this research strove to design, develop, test and assess a new generation of automatic and robotic systems for effective weed and pest control, aimed at diminishing the use of agricultural chemical inputs, increasing crop quality and improving the health and safety of production operators. To achieve this overall objective, a fleet of heterogeneous ground and aerial robots was developed and equipped with innovative sensors, enhanced end-effectors and improved decision control algorithms to cover a large variety of agricultural situations. This article describes the scientific and technical objectives, challenges and outcomes achieved in three common crops. The research leading to these results received funding from the European Union's Seventh Framework Programme [FP7/2007-2013] under Grant Agreement no. 245986. Support for publishing this article was provided by CSIC.
- Published
- 2016
43. Can my robotic home cleaner be happy? Issues about emotional expression in non-bio-inspired robots
- Author
-
Andrea Bonarini
- Subjects
Personal robot ,Human-Robot interaction ,Experimental and Cognitive Psychology ,02 engineering and technology ,050105 experimental psychology ,Human–robot interaction ,Task (project management) ,Behavioral Neuroscience ,Autonomous robots ,0202 electrical engineering, electronic engineering, information engineering ,Emotional movement ,0501 psychology and cognitive sciences ,Emotional expression ,Set (psychology) ,Emotion ,Emotional robot ,Social robot ,business.industry ,05 social sciences ,Emotional robot, Emotion, Emotional movement, Autonomous robots, Human-Robot interaction, Social robot ,Expression (architecture) ,Robot ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Psychology - Abstract
In many robotic applications, a robot body should have a functional shape that cannot include bio-inspired elements, but it would still be important that the robot can express emotions, moods, or a character, to make it acceptable, and to involve its users. Dynamic signals from movement can be exploited to provide this expression while the robot is acting to perform its task. A research effort has been started to find general emotion expression models for actions that could be applied to any kind of robot to obtain believable and easily detectable emotional expressions. The need for a unified representation of emotional expression emerged. A framework to define action characteristics that could be used to represent emotions is proposed in this paper. Guidelines are provided to identify quantitative models and numerical values for parameters, which can be used to design and engineer emotional robot actions. A set of robots having different shapes, movement possibilities, and goals have been implemented following these guidelines. Thanks to the proposed framework, different models to implement emotional expression can now be compared in a sound way. The question mentioned in the title can now be answered in a justified way.
- Published
- 2016
44. Adaptive behaviors in multi-agent source localization using passive sensing
- Author
-
Mansoor Shaukat and Mandar Chitre
- Subjects
0209 industrial biotechnology ,Collective behavior ,Computer science ,Noise (signal processing) ,Distributed computing ,Multi-agent system ,Swarm intelligence ,Process (computing) ,Initialization ,Experimental and Cognitive Psychology ,Original Articles ,02 engineering and technology ,Computer Science::Multiagent Systems ,collective behavior ,autonomous robots ,Behavioral Neuroscience ,020901 industrial engineering & automation ,Interference (communication) ,0202 electrical engineering, electronic engineering, information engineering ,cooperative source localization ,020201 artificial intelligence & image processing ,multi-agent systems ,Sensitivity (control systems) - Abstract
In this paper, the role of adaptive group cohesion in a cooperative multi-agent source localization problem is investigated. A distributed source localization algorithm is presented for a homogeneous team of simple agents. Each agent uses a single sensor to sense the gradient and two sensors to sense its neighbors. The algorithm is a set of individualistic and social behaviors, where the individualistic behavior is as simple as an agent keeping its previous heading and is not self-sufficient for localizing the source. Source localization is achieved as an emergent property through the agent's adaptive interactions with its neighbors and the environment. Given that a single agent is incapable of localizing the source, maintaining team connectivity at all times is crucial. Two simple temporal sampling behaviors, intensity-based adaptation and connectivity-based adaptation, ensure an efficient localization strategy with minimal agent breakaways. The agent behaviors are simultaneously optimized using a two-phase evolutionary optimization process. The optimized behaviors are estimated with analytical models, and the resulting collective behavior is validated for robustness to the agents' sensor and actuator noise, strong multi-path interference due to environment variability, sensitivity to the initialization distance, and loss of the source signal.
- Published
- 2016
45. Exploration of Simulated Creatures Learning to Cross a Highway Using Frequency Histograms
- Author
-
Fei Yu, Anna T. Lawniczak, and Leslie Ly
- Subjects
0209 industrial biotechnology ,Computer science ,Computational intelligence ,02 engineering and technology ,01 natural sciences ,GeneralLiterature_MISCELLANEOUS ,010305 fluids & plasmas ,cognitive agents ,020901 industrial engineering & automation ,Autonomous robots ,0103 physical sciences ,computational intelligence ,data visualization ,General Environmental Science ,Abstraction (linguistics) ,Focus (computing) ,learning ,business.industry ,cellular automata ,Cognition ,Cellular automaton ,agents ,Knowledge base ,Trajectory ,General Earth and Planetary Sciences ,Artificial intelligence ,business - Abstract
We study, via frequency histograms, the behaviour of a model of simulated cognitive agents (creatures) learning to safely cross a cellular automaton based highway. The creatures have the ability to learn from each other by evaluating how successful other creatures in the past were for their current situation. We examine the effects of the model parameters on the learning outcomes, measured through metrics such as the number of creatures that have successfully crossed. In particular, we focus on the effects of knowledge base transfer on the creatures' success in learning. The presented model is general enough that the considered cognitive agent, called a creature, may even be interpreted as an abstraction of an autonomous vehicle (AV) suddenly encountering another moving vehicle on its trajectory. The AV has to decide whether to continue or to brake/stop in order to avoid being destroyed.
- Published
- 2016
46. Effects of Simulation Parameters on Naïve Creatures Learning to Safely Cross a Highway on Bimodal Threshold Nature of Success
- Author
-
Fei Yu, Anna T. Lawniczak, and Leslie Ly
- Subjects
Computer science ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Nave ,GeneralLiterature_MISCELLANEOUS ,010305 fluids & plasmas ,cognitive agents ,Intelligent agent ,Autonomous robots ,0103 physical sciences ,computational intelligence ,0202 electrical engineering, electronic engineering, information engineering ,data visualization ,General Environmental Science ,ComputingMethodologies_COMPUTERGRAPHICS ,learning ,Creatures ,business.industry ,cellular automata ,ComputingMilieux_PERSONALCOMPUTING ,Cellular automaton ,agents ,General Earth and Planetary Sciences ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer - Abstract
A model of simulated cognitive agents (naïve creatures) learning to safely cross a cellular automaton based highway is described. These creatures have the ability to learn from each other. We investigate how the creatures' learning outcomes are affected by the model parameters (e.g., the traffic density, the creatures' ability to change their crossing point, and the creatures' fear and desire). We observe and study a bimodal nature in the number of successful creatures in various creature populations at the end of the simulation. This number is either low or high, depending on the values of the model parameters.
- Published
- 2016
47. Success Rate of Creatures Crossing a Highway as a Function of Model Parameters
- Author
-
Anna T. Lawniczak, Leslie Ly, and Fei Yu
- Subjects
0209 industrial biotechnology ,Computer science ,Population ,Computational intelligence ,02 engineering and technology ,computer.software_genre ,GeneralLiterature_MISCELLANEOUS ,cognitive agents ,Intelligent agent ,020901 industrial engineering & automation ,Autonomous robots ,computational intelligence ,0202 electrical engineering, electronic engineering, information engineering ,data visualization ,education ,General Environmental Science ,education.field_of_study ,learning ,business.industry ,cellular automata ,Social learning ,Cellular automaton ,agents ,Knowledge base ,General Earth and Planetary Sciences ,Robot ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer - Abstract
In modeling swarms of autonomous robots, individual robots may be identified as cognitive agents. We describe a model of a population of simple cognitive agents, naïve creatures, learning to safely cross a cellular automaton based highway. These creatures have the ability to learn from each other by evaluating whether creatures in the past were successful in crossing the highway in their current situation. The creatures use an "observational social learning" mechanism in their decision whether or not to cross the highway. The model parameters heavily influence the learning outcomes, which we examine through the collected simulation metrics. We study how these parameters, in particular the knowledge base, influence the creatures' success rate in crossing the highway.
- Published
- 2016
48. Exploring the Interrelationship between Additive Manufacturing and Industry 4.0
- Author
-
Javaid Butt
- Subjects
0209 industrial biotechnology ,Industry 4.0 ,Emerging technologies ,Computer science ,Internet of Things ,Cognitive computing ,Big data ,Umbrella term ,Cloud computing ,02 engineering and technology ,lcsh:Technology ,Industrial and Manufacturing Engineering ,Interconnectedness ,020901 industrial engineering & automation ,lcsh:TA174 ,industry 4.0 ,Engineering (miscellaneous) ,lcsh:T ,business.industry ,Mechanical Engineering ,simulation ,lcsh:Engineering design ,021001 nanoscience & nanotechnology ,Automation ,augmented reality ,Manufacturing engineering ,autonomous robots ,0210 nano-technology ,business ,additive manufacturing - Abstract
Innovative technologies allow organizations to remain competitive in the market and increase their profitability. These driving factors have led to the adoption of several emerging technologies, and no other trend has created more of an impact in recent years than Industry 4.0. This is an umbrella term that encompasses several digital technologies geared toward automation and data exchange in manufacturing technologies and processes. These include, but are not limited to, several of the latest technological developments, such as cyber-physical systems, digital twins, the Internet of Things, cloud computing, cognitive computing, and artificial intelligence. Within the context of Industry 4.0, additive manufacturing (AM) is a crucial element. AM is also an umbrella term, covering several manufacturing techniques capable of producing products by adding layers on top of each other. These technologies have been widely researched and implemented to produce homogeneous and heterogeneous products with complex geometries. This paper focuses on the interrelationship between AM and the other elements of Industry 4.0. A comprehensive AM-centric literature review discussing the interaction between AM and Industry 4.0 elements, whether direct (used for AM) or indirect (used with AM), is presented. Furthermore, a conceptual digital thread integrating AM and Industry 4.0 technologies is proposed. The need for such interconnectedness and its benefits have been explored through the content-centric literature review. Development of such a digital thread for AM will provide significant benefits, allow companies to respond to customer requirements more efficiently, and accelerate the shift toward smart manufacturing.
- Published
- 2020
49. Autonomous Microrobotic Manipulation Using Visual Servo Control
- Author
-
Steven Yee, Samara L. Firebaugh, J. A. Piepmeier, Hatem ElBidweihy, Matthew G. Feemster, and Harrison Biggs
- Subjects
0209 industrial biotechnology ,Computer science ,lcsh:Mechanical engineering and machinery ,Servo control ,02 engineering and technology ,Article ,020901 industrial engineering & automation ,mobile robots ,Control theory ,Position (vector) ,lcsh:TJ1-1570 ,Computer vision ,Electrical and Electronic Engineering ,robot control ,Pixel ,business.industry ,micromanipulators ,Mechanical Engineering ,Mobile robot ,021001 nanoscience & nanotechnology ,Robot control ,autonomous robots ,Control and Systems Engineering ,microassembly ,Robot ,Artificial intelligence ,Spiral (railway) ,0210 nano-technology ,business - Abstract
This article describes the application of a visual servo control method to the microrobotic manipulation of polymer beads on a two-dimensional fluid interface. A microrobot, actuated through magnetic fields, is used to manipulate a non-magnetic polymer bead into a desired position. The controller uses multiple modes of robot actuation to address the different stages of the task. A filtering strategy employed in separation mode allows the robot to spiral away from the manipuland in a fashion that promotes the manipulation positioning objective. Experiments demonstrate that our multiphase controller can direct a microrobot to position a manipuland to within an average positional error of approximately 8 pixels (64 µm) over numerous trials.
- Published
- 2020
50. A Planning and Control System for Self-Driving Racing Vehicles
- Author
-
Adriano Fagiolini, Francesco Amerotti, Federico Massa, Alessandro Settimi, Stefano De Caro, Andrea Biondo, Andrea Corti, Lucia Pallottino, Danilo Caporale, and Luca Venturini
- Subjects
Operations research ,Renewable Energy, Sustainability and the Environment ,Computer science ,Control (management) ,Energy Engineering and Power Technology ,Computer Science Applications1707 Computer Vision and Pattern Recognition ,Autonomous robots, self-driving vehicles, racing, robotics challenge ,Pedestrian ,Industrial and Manufacturing Engineering ,Competition (economics) ,self-driving vehicles ,Autonomous robot ,racing ,Computer Networks and Communication ,Settore ING-INF/04 - Automatica ,Artificial Intelligence ,Autonomous robots ,Control system ,self-driving vehicle ,Trajectory ,Key (cryptography) ,Robot ,robotics challenge ,Everyday life ,Instrumentation - Abstract
Autonomous robots will soon enter our everyday life as self-driving cars. These vehicles are designed to behave according to certain sets of cooperative rules, such as traffic laws, and to respond to events that may be unpredictable in their occurrence but predictable in their nature, such as a pedestrian suddenly crossing a street or another car losing control. While civilian autonomous cars are taking to the roads, autonomous racing cars are under development, which will require superior Artificial Intelligence Drivers able to perform in structured but uncertain conditions. We describe some preliminary results obtained during the development of a planning and control system as a key element of an Artificial Intelligence driver for the competition scenario.
- Published
- 2018