319 results for "Patric Jensfelt"
Search Results
152. Conceptual spatial representations for indoor mobile robots.
- Author
- Hendrik Zender, Óscar Martínez Mozos, Patric Jensfelt, Geert-Jan M. Kruijff, and Wolfram Burgard
- Published
- 2008
- Full Text
- View/download PDF
153. Attentional Landmarks and Active Gaze Control for Visual SLAM.
- Author
- Simone Frintrop and Patric Jensfelt
- Published
- 2008
- Full Text
- View/download PDF
154. The STRANDS Project: Long-Term Autonomy in Everyday Environments.
- Author
- Nick Hawes, Chris Burbridge, Ferdian Jovan, Lars Kunze, Bruno Lacerda, Lenka Mudrová, Jay Young, Jeremy L. Wyatt, Denise Hebesberger, Tobias Körtner, Rares Ambrus, Nils Bore, John Folkesson, Patric Jensfelt, Lucas Beyer, Alexander Hermans, Bastian Leibe, Aitor Aldoma, Thomas Faulhammer, Michael Zillich, Markus Vincze, Muhannad Al-Omari, Eris Chinellato, Paul Duckworth, Yiannis Gatsoulis, David C. Hogg, Anthony G. Cohn, Christian Dondrup, Jaime Pulido Fentanes, Tomás Krajník, João Machado Santos, Tom Duckett, and Marc Hanheide
- Published
- 2016
155. Relational Approaches for Joint Object Classification and Scene Similarity Measurement in Indoor Environments.
- Author
- Marina Alberti, John Folkesson, and Patric Jensfelt
- Published
- 2014
156. Supervised semantic labeling of places using information extracted from sensor data.
- Author
- Óscar Martínez Mozos, Rudolph Triebel, Patric Jensfelt, Axel Rottmann, and Wolfram Burgard
- Published
- 2007
- Full Text
- View/download PDF
157. Field and service applications - Automating the marking process for exhibitions and fairs - The Making of Harry Plotter.
- Author
- Patric Jensfelt, Erik Förell, and Per Ljunggren
- Published
- 2007
- Full Text
- View/download PDF
158. Object detection and mapping for service robot tasks.
- Author
- Staffan Ekvall, Danica Kragic, and Patric Jensfelt
- Published
- 2007
- Full Text
- View/download PDF
159. The M-Space Feature Representation for SLAM.
- Author
- John Folkesson, Patric Jensfelt, and Henrik I. Christensen
- Published
- 2007
- Full Text
- View/download PDF
160. A mobile robot system for automatic floor marking.
- Author
- Patric Jensfelt, Gunnar Gullstrand, and Erik Förell
- Published
- 2006
- Full Text
- View/download PDF
161. Kinect@Home: Crowdsourcing a Large 3D Dataset of Real Environments.
- Author
- Alper Aydemir, Daniel Henell, Roy Shilkrot, and Patric Jensfelt
- Published
- 2012
162. A Planning Approach to Active Visual Search in Large Environments.
- Author
- Moritz Göbelbecker, Alper Aydemir, Andrzej Pronobis, Kristoffer Sjöö, and Patric Jensfelt
- Published
- 2011
163. Overview of the CLEF 2009 Robot Vision Track.
- Author
- Barbara Caputo, Andrzej Pronobis, and Patric Jensfelt
- Published
- 2009
164. Active global localization for a mobile robot using multiple hypothesis tracking.
- Author
- Patric Jensfelt and Steen Kristensen
- Published
- 2001
- Full Text
- View/download PDF
165. Pose tracking using laser scanning and minimalistic environmental models.
- Author
- Patric Jensfelt and Henrik I. Christensen
- Published
- 2001
- Full Text
- View/download PDF
166. Understanding Greediness in Map-Predictive Exploration Planning
- Author
- Patric Jensfelt, Daniel Duberg, and Ludvig Ericson
- Subjects
- Computer science, Machine learning, Artificial intelligence, Planner, Thresholding
- Abstract
In map-predictive exploration planning, the aim is to exploit a-priori map information to improve planning for exploration in otherwise unknown environments. The use of map predictions in exploration planning leads to exacerbated greediness, as map predictions allow the planner to defer exploring parts of the environment that have low value, e.g., unfinished corners. This behavior is undesirable, as it leaves holes in the explored space by design. To this end, we propose a scoring function based on inverse covisibility that rewards visiting these low-value parts, resulting in a more cohesive exploration process, and preventing excessive greediness in a map-predictive setting. We examine the behavior of a non-greedy map-predictive planner in a bare-bones simulator, and answer two principal questions: a) how far beyond explored space should a map predictor predict to aid exploration, i.e., is more better; and b) does shortest-path search as the basis for planning, a popular choice, cause greediness. Finally, we show that by thresholding covisibility, the user can trade off greediness for improved early exploration performance. (See the sketch after this record.)
- Published
- 2021
- Full Text
- View/download PDF
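A minimal sketch of the inverse-covisibility scoring described in the abstract above. This is not the authors' implementation: the cell representation, the scoring form, and the threshold parameter are illustrative assumptions.

    # Hypothetical sketch of inverse-covisibility scoring (not the paper's code).
    def view_score(visible_cells, covisibility, tau=None):
        """Sum of inverse covisibility over the cells a candidate view observes.

        visible_cells: iterable of cell ids the candidate view would cover
        covisibility:  dict mapping cell id -> number of other views that also see it
        tau:           optional threshold trading off greediness against early
                       exploration performance (assumed mechanism)
        """
        score = 0.0
        for cell in visible_cells:
            c = covisibility.get(cell, 0)
            if tau is not None and c > tau:
                continue  # ignore widely covisible cells when thresholding
            score += 1.0 / (1.0 + c)  # rarely covisible cells (e.g. corners) score high
        return score

    # A corner cell seen by no other view outweighs a hall cell seen by nine.
    print(view_score(["corner", "hall"], {"corner": 0, "hall": 9}))  # 1.0 + 0.1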
167. IEEE ICRA 2016 in Stockholm [Society News].
- Author
- Petter Ögren, Danica Kragic, Antonio Bicchi, Alessandro De Luca, Christian Smith, and Patric Jensfelt
- Published
- 2016
- Full Text
- View/download PDF
168. Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments
- Author
- Fredrik Heintz, Mattias Tiger, Magnus Selin, Patric Jensfelt, and Daniel Duberg
- Subjects
- Computer science, Robotics, Mobile robot, Motion planning, Mapping, Search and Rescue Robots, Computer Vision and Robotics (Autonomous Systems), Artificial intelligence
- Abstract
Exploration is an important aspect of robotics, whether it is for mapping, rescue missions or path planning in an unknown environment. Frontier Exploration planning (FEP) and Receding Horizon Next-Best-View planning (RH-NBVP) are two different approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow at reaching full exploration due to moving back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but has the disadvantage that it can get stuck in large environments not exploring all regions. In this work we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains and to exploit these to efficiently estimate new queries.
- Published
- 2019
- Full Text
- View/download PDF
169. Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps
- Author
- Patric Jensfelt, John Folkesson, Nils Bore, and Johan Ekekrantz
- Subjects
- Computer science, Computer vision, Mobile robot, Point cloud, Kalman filter, Object detection, Tracking, Probabilistic logic, Artificial intelligence
- Abstract
This paper studies the problem of detection and tracking of general objects with semi-static dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors. (See the sketch after this record.)
- Published
- 2019
- Full Text
- View/download PDF
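A minimal sketch of the decomposition described above: global jumps are handled by sampling, while local motion is tracked analytically with a Kalman filter. The one-dimensional state, the noise constants, and the jump probability are illustrative assumptions, not the paper's parameters.

    import random

    P_JUMP, Q_LOCAL, R_MEAS = 0.05, 0.01, 0.1  # assumed jump prob. and noise levels

    def track_step(mean, var, z, workspace=(0.0, 10.0)):
        """One filter step: sampled global motion, then an analytic Kalman update."""
        if random.random() < P_JUMP:           # global motion: the object may jump,
            mean = random.uniform(*workspace)  # so reinitialize the estimate broadly
            var = 100.0
        var += Q_LOCAL                         # predict: small local drift
        k = var / (var + R_MEAS)               # Kalman gain
        mean += k * (z - mean)                 # update with the new observation z
        var *= 1.0 - k
        return mean, var

    mean, var = 5.0, 1.0
    for z in [5.1, 5.0, 8.9, 9.0]:             # the object jumps mid-sequence
        mean, var = track_step(mean, var, z)
    print(round(mean, 2), round(var, 3))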
170. Bearing-Only Vision SLAM with Distinguishable Image Features
- Author
- John Folkesson, Patric Jensfelt, and Danica Kragic
- Subjects
- Computer science, Computer vision, Mobile robot, Laser scanning, Sonar, Extended Kalman filter, Artificial intelligence
- Abstract
One of the key competences for autonomous mobile robots is the ability to build a map of the environment using natural landmarks and to use it for localization (Thrun et al., 1998; Castellanos et al., 1999; Dissanayake et al., 2001; Tardos et al., 2002; Thrun et al., 2004). Most successful systems presented so far in the literature have relied on range sensors such as laser scanners and sonar sensors. For large-scale, complex environments with natural landmarks the problem of SLAM is still an open research problem. Recently, the use of vision as the only exteroceptive sensor has become one of the most active areas of research in SLAM (Davison, 2003; Folkesson et al., 2005; Goncalves et al., 2005; Sim et al., 2005; Newman & Ho, 2005). In this chapter, we present a SLAM system that builds maps with point landmarks using a single camera. We deal with a set of open research issues such as how to identify and extract stable and well-localized landmarks and how to match them robustly to perform accurate reconstruction and loop closing. All of these issues are central to success, especially when an estimator such as the Extended Kalman Filter (EKF) is used. Robust matching is required for most recursive formulations of SLAM where decisions are final. Even for methods that allow the data associations to change over time, e.g. (Folkesson & Christensen, 2004; Frese & Schroder, 2006), reliable matching is very important. One of the big disadvantages of the laser scanner is that it is a very expensive sensor. Cameras, on the other hand, are relatively cheap. Another aspect of using cameras for SLAM is the much greater richness of the sensor information as compared to that from, for example, a range sensor. Using a camera it is possible to recognize features based on their appearance. This provides the means for dealing with one of the most difficult problems in SLAM, namely data association. The main contributions of this work are i) a method for the initialisation of visual landmarks for SLAM, ii) a robust and precise feature detector, iii) the management of the measurements to make on-line estimation possible, and iv) the demonstration of how this framework can facilitate real-time SLAM even with an EKF-based implementation.
- Published
- 2021
171. Dora the Explorer: a motivated robot.
- Author
- Nick Hawes, Marc Hanheide, Kristoffer Sjöö, Alper Aydemir, Patric Jensfelt, Moritz Göbelbecker, Michael Brenner, Hendrik Zender, Pierre Lison, Ivana Kruijff-Korbayová, Geert-Jan M. Kruijff, and Michael Zillich
- Published
- 2010
172. Adversarial Feature Training for Generalizable Robotic Visuomotor Control
- Author
- Ali Ghadirzadeh, Patric Jensfelt, Xi Chen, and Mårten Björkman
- Subjects
- Computer science, Machine learning, Computer vision, Robotics, Reinforcement learning, Transfer of learning, Artificial intelligence
- Abstract
Deep reinforcement learning (RL) has enabled training action-selection policies, end-to-end, by learning a function which maps image pixels to action outputs. However, its application to visuomotor robotic policy training has been limited because of the challenge of large-scale data collection when working with physical hardware. A suitable visuomotor policy should perform well not just for the task-setup it has been trained for, but also for all varieties of the task, including novel objects at different viewpoints surrounded by task-irrelevant objects. However, it is impractical for a robotic setup to sufficiently collect interactive samples in an RL framework to generalize well to novel aspects of a task. In this work, we demonstrate that by using adversarial training for domain transfer, it is possible to train visuomotor policies based on RL frameworks, and then transfer the acquired policy to other novel task domains. We propose to leverage the deep RL capabilities to learn complex visuomotor skills for uncomplicated task setups, and then exploit transfer learning to generalize to new task domains provided only still images of the task in the target domain. We evaluate our method on two real robotic tasks, picking and pouring, and compare it to a number of prior works, demonstrating its superiority.
- Published
- 2020
- Full Text
- View/download PDF
173. UFOMap: An Efficient Probabilistic 3D Mapping Framework That Embraces the Unknown
- Author
- Patric Jensfelt and Daniel Duberg
- Subjects
- Computer science, Robotics, Octree, Probabilistic logic, Motion planning, Artificial intelligence
- Abstract
3D models are an essential part of many robotic applications. In applications where the environment is unknown a-priori, or where only a part of the environment is known, it is important that the 3D model can handle the unknown space efficiently. Path planning, exploration, and reconstruction all fall into this category. In this paper we present an extension to OctoMap which we call UFOMap. UFOMap uses an explicit representation of all three states in the map, i.e., occupied, free, and unknown. This gives, surprisingly, a more memory efficient representation. Furthermore, we provide methods that allow for significantly faster insertions into the octree. This enables real-time colored volumetric mapping at high resolution (below 1 cm). UFOMap is contributed as a C++ library that can be used standalone but is also integrated into ROS. Project page: https://github.com/danielduberg/UFOMap. (See the sketch after this record.)
- Published
- 2020
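A minimal sketch of the explicit three-state representation described above. UFOMap itself is a C++ octree library; the flat dictionary, thresholds, and log-odds increments here are illustrative assumptions, not UFOMap's API.

    L_OCC, L_FREE = 0.85, -0.85   # assumed log-odds increments per observation
    T_OCC, T_FREE = 0.5, -0.5     # assumed classification thresholds

    class ThreeStateMap:
        """Voxel map where occupied, free, and unknown are all explicit states."""

        def __init__(self):
            self.log_odds = {}    # only voxels that were ever observed are stored

        def integrate(self, voxel, hit):
            self.log_odds[voxel] = self.log_odds.get(voxel, 0.0) + (L_OCC if hit else L_FREE)

        def state(self, voxel):
            if voxel not in self.log_odds:
                return "unknown"  # never observed: stays unknown, not free
            l = self.log_odds[voxel]
            return "occupied" if l > T_OCC else "free" if l < T_FREE else "unknown"

    m = ThreeStateMap()
    m.integrate((1, 2, 3), hit=True)
    m.integrate((1, 2, 4), hit=False)
    print(m.state((1, 2, 3)), m.state((1, 2, 4)), m.state((0, 0, 0)))
    # occupied free unknown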
174. Geometric Correspondence Network for Camera Motion Estimation
- Author
- Patric Jensfelt, Jiexiong Tang, and John Folkesson
- Subjects
- Computer science, Computer vision, Convolutional neural network, Motion estimation, Visual odometry, Artificial intelligence
- Abstract
In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) combined with a recurrent neural network ...
- Published
- 2018
- Full Text
- View/download PDF
175. Team KTH’s Picking Solution for the Amazon Picking Challenge 2016
- Author
- Patric Jensfelt, Francisco E. Viña B., Sergio Caccamo, Joshua A. Haustein, Petter Ögren, João F. Pinto B. De Carvalho, Danica Kragic, Diogo Almeida, Yiannis Karayiannidis, Silvia Cruciani, Rares Ambrus, Xi Chen, and Alejandro Marzinotto
- Subjects
- Engineering, Robotics, Warehouse automation, Behavior Trees, Artificial intelligence
- Abstract
In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and it was divided ...
- Published
- 2020
- Full Text
- View/download PDF
176. Meta-Learning for Multi-objective Reinforcement Learning
- Author
- Mårten Björkman, Xi Chen, Ali Ghadirzadeh, and Patric Jensfelt
- Subjects
- Computer science, Mathematical optimization, Meta learning, Reinforcement learning, Markov decision process, Artificial intelligence
- Abstract
Multi-objective reinforcement learning (MORL) is the generalization of standard reinforcement learning (RL) approaches to solve sequential decision making problems that consist of several, possibly conflicting, objectives. Generally, in such formulations, there is no single optimal policy which optimizes all the objectives simultaneously, and instead, a number of policies have to be found, each optimizing a preference of the objectives. In this paper, we introduce a novel MORL approach by training a meta-policy, a policy simultaneously trained with multiple tasks sampled from a task distribution, for a number of randomly sampled Markov decision processes (MDPs). In other words, MORL is framed as a meta-learning problem, with the task distribution given by a distribution over the preferences. We demonstrate that such a formulation results in a better approximation of the Pareto optimal solutions in terms of both the optimality and the computational efficiency. We evaluated our method on obtaining Pareto optimal policies using a number of continuous control problems with high degrees of freedom.
- Published
- 2019
- Full Text
- View/download PDF
177. Knowledge is Never Enough: Towards Web Aided Deep Open World Recognition
- Author
- Patric Jensfelt, Hakan Karaoguz, Massimiliano Mancini, Elisa Ricci, and Barbara Caputo
- Subjects
- Computer science, Computer vision, Deep learning, Object recognition, Open set recognition, Mobile robots, Robot vision, Artificial neural networks, Semantics
- Abstract
While today's robots are able to perform sophisticated tasks, they can only act on objects they have been trained to recognize. This is a severe limitation: any robot will inevitably see new objects in unconstrained settings, and thus will always have visual knowledge gaps. However, standard visual modules are usually built on a limited set of classes and are based on the strong prior that an object must belong to one of those classes. Identifying whether an instance does not belong to the set of known categories (i.e. open set recognition), only partially tackles this problem, as a truly autonomous agent should be able not only to detect what it does not know, but also to extend dynamically its knowledge about the world. We contribute to this challenge with a deep learning architecture that can dynamically update its known classes in an end-to-end fashion. The proposed deep network, based on a deep extension of a non-parametric model, detects whether a perceived object belongs to the set of categories known by the system and learns it without the need to retrain the whole system from scratch. Annotated images about the new category can be provided by an 'oracle' (i.e. human supervision), or by autonomous mining of the Web. Experiments on two different databases and on a robot platform demonstrate the promise of our approach. (ICRA 2019)
- Published
- 2019
178. GCNv2: Efficient Correspondence Prediction for Real-Time SLAM
- Author
- John Folkesson, Jiexiong Tang, Patric Jensfelt, and Ludvig Ericson
- Subjects
- Computer science, Computer vision, Deep learning, Robotics, Projective geometry, Artificial intelligence
- Abstract
In this paper, we present a deep learning-based network, GCNv2, for generation of keypoints and descriptors. GCNv2 is built on our previous method, GCN, a network trained for 3D projective geometry. GCNv2 is designed with a binary descriptor vector as the ORB feature so that it can easily replace ORB in systems such as ORB-SLAM2. GCNv2 significantly improves the computational efficiency over GCN that was only able to run on desktop hardware. We show how a modified version of ORB-SLAM2 using GCNv2 features runs on a Jetson TX2, an embedded low-power platform. Experimental results show that GCNv2 retains comparable accuracy as GCN and that it is robust enough to use for control of a flying drone. Project page: https://github.com/jiexiong2016/GCNv2_SLAM
- Published
- 2019
- Full Text
- View/download PDF
179. Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments
- Author
- Patric Jensfelt, Ali Ghadirzadeh, John Folkesson, Mårten Björkman, and Xi Chen
- Subjects
- Computer science, Robotics, Mobile robot navigation, Reinforcement learning, Robustness, Task analysis, Artificial intelligence
- Abstract
Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to reward sparsities, temporal complexities and high-dimensionality of sensorimotor spaces which are inherent in such problems. We present a novel approach to train action policies to acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire the multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We demonstrate experimentally a significant improvement in terms of data-efficiency, success rate, robustness against irrelevant sensory data, and also the quality of the maneuver skills.
- Published
- 2018
- Full Text
- View/download PDF
180. Semantic Labeling of Indoor Environments from 3D RGB Maps
- Author
- Rudolph Triebel, Zoltan-Csaba Marton, Kai O. Arras, Maximilian Durner, Manuel Brucker, Axel Wendt, Patric Jensfelt, and Rares Ambrus
- Subjects
- Computer science, Pattern recognition, Conditional random field, Object detection, Semantics, Labeling, RGB color model, Artificial intelligence
- Abstract
We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.
- Published
- 2018
181. Autonomous meshing, texturing and recognition of object models with a mobile robot
- Author
- Patric Jensfelt, Nils Bore, John Folkesson, and Rares Ambrus
- Subjects
- Computer science, Computer vision, Mobile robot, Convolutional neural network, Polygon mesh, RGB color model, Artificial intelligence
- Abstract
We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.
- Published
- 2017
- Full Text
- View/download PDF
182. Geometric and visual terrain classification for autonomous mobile navigation
- Author
- Patric Jensfelt, John Folkesson, Xi Chen, and Fabian Schilling
- Subjects
- Computer science, Computer vision, Lidar, Point cloud, Terrain, Robot, Artificial intelligence
- Abstract
In this paper, we present a multi-sensory terrain classification algorithm with a generalized terrain representation using semantic and geometric features. We compute geometric features from lidar point clouds and extract pixel-wise semantic labels from a fully convolutional network that is trained using a dataset with a strong focus on urban navigation. We use data augmentation to overcome the biases of the original dataset and apply transfer learning to adapt the model to new semantic labels in off-road environments. Finally, we fuse the visual and geometric features using a random forest to classify the terrain traversability into three classes: safe, risky and obstacle. We implement the algorithm on our four-wheeled robot and test it in novel environments including both urban and off-road scenes which are distinct from the training environments and under summer and winter conditions. We provide experimental results to show that our algorithm can perform accurate and fast prediction of terrain traversability in a mixture of environments with a small set of training data.
- Published
- 2017
- Full Text
- View/download PDF
183. Active Visual Object Search in Unknown Environments Using Uncertain Semantics
- Author
- Moritz Göbelbecker, Andrzej Pronobis, Alper Aydemir, and Patric Jensfelt
- Subjects
- Computer science, Machine learning, Visual search, Artificial intelligence
- Abstract
In this paper, we study the problem of active visual search (AVS) in large, unknown, or partially known environments. We argue that by making use of uncertain semantics of the environment, a robot tasked with finding an object can devise efficient search strategies that can locate everyday objects at the scale of an entire building floor, which is previously unknown to the robot. To realize this, we present a probabilistic model of the search environment, which allows for prioritizing the search effort to those parts of the environment that are most promising for a specific object type. Further, we describe a method for reasoning about the unexplored part of the environment for goal-directed exploration with the purpose of object search. We demonstrate the validity of our approach by comparing it with two other search systems in terms of search trajectory length and time. First, we implement a greedy coverage-based search strategy that is found in previous work. Second, we let human participants search for objects as an alternative comparison for our method. Our results show that AVS strategies that exploit uncertain semantics of the environment are a very promising idea, and our method pushes the state-of-the-art forward in AVS.
- Published
- 2013
- Full Text
- View/download PDF
184. The STRANDS Project: Long-Term Autonomy in Everyday Environments
- Author
- Alexander Hermans, Michael Zillich, Paul Duckworth, Jay Young, Joao Machado Santos, Patric Jensfelt, Nick Hawes, Christian Dondrup, David C. Hogg, Lucas Beyer, Yiannis Gatsoulis, Bastian Leibe, Tobias Körtner, Nils Bore, Ferdian Jovan, Tomas Krajnik, Aitor Aldoma, Thomas Faulhammer, Rares Ambrus, Tom Duckett, Jeremy L. Wyatt, Eris Chinellato, Denise Hebesberger, John Folkesson, Lars Kunze, Markus Vincze, Muhannad Al-Omari, Jaime Pulido Fentanes, Marc Hanheide, Christopher Burbridge, Anthony G. Cohn, Lenka Mudrova, and Bruno Lacerda
- Subjects
- Computer science, Robotics, Artificial intelligence, Machine learning
- Abstract
IEEE Robotics & Automation Magazine 24(3), 146-156 (2017). doi:10.1109/MRA.2016.2636359. Published by IEEE, New York, NY.
- Published
- 2017
- Full Text
- View/download PDF
185. Human-Centric Partitioning of the Environment
- Author
- Nils Bore, Patric Jensfelt, John Folkesson, and Hakan Karaoguz
- Subjects
- Computer science, Computer vision, Perception, Object detection, Activity recognition, Human-robot interaction, Cluster analysis, Artificial intelligence
- Abstract
In this paper, we present an object based approach for human-centric partitioning of the environment. Our approach for determining the human-centric regions is to detect the objects that are commonly associated with frequent human presence. In order to detect these objects, we employ state of the art perception techniques. The detected objects are stored with their spatio-temporal information in the robot's memory to be later used for generating the regions. The advantages of our method are that it is autonomous, requires only a small set of perceptual data and does not even require people to be present while generating the regions. The generated regions are validated using a 1-month dataset collected in an indoor office environment. The experimental results show that although a small set of perceptual data is used, the regions are generated at densely occupied locations.
- Published
- 2017
186. A predictor for operator input for time-delayed teleoperation
- Author
- Patric Jensfelt and Christian Smith
- Subjects
- Engineering, Control theory, Teleoperation, Visual servoing, Remote operation, Remote control
- Abstract
In this paper we describe a method for bridging internet time delays in a free motion type teleoperation scenario in an unmodeled remote environment with video feedback. The method proposed uses minimum jerk motion models to predict the input from the user a time into the future that is equivalent to the round-trip communication delay. The predictions are then used to control a remote robot. Thus, the operator can in effect observe the resulting motion of the remote robot with virtually no time-delay, even in the presence of a delay on the physical communications channel. We present results from a visually guided teleoperated line tracing experiment with 100 ms round-trip delays, where we show that the proposed method makes a significant performance improvement for teleoperation with delays corresponding to intercontinental distances. (See the sketch after this record.)
- Published
- 2010
- Full Text
- View/download PDF
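A minimal sketch of the minimum-jerk prediction idea: evaluate the quintic minimum-jerk position profile one round-trip delay ahead and command the remote robot with the predicted input. Treating the motion's start, goal, and duration as known is an assumed simplification; the paper estimates the model from the operator's ongoing input.

    def min_jerk(x0, xf, T, t):
        """Minimum-jerk position profile from x0 to xf over duration T, at time t."""
        tau = min(max(t / T, 0.0), 1.0)
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth blend, zero vel/acc at ends
        return x0 + (xf - x0) * s

    DELAY = 0.1  # 100 ms round-trip delay, as in the experiments

    # Send the robot the input predicted one delay ahead, so the operator sees
    # the remote motion with virtually no apparent time delay.
    t_now = 0.35
    predicted_input = min_jerk(x0=0.0, xf=0.2, T=1.0, t=t_now + DELAY)
    print(round(predicted_input, 4))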
187. Autonomous learning of object models on a mobile robot
- Author
- Michael Zillich, Nick Hawes, Patric Jensfelt, Thomas Faulhammer, John Folkesson, Markus Vincze, Rares Ambrus, and Christopher Burbridge
- Subjects
- Computer science, Robotics, Mobile robot, Robot learning, Object model, Computer vision, Artificial intelligence
- Abstract
In this article we present and evaluate a system which allows a mobile robot to autonomously detect, model and re-recognize objects in everyday environments. Whilst other systems have demonstrated one of these elements, to our knowledge we present the first system which is capable of doing all of these things, all without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.
- Published
- 2016
188. Object detection and mapping for service robot tasks
- Author
- Danica Kragic, Staffan Ekvall, and Patric Jensfelt
- Subjects
- Computer science, Computer vision, Service robot, Mobile robot, Simultaneous localization and mapping, Object detection, Metric map, Artificial intelligence
- Abstract
The problem studied in this paper is a mobile robot that autonomously navigates in a domestic environment, builds a map as it moves along and localizes its position in it. In addition, the robot detects predefined objects, estimates their position in the environment and integrates this with the localization module to automatically put the objects in the generated map. Thus, we demonstrate one of the possible strategies for the integration of spatial and semantic knowledge in a service robot scenario where a simultaneous localization and mapping (SLAM) and object detection and recognition system work in synergy to provide a richer representation of the environment than it would be possible with either of the methods alone. Most SLAM systems build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as point and lines. The novelty is the augmentation of this process with an object-recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve a certain object from a certain room. We present the results of map building and an extensive evaluation of the object detection algorithm performed in an indoor setting.
- Published
- 2007
- Full Text
- View/download PDF
189. Unsupervised learning of spatial-temporal models of objects in a long-term autonomy scenario
- Author
- Johan Ekekrantz, Patric Jensfelt, John Folkesson, and Rares Ambrus
- Subjects
- Computer science, Computer vision, Mobile robot, Unsupervised learning, Feature extraction, Graphical model, Cluster analysis, Change detection, Artificial intelligence
- Abstract
We present a novel method for clustering segmented dynamic parts of indoor RGB-D scenes across repeated observations by performing an analysis of their spatial-temporal distributions. We segment areas of interest in the scene using scene differencing for change detection. We extend the Meta-Room method and evaluate the performance on a complex dataset acquired autonomously by a mobile robot over a period of 30 days. We use an initial clustering method to group the segmented parts based on appearance and shape, and we further combine the clusters we obtain by analyzing their spatial-temporal behaviors. We show that using the spatial-temporal information further increases the matching accuracy.
- Published
- 2015
- Full Text
- View/download PDF
190. Retrieval of Arbitrary 3D Objects From Robot Observations
- Author
- John Folkesson, Nils Bore, and Patric Jensfelt
- Subjects
- Computer science, Robotics, Computer vision, Mobile robots, Point clouds, Feature extraction, Ranking, Mapping, Retrieval
- Abstract
We have studied the problem of retrieval of arbitrary object instances from a large point cloud data set. The context is autonomous robots operating for long periods of time, weeks up to months, and regularly saving point cloud data. The ever growing collection of data is stored in a way that allows ranking candidate examples of any query object, given in the form of a single view point cloud, without the need to access the original data. The top ranked ones can then be compared in a second phase using the point clouds themselves. Our method does not assume that the point clouds are segmented or that the objects to be queried are known ahead of time. This means that we are able to represent the entire environment but it also poses problems for retrieval. To overcome this our approach learns from each actual query to improve search results in terms of the ranking. This learning is automatic and based only on the queries. We demonstrate our system on data collected autonomously by a robot operating over 13 days in our building. Comparisons with other techniques and several variations of our method are shown.
- Published
- 2015
191. KTH-3D-TOTAL: A 3D dataset for discovering spatial structures for long-term autonomous learning
- Author
- Rares Ambrus, Mayank Kumar Jha, Akshaya Thippur, Nishan Bhavanishankar Shetty, Janardhan Haryadi Ramesh, Malepati Bala Siva Sai Akhil, Patric Jensfelt, John Folkesson, Gaurav Agrawal, and Adria Gallart del Burgo
- Subjects
- Engineering, Machine learning, Data modeling, Activity recognition, Spatial relation, Anomaly detection, Artificial intelligence
- Abstract
Long-term autonomous learning of human environments entails modelling and generalizing over distinct variations in: object instances in different scenes, and different scenes with respect to space and time. It is crucial for the robot to recognize the structure and context in spatial arrangements and exploit these to learn models which capture the essence of these distinct variations. Table-tops possess a typical structure repeatedly seen in human environments and are identified by characteristics of being personal spaces of diverse functionalities and dynamically changing due to human interactions. In this paper, we present a 3D dataset of 20 office table-tops manually observed and scanned 3 times a day as regularly as possible over 19 days (461 scenes) and subsequently manually annotated with 18 different object classes, including multiple instances. We analyse the dataset to discover spatial structures and patterns in their variations. The dataset can, for example, be used to study the spatial relations between objects and long-term environment models for applications such as activity recognition, context and functionality estimation and anomaly detection.
- Published
- 2014
- Full Text
- View/download PDF
192. Meta-rooms: Building and Maintaining Long Term Spatial Models in a Dynamic World
- Author
- Patric Jensfelt, Rares Ambrus, Nils Bore, and John Folkesson
- Subjects
- Engineering, Robotics, Autonomous robot, Spatial models, Dynamic objects, Office environments, Artificial intelligence
- Abstract
We present a novel method for re-creating the static structure of cluttered office environments, which we define as the "meta-room", from multiple observations collected by an autonomous robot equipped with an RGB-D depth camera over extended periods of time. Our method works directly with point clusters by identifying what has changed from one observation to the next, removing the dynamic elements and at the same time adding previously occluded objects to reconstruct the underlying static structure as accurately as possible. The process of constructing the meta-rooms is iterative and it is designed to incorporate new data as it becomes available, as well as to be robust to environment changes. The latest estimate of the meta-room is used to differentiate and extract clusters of dynamic objects from observations. In addition, we present a method for re-identifying the extracted dynamic objects across observations thus mapping their spatial behaviour over extended periods of time.
- Published
- 2014
193. Modeling motion patterns of dynamic objects by IOHMM
- Author
- John Folkesson, Patric Jensfelt, Rares Ambrus, and Zhan Wang
- Subjects
- Computer science, Robotics, Occupancy grid mapping, Hidden Markov model, Computer vision, Artificial intelligence
- Abstract
This paper presents a novel approach to model motion patterns of dynamic objects, such as people and vehicles, in the environment with the occupancy grid map representation. Corresponding to the ever-changing nature of the motion pattern of dynamic objects, we model each occupancy grid cell by an IOHMM, which is an inhomogeneous variant of the HMM. This distinguishes our work from existing methods which use the conventional HMM, assuming motion evolving according to a stationary process. By introducing observations of neighbor cells in the previous time step as input of the IOHMM, the transition probabilities in our model are dependent on the occurrence of events in the cell's neighborhood. This enables our method to model the spatial correlation of dynamics across cells. A sequence processing example is used to illustrate the advantage of our model over conventional HMM based methods. Results from the experiments in an office corridor environment demonstrate that our method is capable of capturing the dynamics of such human living environments. (See the sketch after this record.)
- Published
- 2014
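A minimal sketch contrasting the IOHMM above with a plain HMM: the cell's transition probabilities depend on an input, here whether any neighboring cell was occupied at the previous time step. The two-state model and the probability values are made-up illustrative assumptions.

    # P(occupied now | previous state, neighbor input); values are invented.
    P_OCC = {
        ("free", False): 0.01,  # isolated free cell rarely becomes occupied
        ("free", True):  0.40,  # motion tends to flow in from occupied neighbors
        ("occ",  False): 0.30,  # occupied cell with an empty neighborhood empties
        ("occ",  True):  0.70,
    }

    def predict(belief_occ, neighbor_occupied):
        """Propagate a cell's occupancy belief one step, given the neighbor input."""
        u = neighbor_occupied
        return belief_occ * P_OCC[("occ", u)] + (1 - belief_occ) * P_OCC[("free", u)]

    b = 0.1
    for u in [True, True, False]:  # e.g., a person moves through the neighborhood
        b = predict(b, u)
        print(round(b, 3))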
194. Exploiting and modeling local 3D structure for predicting object locations
- Author
- Patric Jensfelt and Alper Aydemir
- Subjects
- Computer science, Machine learning, Computer vision, Object detection, Context model, Object model, Artificial intelligence
- Abstract
In this paper, we argue that there is a strong correlation between local 3D structure and object placement in everyday scenes. We call this the 3D context of the object. In previous work, this is typically hand-coded and limited to flat horizontal surfaces. In contrast, we propose to use a more general model for 3D context and learn the relationship between 3D context and different object classes. This way, we can capture more complex 3D contexts without implementing specialized routines. We present extensive experiments with both qualitative and quantitative evaluations of our method for different object classes. We show that our method can be used in conjunction with an object detection algorithm to reduce the rate of false positives. Our results support that the 3D structure surrounding objects in everyday scenes is a strong indicator of their placement and that it can give significant improvements in the performance of, for example, an object detection system. For evaluation, we have collected a large dataset of Microsoft Kinect frames from five different locations, which we also make publicly available.
- Published
- 2012
- Full Text
- View/download PDF
195. Large-scale semantic mapping and reasoning with heterogeneous modalities
- Author
- Patric Jensfelt and Andrzej Pronobis
- Subjects
- Computer science, Machine learning, Knowledge representation and reasoning, Semantic mapping, Probabilistic logic, Graphical model, Artificial intelligence
- Abstract
This paper presents a probabilistic framework combining heterogeneous, uncertain, information such as object observations, shape, size, appearance of rooms and human input for semantic mapping. It abstracts multi-modal sensory information and integrates it with conceptual common-sense knowledge in a fully probabilistic fashion. It relies on the concept of spatial properties which make the semantic map more descriptive, and the system more scalable and better adapted for human interaction. A probabilistic graphical model, a chain graph, is used to represent the conceptual information and perform spatial reasoning. Experimental results from online system tests in a large unstructured office environment highlight the system's ability to infer semantic room categories, predict existence of objects and values of other spatial properties as well as reason about unexplored space.
- Published
- 2012
- Full Text
- View/download PDF
196. What can we learn from 38,000 rooms? Reasoning about unexplored space in indoor environments
- Author
- Alper Aydemir, Patric Jensfelt, and John Folkesson
- Subjects
- Engineering, Robotics, Mobile robot, Graph theory, Floor plan, Machine learning, Artificial intelligence
- Abstract
Many robotics tasks require the robot to predict what lies in the unexplored part of the environment. Although much work focuses on building autonomous robots that operate indoors, indoor environments are neither well understood nor analyzed enough in the literature. In this paper, we propose and compare two methods for predicting both the topology and the categories of rooms given a partial map. The methods are motivated by the analysis of two large annotated floor plan data sets corresponding to the buildings of the MIT and KTH campuses. In particular, utilizing graph theory, we discover that local complexity remains unchanged for growing global complexity in real-world indoor environments, a property which we exploit. In total, we analyze 197 buildings, 940 floors and over 38,000 real-world rooms. Such a large set of indoor places has not been investigated before in the previous work. We provide extensive experimental results and show the degree of transferability of spatial knowledge between two geographically distinct locations. We also contribute the KTH data set and the software tools to work with it.
- Published
- 2012
197. Functional topological relations for qualitative spatial representation
- Author
- Andrzej Pronobis, Patric Jensfelt, and Kristoffer Sjöö
- Subjects
- Computer science, Robotics, Spatial relations, Knowledge representation and reasoning, Qualitative reasoning, Axiomatic system, Mobile robot, Artificial intelligence
- Abstract
In this paper, a framework is proposed for representing knowledge about 3-D space in terms of the functional support and containment relationships, corresponding approximately to the prepositions "on" and "in". A perceptual model is presented which allows for appraising these qualitative relations given the geometries of objects; also, an axiomatic system for reasoning with the relations is put forward. We implement the system on a mobile robot and show how it can use uncertain visual input to infer a coherent qualitative evaluation of a scene, in terms of these functional relations.
- Published
- 2011
- Full Text
- View/download PDF
198. Home alone: Autonomous extension and correction of spatial representations
- Author
- Jack Hargreaves, Nick Hawes, Marc Hanheide, Ben Page, Patric Jensfelt, and Hendrik Zender
- Subjects
- Computer science, Mobile robot, Embodied cognition, Non-monotonic logic, Artificial intelligence
- Abstract
In this paper we present an account of the problems faced by a mobile robot given an incomplete tour of an unknown environment, and introduce a collection of techniques which can generate successful behaviour even in the presence of such problems. Underlying our approach is the principle that an autonomous system must be motivated to act to gather new knowledge, and to validate and correct existing knowledge. This principle is embodied in Dora, a mobile robot which features the aforementioned techniques: shared representations, non-monotonic reasoning, and goal generation and management. To demonstrate how well this collection of techniques work in real-world situations we present a comprehensive analysis of the Dora system's performance over multiple tours in an indoor environment. In this analysis Dora successfully completed 18 of 21 attempted runs, with all but 3 of these successes requiring one or more of the integrated techniques to recover from problems.
- Published
- 2011
- Full Text
- View/download PDF
199. Search in the real world: Active visual object search based on spatial relations
- Author
- Patric Jensfelt, John Folkesson, Alper Aydemir, Andrzej Pronobis, and Kristoffer Sjöö
- Subjects
- Computer science, Robotics, Service robots, Object detection, Spatial relation, Semantic mapping, Search problems, Artificial intelligence
- Abstract
Objects are integral to a robot's understanding of space. Various tasks such as semantic mapping, pick-and-carry missions or manipulation involve interaction with objects. Previous work in the field largely builds on the assumption that the object in question starts out within the ready sensory reach of the robot. In this work we aim to relax this assumption by providing the means to perform robust and large-scale active visual object search. Presenting spatial relations that describe topological relationships between objects, we then show how to use these to create potential search actions. We introduce a method for efficiently selecting search strategies given probabilities for those relations. Finally we perform experiments to verify the feasibility of our approach.
- Published
- 2011
- Full Text
- View/download PDF
200. Global robot localization with random finite set statistics
- Author
- Patric Jensfelt and Adrian N. Bishop
- Subjects
- Mobile robot, Monte Carlo localization, Statistical model, Finite set, Machine learning, Artificial intelligence
- Abstract
We re-examine the problem of global localization of a robot using a rigorous Bayesian framework based on the idea of random finite sets. Random sets allow us to naturally develop a complete model of the underlying problem accounting for the statistics of missed detections and of spurious/erroneously detected (potentially unmodeled) features along with the statistical models of robot hypothesis disappearance and appearance. In addition, no explicit data association is required which alleviates one of the more difficult sub-problems. Following the derivation of the Bayesian solution, we outline its first-order statistical moment approximation, the so-called probability hypothesis density filter. We present a statistical estimation algorithm for the number of potential robot hypotheses consistent with the accumulated evidence and we show how such an estimate can be used to aid in re-localization of kidnapped robots. We discuss the advantages of the random set approach and examine a number of illustrative simulations. (See the sketch after this record.)
- Published
- 2010
- Full Text
- View/download PDF
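One concrete piece of the approach above admits a one-line illustration: with a weighted-particle approximation of the probability hypothesis density (PHD), the expected number of robot pose hypotheses is the sum of the particle weights, and a change in that estimate can help detect a kidnapped robot. The weights below are made-up example values.

    weights = [0.21, 0.18, 0.33, 0.90, 0.40]  # assumed particle weights for the PHD
    expected_hypotheses = sum(weights)        # integral of the intensity function
    print(round(expected_hypotheses, 2))      # about two plausible pose hypotheses

    if round(expected_hypotheses) > 1:
        print("multiple hypotheses supported; trigger re-localization")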