116 results for "Henrik I. Christensen"
Search Results
2. Auto-calibration Method Using Stop Signs for Urban Autonomous Driving Applications
- Author
-
Yunhai Han, Henrik I. Christensen, Yuhan Liu, and David Paz
- Subjects
Computer science, Computer Vision and Pattern Recognition (cs.CV), Robotics (cs.RO), Calibration, Semantics, Geometry estimation, Robustness, Convergence, Computer vision, Artificial intelligence, Auto-calibration
- Abstract
Calibration of sensors is fundamental to robust performance for intelligent vehicles. In natural environments, disturbances can easily challenge calibration. One possibility is to use natural objects of known shape to recalibrate sensors. An approach based on recognition of traffic signs, such as stop signs, and their use for recalibration of cameras is presented. The approach is based on detection, geometry estimation, calibration, and recursive updating. Results from natural environments are presented that clearly show convergence and improved performance.
Comment: 7 pages, 7 figures, 1 table. Accepted to ICRA 2021.
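The recursive-updating step the abstract mentions can be pictured as a scalar Kalman-style filter that fuses one calibration estimate per detection. The following is a minimal sketch under that assumption, not the authors' implementation; the focal-length value and noise levels are hypothetical.

```python
import numpy as np

def recursive_update(estimate, variance, measurement, meas_variance):
    """One scalar Kalman-style update: fuse a new per-detection
    calibration measurement into the running estimate."""
    gain = variance / (variance + meas_variance)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

# Fuse noisy focal-length estimates (hypothetical values) derived from
# successive stop-sign detections; the estimate converges over time.
rng = np.random.default_rng(0)
true_f = 700.0
f_hat, var = 650.0, 100.0 ** 2            # rough initial calibration
for _ in range(200):
    z = true_f + rng.normal(0.0, 20.0)    # one detection-derived estimate
    f_hat, var = recursive_update(f_hat, var, z, 20.0 ** 2)
```

Each sign detection thus tightens the calibration belief, which matches the convergence behavior the abstract reports.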
- Published
- 2020
3. Computer Vision Systems: 13th International Conference, ICVS 2021, Virtual Event, September 22-24, 2021, Proceedings
- Author
-
Markus Vincze, Timothy Patten, Henrik I. Christensen, Lazaros Nalpantidis, and Ming Liu
- Subjects
- Computer vision, Artificial intelligence, Computer engineering, Computer networks, Pattern recognition systems
- Abstract
This book constitutes the refereed proceedings of the 13th International Conference on Computer Vision Systems, ICVS 2021, held in September 2021. Due to the COVID-19 pandemic, the conference was held virtually. The 20 papers presented were carefully reviewed and selected from 29 submissions. They cover a broad spectrum of issues falling under the wider scope of computer vision in real-world applications, including, among others, vision systems for robotics, autonomous vehicles, agriculture and medicine. In this volume, the papers are organized into the following sections: attention systems; classification and detection; semantic interpretation; video and motion analysis; and computer vision systems in agriculture.
- Published
- 2021
4. RGB-D object pose estimation in unstructured environments
- Author
-
Changhyun Choi and Henrik I. Christensen
- Subjects
Computer science, Geometric shape, 3D pose estimation, Hough transform, Feature (computer vision), RGB color model, Segmentation, Computer vision, Artificial intelligence, Pose, Software
- Abstract
We present an object pose estimation approach exploiting both geometric depth and photometric color information available from an RGB-D sensor. In contrast to various efforts relying on object segmentation with a known background structure, our approach does not depend on segmentation and thus exhibits superior performance in unstructured environments. Inspired by a voting-based approach employing an oriented point pair feature, we present a voting-based approach which further incorporates color information from the RGB-D sensor and which exploits the power of modern parallel computing architectures. The proposed approach is extensively evaluated against three state-of-the-art approaches on both synthetic and real datasets, and it outperforms the other approaches in terms of both computation time and accuracy.
Highlights: a point pair feature describing both geometric shape and photometric color and its application to voting-based 6-DOF object pose estimation; parallel algorithms significantly accelerating the voting process on modern GPU architectures; effectiveness in unstructured scenes in which the prevailing segmentation-based approaches may not be applicable.
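The oriented point pair feature underlying this family of methods is a four-tuple of a distance and three angles. A minimal sketch of the geometric part follows; the paper's variant additionally appends color information (e.g. a color difference between the two points), which is omitted here.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Classic oriented point pair feature: the distance between two
    surface points and three angles relating their normals and the
    difference vector. Color terms from the paper's variant omitted."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    du = d / dist                                  # unit difference vector
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return np.array([dist, ang(n1, du), ang(n2, du), ang(n1, n2)])

# Two points one unit apart, both with normals along +z: the normals
# are perpendicular to the difference vector and parallel to each other.
f = point_pair_feature(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                       np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

In a voting scheme, such features are quantized into a hash table so that scene pairs can vote for model poses.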
- Published
- 2016
- Full Text
- View/download PDF
5. Custom soft robotic gripper sensor skins for haptic object visualization
- Author
-
Dylan Drotman, Caleb Christianson, Michael T. Tolley, Zhaoyuan Huo, Benjamin Shih, Henrik I. Christensen, and Ruffin White
- Subjects
Computer science, Soft robotics, Modular design, Grippers, Robot, Computer vision, Artificial intelligence, Actuator, Haptic technology
- Abstract
Robots are becoming increasingly prevalent in our society in forms where they assist or interact with humans in a variety of environments, and thus they must have the ability to sense and detect objects by touch. An ongoing challenge for soft robots has been incorporating flexible sensors that can recognize complex motions and close the loop for tactile sensing. We present sensor skins that enable haptic object visualization when integrated on a soft robotic gripper that can twist an object. First, we investigate how the design of the actuator modules impacts bend angle and motion. Each soft finger is molded using a silicone elastomer and consists of three pneumatic chambers which can be inflated independently to achieve a range of complex motions. Three fingers are combined to form a soft robotic gripper. Then, we manufacture and attach modular, flexible sensory skins on each finger to measure deformation and contact. These sensor measurements are used in conjunction with an analytical model to construct 2D and 3D tactile object models. Our results are a step towards soft robot grippers capable of a complex range of motions and proprioception, which will help future robots better understand the environments with which they interact and has the potential to increase physical safety in human-robot interaction. Please see the accompanying video for additional details.
- Published
- 2017
- Full Text
- View/download PDF
6. Multi Robot Object-Based SLAM
- Author
-
John G. Rogers, Luca Carlone, Siddharth Choudhary, Zhen Liu, Carlos Nieto, Frank Dellaert, and Henrik I. Christensen
- Subjects
Computer science, Point cloud, Rendezvous, Object detection, Robot, Computer vision, Artificial intelligence, Noise, Pose, Information exchange
- Abstract
We propose a multi-robot SLAM approach that uses 3D objects as landmarks for localization and mapping. The approach is fully distributed in that the robots only communicate during rendezvous and there is no centralized server gathering the data. Moreover, it leverages local computation at each robot (e.g., object detection and object pose estimation) to reduce the communication burden. We show that object-based representations reduce the memory requirements and information exchange among robots, compared to point-cloud-based representations; this enables operation in severely bandwidth-constrained scenarios. We test the approach in simulations and field tests, demonstrating its advantages over related techniques: our approach is as accurate as a centralized method, scales well to large teams, and is resistant to noise.
- Published
- 2017
- Full Text
- View/download PDF
7. Active planning based extrinsic calibration of exteroceptive sensors in unknown environments
- Author
-
Carlos Nieto, Varun Murali, Siddharth Choudhary, and Henrik I. Christensen
- Subjects
Engineering, Robot calibration, Extrinsic calibration, Simultaneous localization and mapping, Human error, Trajectory, Calibration, Robot, Computer vision, Artificial intelligence
- Abstract
Existing Simultaneous Localization and Mapping systems require an extensive manual pre-calibration process. Non-manual calibration procedures use manipulators to create known patterns in order to estimate the unknown calibration. Calibration is often time-consuming and involves humans performing repetitive tasks such as aligning a known calibration target at different poses with respect to the sensor. We propose an algorithm that plans a trajectory which actively reduces the uncertainty of the robot's calibration given a rough initial calibration estimate. Calibration is performed autonomously in a previously unknown environment by maintaining the belief over landmarks, poses, and the calibration parameters. We present experimental results to demonstrate the approach's ability to autonomously calibrate the exteroceptive sensor in simulated and real environments. We show that even a greedy approach can reduce the effort needed to perform calibration every time the robot is reconfigured for autonomous tasks, and mitigates the risk of human error being introduced into the calibration.
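The greedy variant mentioned at the end can be sketched as repeatedly choosing the candidate motion with the lowest predicted post-measurement calibration uncertainty. The motion names and scores below are purely illustrative stand-ins for the paper's belief-based predictions.

```python
def pick_next_motion(candidates, predicted_uncertainty):
    """Greedy active-calibration step: among candidate robot motions,
    pick the one whose predicted calibration uncertainty (e.g. trace
    of the calibration covariance after the simulated measurement)
    is lowest. The scoring function here is a hypothetical stand-in."""
    return min(candidates, key=predicted_uncertainty)

moves = ["turn_left", "forward", "arc_right"]
# Hypothetical predicted uncertainty after executing each motion:
pred = {"turn_left": 0.8, "forward": 1.2, "arc_right": 0.5}.get
best = pick_next_motion(moves, pred)
```

A full planner would roll this choice forward over a horizon; the greedy step is the base case the abstract says already helps.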
- Published
- 2016
- Full Text
- View/download PDF
8. Robust 3D visual tracking using particle filtering on the special Euclidean group: A combined approach of keypoint and edge features
- Author
-
Henrik I. Christensen and Changhyun Choi
- Subjects
Applied Mathematics, Mechanical Engineering, Euclidean group, Visual object recognition, Initialization, Pattern recognition, Autoregressive model, Robustness, Particle filter, Computer vision, Artificial intelligence, Pose, Software, Mathematics
- Abstract
We present a 3D model-based visual tracking approach using edge and keypoint features in a particle filtering framework. Recently, particle-filtering-based approaches have been proposed to integrate multiple pose hypotheses and have shown good performance, but most of this work has assumed that an initial pose is given. To ameliorate this limitation, we employ keypoint features for initialization of the filter. Given 2D-3D keypoint correspondences, we randomly choose sets of minimum correspondences to calculate a set of possible pose hypotheses. Based on the inlier ratio of correspondences, poses are drawn to initialize particles. After the initialization, edge points are employed to estimate inter-frame motions. While we follow standard edge-based tracking, we perform a refinement process to improve the edge correspondences between sampled model edge points and image edge points. For better tracking performance, we employ first-order autoregressive state dynamics, which propagate particles more effectively than Gaussian random walk models. The proposed system re-initializes particles by itself when the tracked object goes out of the field of view or is occluded. The robustness and accuracy of our approach are demonstrated using comparative experiments on synthetic and real image sequences.
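The contrast between first-order autoregressive dynamics and a Gaussian random walk can be shown in a few lines. This sketch simplifies the pose to a translation vector rather than an element of SE(3), so it only illustrates the propagation idea, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate_ar1(particles, prev_velocity, a=0.5, noise=0.01):
    """First-order autoregressive dynamics: push every particle along
    a fraction of the previously estimated inter-frame motion before
    diffusing, unlike a Gaussian random walk which only diffuses.
    Pose is simplified to a 3-vector translation here."""
    drift = a * prev_velocity
    return particles + drift + rng.normal(0.0, noise, particles.shape)

particles = np.zeros((100, 3))              # particle cloud at the origin
velocity = np.array([0.1, 0.0, 0.0])        # last estimated motion
moved = propagate_ar1(particles, velocity)  # cloud shifts toward +x
```

Because the drift term anticipates continued motion, fewer particles are wasted behind a fast-moving object.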
- Published
- 2012
- Full Text
- View/download PDF
9. Detecting Region Transitions for Human-Augmented Mapping
- Author
-
Henrik I. Christensen and Elin Anna Topp
- Subjects
Knowledge representation and reasoning, Computer science, Human-robot interaction, Visualization, Semantic mapping, Feature (computer vision), Segmentation, Computer vision, Artificial intelligence, User interface, Representation
- Abstract
In this paper, we describe a concise method for the feature-based representation of regions in an indoor environment and show how it can also be applied for door-passage-independent detection of transitions between regions to improve communication with a human user.
- Published
- 2010
- Full Text
- View/download PDF
10. From sensors to human spatial concepts: An annotated data set
- Author
-
Z. Zivkovic, Ben Kröse, Elin Anna Topp, O. Booij, Henrik I. Christensen, and Amsterdam Machine Learning lab (IVI, FNWI)
- Subjects
Computer science, Spatial database, Mobile robot, Sonar, Human-robot interaction, Data set, Odometry, Robot, Computer vision, Artificial intelligence
- Abstract
An annotated data set is presented that is intended to help researchers develop, evaluate, and compare approaches in robotics for building space representations appropriate for communicating with humans. The data consist of omnidirectional images, laser range scans, sonar readings, and robot odometry. A set of base-level human spatial concepts is used to annotate the data.
- Published
- 2008
11. Graphical SLAM for Outdoor Applications
- Author
-
John Folkesson and Henrik I. Christensen
- Subjects
Engineering, Control and Systems Engineering, Computer graphics, Computer vision, Robotics, Artificial intelligence
- Abstract
Application of SLAM outdoors is challenged by complexity, handling of non-linearities and flexible integration of a diverse set of features. A graphical approach to SLAM is introduced that enables ...
- Published
- 2007
- Full Text
- View/download PDF
12. Online Camera Registration for Robot Manipulation
- Author
-
Heni Ben Amor, Neil T. Dantam, Mike Stilman, and Henrik I. Christensen
- Subjects
Computer science, Visual servoing, Tracking, Camera auto-calibration, Camera resectioning, Dual quaternion, Smart camera, Robot, Computer vision, Artificial intelligence, Software
- Abstract
We demonstrate that millimeter-level manipulation accuracy can be achieved without the static camera registration typically required for visual servoing. We register the camera online, converging in seconds, by visually tracking features on the robot and filtering the result. This online registration handles cases such as perturbed camera positions, wear and tear on camera mounts, and even a camera held by a human. We implement the approach on a Schunk LWA4 manipulator and Logitech C920 camera, servoing to target and pre-grasp configurations. Our filtering software is available under a permissive license (Software available at http://github.com/golems/reflex).
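The "filtering the result" step can be illustrated with a simple exponential filter over per-frame camera-pose estimates obtained from tracked robot features. This is a heavily simplified sketch (translation only, hypothetical values); the released reflex code, which also filters rotation, is the authoritative implementation.

```python
import numpy as np

def filter_registration(current, measurement, alpha=0.1):
    """Exponentially filter noisy per-frame camera-pose estimates
    (translation only here). Each visual estimate derived from
    tracked robot features nudges the registration toward it."""
    return (1.0 - alpha) * current + alpha * measurement

rng = np.random.default_rng(2)
true_t = np.array([1.0, 2.0, 0.5])        # hypothetical camera position
est = np.zeros(3)                         # start with no registration
for _ in range(300):
    z = true_t + rng.normal(0.0, 0.05, 3) # one tracked-feature estimate
    est = filter_registration(est, z)
```

Because the filter keeps updating, a perturbed or hand-held camera simply re-converges within a few seconds of frames, as the abstract describes.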
- Published
- 2015
- Full Text
- View/download PDF
13. Exploiting symmetries and extrusions for grasping household objects
- Author
-
Mike Stilman, Ana Huaman Quispe, Benoit Milville, Marco Antonio Gutiérrez, Henrik I. Christensen, Heni Ben Amor, and Can Erdogan
- Subjects
Identification, Point cloud, Symmetry, Robot, Computer vision, Artificial intelligence, Humanoid robot, Mathematics
- Abstract
In this paper we present an approach for creating complete shape representations from a single depth image for robot grasping. We introduce algorithms for completing partial point clouds based on the analysis of symmetry and extrusion patterns in observed shapes. Identified patterns are used to generate a complete mesh of the object, which is, in turn, used for grasp planning. The approach allows robots to predict the shape of objects and include invisible regions into the grasp planning step. We show that the identification of shape patterns, such as extrusions, can be used for fast generation and optimization of grasps. Finally, we present experiments performed with our humanoid robot executing pick-up tasks based on single depth images and discuss the applications and shortcomings of our approach.
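Completing a partial point cloud from a detected symmetry plane amounts to reflecting the observed points about that plane and merging. The following is a minimal sketch of that single step; the plane and points are toy values, and the paper's extrusion-based completion is not covered.

```python
import numpy as np

def complete_by_symmetry(points, plane_point, plane_normal):
    """Reflect an observed partial point cloud about a detected
    symmetry plane and merge, predicting the unobserved back side."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to plane
    mirrored = points - 2.0 * np.outer(d, n)
    return np.vstack([points, mirrored])

# Points from one visible face, mirrored about the x = 0 plane
# (a toy stand-in for the symmetry analysis in the paper).
half = np.array([[0.5, 0.0, 0.0], [0.5, 1.0, 0.0]])
full = complete_by_symmetry(half, np.zeros(3), np.array([1.0, 0.0, 0.0]))
```

The completed set can then be meshed and handed to a grasp planner, including the previously invisible regions.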
- Published
- 2015
- Full Text
- View/download PDF
14. Information-based reduced landmark SLAM
- Author
-
Frank Dellaert, Henrik I. Christensen, Siddharth Choudhary, and Vadim Indelman
- Subjects
Landmark, Linear programming, Computer science, Trajectory, Graph, Robot, Computer vision, Artificial intelligence, Simultaneous localization and mapping
- Abstract
In this paper, we present an information-based approach to select a reduced number of landmarks and poses for a robot to localize itself and simultaneously build an accurate map. We develop an information-theoretic algorithm to efficiently reduce the number of landmarks and poses in a SLAM estimate without compromising the accuracy of the estimated trajectory. We also propose an incremental version of the reduction algorithm which can be used in a SLAM framework, resulting in information-based reduced-landmark SLAM. The results of the reduced-landmark SLAM algorithm are shown on the Victoria Park dataset and a synthetic dataset and are compared with the standard graph SLAM (SAM [6]) algorithm. We demonstrate a reduction of 40-50% in the number of landmarks and around 55% in the number of poses with minimal estimation error compared to the standard SLAM algorithm.
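The score-and-prune idea can be sketched greedily. Here the landmark score is just an observation count, which is only a crude stand-in for the paper's information-theoretic criterion, but it shows the shape of the reduction step.

```python
def reduce_landmarks(observations, keep_ratio=0.5):
    """Greedy landmark pruning: score each landmark by how much it
    contributes (here, simply how many poses observe it, a stand-in
    for an information-theoretic score) and keep the top fraction."""
    scores = {lm: len(poses) for lm, poses in observations.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_ratio))
    return set(ranked[:n_keep])

# Hypothetical landmark -> observing-pose-ids map:
obs = {"tree1": [0, 1, 2, 3], "tree2": [4], "wall": [0, 1], "post": [5]}
kept = reduce_landmarks(obs, keep_ratio=0.5)
```

Dropping low-information landmarks shrinks the factor graph, which is where the reported 40-50% reductions come from.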
- Published
- 2015
- Full Text
- View/download PDF
15. Learning non-holonomic object models for mobile manipulation
- Author
-
Martin Levihn, Jonathan Scholz, Henrik I. Christensen, Charles L. Isbell, and Mike Stilman
- Subjects
Engineering, Holonomic, Mobile manipulator, Mobile robot, Kinematics, Trajectory, Robot, Computer vision, Artificial intelligence
- Abstract
For a mobile manipulator to interact with large everyday objects, such as office tables, it is often important to have dynamic models of these objects. However, as it is infeasible to provide the robot with models for every possible object it may encounter, it is desirable that the robot can identify common object models autonomously. Existing methods for addressing this challenge are limited by being either purely kinematic, or inefficient due to a lack of physical structure. In this paper, we present a physics-based method for estimating the dynamics of common non-holonomic objects using a mobile manipulator, and demonstrate its efficiency compared to existing approaches.
- Published
- 2015
- Full Text
- View/download PDF
16. Vision for robotic object manipulation in domestic settings
- Author
-
Jan-Olof Eklundh, Mårten Björkman, Danica Kragic, and Henrik I. Christensen
- Subjects
Machine vision, Computer science, 3D object recognition, Visual object recognition, 3D pose estimation, Articulated body pose estimation, Peripheral vision, Segmentation, Depth perception, Computer vision, Artificial intelligence, Pose, Software
- Abstract
In this paper, we present a vision system for robotic object manipulation tasks in natural, domestic environments. Given complex fetch-and-carry robot tasks, the issues related to the whole detect-approach-grasp loop are considered. Our vision system integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings. The cues are considered and used in connection to both foveal and peripheral vision to provide depth information, segmentation of the object(s) of interest, object recognition, tracking and pose estimation. One important property of the system is that the step from object recognition to pose estimation is completely automatic combining both appearance and geometric models. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, changing lighting and background conditions.
- Published
- 2005
- Full Text
- View/download PDF
17. Robust Visual Servoing
- Author
-
Danica Kragic and Henrik I. Christensen
- Subjects
Computer science, Applied Mathematics, Mechanical Engineering, Cue integration, Control engineering, Visual servoing, Robustness, Robot, Computer vision, Artificial intelligence, Software
- Abstract
For service robots operating in domestic environments, it is not enough to consider only control-level robustness; it is equally important to consider how image information that serves as input to the control process can be used so as to achieve robust and efficient control. In this paper we present an effort towards the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered; this also defines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation, where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used for fusing the responses from individual cues. The experimental evaluation shows the system performance for three different camera-robot configurations most common for robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough; the complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system where a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the particular part of the system usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can successfully be integrated in a system that is able to recognize and grasp fairly textured, everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment, a living room.
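The voting-based cue fusion described above can be sketched as summing per-cue response maps and taking the strongest cell. The three cues and their 4x4 response maps below are hypothetical toy data, not the paper's actual cue set.

```python
import numpy as np

def fuse_by_voting(cue_responses):
    """Fuse per-cue response maps by plain voting: each cue casts
    support for where it believes the object is, and the cell with
    the most accumulated support wins."""
    votes = np.sum(cue_responses, axis=0)
    return np.unravel_index(np.argmax(votes), votes.shape)

# Three hypothetical 4x4 response maps (e.g. color, motion, edges);
# two cues agree on cell (1, 2), so voting overrides the outlier.
color = np.zeros((4, 4)); color[1, 2] = 1.0
motion = np.zeros((4, 4)); motion[1, 2] = 1.0
edge = np.zeros((4, 4)); edge[3, 0] = 1.0
pos = fuse_by_voting([color, motion, edge])
```

The appeal of voting is exactly this outlier tolerance: no single failing cue can drag the estimate away on its own.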
- Published
- 2003
- Full Text
- View/download PDF
18. Occlusion-Aware Object Localization, Segmentation and Pose Estimation
- Author
-
Henrik I. Christensen, Samarth Brahmbhatt, and Heni Ben Amor
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Segmentation-based object categorization, Scale-space segmentation, Pattern recognition, Image segmentation, 3D pose estimation, Object detection, Active appearance model, Segmentation, Computer vision, Artificial intelligence, Pose, Mathematics
- Abstract
We present a learning approach for localization and segmentation of objects in an image in a manner that is robust to partial occlusion. Our algorithm produces a bounding box around the full extent of the object and labels pixels in the interior that belong to the object. Like existing segmentation-aware detection approaches, we learn an appearance model of the object and consider regions that do not fit this model as potential occlusions. However, in addition to the established use of pairwise potentials for encouraging local consistency, we use higher-order potentials which capture information at the level of image segments. We also propose an efficient loss function that targets both localization and segmentation performance. Our algorithm achieves 13.52% segmentation error and 0.81 area under the false-positive-per-image vs. recall curve on average over the challenging CMU Kitchen Occlusion Dataset. This is a 42.44% decrease in segmentation error and a 16.13% increase in localization performance compared to the state of the art. Finally, we show that the visibility labelling produced by our algorithm can make full 3D pose estimation from a single image robust to occlusion.
Comment: British Machine Vision Conference 2015 (poster).
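The pairwise potentials for local consistency are commonly Potts-style penalties on disagreeing neighbor labels; a minimal sketch of that energy term follows (the paper's higher-order segment-level potentials and learned weights are omitted).

```python
def potts_pairwise(labels, edges, weight=1.0):
    """Pairwise Potts potential used to encourage local label
    consistency: each pair of neighboring pixels pays a penalty
    when their object/occlusion labels disagree."""
    return weight * sum(1.0 for i, j in edges if labels[i] != labels[j])

# Four pixels in a row, labeled object (1) or occluder (0), with
# edges between horizontal neighbors; only one boundary disagrees.
labels = [1, 1, 0, 0]
edges = [(0, 1), (1, 2), (2, 3)]
energy = potts_pairwise(labels, edges)
```

Minimizing the total energy (unary appearance terms plus such pairwise terms) yields labelings that change only at genuine object/occluder boundaries.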
- Published
- 2015
- Full Text
- View/download PDF
19. Visually guided manipulation tasks
- Author
-
Henrik I. Christensen, Lars Petersson, and Danica Kragic
- Subjects
Computer science, General Mathematics, Grasping, Visual servoing, Robustness, Control system, Human visual system model, Robot, Eye tracking, Computer vision, Artificial intelligence, Software
- Abstract
In this paper, we present a framework for a robotic system with the ability to perform real-world manipulation tasks. The complexity of such tasks determines the precision and the degrees of freedom controlled, which also affects the robustness and flexibility of the system. The emphasis is on the development of the visual system, and of visual tracking techniques in particular. Since precise tracking and control of the full pose of the object to be manipulated is usually less robust and computationally expensive, we integrate the vision and control systems, where the objective is to provide the discrete state information required to switch between control modes of different complexity. For this purpose, an integration of simple visual algorithms is used to provide robust input to the control loop. Consensus theory is investigated as the integration strategy. In addition, a general-purpose framework for the integration of processes is used to implement the system on a real robot. The proposed approach results in a system which can robustly locate and grasp a door handle and then open the door.
- Published
- 2002
- Full Text
- View/download PDF
20. Online multi-camera registration for bimanual workspace trajectories
- Author
-
Mike Stilman, Neil T. Dantam, Henrik I. Christensen, and Heni Ben Amor
- Subjects
Computer science, Workspace, Visual servoing, Tracking, Visualization, Trajectory, Robot, Computer vision, Artificial intelligence, Quaternion
- Abstract
We demonstrate that millimeter-level bimanual manipulation accuracy can be achieved without the static camera registration typically required for visual servoing. We register multiple cameras online, converging in seconds, by visually tracking features on the robot hands and filtering the result. Then, we compute and track continuous-velocity relative workspace trajectories for the end-effectors. We demonstrate the approach using Schunk LWA4 and SDH manipulators and Logitech C920 cameras, showing accurate relative positioning for pen-capping and object hand-off tasks. Our filtering software is available under a permissive license.
- Published
- 2014
- Full Text
- View/download PDF
21. Locally optimal navigation among movable obstacles in unknown environments
- Author
-
Martin Levihn, Henrik I. Christensen, and Mike Stilman
- Subjects
Robot kinematics, Computer science, Mobile robot navigation, Computer vision, Artificial intelligence, Humanoid robot, Optimal decision
- Abstract
Mobile manipulators and humanoid robots should be able to utilize their manipulation capabilities to move obstacles out of their way. This concept is captured within the domain of Navigation Among Movable Obstacles (NAMO). While a variety of NAMO algorithms exists, they typically assume full world knowledge. In contrast, real robot systems only have limited sensor range and partial environment knowledge. In this work we present the first NAMO system for unknown environments capable of handling a large set of possible object motions and arbitrary object shapes while guaranteeing optimal decision making for the given knowledge. We demonstrate empirical results with up to 70 obstacles.
- Published
- 2014
- Full Text
- View/download PDF
22. SLAM with object discovery, modeling and mapping
- Author
-
Siddharth Choudhary, Alexander J. B. Trevor, Frank Dellaert, and Henrik I. Christensen
- Subjects
Computer science, Object model, Robotics, Computer vision, Artificial intelligence, Simultaneous localization and mapping
- Published
- 2014
- Full Text
- View/download PDF
23. Object guided autonomous exploration for mobile robots in indoor environments
- Author
-
John G. Rogers, Jeffrey N. Twigg, Varun Murali, Henrik I. Christensen, Siddharth Choudhary, and Carlos Nieto-Granda
- Subjects
Semantic mapping, Computer science, Robustness, Robot, Object model, Mobile robot, Computer vision, Artificial intelligence
- Abstract
Autonomous mobile robotic teams are increasingly used in the exploration of indoor environments. Accurate modeling of the world around the robot, and of the robot's interaction with the world, greatly increases its ability to act autonomously. This paper demonstrates the ability of autonomous robotic teams to find objects of interest. A novel feature of our approach is object discovery and its use to augment the mapping and navigation process. The generated map can then be decomposed into semantic regions while also considering the distance and line of sight to anchor points. The advantage of this approach is that the robot can return a dense map of the region around an object of interest. The robustness of this approach is demonstrated in indoor environments with multiple platforms, with the objective of discovering objects of interest.
- Published
- 2014
- Full Text
- View/download PDF
24. Guidance for human navigation using a vibro-tactile belt interface and robot-like motion planning
- Author
-
E. Akin Sisbot, Henrik I. Christensen, and Akansel Cosgun
- Subjects
Robot kinematics ,Engineering ,Robot calibration ,business.industry ,Mobile robot ,Mobile robot navigation ,Robot control ,Robot ,Computer vision ,Artificial intelligence ,Motion planning ,Guidance system ,business ,Simulation - Abstract
We present a navigation guidance system that guides a human to a goal point with a tactile belt interface and a stationary laser scanner. We make use of the ROS local navigation planner to find an obstacle-free path by modeling the human as a non-holonomic robot. Linear and angular velocities to keep the 'robot' on the path are dynamically calculated, which are then converted to vibrations and applied by the tactile belt. We define directional and rotational vibration patterns and evaluate which ones are suitable for guiding humans. Continuous patterns for representing directions had the least average angular error with 8.4°, whereas rotational patterns were recognized with near-perfect accuracy. All patterns had a reaction time slightly more than 1 second. The person is tracked in laser scans by fitting an ellipse to the torso. The average tracking error is found to be 5 cm in position and 14° in orientation in our experiments with 23 people. The best tracking results were achieved when the person is 2.5 m away and facing either the sensor or the opposite way. The human control system as a whole is successfully demonstrated in a navigation guidance scenario.
- Published
- 2014
- Full Text
- View/download PDF
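The planner-to-belt conversion described in the abstract above can be pictured with a minimal sketch. This is not the paper's implementation; the eight-tactor layout and the function name are assumptions, showing only how a desired walking direction maps to the nearest vibration motor on an evenly spaced belt.

```python
import math

def select_belt_motor(heading_rad, num_motors=8):
    """Map a desired walking direction (radians, 0 = straight ahead,
    positive = counter-clockwise) to the index of the nearest tactor
    on a belt with num_motors evenly spaced vibration motors."""
    spacing = 2 * math.pi / num_motors
    # Wrap the heading into [0, 2*pi) and round to the nearest slot.
    return round((heading_rad % (2 * math.pi)) / spacing) % num_motors
```

A direction of 0 rad activates the front tactor, pi rad the back one; the actual system additionally distinguishes continuous, intermittent, and rotational vibration patterns.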
25. Pose tracking using laser scanning and minimalistic environmental models
- Author
-
Patric Jensfelt and Henrik I. Christensen
- Subjects
Engineering ,Laser scanning ,business.industry ,Feature extraction ,Mobile robot ,Kalman filter ,Control and Systems Engineering ,Robustness (computer science) ,Outlier ,Clutter ,Computer vision ,Pattern matching ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
Keeping track of the position and orientation over time using sensor data, i.e., pose tracking, is a central component in many mobile robot systems. In this paper, we present a Kalman filter-based approach utilizing a minimalistic environmental model. By continuously updating the pose, matching the sensor data to the model is straightforward and outliers can be filtered out effectively by validation gates. The minimalistic model paves the way for a low-complexity algorithm with a high degree of robustness and accuracy. Robustness here refers both to being able to track the pose for a long time and to handling changes and clutter in the environment. This robustness is gained by the minimalistic model only capturing the stable and large-scale features of the environment. The effectiveness of the pose tracking is demonstrated through a number of experiments, including a run of 90 min., which clearly establishes the robustness of the method.
- Published
- 2001
- Full Text
- View/download PDF
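The validation gating mentioned in the abstract above is a standard Kalman-filter technique that can be sketched in a few lines (a minimal stand-in, not the paper's code; the chi-square gate value for 2 DOF at 99% is an assumed choice):

```python
import numpy as np

def validation_gate(innovation, S, gate=9.21):
    """Accept a measurement only if the Mahalanobis distance of its
    innovation (measurement minus prediction) with covariance S falls
    inside the gate; outliers from clutter are rejected.
    9.21 is approximately the chi-square 99% quantile for 2 DOF."""
    d2 = float(innovation.T @ np.linalg.inv(S) @ innovation)
    return d2 <= gate
```

Measurements that fail the gate are simply skipped, which is what lets a minimalistic model tolerate clutter and changes in the environment.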
26. Cue integration for visual servoing
- Author
-
Henrik I. Christensen and Danica Kragic
- Subjects
Computer science ,business.industry ,Tracking system ,Visual servoing ,Sensor fusion ,Fuzzy logic ,Control and Systems Engineering ,Robustness (computer science) ,Robot ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,Robust control ,business ,Sensory cue - Abstract
The robustness and reliability of vision algorithms are, nowadays, the key issues in robotic research and industrial applications. To control a robot in a closed-loop fashion, different tracking systems have been reported in the literature. A common approach to increasing the robustness of a tracking system is the use of different models (a CAD model of the object, a motion model) known a priori. Our hypothesis is that fusion of multiple features facilitates robust detection and tracking of objects in scenes of realistic complexity. A particular application is the estimation of a robot's end-effector position in a sequence of images. The research investigates the following two different approaches to cue integration: 1) voting and 2) fuzzy logic-based fusion. The two approaches have been tested in association with scenes of varying complexity. Experimental results clearly demonstrate that fusion of cues results in a tracking system with robust performance. The robustness is in particular evident for scenes with multiple moving objects and partial occlusion of the tracked object.
- Published
- 2001
- Full Text
- View/download PDF
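The voting approach to cue integration named in the abstract above can be illustrated with a small sketch (an illustrative simplification, not the paper's scheme; the dictionary-of-candidates interface is an assumption): each cue scores candidate target positions, and the weighted tally picks the winner.

```python
def vote_fusion(cue_responses, weights):
    """Fuse per-cue response maps by weighted voting.
    cue_responses: list of dicts {candidate: score in [0, 1]},
    one dict per cue (e.g. color, motion, edges).
    weights: per-cue reliability weights."""
    tally = {}
    for cue, w in zip(cue_responses, weights):
        for candidate, score in cue.items():
            tally[candidate] = tally.get(candidate, 0.0) + w * score
    # The candidate with the highest accumulated weighted vote wins.
    return max(tally, key=tally.get)
```

Because no single cue has veto power, a cue that fails under occlusion or among multiple moving objects is simply outvoted by the others.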
27. Localization and navigation of a mobile robot using natural point landmarks extracted from sonar data
- Author
-
O. Wijk and Henrik I. Christensen
- Subjects
Landmark ,Matching (graph theory) ,business.industry ,Computer science ,General Mathematics ,Mobile robot ,Sonar ,Mobile robot navigation ,Computer Science Applications ,Control and Systems Engineering ,Compass ,Robot ,Computer vision ,Point (geometry) ,Artificial intelligence ,business ,Software - Abstract
In this paper we present a new technique for on-line extraction of natural point landmarks in an indoor environment. The landmarks are filtered out from sonar data taken from a mobile platform. By storing point landmark positions in a reference map, we can achieve absolute robot localization by matching recently collected landmarks against the reference map. The matching procedure also takes advantage of compass readings to increase computational speed and make the performance more robust. The technique is easily extended to also encompass local robot localization, which is used when the robot navigates autonomously. The position accuracy of the robot as well as its capability to navigate using natural landmarks is illustrated in a number of real-world experiments.
- Published
- 2000
- Full Text
- View/download PDF
28. Triangulation-based fusion of sonar data with application in robot pose tracking
- Author
-
O. Wijk and Henrik I. Christensen
- Subjects
Landmark ,Computer science ,business.industry ,Mobile robot ,Kalman filter ,Sensor fusion ,Sonar ,Extended Kalman filter ,Odometry ,Control and Systems Engineering ,Robot ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business - Abstract
In this paper a sensor fusion scheme, called triangulation-based fusion (TBF) of sonar data, is presented. This algorithm delivers stable natural point landmarks, which appear in practically all indoor environments, i.e., vertical edges like door posts, table legs, and so forth. The landmark precision is in most cases within centimeters. The TBF algorithm is implemented as a voting scheme, which groups sonar measurements that are likely to have hit the same object in the environment. The algorithm has low complexity and is sufficiently fast for most mobile robot applications. As a case study, we apply the TBF algorithm to robot pose tracking. The pose tracker is implemented as a classic extended Kalman filter, which uses odometry readings for the prediction step and TBF data for measurement updates. The TBF data is matched to pre-recorded reference maps of landmarks in order to measure the robot pose. In corridors, complementary TBF data measurements from the walls are used to improve the orientation and position estimate. Experiments demonstrate that the pose tracker is robust enough for handling kilometer distances in a large scale indoor environment containing a sufficiently dense landmark set.
- Published
- 2000
- Full Text
- View/download PDF
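The EKF prediction step driven by odometry, as described in the abstract above, has a standard form for a planar pose; the sketch below is a generic textbook version under the assumption of a 2D state [x, y, theta] and a body-frame odometry increment, not the paper's exact motion model.

```python
import numpy as np

def ekf_predict(x, P, u, Q):
    """EKF prediction for a planar pose x = [x, y, theta] using an
    odometry increment u = [dx, dy, dtheta] in the robot frame.
    P: state covariance, Q: additive process noise."""
    c, s = np.cos(x[2]), np.sin(x[2])
    # Rotate the body-frame increment into the world frame.
    x_new = x + np.array([c * u[0] - s * u[1],
                          s * u[0] + c * u[1],
                          u[2]])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1, 0, -s * u[0] - c * u[1]],
                  [0, 1,  c * u[0] - s * u[1]],
                  [0, 0, 1]])
    return x_new, F @ P @ F.T + Q
```

The measurement update would then match TBF point landmarks against the pre-recorded reference map, exactly as in the pose tracker of the previous entry.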
29. [Untitled]
- Author
-
Guido Zunino, K. Lange, Henrik I. Christensen, and M. Simoncelli
- Subjects
business.industry ,Computer science ,Mobile robot ,Kalman filter ,Optimal control ,Sonar ,Task (computing) ,Tree traversal ,Artificial Intelligence ,Robot ,Computer vision ,Ultrasonic sensor ,Artificial intelligence ,business - Abstract
Cleaning is a major problem associated with pools. Since manual cleaning is tedious and boring, there is an interest in automating the task. This paper presents methods for autonomous localization and navigation for a pool cleaner to enable full coverage of pools. Path following cannot be ensured through use of internal position estimation methods alone; therefore, sensing is needed. Sensor-based estimation enables automatic correction of slippage. For this application we use ultrasonic sonars. Based on an analysis of the overall task and performance of the system, a strategy for cleaning/navigation is developed. For the automatic localization a Kalman filtering technique is proposed: the Kalman filter uses sonar measurements and a dynamic model of the robot to provide estimates of the pose of the pool cleaner. Using this localization method we derive an optimal control strategy for traversal of a pool. The system has been implemented and successfully tested on the “WEDA B400” pool cleaner.
- Published
- 2000
- Full Text
- View/download PDF
30. Evaluation of rotational and directional vibration patterns on a tactile belt for guiding visually impaired people
- Author
-
Henrik I. Christensen, Akansel Cosgun, and E. Akin Sisbot
- Subjects
Vibration ,Navigation assistance ,business.industry ,Visually impaired ,Computer science ,Encoding (memory) ,Pattern recognition (psychology) ,Rotation around a fixed axis ,Usability ,Computer vision ,Artificial intelligence ,business ,Haptic technology - Abstract
We present the design of a vibro-tactile belt and an evaluation study of intuitive vibration patterns for providing navigation assistance to blind people. Encoding directions with haptic interfaces is a common practice for outdoor navigation assistance, but it is insufficient in cluttered indoor environments where fine maneuvers are needed. We consider rotational motions in addition to directional motions in our application. In a usability study with 15 subjects, we evaluate the recognition accuracy and reaction times of vibration patterns that include 1) directional and 2) rotational motion. Our results show that the directional pattern of two intermittent pulses was preferred by most subjects, even though it had slightly more recognition error than patterns with continuous vibrations. Rotational patterns were recognized by subjects with almost perfect accuracy. Average reaction time to all vibration patterns varied between 1 and 2 seconds. Our tactile belt design was found comfortable, but it was also found slightly noisy.
- Published
- 2014
- Full Text
- View/download PDF
31. Effects of Sensory Precision on Mobile Robot Localization and Mapping
- Author
-
John G. Rogers, Alexander Cunningham, Carlos Nieto-Granda, Vijay Kumar, Alexander J. B. Trevor, Manohar Paluri, Nathan Michael, Frank Dellaert, and Henrik I. Christensen
- Subjects
Laser scanning ,Computer science ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Mobile robot ,Sensory system ,Simultaneous localization and mapping ,Range (statistics) ,Robot ,Structure from motion ,Computer vision ,Noise (video) ,Artificial intelligence ,business ,Simulation - Abstract
This paper will explore the relationship between sensory accuracy and Simultaneous Localization and Mapping (SLAM) performance. As inexpensive robots are developed with commodity components, the relationship between performance level and accuracy will need to be determined. Experiments are presented in this paper which compare various aspects of sensor performance such as maximum range, noise, angular precision, and viewable angle. In addition, mapping results from three popular laser scanners (Hokuyo’s URG and UTM30, as well as SICK’s LMS291) are compared.
- Published
- 2014
- Full Text
- View/download PDF
32. Active Object Recognition Integrating Attention and Viewpoint Control
- Author
-
Sven Dickinson, Göran Olofsson, Henrik I. Christensen, and John K. Tsotsos
- Subjects
Hierarchy (mathematics) ,Computer science ,business.industry ,3D single-object recognition ,Probabilistic logic ,Cognitive neuroscience of visual object recognition ,Object (computer science) ,Image (mathematics) ,Set (abstract data type) ,Method ,Feature (computer vision) ,Signal Processing ,Pattern recognition (psychology) ,Feature (machine learning) ,Object model ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Control (linguistics) ,business ,Software - Abstract
We present an active object recognition strategy which combines the use of an attention mechanism for focusing the search for a 3-D object in a 2-D image, with a viewpoint control strategy for disambiguating recovered object features. The attention mechanism consists of a probabilistic search through a hierarchy of predicted feature observations, taking objects into a set of regions classified according to the shapes of their bounding contours. We motivate the use of image regions as a focus-feature and compare their uncertainty in inferring objects with the uncertainty of more commonly used features such as lines or corners. If the features recovered during the attention phase do not provide a unique mapping to the 3-D object being searched, the probabilistic feature hierarchy can be used to guide the camera to a new viewpoint from where the object can be disambiguated. The power of the underlying representation is its ability to unify these object recognition behaviors within a single framework. We present the approach in detail and evaluate its performance in the context of a project providing robotic aids for the disabled.
- Published
- 1997
- Full Text
- View/download PDF
33. Advances in robot vision
- Author
-
Henrik I. Christensen and Danica Kragic
- Subjects
Personal robot ,Computer science ,business.industry ,General Mathematics ,Information and Computer Science ,Mobile robot ,Robot learning ,Mobile robot navigation ,Computer Science Applications ,Control and Systems Engineering ,Robot vision ,Computer vision ,Artificial intelligence ,business ,Software - Published
- 2005
- Full Text
- View/download PDF
34. RGB-D edge detection and edge-based registration
- Author
-
Changhyun Choi, Henrik I. Christensen, and Alexander J. B. Trevor
- Subjects
business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Point cloud ,Image registration ,Graph theory ,Pattern recognition ,Edge (geometry) ,Edge detection ,Computer Science::Graphics ,Image texture ,Computer Science::Computer Vision and Pattern Recognition ,RGB color model ,Computer vision ,Artificial intelligence ,business ,Pose ,ComputingMethodologies_COMPUTERGRAPHICS ,MathematicsofComputing_DISCRETEMATHEMATICS ,Mathematics - Abstract
We present a 3D edge detection approach for RGB-D point clouds and its application in point cloud registration. Our approach detects several types of edges, and makes use of both 3D shape information and photometric texture information. Edges are categorized as occluding edges, occluded edges, boundary edges, high-curvature edges, and RGB edges. We exploit the organized structure of the RGB-D image to efficiently detect edges, enabling near real-time performance. We present two applications of these edge features: edge-based pair-wise registration and a pose-graph SLAM approach based on this registration, which we compare to state-of-the-art methods. Experimental results demonstrate the performance of edge detection and edge-based registration both quantitatively and qualitatively.
- Published
- 2013
- Full Text
- View/download PDF
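The occluding/occluded edge categories in the abstract above come from depth discontinuities in the organized RGB-D image. A minimal sketch of that idea (an assumption-laden simplification, not the paper's detector, which also handles boundary, high-curvature, and RGB edges; here only the right-neighbour comparison on a depth image is shown):

```python
import numpy as np

def depth_edge_labels(depth, jump=0.05):
    """Label occluding / occluded pixels in an organized depth image
    (meters) by comparing each pixel with its right neighbour: at a
    large depth jump, the nearer pixel lies on an occluding edge and
    the farther one on an occluded edge."""
    occluding = np.zeros(depth.shape, dtype=bool)
    occluded = np.zeros(depth.shape, dtype=bool)
    diff = depth[:, 1:] - depth[:, :-1]      # right minus left
    step = np.abs(diff) > jump
    left_nearer = step & (diff > 0)
    right_nearer = step & (diff < 0)
    occluding[:, :-1] |= left_nearer         # nearer side: occluding
    occluded[:, 1:] |= left_nearer           # farther side: occluded
    occluding[:, 1:] |= right_nearer
    occluded[:, :-1] |= right_nearer
    return occluding, occluded
```

Exploiting the organized image structure this way avoids nearest-neighbour searches in the unordered point cloud, which is what enables the near real-time performance the abstract reports.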
35. RGB-D object tracking: A particle filter approach on GPU
- Author
-
Changhyun Choi and Henrik I. Christensen
- Subjects
Computer science ,business.industry ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Rendering (computer graphics) ,Computer Science::Graphics ,Image texture ,Texture mapping unit ,Computer Science::Computer Vision and Pattern Recognition ,Video tracking ,RGB color model ,Object model ,Computer vision ,Artificial intelligence ,business ,Particle filter ,Pose ,ComputingMethodologies_COMPUTERGRAPHICS - Abstract
This paper presents a particle filtering approach for 6-DOF object pose tracking using an RGB-D camera. Our particle filter is massively parallelized in a modern GPU so that it exhibits real-time performance even with several thousand particles. Given an a priori 3D mesh model, the proposed approach renders the object model onto texture buffers in the GPU, and the rendered results are directly used by our parallelized likelihood evaluation. Both photometric (colors) and geometric (3D points and surface normals) features are employed to determine the likelihood of each particle with respect to a given RGB-D scene. Our approach is compared with a tracker in the PCL both quantitatively and qualitatively in synthetic and real RGB-D sequences, respectively.
- Published
- 2013
- Full Text
- View/download PDF
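The per-particle likelihood evaluation described in the abstract above combines photometric and geometric errors; a CPU-side sketch of that weighting step is below (illustrative only, assuming Gaussian error models and the stated noise scales; the paper's GPU version evaluates the same quantity in parallel over rendered texture buffers).

```python
import numpy as np

def update_weights(weights, color_err, geom_err, sigma_c=0.1, sigma_g=0.02):
    """Re-weight pose particles with a product of photometric and
    geometric likelihoods (independent Gaussian error models), then
    normalize. color_err / geom_err: per-particle mean feature errors."""
    like = (np.exp(-0.5 * (color_err / sigma_c) ** 2) *
            np.exp(-0.5 * (geom_err / sigma_g) ** 2))
    w = weights * like
    return w / w.sum()
```

A particle whose rendered model disagrees with the RGB-D scene in either color or geometry is down-weighted, so both feature types must agree for a pose hypothesis to survive resampling.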
36. Robot planning with a semantic map
- Author
-
John G. Rogers and Henrik I. Christensen
- Subjects
Social robot ,Computer science ,business.industry ,Mobile robot ,Context (language use) ,Object (computer science) ,Robot control ,Human–computer interaction ,Identity (object-oriented programming) ,Robot ,Computer vision ,Motion planning ,Artificial intelligence ,business - Abstract
Context is an important factor for domestic service robots to consider when interpreting their environments to perform tasks. In people's homes, rooms are laid out in a specific arrangement to enable comfortable and efficient living; for example, the living room is central to the house, and the dining room is adjacent to the kitchen. The identity of the objects in a room is a strong cue for determining that room's purpose. This paper will present a planner for an autonomous mobile robot system which uses room connectivity topology and object understanding as context for an object search task in a domestic environment.
- Published
- 2013
- Full Text
- View/download PDF
37. Autonomous person following for telepresence robots
- Author
-
Akansel Cosgun, Henrik I. Christensen, and Dinei Florencio
- Subjects
Ubiquitous robot ,Personal robot ,Engineering ,Telerobotics ,Social robot ,business.industry ,Mobile robot ,Robot learning ,Mobile robot navigation ,Robot control ,Human–computer interaction ,Computer vision ,Artificial intelligence ,business - Abstract
We present a method for a mobile robot to follow a person autonomously where there is an interaction between the robot and human during following. The planner takes into account the predicted trajectory of the human and searches future trajectories of the robot for the path with the highest utility. Contrary to traditional motion planning, instead of determining goal points close to the person, we introduce a task dependent goal function which provides a map of desirable areas for the robot to be at, with respect to the person. The planning framework is flexible and allows encoding of different social situations with the help of the goal function. We implemented our approach on a telepresence robot and conducted a controlled user study to evaluate the experiences of the users on the remote end of the telepresence robot. The user study compares manual teleoperation to our autonomous method for following a person while having a conversation. By designing a behavior specific to a flat screen telepresence robot, we show that the person following behavior is perceived as safe and socially acceptable by remote users. All 10 participants preferred our autonomous following method over manual teleoperation.
- Published
- 2013
- Full Text
- View/download PDF
38. 3D pose estimation of daily objects using an RGB-D camera
- Author
-
Changhyun Choi and Henrik I. Christensen
- Subjects
Computer science ,business.industry ,Feature extraction ,Cognitive neuroscience of visual object recognition ,Pattern recognition ,Image segmentation ,Object (computer science) ,3D pose estimation ,Feature (computer vision) ,Object model ,Computer vision ,Artificial intelligence ,business ,Pose - Abstract
In this paper, we present an object pose estimation algorithm exploiting both depth and color information. While many approaches assume that a target region is cleanly segmented from background, our approach does not rely on that assumption, and thus it can estimate pose of a target object in heavy clutter. Recently, an oriented point pair feature was introduced as a low dimensional description of object surfaces. The feature has been employed in a voting scheme to find a set of possible 3D rigid transformations between object model and test scene features. While several approaches using the pair features require an accurate 3D CAD model as training data, our approach only relies on several scanned views of a target object, and hence it is straightforward to learn new objects. In addition, we argue that exploiting color information significantly enhances the performance of the voting process in terms of both time and accuracy. To exploit the color information, we define a color point pair feature, which is employed in a voting scheme for more effective pose estimation. We show extensive quantitative results of comparative experiments between our approach and a state-of-the-art approach.
- Published
- 2012
- Full Text
- View/download PDF
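The oriented point pair feature underlying the approach in the abstract above has a well-known geometric form; the sketch below quantizes it (a generic version under assumed bin counts, not the paper's implementation, and the color extension that appends quantized HSV values of both points is omitted).

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2, bins=30):
    """Quantized oriented point pair feature
    F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))
    for two surface points p1, p2 with unit normals n1, n2.
    Quantized features serve as hash keys in the voting scheme."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    dn = d / dist

    def ang(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    step = np.pi / bins
    return (round(dist, 3),
            int(ang(n1, dn) / step),
            int(ang(n2, dn) / step),
            int(ang(n1, n2) / step))
```

At training time, features from scanned model views populate a hash table; at test time, matching scene features vote for candidate rigid transformations.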
39. 3D textureless object detection and tracking: An edge-based approach
- Author
-
Henrik I. Christensen and Changhyun Choi
- Subjects
business.industry ,Initialization ,Pattern recognition ,3D pose estimation ,Object detection ,Edge detection ,Image texture ,Video tracking ,Object model ,Computer vision ,Artificial intelligence ,business ,Pose ,Mathematics - Abstract
This paper presents an approach to textureless object detection and tracking of the 3D pose. Our detection and tracking schemes are coherently integrated in a particle filtering framework on the special Euclidean group, SE(3), in which the visual tracking problem is tackled by maintaining multiple hypotheses of the object pose. For textureless object detection, an efficient chamfer matching is employed so that a set of coarse pose hypotheses is estimated from the matching between 2D edge templates of an object and a query image. Particles are then initialized from the coarse pose hypotheses by randomly drawing based on costs of the matching. To ensure the initialized particles are at or close to the global optimum, an annealing process is performed after the initialization. While a standard edge-based tracking is employed after the annealed initialization, we employ a refinement process to establish improved correspondences between projected edge points from the object model and edge points from an input image. Comparative results for several image sequences with clutter are shown to validate the effectiveness of our approach.
- Published
- 2012
- Full Text
- View/download PDF
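The chamfer matching used for detection in the abstract above scores a 2D edge template against image edges; a brute-force sketch of the cost is below (for clarity only; a real implementation would precompute a distance transform of the edge map, and the function name is an assumption).

```python
import numpy as np

def chamfer_cost(template_pts, edge_map):
    """Chamfer matching cost: mean distance from each template edge
    point (row, col) to the nearest edge pixel in the binary edge
    map. Lower cost means a better template-to-image match."""
    ys, xs = np.nonzero(edge_map)
    edges = np.stack([ys, xs], axis=1).astype(float)
    total = 0.0
    for p in template_pts:
        total += np.min(np.linalg.norm(edges - p, axis=1))
    return total / len(template_pts)
```

Low-cost template placements become the coarse pose hypotheses from which particles are drawn, before the annealing step refines them.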
40. Planar surface SLAM with 3D and 2D sensors
- Author
-
Henrik I. Christensen, John G. Rogers, and Alexander J. B. Trevor
- Subjects
Physics ,Surface (mathematics) ,business.industry ,Feature extraction ,Point cloud ,Simultaneous localization and mapping ,Laser ,law.invention ,Planar ,Optics ,law ,Trajectory ,Feature based ,Computer vision ,Artificial intelligence ,business - Abstract
We present an extension to our feature based mapping technique that allows for the use of planar surfaces such as walls, tables, counters, or other planar surfaces as landmarks in our mapper. These planar surfaces are measured both in 3D point clouds, as well as 2D laser scans. These sensing modalities complement each other well, as they differ significantly in their measurable fields of view and maximum ranges. We present experiments to evaluate the contributions of each type of sensor.
- Published
- 2012
- Full Text
- View/download PDF
41. Model-driven vision for in-door navigation
- Author
-
N. O. S. Kirkeby, Henrik I. Christensen, Erik Granum, Lars F. Knudsen, and Steen Kristensen
- Subjects
Computer science ,business.industry ,General Mathematics ,Navigation system ,Mobile robot ,Robotics ,Mobile robot navigation ,Computer Science Applications ,Control and Systems Engineering ,Component (UML) ,Computer vision ,Motion planning ,Artificial intelligence ,Active vision ,business ,Software - Abstract
For navigation in a partially known environment it is possible to provide a model that may be used for guidance in the navigation and as a basis for selective sensing. In this paper a navigation system for an autonomous mobile robot is presented. Both navigation and sensing are built around a graphics model, which enables prediction of the expected scene content. The model is used directly for prediction of line segments which, through matching, allow estimation of position and orientation. In addition, the model is used as a basis for a hierarchical stereo matching that enables dynamic updating of the model with unmodelled objects in the environment. For short-term path planning a set of reactive behaviours is used. The reactive behaviours include use of inverse perspective mapping for generation of occupancy grids, a sonar system and simple gaze holding for monitoring of dynamic obstacles. The full system and its component processes are described and initial experiments with the system are briefly outlined.
- Published
- 1994
- Full Text
- View/download PDF
42. Simultaneous localization and mapping with learned object recognition and semantic data association
- Author
-
John G. Rogers, Alexander J. B. Trevor, Carlos Nieto-Granda, and Henrik I. Christensen
- Subjects
business.industry ,Computer science ,Feature extraction ,Cognitive neuroscience of visual object recognition ,Pattern recognition ,Simultaneous localization and mapping ,Semantic data model ,Support vector machine ,Semantic mapping ,Feature (computer vision) ,Robot ,Computer vision ,Artificial intelligence ,business ,Classifier (UML) - Abstract
Complex and structured landmarks like objects have many advantages over low-level image features for semantic mapping. Low-level features such as image corners suffer from occlusion boundaries, ambiguous data association, imaging artifacts, and viewpoint dependence. Artificial landmarks are an unsatisfactory alternative because they must be placed in the environment solely for the robot's benefit. Human environments contain many objects which can serve as suitable landmarks for robot navigation such as signs, objects, and furniture. Maps based on high level features which are identified by a learned classifier could better inform tasks such as semantic mapping and mobile manipulation. In this paper we present a technique for recognizing door signs using a learned classifier as one example of this approach, and demonstrate their use in a graphical SLAM framework with data association provided by reasoning about the semantic meaning of the sign.
- Published
- 2011
- Full Text
- View/download PDF
43. Distributed autonomous mapping of indoor environments
- Author
-
Alexander Cunningham, Manohar Paluri, Jeremy Ma, Jeffrey L. Rogers, Larry Matthies, Henrik I. Christensen, Nathan Michael, and Vijay Kumar
- Subjects
Landmark ,Laser scanning ,Computer science ,business.industry ,Robot ,Computer vision ,Artificial intelligence ,Sensor fusion ,business ,Frame of reference ,Mobile robot navigation - Abstract
This paper describes the results of a Joint Experiment performed on behalf of the MAST CTA. The system developed for the Joint Experiment makes use of three robots which work together to explore and map an unknown environment. Each of the robots used in this experiment is equipped with a laser scanner for measuring walls and a camera for locating doorways. Information from both of these types of structures is concurrently incorporated into each robot's local map using a graph based SLAM technique. A Distributed-Data-Fusion algorithm is used to efficiently combine local maps from each robot into a shared global map. Each robot computes a compressed local feature map and transmits it to neighboring robots, which allows each robot to merge its map with the maps of its neighbors. Each robot caches the compressed maps from its neighbors, allowing it to maintain a coherent map with a common frame of reference. The robots utilize an exploration strategy to efficiently cover the unknown environment which allows collaboration on an unreliable communications channel. As each new branching point is discovered by a robot, it broadcasts the information about where this point is along with the robot's path from a known landmark to the other robots. When the next robot reaches a dead-end, new branching points are allocated by auction. In the event of communication interruption, the robot which observed the branching point will eventually explore it; therefore, the exploration is complete in the face of communication failure.
- Published
- 2011
- Full Text
- View/download PDF
44. Robust 3D visual tracking using particle filtering on the SE(3) group
- Author
-
Changhyun Choi and Henrik I. Christensen
- Subjects
Autoregressive model ,business.industry ,Robustness (computer science) ,Video tracking ,Eye tracking ,Initialization ,Computer vision ,Artificial intelligence ,business ,Particle filter ,Pose ,Mathematics ,Visualization - Abstract
In this paper, we present a 3D model-based object tracking approach using edge and keypoint features in a particle filtering framework. Edge points provide 1D information for pose estimation and it is natural to consider multiple hypotheses. Recently, particle filtering based approaches have been proposed to integrate multiple hypotheses and have shown good performance, but most of the work has made the assumption that an initial pose is given. To remove this assumption, we employ keypoint features for initialization of the filter. Given 2D-3D keypoint correspondences, we choose a set of minimum correspondences to calculate a set of possible pose hypotheses. Based on the inlier ratio of correspondences, the set of poses are drawn to initialize particles. For better performance, we employ an autoregressive state dynamics and apply it to a coordinate-invariant particle filter on the SE(3) group. Based on the number of effective particles calculated during tracking, the proposed system re-initializes particles when the tracked object goes out of sight or is occluded. The robustness and accuracy of our approach is demonstrated via comparative experiments.
- Published
- 2011
- Full Text
- View/download PDF
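The "number of effective particles" used as a re-initialization trigger in the abstract above is the standard effective sample size; a minimal sketch (the threshold logic around it is the paper's, only the quantity itself is shown here):

```python
import numpy as np

def effective_particles(weights):
    """Effective sample size N_eff = 1 / sum(w_i^2) for normalized
    particle weights. N_eff near the particle count means healthy
    diversity; a collapse toward 1 signals lost track and triggers
    keypoint-based re-initialization."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)
```

With uniform weights over N particles, N_eff equals N; when a single particle carries all the weight, N_eff drops to 1.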
45. A LOW-COST ROBOT CAMERA HEAD
- Author
-
Henrik I. Christensen
- Subjects
Computer science ,Machine vision ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Mobile robot navigation ,Geometric design ,Artificial Intelligence ,Robot ,Head (vessel) ,Robot vision ,Computer vision ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Active vision ,business ,Software - Abstract
Active vision involving the exploitation of controllable cameras and camera heads is an area which has received increased attention over the last few years. At LIA/AUC a binocular robot camera head has been constructed for use in geometric modelling and interpretation. In this manuscript the basic design of the head is outlined and a first prototype is described in some detail. Detailed specifications for the components used are provided together with a section on lessons learned from construction and initial use of this prototype.
- Published
- 1993
- Full Text
- View/download PDF
46. SLAM with Expectation Maximization for moveable object tracking
- Author
-
John G. Rogers, Henrik I. Christensen, Carlos Nieto-Granda, and Alexander J. B. Trevor
- Subjects
Landmark ,business.industry ,Video tracking ,Expectation–maximization algorithm ,Posterior probability ,Computer vision ,Mobile robot ,Graph theory ,Artificial intelligence ,Simultaneous localization and mapping ,business ,Object detection ,Mathematics - Abstract
The goal of simultaneous localization and mapping (SLAM) is to compute the posterior distribution over landmark poses. Typically, this is made possible through the static-world assumption: the landmarks remain in the same location throughout the mapping procedure. Some prior work has addressed this assumption by splitting maps into static and dynamic sets, or by recognizing moving landmarks and tracking them. In contrast to previous work, we apply an Expectation Maximization technique to a graph-based SLAM approach and allow landmarks to be dynamic. The batch nature of this operation enables us to detect moveable landmarks and factor them out of the map. We demonstrate the performance of this algorithm in a series of experiments with moveable landmarks in a structured environment.
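The core of detecting a moveable landmark in batch is deciding whether its observation scatter is better explained by a static model or a high-variance "moveable" one. The toy sketch below illustrates that model-comparison idea for a single landmark under isotropic Gaussian likelihoods (an illustrative sketch only; the paper's EM operates on the full SLAM graph, and the function name, variances, and prior here are assumptions):

```python
import numpy as np

def movability(observations, var_static=0.05 ** 2, var_movable=1.0 ** 2,
               prior_static=0.5):
    """Posterior probability that a landmark is moveable, i.e. that its
    observed positions are better explained by a high-variance model
    than by a tight static one. observations: (N, d) array."""
    obs = np.asarray(observations, dtype=float)
    n, d = obs.shape
    sq = ((obs - obs.mean(axis=0)) ** 2).sum()  # scatter about the mean

    def log_lik(var):
        # Isotropic Gaussian log-likelihood of all N observations.
        return -0.5 * sq / var - 0.5 * n * d * np.log(2 * np.pi * var)

    ls, lm = log_lik(var_static), log_lik(var_movable)
    m = max(ls, lm)  # log-space shift for numerical stability
    ps = prior_static * np.exp(ls - m)
    pm = (1 - prior_static) * np.exp(lm - m)
    return pm / (ps + pm)
```

A tightly clustered track yields a movability near zero, while widely scattered observations of the same landmark push it toward one, which is the signal used to factor such landmarks out of the map.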
- Published
- 2010
- Full Text
- View/download PDF
47. Semantic map partitioning in indoor environments using regional analysis
- Author
-
John G. Rogers, Carlos Nieto-Granda, Alexander J. B. Trevor, and Henrik I. Christensen
- Subjects
Robot kinematics ,Computer science ,business.industry ,Mobile robot ,computer.software_genre ,Semantics ,Semantic mapping ,Semantic computing ,Robot ,Computer vision ,Topological map ,Data mining ,Artificial intelligence ,business ,computer - Abstract
Classification of spatial regions based on semantic information in an indoor environment enables robot tasks such as navigation or mobile manipulation to be spatially aware. The availability of contextual information can significantly simplify operation of a mobile platform. We present methods for automated recognition and classification of spaces into separate semantic regions, and for the use of such information to generate a topological map of an environment. The association of semantic labels with spatial regions is based on Human Augmented Mapping. The methods presented in this paper are evaluated both in simulation and on real data acquired from an office environment.
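Once cells of a map carry semantic labels, a topological map can be derived by grouping connected same-label cells into regions and linking regions that touch. A minimal sketch on a 2-D label grid (an illustration of the general idea, not the paper's method; names are assumptions):

```python
from collections import deque

def region_graph(grid):
    """Build a region adjacency graph from a 2-D grid of semantic labels:
    4-connected cells with the same label form a region (node); an edge
    links each pair of touching regions. Returns (labels, edges) where
    labels[i] is region i's semantic label."""
    h, w = len(grid), len(grid[0])
    comp = [[-1] * w for _ in range(h)]
    labels = []
    for y in range(h):
        for x in range(w):
            if comp[y][x] == -1:
                cid = len(labels)          # new region id
                labels.append(grid[y][x])
                comp[y][x] = cid
                q = deque([(y, x)])
                while q:                    # flood fill same-label cells
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w and comp[ny][nx] == -1
                                and grid[ny][nx] == grid[cy][cx]):
                            comp[ny][nx] = cid
                            q.append((ny, nx))
    edges = set()
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y + 1, x), (y, x + 1)):  # down/right neighbors
                if 0 <= ny < h and 0 <= nx < w and comp[ny][nx] != comp[y][x]:
                    edges.add(frozenset((comp[y][x], comp[ny][nx])))
    return labels, edges
```

On a small labeled floor plan this yields one node per contiguous room region and one edge per shared boundary, which is exactly the structure of a topological map.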
- Published
- 2010
- Full Text
- View/download PDF
48. Real-time 3D model-based tracking using edge and keypoint features for robotic manipulation
- Author
-
Changhyun Choi and Henrik I. Christensen
- Subjects
business.industry ,Computer science ,Feature extraction ,Cognitive neuroscience of visual object recognition ,Initialization ,Visual servoing ,Robustness (computer science) ,Salient ,Video tracking ,Motion estimation ,Computer vision ,Artificial intelligence ,business ,Pose - Abstract
We propose a combined approach to 3D real-time object recognition and tracking that is directly applicable to robotic manipulation. We use keypoint features for the initial pose estimation; this pose estimate then serves as an initial estimate for edge-based tracking. The combination of these two complementary methods provides an efficient and robust tracking solution. The main contributions of this paper include: 1) While most RAPiD-style tracking methods have used simplified CAD models, or at least manually well-designed models, our system can handle any form of polygon mesh model. To achieve this generality of object shapes, salient edges are automatically identified during an offline stage. Dull edges, which are usually invisible in images, are maintained as well for cases in which they constitute the object boundaries. 2) Our system provides a fully automatic recognition and tracking solution, unlike most previous edge-based trackers, which require a manual pose initialization scheme. Since edge-based tracking sometimes drifts because of edge ambiguity, the proposed system monitors the tracking results and re-initializes when they are inconsistent. Experimental results demonstrate our system's efficiency as well as its robustness.
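Keypoint-based pose initialization of the kind described above is typically done with a RANSAC-style loop: fit a pose to a random minimal sample of correspondences, count inliers by reprojection error, and keep the hypothesis with the best support. A generic sketch of that loop, demonstrated on a toy model-fitting problem (an illustration of the technique, not the paper's implementation; all names are assumptions):

```python
import random

def ransac(data, fit, residual, min_samples, inlier_thresh,
           n_trials=100, seed=0):
    """Generic RANSAC: repeatedly fit a model to a random minimal sample,
    score it by its inlier count, and return the best (model, inliers).
    For pose initialization, `data` would be 2D-3D correspondences,
    `fit` a minimal pose solver, and `residual` the reprojection error."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_trials):
        sample = rng.sample(data, min_samples)
        model = fit(sample)
        if model is None:  # degenerate sample
            continue
        inliers = [d for d in data if residual(model, d) < inlier_thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

As a self-contained demonstration, fitting a 2-D line from minimal two-point samples recovers the inlier structure despite gross outliers; swapping in a minimal PnP solver and a reprojection residual gives the pose-hypothesis version.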
- Published
- 2010
- Full Text
- View/download PDF
49. Overview of the ImageCLEF@ICPR 2010 robot vision track
- Author
-
Barbara Caputo, Andrzej Pronobis, and Henrik I. Christensen
- Subjects
Standard test image ,Computer science ,business.industry ,Computer Science (all) ,Intelligent decision support system ,Mobile robot ,Place recognition ,Track (rail transport) ,robot vision ,Task (project management) ,Theoretical Computer Science ,robot localization ,Information system ,Computer vision ,Artificial intelligence ,business ,Stereo camera - Abstract
This paper describes the robot vision track that was proposed to the ImageCLEF@ICPR2010 participants. The track addressed the problem of visual place classification. Participants were asked to classify rooms and areas of an office environment on the basis of image sequences captured by a stereo camera mounted on a mobile robot, under varying illumination conditions. The algorithms proposed by the participants had to answer the question "where are you?" (I am in the kitchen, in the corridor, etc.) when presented with a test sequence imaging rooms seen during training (from different viewpoints and under different conditions), or additional rooms that were not imaged in the training sequence. The participants were asked to solve the problem separately for each test image (obligatory task). Additionally, results could also be reported for algorithms exploiting the temporal continuity of the image sequences (optional task). A total of eight groups participated in the challenge, with 25 runs submitted to the obligatory task and 5 to the optional task. The best result in the obligatory task was obtained by the Computer Vision and Geometry Laboratory, ETHZ, Switzerland, with an overall score of 3824.0. The best result in the optional task was obtained by the Intelligent Systems and Data Mining Group, University of Castilla-La Mancha, Albacete, Spain, with an overall score of 3881.0.
- Published
- 2010
50. Visual Place Categorization: Problem, dataset, and algorithm
- Author
-
Henrik I. Christensen, Jianxin Wu, and James M. Rehg
- Subjects
Ground truth ,Computer science ,business.industry ,Video camera ,Mobile robot ,Machine learning ,computer.software_genre ,law.invention ,Visualization ,Categorization ,law ,Feature (computer vision) ,Histogram ,Robot ,Computer vision ,Artificial intelligence ,business ,computer - Abstract
In this paper we describe the problem of Visual Place Categorization (VPC) for mobile robotics, which involves predicting the semantic category of a place from image measurements acquired from an autonomous platform. For example, a robot in an unfamiliar home environment should be able to recognize the functionality of the rooms it visits, such as kitchen, living room, etc. We describe an approach to VPC based on sequential processing of images acquired with a conventional video camera. We identify two key challenges: dealing with non-characteristic views and integrating restricted-FOV imagery into a holistic prediction. We present a solution to VPC based upon a recently developed visual feature known as CENTRIST (CENsus TRansform hISTogram). We describe a new dataset for VPC which we have recently collected and are making publicly available. We believe this is the first significant, realistic dataset for the VPC problem. It contains the interiors of six different homes with ground-truth labels. We use this dataset to validate our solution approach, achieving promising results.
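The CENTRIST feature named above is the histogram of census-transformed pixel values. A minimal sketch of the underlying census transform (one common bit-ordering convention; the exact convention in the paper may differ, and the function names here are assumptions):

```python
import numpy as np

def census_transform(img):
    """Census transform: each interior pixel becomes an 8-bit code, one
    bit per 8-neighbor, set when that neighbor's value is <= the center
    value. Border pixels are dropped, so output is (H-2, W-2)."""
    img = np.asarray(img)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    ct = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        ct |= (neighbor <= center).astype(np.uint8) << bit
    return ct

def centrist(img):
    """CENTRIST descriptor: the 256-bin histogram of census values."""
    return np.bincount(census_transform(img).ravel(), minlength=256)
```

On a flat image every neighbor ties with the center, so all interior codes are 255 and the histogram concentrates in that single bin; real images spread mass across bins according to local intensity structure.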
- Published
- 2009
- Full Text
- View/download PDF