16 results for "Farzad Husain"
Search Results
2. Combining Semantic and Geometric Features for Object Class Segmentation of Indoor Scenes.
- Author
- Farzad Husain, Hannes Schulz, Babette Dellen, Carme Torras, and Sven Behnke
- Published
- 2017
- Full Text
- View/download PDF
3. Action Recognition Based on Efficient Deep Feature Learning in the Spatio-Temporal Domain.
- Author
- Farzad Husain, Babette Dellen, and Carme Torras
- Published
- 2016
- Full Text
- View/download PDF
4. Recognizing Point Clouds Using Conditional Random Fields.
- Author
- Farzad Husain, Babette Dellen, and Carme Torras
- Published
- 2014
- Full Text
- View/download PDF
5. Realtime tracking and grasping of a moving object from range video.
- Author
- Farzad Husain, Adrià Colomé, Babette Dellen, Guillem Alenyà, and Carme Torras
- Published
- 2014
- Full Text
- View/download PDF
6. Joint Segmentation and Tracking of Object Surfaces in Depth Movies along Human/Robot Manipulations.
- Author
- Babette Dellen, Farzad Husain, and Carme Torras
- Published
- 2013
7. Consistent Depth Video Segmentation Using Adaptive Surface Models.
- Author
- Farzad Husain, Babette Dellen, and Carme Torras
- Published
- 2015
- Full Text
- View/download PDF
8. Robust surface tracking in range image sequences.
- Author
- Farzad Husain, Babette Dellen, and Carme Torras
- Published
- 2014
- Full Text
- View/download PDF
9. Contributors
- Author
- Taiwo Adetiloye, Sondipon Adhikari, Ibrahim Aljarah, Senjian An, Serdar Aslan, Anjali Awasthi, Ashish Bakshi, Mohammed Bennamoun, Selami Beyhan, Vimal Bhatia, Gautam Bhattacharya, Alirezah Bosaghzadeh, Farid Boussaid, Dieu Tien Bui, Kien-Trinh Thi Bui, Quang-Thanh Bui, Anusheema Chakraborty, Tanmoy Chatterjee, Rajib Chowdhury, Alan Crosky, Sarat Kumar Das, Pradipta K. Dash, Rajashree Dash, Babette Dellen, Serge Demidenko, Vahdettin Demir, Murat Diker, Erdem Dilmen, Chinh Van Doan, Fadi Dornaika, Nikoo Fakhari, Hossam Faris, Robert B. Fisher, Amir H. Gandomi, Raoof Gholami, Kuntal Ghosh, Nhat-Duc Hoang, Renae Hovey, Farzad Husain, Ioanna Ilia, Peng Jiang, Pawan K. Joshi, Taskin Kavzoglu, Gary Kendrick, Ozgur Kisi, Ye Chow Kuang, Sajad Madadi, Mojtaba Maghrebi, Ammar Mahmood, Manish Mandloi, Mohamed Arezki Mellal, Youssef El Merabet, Subhadeep Metya, Seyedali Mirjalili, Behnam Mohammadi-Ivatloo, Ranajeet Mohanty, Abdelmalik Moujahid, Aparajita Mukherjee, V. Mukherjee, Tanmoy Mukhopadhyay, J. Mukund Nilakantan, Morteza Nazari-Heris, Peter Nielsen, Stavros Ntalampiras, Melanie Po-Leen Ooi, Ashalata Panigrahi, Manas R. Patra, S.G. Ponnambalam, Dharmbir Prasad, Yassine Ruichek, Kamna Sachdeva, Mohamed G. Sahab, Houssam Salmane, Serkan Saydam, Jalal Shiri, Ferdous Sohel, Hong Kuan Sok, Shakti Suman, Vassili V. Toropov, Carme Torras, Paraskevas Tsangaratos, Edward J. Williams, Selim Yilmaz, and Milad Zamani-Gargari
- Published
- 2017
- Full Text
- View/download PDF
10. Semantic segmentation priors for object discovery
- Author
- Simone Frintrop, Farzad Husain, Germán Martín García, Sven Behnke, Hannes Schulz, Carme Torras, Institut de Robòtica i Informàtica Industrial, and Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
- Subjects
- Informàtica::Automàtica i control [Àrees temàtiques de la UPC], computer vision, object recognition, segmentation, deep learning, pattern recognition, object detection, image segmentation, robot vision, semantic segmentation, service robots, object discovery, Pattern recognition::Computer vision [Classificació INSPEC]
- Abstract
Paper presented at the 23rd International Conference on Pattern Recognition, held in Cancún (Mexico), December 5-8, 2016. Reliable object discovery in realistic indoor scenes is a necessity for many computer vision and service robot applications. In these scenes, semantic segmentation methods have made huge advances in recent years. Such methods can provide useful prior information for object discovery by removing false positives and by delineating object boundaries. We propose a novel method that combines bottom-up object discovery and semantic priors for producing generic object candidates in RGB-D images. We use a deep learning method for semantic segmentation to classify colour and depth superpixels into meaningful categories. Separately for each category, we use saliency to estimate the location and scale of objects, and superpixels to find their precise boundaries. Finally, object candidates of all categories are combined and ranked. We evaluate our approach on the NYU Depth V2 dataset and show that we outperform other state-of-the-art object discovery methods in terms of recall.
- Published
- 2016
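The abstract of result 10 describes a concrete pipeline: classify superpixels into semantic categories, then use saliency within each category to localize candidates and rank them. The sketch below illustrates that idea under simplifying assumptions (the semantic label map and saliency map are hypothetical precomputed inputs, and connected components stand in for superpixel grouping); it is not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def discover_objects(semantic_labels, saliency, saliency_thresh=0.5):
    """Per-category object candidate generation (illustrative sketch).

    semantic_labels: (H, W) int array, one category id per pixel
    saliency:        (H, W) float array in [0, 1]
    Returns a list of (mask, score) candidates ranked by mean saliency.
    """
    candidates = []
    for category in np.unique(semantic_labels):
        # Restrict saliency to one category, standing in for the
        # paper's per-category location/scale estimation.
        category_mask = semantic_labels == category
        salient = category_mask & (saliency > saliency_thresh)
        # Connected components play the role of superpixel grouping here.
        components, n = ndimage.label(salient)
        for i in range(1, n + 1):
            mask = components == i
            candidates.append((mask, float(saliency[mask].mean())))
    # Candidates of all categories are combined and ranked jointly.
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates

# Toy usage with random inputs in place of real RGB-D predictions.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=(64, 64))
sal = rng.random((64, 64))
print(len(discover_objects(labels, sal)), "candidates")
```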
11. Action recognition based on efficient deep feature learning in the spatio-temporal domain
- Author
- Babette Dellen, Farzad Husain, Carme Torras, Consejo Superior de Investigaciones Científicas (España), Ministerio de Economía y Competitividad (España), Institut de Robòtica i Informàtica Industrial, and Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
- Subjects
- Informàtica::Automàtica i control [Àrees temàtiques de la UPC], computer vision, feature extraction, convolutional neural networks, pattern classification, motion estimation, visual learning, computer vision for automation, pattern recognition, artificial intelligence, feature learning, Pattern recognition::Computer vision [Classificació INSPEC]
- Abstract
Hand-crafted feature functions are usually designed based on the domain knowledge of a presumably controlled environment and often fail to generalize, as the statistics of real-world data cannot always be modeled correctly. Data-driven feature learning methods, on the other hand, have emerged as an alternative that often generalizes better in uncontrolled environments. We present a simple, yet robust, 2-D convolutional neural network extended to a concatenated 3-D network that learns to extract features from the spatio-temporal domain of raw video data. The resulting network model is used for content-based recognition of videos. Relying on a 2-D convolutional neural network allows us to exploit a pretrained network as a descriptor that yielded the best results on the large and challenging ILSVRC-2014 dataset. Experimental results on commonly used benchmark video datasets demonstrate that our results are state-of-the-art in terms of accuracy and computational time without requiring any preprocessing (e.g., optic flow) or a priori knowledge of data capture (e.g., camera motion estimation), which makes our approach more general and flexible than others. Our implementation is made available. This research is partially funded by the CSIC project TextilRob (201550E028) and the project RobInstruct (TIN2014-58178-R).
- Published
- 2016
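The architecture described in result 11 — a 2-D convolutional stage applied frame-wise, whose feature maps are concatenated along time and fed to a 3-D stage — can be sketched in PyTorch. Layer sizes are illustrative only, and a toy convolution stands in for the pretrained ILSVRC descriptor the paper uses; this is a sketch of the idea, not the authors' network.

```python
import torch
import torch.nn as nn

class SpatioTemporalNet(nn.Module):
    """2-D per-frame features concatenated into a 3-D temporal stage."""

    def __init__(self, n_classes=10):
        super().__init__()
        # Frame-wise 2-D stage; in the paper a pretrained 2-D CNN
        # serves as the descriptor, here a toy conv stands in.
        self.conv2d = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 3-D stage over the concatenated per-frame feature maps.
        self.conv3d = nn.Sequential(
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, video):           # video: (B, T, 3, H, W)
        b, t, c, h, w = video.shape
        frames = video.reshape(b * t, c, h, w)
        feats = self.conv2d(frames)     # (B*T, 16, H/2, W/2)
        _, fc, fh, fw = feats.shape
        # Re-stack along a temporal axis: (B, 16, T, H/2, W/2).
        feats = feats.reshape(b, t, fc, fh, fw).permute(0, 2, 1, 3, 4)
        out = self.conv3d(feats).flatten(1)
        return self.classifier(out)

logits = SpatioTemporalNet()(torch.randn(2, 8, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```

Note how the design matches the abstract's claim of needing no preprocessing: raw frames go in, and the temporal structure is learned by the 3-D stage rather than supplied via optic flow.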
12. Combining semantic and geometric features for object class segmentation of indoor scenes
- Author
- Babette Dellen, Carme Torras, Hannes Schulz, Farzad Husain, Sven Behnke, Institut de Robòtica i Informàtica Industrial, Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI, Consejo Superior de Investigaciones Científicas (España), and Ministerio de Economía y Competitividad (España)
- Subjects
- Informàtica::Automàtica i control [Àrees temàtiques de la UPC], computer vision, feature extraction, semantic scene understanding, segmentation, deep learning, pattern recognition, image segmentation, categorization, feature learning, Pattern recognition::Computer vision [Classificació INSPEC]
- Abstract
Scene understanding is a necessary prerequisite for robots acting autonomously in complex environments. Low-cost RGB-D cameras such as the Microsoft Kinect have enabled new methods for analyzing indoor scenes and are now ubiquitously used in indoor robotics. We investigate strategies for efficient pixelwise object class labeling of indoor scenes that combine both pretrained semantic features transferred from a large color image dataset and geometric features computed relative to the room structures, including a novel distance-from-wall feature, which encodes the proximity of scene points to a detected major wall of the room. We evaluate our approach on the popular NYU v2 dataset. Several deep learning models are tested, which are designed to exploit different characteristics of the data. This includes feature learning with two different pooling sizes. Our results indicate that combining semantic and geometric features yields significantly improved results for the task of object class segmentation. This research is partially funded by the CSIC project MANIPlus (201350E102) and the project RobInstruct (TIN2014-58178-R).
- Published
- 2016
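The distance-from-wall feature named in result 12 is straightforward to illustrate: once a major wall is detected as a plane in the camera frame, each scene point's signed point-to-plane distance becomes one extra channel alongside the semantic features. A minimal numpy sketch, with a hypothetical plane standing in for the paper's wall detector:

```python
import numpy as np

def distance_from_wall(points, plane_normal, plane_offset):
    """Signed point-to-plane distance for each 3-D scene point.

    points:       (N, 3) array of points in the camera frame
    plane_normal: (3,) normal of the detected wall plane
    plane_offset: scalar d in the plane equation n.x + d = 0
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    return points @ n + plane_offset

# Hypothetical wall plane x = 3 (normal along +x), toy point cloud.
points = np.array([[0.5, 0.0, 1.0], [2.9, 1.0, 2.0], [1.5, -0.5, 1.5]])
dist = distance_from_wall(points, np.array([1.0, 0.0, 0.0]), -3.0)

# The geometric feature is appended to per-point semantic features
# before classification, as in the combined model of the abstract.
semantic_features = np.random.rand(len(points), 8)
combined = np.hstack([semantic_features, dist[:, None]])
print(combined.shape)  # (3, 9)
```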
13. Robust surface tracking in range image sequences
- Author
- Babette Dellen, Farzad Husain, Carme Torras, Consejo Superior de Investigaciones Científicas (España), European Commission, Ministerio de Ciencia e Innovación (España), Institut de Robòtica i Informàtica Industrial, and Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
- Subjects
- Informàtica::Robòtica [Àrees temàtiques de la UPC], computer vision, surface fitting, tracking, segmentation, registration, cluster analysis, object tracking, range video, region growing, video tracking, Automation::Robots::Robot vision [Classificació INSPEC]
- Abstract
A novel robust method for surface tracking in range-image sequences is presented which combines a clustering method based on surface models with a particle-filter-based 2-D affine-motion estimator. Segmented regions obtained at previous time steps are used to create seed areas by comparing measured depth values with those obtained from surface-model fitting. The seed areas are further refined using a motion-probability region estimated by the particle-filter-based tracker through prediction of future states. This helps resolve ambiguities that arise when surfaces belonging to different objects are in physical contact with each other, for example during hand-object manipulations. Region growing allows recovering the complete segment area. The obtained segmented regions are then used to improve the predictions of the tracker for the next frame. The algorithm runs in quasi real-time and uses on-line learning, eliminating the need for a priori knowledge about the surface being tracked. We apply the method to in-house depth videos acquired with both time-of-flight and structured-light sensors, demonstrating object tracking in real-world scenarios, and we compare the results with those of an ICP-based tracker. This work received support from the CSIC project MVOD no. 201250E028, the EU project IntellAct FP7-269959, the project PAU+ DPI2011-27510, and the project CINNOVA 201150E088. B. Dellen was supported by the Spanish Ministry for Science and Innovation via a Ramon y Cajal fellowship RYC-2009-05324.
- Published
- 2014
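The seed-and-grow step in result 13 — compare measured depth against a fitted surface model, seed where they agree, then grow the region — can be sketched as follows. This assumes a quadratic surface model and a fixed residual tolerance; both are simplifications of the paper's adaptive surface models, and the particle-filter refinement of the seeds is omitted.

```python
import numpy as np
from collections import deque

def fit_quadratic_surface(us, vs, depths):
    """Least-squares fit of z = a*u^2 + b*v^2 + c*u*v + d*u + e*v + f."""
    A = np.column_stack([us**2, vs**2, us * vs, us, vs, np.ones_like(us)])
    coeffs, *_ = np.linalg.lstsq(A, depths, rcond=None)
    return coeffs

def predict_depth(coeffs, us, vs):
    A = np.column_stack([us**2, vs**2, us * vs, us, vs, np.ones_like(us)])
    return A @ coeffs

def grow_region(depth, seeds, coeffs, tol=0.02):
    """Grow a segment from seed pixels; accept 4-neighbours whose
    measured depth stays within `tol` of the surface-model prediction."""
    h, w = depth.shape
    region = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    for s in seeds:
        region[s] = True
    while queue:
        u, v = queue.popleft()
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p, q = u + du, v + dv
            if 0 <= p < h and 0 <= q < w and not region[p, q]:
                pred = predict_depth(coeffs, np.array([p]), np.array([q]))[0]
                if abs(depth[p, q] - pred) < tol:
                    region[p, q] = True
                    queue.append((p, q))
    return region

# Toy frame: a noisy planar surface; one seed from the previous segment.
h, w = 40, 40
uu, vv = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
depth = 1.0 + 0.01 * uu + np.random.normal(0, 0.002, (h, w))
coeffs = fit_quadratic_surface(uu.ravel().astype(float),
                               vv.ravel().astype(float), depth.ravel())
print(grow_region(depth, [(20, 20)], coeffs).sum(), "pixels recovered")
```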
14. Realtime tracking and grasping of a moving object from range video
- Author
- Babette Dellen, Farzad Husain, Adrià Colomé, Guillem Alenyà, Carme Torras, European Commission, Ministerio de Ciencia e Innovación (España), Consejo Superior de Investigaciones Científicas (España), Institut de Robòtica i Informàtica Industrial, and Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
- Subjects
- Informàtica::Automàtica i control [Àrees temàtiques de la UPC], computer vision, robot kinematics, manipulators, grasping, workspace, particle filter, robotic arm, Pattern recognition::Computer vision [Classificació INSPEC]
- Abstract
Presented at ICRA 2014, held in Hong Kong, May 31 to June 7. In this paper we present an automated system that is able to track and grasp a moving object within the workspace of a manipulator using range images acquired with a Microsoft Kinect sensor. Realtime tracking is achieved by a geometric particle filter on the affine group. Based on the tracked output, the pose of a 7-DoF WAM robotic arm is continuously updated using dynamic motor primitives until a distance measure between the tracked object and the gripper mounted on the arm is below a threshold. Then, it closes its three fingers and grasps the object. The tracker works in real-time and is robust to noise and partial occlusions. Using only the depth data makes our tracker independent of texture, which is one of the key design goals in our approach. An experimental evaluation is provided along with a comparison of the proposed tracker with state-of-the-art approaches, including the OpenNI tracker. The developed system is integrated with ROS and made available as part of IRI's ROS stack. This work was supported by the EU project IntellAct FP7-269959, the project PAU+ DPI2011-27510, and the project CINNOVA 201150E088. B. Dellen was supported by the Spanish Ministry for Science and Innovation via a Ramon y Cajal fellowship.
- Published
- 2014
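The control logic in result 14 reduces to a simple loop: track the object, servo the arm toward it, and trigger the grasp once the gripper-to-object distance drops below a threshold. A schematic sketch follows; the `StubTracker` and `StubArm` classes are placeholders invented here, standing in for the paper's particle-filter tracker on the affine group and the WAM arm driven by dynamic motor primitives via ROS.

```python
import numpy as np

GRASP_DISTANCE = 0.03  # metres; illustrative threshold

class StubTracker:
    """Stand-in for the particle-filter tracker on the affine group."""
    def __init__(self):
        self.pos = np.array([0.5, 0.2, 0.4])
    def estimate_position(self):
        # A real tracker would update this from the latest range image.
        return self.pos

class StubArm:
    """Stand-in for the 7-DoF arm driven by dynamic motor primitives."""
    def __init__(self):
        self.tip = np.zeros(3)
    def gripper_position(self):
        return self.tip
    def move_toward(self, target, step=0.05):
        self.tip = self.tip + step * (target - self.tip)

def track_and_grasp(tracker, arm, max_steps=200):
    """Schematic loop: track, servo, grasp when close enough."""
    for _ in range(max_steps):
        obj = tracker.estimate_position()
        if np.linalg.norm(obj - arm.gripper_position()) < GRASP_DISTANCE:
            return True  # close the three fingers and grasp
        arm.move_toward(obj)  # continuous pose update toward the object
    return False

print("grasped:", track_and_grasp(StubTracker(), StubArm()))
```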
15. Scene understanding using deep learning
- Author
- Babette Dellen, Carme Torras, Farzad Husain, Institut de Robòtica i Informàtica Industrial, and Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
- Subjects
- Informàtica::Automàtica i control [Àrees temàtiques de la UPC], computer vision, machine perception, scene understanding, feature extraction, object recognition, action recognition, deep learning, semantic segmentation, semantic labeling, robot vision, learning (artificial intelligence), Pattern recognition [Classificació INSPEC]
- Abstract
Deep learning is a type of machine perception method that attempts to model high-level abstractions in data and encode them into a compact and robust representation. Such representations have found immense usage in applications related to computer vision. In this chapter we introduce two such applications, i.e., semantic segmentation of images and action recognition in videos. These applications are of fundamental importance for human-centered environment perception.
16. Recognizing point clouds using conditional random fields
- Author
- Babette Dellen, Carme Torras, Farzad Husain, Institut de Robòtica i Informàtica Industrial, Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI, Consejo Superior de Investigaciones Científicas (España), European Commission, and Ministerio de Ciencia e Innovación (España)
- Subjects
- Informàtica::Robòtica [Àrees temàtiques de la UPC], conditional random fields, computer vision, point clouds, 3D object recognition, object detection, object recognition, Pattern recognition [Classificació INSPEC]
- Abstract
Paper presented at the 22nd International Conference on Pattern Recognition (ICPR 2014), held in Stockholm (Sweden), August 24-28, 2014. Detecting objects in cluttered scenes is a necessary step for many robotic tasks and facilitates the interaction of the robot with its environment. Because of the availability of efficient 3D sensing devices such as the Kinect, methods for the recognition of objects in 3D point clouds have gained importance during the last years. In this paper, we propose a new supervised learning approach for the recognition of objects from 3D point clouds using Conditional Random Fields, a type of discriminative, undirected probabilistic graphical model. The various features and contextual relations of the objects are described by the potential functions in the graph. Our method allows for learning and inference from unorganized point clouds of arbitrary sizes and shows significant benefit in terms of computational speed during prediction when compared to a state-of-the-art approach based on constrained optimization. This work was supported by the EU project (IntellAct FP7-269959), the project PAU+ (DPI2011-27510), and the CSIC project CINNOVA (201150E088). B. Dellen was supported by the Spanish Ministry for Science and Innovation via a Ramon y Cajal fellowship.
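The CRF formulation in result 16 can be made concrete with a small sketch: unary potentials score each point-cloud segment against each object class, and pairwise Potts potentials encourage neighbouring segments to agree. Here iterated conditional modes (ICM) serves as a deliberately simple stand-in for the paper's inference, and the unary costs are toy values rather than learned features.

```python
import numpy as np

def icm_crf(unary, edges, pairwise_weight=0.5, iters=10):
    """Approximate MAP labelling of a Potts CRF via iterated
    conditional modes (a simple stand-in for the paper's inference).

    unary: (N, K) costs of assigning each of K labels to N segments
    edges: list of (i, j) pairs connecting neighbouring segments
    """
    labels = unary.argmin(axis=1)            # start from unary-best labels
    neighbours = {i: [] for i in range(len(unary))}
    for i, j in edges:
        neighbours[i].append(j)
        neighbours[j].append(i)
    for _ in range(iters):
        for i in range(len(unary)):
            cost = unary[i].copy()
            # Potts pairwise term: penalize disagreeing with neighbours.
            for j in neighbours[i]:
                cost += pairwise_weight * (np.arange(len(cost)) != labels[j])
            labels[i] = cost.argmin()
    return labels

# Toy example: 4 segments in a chain, 3 object classes; the weak
# unary evidence on segment 2 is corrected by its neighbours.
unary = np.array([[0.1, 0.9, 0.9],
                  [0.2, 0.8, 0.9],
                  [0.6, 0.4, 0.9],   # noisy evidence
                  [0.1, 0.8, 0.9]])
print(icm_crf(unary, [(0, 1), (1, 2), (2, 3)]))  # -> [0 0 0 0]
```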