7 results for "Piasco, Nathan"
Search Results
2. A survey on Visual-Based Localization: On the benefit of heterogeneous data
- Author
- Piasco, Nathan, Sidibé, Désiré, Demonceaux, Cédric, and Gouet-Brunet, Valérie
- Published
- 2018
- Full Text
- View/download PDF
3. CROSSFIRE: Camera Relocalization On Self-Supervised Features from an Implicit Representation
- Author
- Moreau, Arthur, Piasco, Nathan, Bennehar, Moussab, Tsishkou, Dzmitry, Stanciulescu, Bogdan, and de La Fortelle, Arnaud
- Subjects
- FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition
- Abstract
Beyond novel view synthesis, Neural Radiance Fields are useful for applications that interact with the real world. In this paper, we use them as an implicit map of a given scene and propose a camera relocalization algorithm tailored for this representation. The proposed method computes, in real time, the precise position of a device from a single RGB camera during its navigation. In contrast with previous work, we do not rely on pose regression or photometric alignment, but rather on dense local features, obtained through volumetric rendering, which are specialized to the scene with a self-supervised objective. As a result, our algorithm is more accurate than competitors, able to operate in dynamic outdoor environments with changing lighting conditions, and can be readily integrated into any volumetric neural renderer. (A minimal code sketch of this idea follows this entry.)
- Published
- 2023
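As far as the abstract describes it, the method matches dense query descriptors against descriptors rendered from the implicit map and recovers the pose from the resulting 2D-3D correspondences. The Python sketch below illustrates that general idea only; `scene_nerf`, `feature_cnn`, and `match_descriptors` are hypothetical placeholders, not the authors' API, and OpenCV's RANSAC PnP solver stands in for whatever pose solver the paper actually uses.

```python
# Hypothetical sketch of relocalization against an implicit map: match dense
# query descriptors to descriptors rendered by the NeRF, then solve PnP.
import numpy as np
import cv2  # OpenCV provides the robust PnP solver used here


def relocalize(query_img, scene_nerf, feature_cnn, pose_init, K):
    """Estimate a 6-DoF camera pose from a single RGB image.

    scene_nerf, feature_cnn, and match_descriptors are placeholders for the
    paper's volumetric feature renderer, self-supervised encoder, and matcher.
    """
    query_feats = feature_cnn(query_img)  # dense descriptors, H x W x D
    # Rendered descriptors and per-pixel 3D points at the pose hypothesis.
    map_feats, points_3d = scene_nerf.render_features(pose_init, K)
    # (map_pixel, query_pixel) correspondences by descriptor similarity.
    matches = match_descriptors(map_feats, query_feats)
    obj_pts = np.float32([points_3d[mv, mu] for (mv, mu), _ in matches])
    img_pts = np.float32([qp for _, qp in matches])
    # Robust PnP rejects outlier matches and returns the refined pose.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return rvec, tvec
```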
4. LENS: Localization enhanced by NeRF synthesis
- Author
- Moreau, Arthur, Piasco, Nathan, Tsishkou, Dzmitry, Stanciulescu, Bogdan, de La Fortelle, Arnaud, Centre de Robotique (CAOR), MINES ParisTech - École nationale supérieure des mines de Paris, Université Paris sciences et lettres (PSL), and HUAWEI Technologies France (HUAWEI)
- Subjects
- FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Science - Robotics, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, [INFO.INFO-CV] Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], Robotics (cs.RO), Machine Learning (cs.LG)
- Abstract
Neural Radiance Fields (NeRF) have recently demonstrated photo-realistic results for the task of novel view synthesis. In this paper, we propose to apply novel view synthesis to the robot relocalization problem: we demonstrate improvement of camera pose regression thanks to an additional synthetic dataset rendered by the NeRF class of algorithms. To avoid spawning novel views in irrelevant places, we selected virtual camera locations from the NeRF internal representation of the 3D geometry of the scene. We further improved the localization accuracy of pose regressors by using the synthesized, realistic, and geometry-consistent images as data augmentation during training. At the time of publication, our approach improved on the state of the art with a 60% lower error on the Cambridge Landmarks and 7-Scenes datasets. Hence, the resulting accuracy becomes comparable to that of structure-based methods, without any architecture modification or domain adaptation constraints. Since our method allows almost infinite generation of training data, we investigated the limitations of camera pose regression depending on the size and distribution of the data used for training on public benchmarks. We concluded that pose regression accuracy is mostly bounded by relatively small and biased datasets rather than by the capacity of the pose regression model to solve the localization task. (A minimal code sketch of the data-augmentation idea follows this entry.)
Comment: Accepted at CoRL 2021
- Published
- 2021
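A minimal PyTorch-style sketch of the data-augmentation idea described in the abstract: a pose regressor trained on real images together with NeRF-rendered views at selected virtual camera poses. `render_nerf_view` and the toy `PoseRegressor` are assumptions for illustration, not the authors' architecture, and a single MSE loss over translation and quaternion is a simplification of typical pose regression losses.

```python
# Illustrative sketch: augment pose-regression training with synthetic views
# rendered by a NeRF at virtual camera poses (placeholders, not the paper's code).
import torch
import torch.nn as nn


class PoseRegressor(nn.Module):
    """Toy camera pose regressor: image -> 7-vector (translation + quaternion)."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 7)  # 3-DoF translation + 4-DoF quaternion

    def forward(self, img):
        return self.head(self.backbone(img))


def training_step(model, optimizer, real_batch, virtual_poses, render_nerf_view):
    imgs, poses = real_batch
    # Augment: render geometry-consistent synthetic views at selected poses.
    synth_imgs = torch.stack([render_nerf_view(p) for p in virtual_poses])
    imgs = torch.cat([imgs, synth_imgs])
    poses = torch.cat([poses, virtual_poses])
    loss = nn.functional.mse_loss(model(imgs), poses)  # simplified pose loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```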
5. Perspective-n-Learned-Point: Pose Estimation from Relative Depth
- Author
- Piasco, Nathan, Sidibé, Désiré, Demonceaux, Cédric, Gouet-Brunet, Valérie, Equipe VIBOT - VIsion pour la roBOTique [ImViA EA7535 - ERL CNRS 6000] (VIBOT), Centre National de la Recherche Scientifique (CNRS)-Imagerie et Vision Artificielle [Dijon] (ImViA), Université de Bourgogne (UB), Laboratoire des Sciences et Technologies de l'Information Géographique (LaSTIG), École nationale des sciences géographiques (ENSG), Institut National de l'Information Géographique et Forestière [IGN] (IGN), and ANR-15-CE23-0010 pLaTINUM (Cartographie Long Terme pour la Mobilité Urbaine, 2015)
- Subjects
- [INFO.INFO-TI] Computer Science [cs]/Image Processing [eess.IV], [INFO.INFO-LG] Computer Science [cs]/Machine Learning [cs.LG], [INFO.INFO-RB] Computer Science [cs]/Robotics [cs.RO], ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, [INFO] Computer Science [cs]
- Abstract
In this paper we present an online camera pose estimation method that combines Content-Based Image Retrieval (CBIR) and pose refinement based on a learned representation of the scene geometry extracted from monocular images. Our pose estimation method proceeds in two steps: we first retrieve an initial 6 Degrees of Freedom (DoF) location for an unknown-pose query by retrieving the most similar candidate in a pool of geo-referenced images. In a second step, we refine the query pose with a Perspective-n-Point (PnP) algorithm in which the 3D points are obtained from a depth map generated from the retrieved image candidate. We make our method fast and lightweight by using a common neural network architecture to generate both the image descriptor used for image indexing and the depth map used to create the 3D points required in the PnP pose refinement step. We demonstrate the effectiveness of our proposal through extensive experimentation on both indoor and outdoor scenes, as well as the generalisation capability of our method to unknown environments. Finally, we show how to deploy our system even when the geometric information needed to train our monocular-image-to-depth neural networks is missing. (A minimal code sketch of the two-step pipeline follows this entry.)
- Published
- 2019
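A hedged sketch of the two-step pipeline from the abstract: retrieve the most similar geo-referenced image by descriptor distance, back-project its generated depth map to 3D points, and refine the query pose with PnP. `describe`, `predict_depth`, and `match_2d2d` are hypothetical stand-ins for the shared network and the matcher the paper uses.

```python
# Sketch of retrieval + PnP refinement on points lifted from a *predicted*
# depth map. Placeholders are assumptions, not the authors' implementation.
import numpy as np
import cv2


def backproject(depth, K):
    """Lift every pixel of a depth map to a 3D point in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1)  # H x W x 3


def localize(query_img, database, describe, predict_depth, match_2d2d, K):
    # Step 1: coarse 6-DoF prior from the most similar geo-referenced image.
    q_desc = describe(query_img)
    ref = min(database, key=lambda r: np.linalg.norm(r.desc - q_desc))
    # Step 2: PnP refinement using 3D points from the generated depth map.
    pts3d = backproject(predict_depth(ref.img), K)      # ref camera frame
    ref_px, query_px = match_2d2d(ref.img, query_img)   # 2D-2D matches
    obj = np.float32([pts3d[v, u] for (u, v) in ref_px])
    ok, rvec, tvec, inl = cv2.solvePnPRansac(obj, np.float32(query_px), K, None)
    return ref.pose, (rvec, tvec)  # retrieved prior, query pose w.r.t. ref
```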
6. Vision-based localization from discriminative features derived from heterogeneous visual data (Localisation basée vision à partir de caractéristiques discriminantes issues de données visuelles hétérogènes)
- Author
- Piasco, Nathan, Equipe VIBOT - VIsion pour la roBOTique [ImViA EA7535 - ERL CNRS 6000] (VIBOT), Centre National de la Recherche Scientifique (CNRS)-Imagerie et Vision Artificielle [Dijon] (ImViA), Université de Bourgogne (UB), Laboratoire sciences et technologies de l'information géographique (LaSTIG), Ecole des Ingénieurs de la Ville de Paris (EIVP)-École nationale des sciences géographiques (ENSG), Institut National de l'Information Géographique et Forestière [IGN] (IGN)-Université Gustave Eiffel, Université Bourgogne Franche-Comté, Demonceaux, Cédric, and Gouet-Brunet, Valérie
- Subjects
- [SPI] Engineering Sciences [physics], localization, camera pose estimation, image indexing
- Abstract
Visual-based Localization (VBL) consists in retrieving the location of a visual image within a known space. VBL is involved in several present-day practical applications, such as indoor and outdoor navigation and 3D reconstruction. The main challenge in VBL comes from the fact that the visual input to localize may have been taken at a different time from the reference database. Visual changes may occur in the observed environment during this period, especially for outdoor localization. Recent approaches use complementary information, such as geometric or semantic information, to address these visually challenging localization scenarios. However, geometric or semantic information is not always available and can be costly to obtain. To become independent of any extra modality, we propose to use a modality transfer model capable of reproducing the underlying scene geometry from a monocular image. First, we cast the localization problem as a Content-Based Image Retrieval (CBIR) problem and train a CNN image descriptor with radiometry-to-dense-geometry transfer as a side training objective. Once trained, our system can be used on monocular images only to construct an expressive descriptor for localization in challenging conditions. Second, we introduce a new relocalization pipeline that improves the localization given by our initial localization step. In the same manner as for our global image descriptor, the relocalization is aided by the geometric information learned during an offline stage. The extra geometric information is used to constrain the final pose estimation of the query. Through comprehensive experiments, we demonstrate the effectiveness of our proposals for both indoor and outdoor localization. (A minimal code sketch of the side-objective training idea follows this entry.)
- Published
- 2019
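An illustrative PyTorch sketch of the thesis' side-objective idea: one shared encoder trained jointly for a global retrieval descriptor (triplet loss) and radiometry-to-geometry transfer (depth regression). The architecture, losses, and weights below are assumptions for illustration, not the thesis' actual network.

```python
# Sketch: shared encoder with a descriptor head (for retrieval) and a depth
# head (auxiliary RGB-to-depth transfer). Assumed architecture, not the thesis'.
import torch
import torch.nn as nn


class DescriptorWithDepthTransfer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())
        self.desc_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, dim))
        self.depth_head = nn.Sequential(  # decodes geometry from radiometry
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, 2, 1))

    def forward(self, rgb):
        feat = self.encoder(rgb)
        return self.desc_head(feat), self.depth_head(feat)


def joint_loss(model, anchor, pos, neg, depth_gt, margin=0.5, w_depth=1.0):
    d_a, depth_pred = model(anchor)
    d_p, _ = model(pos)
    d_n, _ = model(neg)
    # Triplet loss shapes the retrieval descriptor; depth regression is the
    # auxiliary radiometry-to-geometry objective. At test time only the
    # descriptor branch is needed, so monocular RGB input suffices.
    retrieval = nn.functional.triplet_margin_loss(d_a, d_p, d_n, margin=margin)
    transfer = nn.functional.l1_loss(depth_pred, depth_gt)  # (N,1,H,W) target
    return retrieval + w_depth * transfer
```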
7. Collaborative localization and formation flying using distributed stereo-vision.
- Author
- Piasco, Nathan, Marzat, Julien, and Sanfourche, Martial
- Published
- 2016
- Full Text
- View/download PDF