Lois Mignard-Debise, Fabrice Harms, Charlotte Herzog, Elena Longo, Ombeline de La Rochefoucauld, Philippe Zeitoun, Xavier Levecq, Guillaume Dovillaire, Xavier Granier — Laboratoire Photonique, Numérique et Nanosciences (LP2N), Université de Bordeaux (UB) / Institut d'Optique Graduate School (IOGS) / Centre National de la Recherche Scientifique (CNRS); Melting the frontiers between Light, Shape and Matter (MANAO); Laboratoire Bordelais de Recherche en Informatique (LaBRI), Université de Bordeaux (UB) / CNRS / École Nationale Supérieure d'Électronique, Informatique et Radiocommunications de Bordeaux (ENSEIRB); Inria Bordeaux - Sud-Ouest, Institut National de Recherche en Informatique et en Automatique (Inria); Imagine Optic; Laboratoire d'optique appliquée (LOA), École Nationale Supérieure de Techniques Avancées (ENSTA Paris) / École polytechnique (X) / CNRS; European Project: 665207, H2020, H2020-FETOPEN-2014-2015-RIA, VOXEL (2015)
Plenoptic cameras provide single-shot 3D imaging capabilities based on the acquisition of the Light-Field, which corresponds to a spatial and directional sampling of all the rays of a scene reaching a detector. Specific algorithms applied to the raw Light-Field data allow for the reconstruction of an object at different depths of the scene. Two different plenoptic imaging geometries have been reported, each associated with its own reconstruction algorithm: the traditional or unfocused plenoptic camera, also known as plenoptic camera 1.0, and the focused plenoptic camera, also called plenoptic camera 2.0. Both systems use the same optical elements (a main lens, a microlens array and a detector), but place them at different relative positions. These plenoptic systems have been presented as independent. Here we show the continuity between them by simply moving the position of an object. We also compare the two reconstruction methods. We show theoretically that the two algorithms are intrinsically based on the same principle and could be applied to any Light-Field data. However, the resolution and quality of the reconstructed images depend on the chosen algorithm.
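The abstract does not include the reconstruction algorithms themselves; as an illustration of what "reconstruction of an object at different depths" means for Light-Field data, below is a minimal shift-and-sum refocusing sketch in the spirit of unfocused (plenoptic 1.0) rendering. The 4D array layout, the function name `refocus`, and the `slope` parameter are assumptions made for illustration and are not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(light_field, slope):
    """Shift-and-sum refocusing of a 4D light field (illustrative sketch).

    light_field : ndarray of shape (U, V, S, T)
        Sub-aperture images indexed by angular coordinates (u, v)
        and spatial coordinates (s, t).
    slope : float
        Per-view shift in pixels per unit of angular offset; each value
        of `slope` selects one refocusing depth in the scene.
    """
    U, V, S, T = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0   # centre of the angular aperture
    out = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image proportionally to its angular
            # offset, then accumulate (integration over the synthetic aperture).
            dy, dx = slope * (u - uc), slope * (v - vc)
            out += shift(light_field[u, v], (dy, dx), order=1, mode='nearest')
    return out / (U * V)
```

Sweeping `slope` over a range of values produces a focal stack, i.e. the same scene reconstructed at a series of depths, which is the single-shot 3D capability referred to above.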