16 results for "RGB cameras"
Search Results
2. Multi-drone Multi-object Tracking with RGB Cameras Using Spatio-Temporal Cues
- Author
-
Chen, Guanyin, Fang, Bohui, Fu, Wenxing, and Yang, Tao; Qu, Yi, Gu, Mancang, Niu, Yifeng, and Fu, Wenxing, editors
- Published
- 2024
- Full Text
- View/download PDF
3. Improving Shape Transformations for RGB Cameras Using Photometric Stereo.
- Author
-
Wahhab, H. I., Alanssari, A. N., Khalaf, Ahmed L., Sekhar, Ravi, Shah, Pritesh, and Tawfeq, Jamal F.
- Subjects
Photometric stereo, computer vision, high-resolution imaging, computer simulation, cameras - Abstract
The emergence of low-cost red, green, and blue (RGB) cameras has significantly impacted various computer vision tasks. However, these cameras often produce depth maps with limited object detail, noise, and missing information. These limitations can adversely affect the quality of 3D reconstruction and the accuracy of camera trajectory estimation. Additionally, existing depth refinement methods struggle to distinguish shape from complex albedo, leading to visible artifacts in the refined depth maps. In this paper, we address these challenges by proposing two novel methods based on the theory of photometric stereo. The first method, the RGB ratio model, tackles the nonlinearity problem present in previous approaches and provides a closed-form solution. The second method, the robust multi-light model, overcomes the limitations of existing depth refinement methods by accurately estimating shape from imperfect depth data without relying on regularization. Furthermore, we demonstrate the effectiveness of combining these methods with image super-resolution to obtain high-quality, high-resolution depth maps. Through quantitative and qualitative experiments, we validate the robustness and effectiveness of our techniques in improving shape transformations for RGB cameras. (A photometric-stereo sketch follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
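As a point of reference for record 3, here is a minimal sketch of the classical Lambertian photometric-stereo baseline the paper builds on; the paper's own RGB ratio and robust multi-light models are not reproduced, and the function name and input shapes are illustrative assumptions.

```python
# Minimal classical photometric stereo: I = albedo * (N . L) per pixel,
# solved in least squares over k known light directions. This is the
# textbook baseline the paper extends, not the paper's own method.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, h, w) grayscale frames under k lights;
    light_dirs: (k, 3) unit light directions."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w)
    # Solve light_dirs @ G = I for G = albedo * normal at each pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)               # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```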
4. Non-Contact Respiratory Monitoring Using an RGB Camera for Real-World Applications
- Author
-
Chiara Romano, Emiliano Schena, Sergio Silvestri, and Carlo Massaroni
- Subjects
Breathing, contactless monitoring systems, respiratory monitoring, RGB cameras, Chemical technology, TP1-1185 - Abstract
Respiratory monitoring is receiving growing interest in fields of use ranging from healthcare to occupational settings. Only recently have non-contact measuring systems been developed to measure the respiratory rate (fR) over time, even in unconstrained environments. Promising methods rely on the analysis of video-frame features recorded from cameras. In this work, a low-cost and unobtrusive measuring system for respiratory pattern monitoring, based on the analysis of RGB images recorded from a consumer-grade camera, is proposed. The system allows (i) the automated tracking of the chest movements caused by breathing, (ii) the extraction of the breathing signal from images with methods based on optical flow (FO) and RGB analysis, (iii) the elimination of breathing-unrelated events from the signal, (iv) the identification of possible apneas, and (v) the calculation of the fR value every second. Unlike most of the work in the literature, the performance of the system has been tested in an unstructured environment, considering user-camera distance and user posture as influencing factors. A total of 24 healthy volunteers were enrolled for the validation tests. Better performance was obtained when users were in a sitting position, and the FO method performed best in all conditions. In the fR range of 6 to 60 breaths/min (bpm), the FO method measures fR with a bias of −0.03 ± 1.38 bpm and −0.02 ± 1.92 bpm compared to a reference wearable system, with the user at 2 and 0.5 m from the camera, respectively. (An optical-flow sketch follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
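The optical-flow (FO) pipeline described in record 4 can be sketched as below, assuming grayscale frames, a fixed chest region of interest, and OpenCV's Farneback flow; the ROI coordinates, function name, and spectral fR estimator are illustrative assumptions rather than the authors' implementation.

```python
# FO-method sketch: average vertical Farneback flow over a chest ROI gives a
# breathing waveform; its dominant spectral peak in the 6-60 bpm band is fR.
# ROI coordinates and the spectral estimator are illustrative assumptions.
import cv2
import numpy as np

def breathing_rate_bpm(frames, fps, roi=(200, 100, 160, 120)):
    """frames: list of grayscale frames; roi: (x, y, w, h) over the chest."""
    x, y, w, h = roi
    flows = []
    prev = frames[0][y:y + h, x:x + w]
    for frame in frames[1:]:
        cur = frame[y:y + h, x:x + w]
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow[..., 1].mean())  # mean vertical motion per frame
        prev = cur
    motion = np.cumsum(flows)              # chest displacement over time
    motion = motion - motion.mean()
    spectrum = np.abs(np.fft.rfft(motion))
    freqs = np.fft.rfftfreq(len(motion), d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 1.0)  # 6-60 breaths/min, as in the paper
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```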
5. Deep segmentation leverages geometric pose estimation in computer-aided total knee arthroplasty
- Author
-
Pedro Rodrigues, Michel Antunes, Carolina Raposo, Pedro Marques, Fernando Fonseca, and Joao P. Barreto
- Subjects
orthopaedics, surgery, image registration, bone, medical image processing, diseases, pose estimation, prosthetics, image segmentation, learning (artificial intelligence), neural nets, knee arthritis, joint disease, computed tomography scan, magnetic resonance imaging, navigation system, surgical flow, computer-aided system, depth cameras, deep learning approach, bone surface, navigation sensor, preoperative 3D model, computer-aided total knee arthroplasty, deep segmentation, geometric pose estimation, RGB cameras, Medical technology, R855-855.5 - Abstract
Knee arthritis is a common joint disease that usually requires a total knee arthroplasty. There are multiple surgical variables that have a direct impact on the correct positioning of the implants, and finding an optimal combination of all these variables is the most challenging aspect of the procedure. Usually, preoperative planning using a computed tomography scan or magnetic resonance imaging helps the surgeon decide the most suitable resections to be made. This work is a proof of concept for a navigation system that supports the surgeon in following a preoperative plan. Existing solutions require costly sensors and special markers fixed to the bones through additional incisions, which can interfere with the normal surgical flow. In contrast, the authors propose a computer-aided system that uses consumer RGB and depth cameras and does not require additional markers or tools to be tracked. They combine a deep learning approach for segmenting the bone surface with a recent registration algorithm for computing the pose of the navigation sensor with respect to the preoperative 3D model. Experimental validation using ex-vivo data shows that the method enables contactless pose estimation of the navigation sensor with respect to the preoperative model, providing valuable information for guiding the surgeon during the medical procedure. (A segmentation-and-registration sketch follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
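The shape of the pipeline in record 5 can be sketched as below: a CNN-produced bone mask selects depth pixels, which are back-projected and registered to the preoperative model. The segmentation network is omitted (a boolean mask is assumed as its output), plain point-to-point ICP from Open3D stands in for the paper's registration algorithm, and the file path, intrinsics, and parameter values are assumptions.

```python
# Pipeline sketch: back-project bone pixels (selected by a CNN mask) into a
# point cloud, then register it to the preoperative model. Plain ICP stands
# in for the paper's registration algorithm; all parameters are assumptions.
import numpy as np
import open3d as o3d  # assumed available; any registration library would do

def register_to_preop(depth_m, bone_mask, intrinsics, preop_mesh_path):
    """depth_m: (h, w) depth in meters; bone_mask: (h, w) bool segmentation;
    intrinsics: (fx, fy, cx, cy) pinhole parameters."""
    fx, fy, cx, cy = intrinsics
    v, u = np.nonzero(bone_mask)
    z = depth_m[v, u]
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(pts)
    # Sample the preoperative 3D model into a target cloud and run ICP.
    target = o3d.io.read_triangle_mesh(preop_mesh_path) \
                  .sample_points_uniformly(number_of_points=20000)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_correspondence_distance=0.01,
        estimation_method=o3d.pipelines.registration
                             .TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 camera-to-model pose estimate
```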
6. Vegetation Extraction Using Visible-Bands from Openly Licensed Unmanned Aerial Vehicle Imagery
- Author
-
Athos Agapiou
- Subjects
vegetation indices, RGB cameras, unmanned aerial vehicle (UAV), empirical line method, green leaf index, open aerial map, Motor vehicles. Aeronautics. Astronautics, TL1-4050 - Abstract
Red–green–blue (RGB) cameras attached to commercial unmanned aerial vehicles (UAVs) can support small-scale remote-observation campaigns by mapping an area of interest to within a few centimeters' accuracy. Vegetated areas need to be identified either for masking purposes (e.g., to exclude vegetated areas when producing a digital elevation model (DEM)) or for monitoring vegetation anomalies, especially in precision agriculture applications. However, while the detection of vegetated areas is of great importance for several UAV remote sensing applications, this type of processing can be quite challenging. Healthy vegetation is usually extracted in the near-infrared part of the spectrum (approximately 760–900 nm), which is not captured by visible (RGB) cameras. In this study, we explore several visible-band (RGB) vegetation indices in different environments, using various UAV sensors and cameras, to validate their performance. For this purpose, openly licensed unmanned aerial vehicle (UAV) imagery was downloaded "as is" and analyzed, and the overall results are presented in the study. The green leaf index (GLI) provided the best results across all case studies. (A GLI sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
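The green leaf index that record 6 found optimal has a simple closed form, sketched below; the function name and the zero threshold for the vegetation mask are assumptions, not values from the paper.

```python
# Green leaf index sketch: GLI = (2G - R - B) / (2G + R + B), the visible-band
# index the study found optimal; thresholding it yields a vegetation mask.
import numpy as np

def gli_mask(rgb, threshold=0.0):
    """rgb: (h, w, 3) float image scaled to [0, 1]; returns (gli, mask)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    denom = 2 * g + r + b
    gli = (2 * g - r - b) / np.maximum(denom, 1e-8)  # guard zero denominators
    return gli, gli > threshold  # mask of likely-vegetated pixels
```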
7. Non-Contact Respiratory Monitoring Using an RGB Camera for Real-World Applications
- Author
-
Sergio Silvestri, Emiliano Schena, Chiara Romano, and Carlo Massaroni
- Subjects
Respiratory rate, breathing, respiratory monitoring, contactless monitoring systems, RGB cameras, RGB color model, optical flow, computer vision, artificial intelligence, wearable computers, signal, ranging, Humans, Monitoring, Physiologic, Respiration, Computer science, Biochemistry, Analytical Chemistry, Chemical technology, Electrical and Electronic Engineering, Instrumentation, Atomic and Molecular Physics, and Optics, Article, TP1-1185 - Abstract
Respiratory monitoring is receiving growing interest in fields of use ranging from healthcare to occupational settings. Only recently have non-contact measuring systems been developed to measure the respiratory rate (fR) over time, even in unconstrained environments. Promising methods rely on the analysis of video-frame features recorded from cameras. In this work, a low-cost and unobtrusive measuring system for respiratory pattern monitoring, based on the analysis of RGB images recorded from a consumer-grade camera, is proposed. The system allows (i) the automated tracking of the chest movements caused by breathing, (ii) the extraction of the breathing signal from images with methods based on optical flow (FO) and RGB analysis, (iii) the elimination of breathing-unrelated events from the signal, (iv) the identification of possible apneas, and (v) the calculation of the fR value every second. Unlike most of the work in the literature, the performance of the system has been tested in an unstructured environment, considering user-camera distance and user posture as influencing factors. A total of 24 healthy volunteers were enrolled for the validation tests. Better performance was obtained when users were in a sitting position, and the FO method performed best in all conditions. In the fR range of 6 to 60 breaths/min (bpm), the FO method measures fR with a bias of −0.03 ± 1.38 bpm and −0.02 ± 1.92 bpm compared to a reference wearable system, with the user at 2 and 0.5 m from the camera, respectively.
- Published
- 2021
8. Using UAV Borne, Multi-Spectral Imaging for the Field Phenotyping of Shoot Biomass, Leaf Area Index and Height of West African Sorghum Varieties under Two Contrasted Water Conditions
- Author
-
Boubacar Gano, Delphine Luquet, Alain Audebert, Joseph Sékou B. Dembele, Grégory Beurier, Adama P. Ndour, and Diaga Diouf (Institut Sénégalais de Recherches Agricoles (ISRA), Dakar; Université Cheikh Anta Diop (UCAD), Dakar; UMR AGAP and Département Systèmes Biologiques, Cirad, Montpellier; supported by the Deutscher Akademischer Austausch Dienst (DAAD) and the sgt terra gates project - Bill and Melinda Gates Foundation)
- Subjects
Drought tolerance, phenotyping, UAV platform, unmanned aerial vehicles, RGB cameras, multi-spectral imaging, vegetation indices, normalized difference vegetation index, leaf area index, biomass (ecology), sorghum, West Africa, climatic factors, agriculture, Agronomy and Crop Science, plant physiology and biochemistry (F60), research methods (U30) - Abstract
Meeting food demand for the growing population will require an increase in crop production despite climate change and, in particular, severe drought episodes. Sorghum is one of the cereals best adapted to drought and feeds millions of people around the world. Valorizing its genetic diversity for crop improvement can benefit from extensive phenotyping. The current methods to evaluate plant biomass, leaf area and plant height involve destructive sampling and are not practical in breeding. Phenotyping relying on drone-based imagery is a powerful approach in this context. The objective of this study was to develop and validate a high-throughput field phenotyping method for sorghum growth traits under contrasted water conditions, relying on drone-based imagery. Experiments were conducted in Bambey (Senegal) in 2018 and 2019 to test the ability of multi-spectral sensing technologies on board a UAV platform to calculate various vegetation indices for estimating plant characteristics. In total, ten (10) contrasted varieties of the West African sorghum collection were selected and arranged in a randomized complete block design with three (3) replicates and two (2) water treatments (well-watered and drought stress). This study focused on plant biomass, leaf area index (LAI) and plant height, which were measured weekly from emergence to maturity. Drone flights were performed just before each destructive sampling, and images were taken with multi-spectral and visible cameras. UAV-derived vegetation indices demonstrated their capacity to estimate LAI and biomass in the 2018 calibration data set, in particular the normalized difference vegetation index (NDVI), corrected transformed vegetation index (CTVI), second modified soil-adjusted vegetation index (MSAVI2), green normalized difference vegetation index (GNDVI), and simple ratio (SR) (r2 of 0.8 and 0.6 for LAI and biomass, respectively). The developed models were validated with the 2019 data, showing good performance (r2 of 0.92 and 0.91 for LAI and biomass, respectively). Results were also promising for plant height estimation (RMSE = 9.88 cm); regression plots between the image-based estimates and the measured plant heights showed an r2 of 0.83. The validation results were similar between water treatments. This study is the first successful application of drone-based imagery for phenotyping sorghum growth and development in a West African context characterized by severe drought occurrence. The developed approach could be used as a decision support tool for breeding programs and as a tool to increase the throughput of sorghum genetic diversity characterization for adaptive traits. (An index-to-trait sketch follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
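The calibrate-then-validate workflow in record 8 can be sketched as below: compute per-plot indices from the multi-spectral bands, then fit a linear calibration against measured LAI. The linear form and all names are assumptions; the paper's coefficients are not reproduced.

```python
# Index-to-trait sketch: NDVI/GNDVI from multi-spectral bands, then a
# least-squares calibration LAI ~ a * index + b over calibration plots.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / np.maximum(nir + red, 1e-8)

def gndvi(nir, green):
    return (nir - green) / np.maximum(nir + green, 1e-8)

def fit_lai_model(plot_index_means, plot_lai):
    """plot_index_means, plot_lai: 1-D arrays over calibration plots."""
    A = np.stack([plot_index_means, np.ones_like(plot_index_means)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, plot_lai, rcond=None)
    return a, b  # apply to validation-year indices to predict LAI
```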
9. Simulation-Controlled Computer Systems in Surgical Practice (ИМИТАЦИОННО УПРАВЛЯЕМЫЕ КОМПЬЮТЕРНЫЕ СИСТЕМЫ В ХИРУРГИЧЕСКОЙ ПРАКТИКЕ)
- Subjects
Leap Motion, Kinect, COTS, RGB cameras, AMT and CNN - Abstract
The work of surgeons in the operating room is among the most demanding with respect to compliance with the rules of asepsis and antisepsis: the surgeon's hands should be exposed to microbial colonization as little as possible. Computer systems are now used during surgical procedures to support medical maneuvers, yet devices such as keyboards, mice, touch screens and lighting controls are potential sources of infection in operating rooms. The solution to this problem is the use of computer systems operated contactlessly by the physician, such as COTS, Kinect, Leap Motion, RGB cameras, AMT and CNN. A new generation of instruments known as commercial off-the-shelf (COTS) devices, which provide non-contact gesture interaction between a person and equipment, is being studied in surgical environments. One of the leading technologies for simulation rehabilitation is Microsoft Kinect, a video-based system that uses infrared radiation to track the movements of the user's body. The Leap Motion controller senses the natural orientation of the hands. AMT is applied as a modified adaptive multi-space transformation, and two-stream CNNs are used for video-based action recognition. In this article, the authors summarize how these computer systems work and how they are applied; the relevance of this review is underscored by the absence of a comparable one in Russian. International Research Journal (Международный научно-исследовательский журнал), Issue 7 (109), 2021, pp. 96-99.
- Published
- 2021
- Full Text
- View/download PDF
10. Integration of Computer Vision and Wireless Networks to Provide Indoor Positioning
- Author
-
Duque Domingo, Jaime
- Abstract
This work presents an integrated Indoor Positioning System which makes use of WiFi signals and RGB cameras, such as surveillance cameras, to track and identify people navigating in complex indoor environments. Previous works have often been based on WiFi, but its accuracy is limited. Other works use computer vision, but identifying specific individuals relies on techniques such as face recognition, which are not useful when there are many unknown people, or whose robustness decreases when individuals are seen from different points of view. The solution presented in this paper is based on an accurate combination of smartphones along with RGB cameras, such as those used in surveillance infrastructures. WiFi signals from smartphones allow the persons present in the environment to be identified uniquely, while the data coming from the cameras allow the precision of location to be improved. The system is nonintrusive, and no biometric data about subjects is required. In this paper, the proposed method is fully described, and the experiments performed to test the system are detailed along with the results obtained. (An identity-to-track matching sketch follows this record.) Funding: Ministerio de Ciencia, Innovación y Universidades (grant RTI2018-096652-B-I00), Junta de Castilla y León (grant VA233P18), Ministerio de Economía, Industria y Competitividad (project DPI2016-77677-P), Comunidad de Madrid (project S2018/NMT-4331)
- Published
- 2019
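The fusion idea in record 10 can be sketched as below: WiFi fingerprints say who is present (with coarse positions), camera tracks say precisely where anonymous people are, and an assignment by proximity joins the two. The data structures and the Hungarian-assignment rule are illustrative assumptions, not the paper's algorithm.

```python
# Fusion sketch: assign each WiFi-identified person to the nearest camera
# track, minimizing total distance via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_ids_to_tracks(wifi_positions, track_positions):
    """wifi_positions: {person_id: (x, y)} coarse WiFi fingerprint estimates;
    track_positions: list of (x, y) camera track centroids.
    Returns {person_id: track_index} minimizing total matching distance."""
    ids = list(wifi_positions)
    cost = np.array([[np.hypot(*np.subtract(wifi_positions[i], t))
                      for t in track_positions] for i in ids])
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return {ids[r]: int(c) for r, c in zip(rows, cols)}
```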
11. Integration of Computer Vision and Wireless Networks to Provide Indoor Positioning
- Author
-
Enrique Valero, Jaime Gómez-García-Bermejo, Carlos Cerrada, Jaime Duque Domingo, and Eduardo Zalama
- Subjects
Indoor positioning, indoor positioning system (IPS), WPS, WiFi, fingerprint map, trajectory, wireless networks, RGB cameras, RGB color model, computer vision, facial recognition system, robustness (computer science), artificial intelligence, Computer science, Biochemistry, Analytical Chemistry, Electrical and Electronic Engineering, Instrumentation, Atomic and Molecular Physics, and Optics, Article - Abstract
This work presents an integrated Indoor Positioning System which makes use of WiFi signals and RGB cameras, such as surveillance cameras, to track and identify people navigating in complex indoor environments. Previous works have often been based on WiFi, but its accuracy is limited. Other works use computer vision, but identifying specific individuals relies on techniques such as face recognition, which are not useful when there are many unknown people, or whose robustness decreases when individuals are seen from different points of view. The solution presented in this paper is based on an accurate combination of smartphones along with RGB cameras, such as those used in surveillance infrastructures. WiFi signals from smartphones allow the persons present in the environment to be identified uniquely, while the data coming from the cameras allow the precision of location to be improved. The system is nonintrusive, and no biometric data about subjects is required. In this paper, the proposed method is fully described, and the experiments performed to test the system are detailed along with the results obtained. Funding: Ministerio de Ciencia, Innovación y Universidades (grant RTI2018-096652-B-I00), Junta de Castilla y León (grant VA233P18), Ministerio de Economía, Industria y Competitividad (project DPI2016-77677-P), Comunidad de Madrid (project S2018/NMT-4331)
- Published
- 2019
12. Non-Contact Respiratory Monitoring Using an RGB Camera for Real-World Applications.
- Author
-
Romano, Chiara, Schena, Emiliano, Silvestri, Sergio, and Massaroni, Carlo
- Subjects
Ventilation monitoring, sitting position, image analysis, unobtrusive measures, optical flow, cameras, four-dimensional imaging, wearable technology - Abstract
Respiratory monitoring is receiving growing interest in fields of use ranging from healthcare to occupational settings. Only recently have non-contact measuring systems been developed to measure the respiratory rate (fR) over time, even in unconstrained environments. Promising methods rely on the analysis of video-frame features recorded from cameras. In this work, a low-cost and unobtrusive measuring system for respiratory pattern monitoring, based on the analysis of RGB images recorded from a consumer-grade camera, is proposed. The system allows (i) the automated tracking of the chest movements caused by breathing, (ii) the extraction of the breathing signal from images with methods based on optical flow (FO) and RGB analysis, (iii) the elimination of breathing-unrelated events from the signal, (iv) the identification of possible apneas, and (v) the calculation of the fR value every second. Unlike most of the work in the literature, the performance of the system has been tested in an unstructured environment, considering user-camera distance and user posture as influencing factors. A total of 24 healthy volunteers were enrolled for the validation tests. Better performance was obtained when users were in a sitting position, and the FO method performed best in all conditions. In the fR range of 6 to 60 breaths/min (bpm), the FO method measures fR with a bias of −0.03 ± 1.38 bpm and −0.02 ± 1.92 bpm compared to a reference wearable system, with the user at 2 and 0.5 m from the camera, respectively.
- Published
- 2021
- Full Text
- View/download PDF
13. Using UAV Borne, Multi-Spectral Imaging for the Field Phenotyping of Shoot Biomass, Leaf Area Index and Height of West African Sorghum Varieties under Two Contrasted Water Conditions.
- Author
-
Gano, Boubacar, Dembele, Joseph Sékou B., Ndour, Adama, Luquet, Delphine, Beurier, Gregory, Diouf, Diaga, Audebert, Alain, and Stahl, Andreas
- Subjects
Sorghum, leaf area index, multispectral imaging, biomass, genetic variation, agricultural productivity - Abstract
Meeting food demand for the growing population will require an increase in crop production despite climate change and, in particular, severe drought episodes. Sorghum is one of the cereals best adapted to drought and feeds millions of people around the world. Valorizing its genetic diversity for crop improvement can benefit from extensive phenotyping. The current methods to evaluate plant biomass, leaf area and plant height involve destructive sampling and are not practical in breeding. Phenotyping relying on drone-based imagery is a powerful approach in this context. The objective of this study was to develop and validate a high-throughput field phenotyping method for sorghum growth traits under contrasted water conditions, relying on drone-based imagery. Experiments were conducted in Bambey (Senegal) in 2018 and 2019 to test the ability of multi-spectral sensing technologies on board a UAV platform to calculate various vegetation indices for estimating plant characteristics. In total, ten (10) contrasted varieties of the West African sorghum collection were selected and arranged in a randomized complete block design with three (3) replicates and two (2) water treatments (well-watered and drought stress). This study focused on plant biomass, leaf area index (LAI) and plant height, which were measured weekly from emergence to maturity. Drone flights were performed just before each destructive sampling, and images were taken with multi-spectral and visible cameras. UAV-derived vegetation indices demonstrated their capacity to estimate LAI and biomass in the 2018 calibration data set, in particular the normalized difference vegetation index (NDVI), corrected transformed vegetation index (CTVI), second modified soil-adjusted vegetation index (MSAVI2), green normalized difference vegetation index (GNDVI), and simple ratio (SR) (r2 of 0.8 and 0.6 for LAI and biomass, respectively). The developed models were validated with the 2019 data, showing good performance (r2 of 0.92 and 0.91 for LAI and biomass, respectively). Results were also promising for plant height estimation (RMSE = 9.88 cm); regression plots between the image-based estimates and the measured plant heights showed an r2 of 0.83. The validation results were similar between water treatments. This study is the first successful application of drone-based imagery for phenotyping sorghum growth and development in a West African context characterized by severe drought occurrence. The developed approach could be used as a decision support tool for breeding programs and as a tool to increase the throughput of sorghum genetic diversity characterization for adaptive traits.
- Published
- 2021
- Full Text
- View/download PDF
14. Learning Deep NBNN Representations for Robust Place Categorization
- Author
-
Massimiliano Mancini, Barbara Caputo, Elisa Ricci, and Samuel Rota Bulò
- Subjects
Computer Vision and Pattern Recognition, semantic place categorization, robot place recognition, visual learning, RGB cameras, cameras, image classification, image colour analysis, image representation, feature extraction, convolutional neural networks (CNN), feedforward neural nets, naive Bayes nearest-neighbor (NBNN) model, local deep representations, learning (artificial intelligence), robot vision, robot sensing systems, robustness, semantic scene understanding, robotics, Control and Optimization, Mechanical Engineering, Biomedical Engineering, Human-Computer Interaction, Control and Systems Engineering, Artificial Intelligence, Computer Science Applications - Abstract
This paper presents an approach for semantic place categorization using data obtained from RGB cameras. Previous studies on visual place recognition and classification have shown that, by considering features derived from pre-trained Convolutional Neural Networks (CNNs) in combination with part-based classification models, high recognition accuracy can be achieved, even in the presence of occlusions and severe viewpoint changes. Inspired by these works, we propose to exploit local deep representations, representing images as sets of regions and applying a Naïve Bayes Nearest Neighbor (NBNN) model for image classification. As opposed to previous methods where CNNs are merely used as feature extractors, our approach seamlessly integrates the NBNN model into a fully-convolutional neural network. Experimental results show that the proposed algorithm outperforms previous methods based on pre-trained CNN models and that, when employed in challenging robot place recognition tasks, it is robust to occlusions and to environmental and sensor changes. (An NBNN sketch follows this record.)
- Published
- 2017
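The NBNN decision rule that record 14 integrates into a CNN is simple enough to sketch on its own, as below; plain descriptor arrays stand in for the paper's learned fully-convolutional deep local features, and the data structures are illustrative assumptions.

```python
# NBNN sketch: each local descriptor of the query image adds its squared
# distance to the nearest descriptor of each class; the class with the
# smallest total wins.
import numpy as np

def nbnn_classify(query_descs, class_banks):
    """query_descs: (m, d) local descriptors of one image;
    class_banks: {label: (n_c, d) descriptors pooled from that class}."""
    scores = {}
    for label, bank in class_banks.items():
        # Pairwise squared distances (m, n_c) via the |a - b|^2 expansion.
        d2 = (np.sum(query_descs ** 2, axis=1)[:, None]
              + np.sum(bank ** 2, axis=1)[None, :]
              - 2.0 * query_descs @ bank.T)
        scores[label] = float(np.min(d2, axis=1).sum())
    return min(scores, key=scores.get)  # image-to-class NBNN decision
```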
15. Integration of Computer Vision and Wireless Networks to Provide Indoor Positioning.
- Author
-
Duque Domingo J, Gómez-García-Bermejo J, Zalama E, Cerrada C, and Valero E
- Abstract
This work presents an integrated Indoor Positioning System which makes use of WiFi signals and RGB cameras, such as surveillance cameras, to track and identify people navigating in complex indoor environments. Previous works have often been based on WiFi, but its accuracy is limited. Other works use computer vision, but identifying specific individuals relies on techniques such as face recognition, which are not useful when there are many unknown people, or whose robustness decreases when individuals are seen from different points of view. The solution presented in this paper is based on an accurate combination of smartphones along with RGB cameras, such as those used in surveillance infrastructures. WiFi signals from smartphones allow the persons present in the environment to be identified uniquely, while the data coming from the cameras allow the precision of location to be improved. The system is nonintrusive, and no biometric data about subjects is required. In this paper, the proposed method is fully described, and the experiments performed to test the system are detailed along with the results obtained.
- Published
- 2019
- Full Text
- View/download PDF
16. Deep segmentation leverages geometric pose estimation in computer-aided total knee arthroplasty.
- Author
-
Rodrigues P, Antunes M, Raposo C, Marques P, Fonseca F, and Barreto JP
- Abstract
Knee arthritis is a common joint disease that usually requires a total knee arthroplasty. There are multiple surgical variables that have a direct impact on the correct positioning of the implants, and finding an optimal combination of all these variables is the most challenging aspect of the procedure. Usually, preoperative planning using a computed tomography scan or magnetic resonance imaging helps the surgeon decide the most suitable resections to be made. This work is a proof of concept for a navigation system that supports the surgeon in following a preoperative plan. Existing solutions require costly sensors and special markers fixed to the bones through additional incisions, which can interfere with the normal surgical flow. In contrast, the authors propose a computer-aided system that uses consumer RGB and depth cameras and does not require additional markers or tools to be tracked. They combine a deep learning approach for segmenting the bone surface with a recent registration algorithm for computing the pose of the navigation sensor with respect to the preoperative 3D model. Experimental validation using ex-vivo data shows that the method enables contactless pose estimation of the navigation sensor with respect to the preoperative model, providing valuable information for guiding the surgeon during the medical procedure.
- Published
- 2019
- Full Text
- View/download PDF